Deploying a JBOD in the T2 Hadoop cluster

Building the list of target disks

The first step is to identify which disks in the JBOD are going to be used for storage (you probably do not want sda, or you may prefer to treat it by hand, as it is more complex).

For that, you probably want to run the following:

    ls /dev/sd* | grep -v -P '.*[0-9]{1,3}' > /tmp/disksf.txt

Now make sure that the number of disks in the file matches the number of desired disks reported by:

 fdisk -l | grep "Disk /" 

Also make sure to exclude sda if that makes sense for you (usually Rocks/Foreman takes good care of this).
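If you want a quick numeric cross-check, something like this should do (a rough sketch, assuming all data disks show up as /dev/sd*):

# Number of candidate whole-disk devices collected above
wc -l < /tmp/disksf.txt

# Number of whole disks as seen by fdisk
fdisk -l 2>/dev/null | grep -c "Disk /dev/sd"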

If all looks good in /tmp/disksf.txt, it is time to create the input for the script that will handle it. The format it expects is one entry per line:

 <disk_device>   <index of the /dataX mountpoint>

For example:

/dev/sde   5
/dev/sdf   6
/dev/sdg   7
/dev/sdh   8

Note that nothing up to sdd is listed. Because of this, the script will see that there is no index for those devices and leave them alone.

A useful one-liner to build such a list from /tmp/disksf.txt is:

 perl -nle 'BEGIN{$i=2}print $_." ",$i; $i++;' /tmp/disksf.txt 

Then redirect the output to /tmp/disks.txt. The one-liner starts counting at 2 because /data1 usually lives on the system disk; adjust the starting index, and drop any lines for disks you want to skip, so that the result matches your layout, as in the sketch below.
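For instance (a sketch, assuming sda has already been dropped from /tmp/disksf.txt and is the only disk in use):

 perl -nle 'BEGIN{$i=2}print $_." ",$i; $i++;' /tmp/disksf.txt > /tmp/disks.txt

which would produce something like:

/dev/sdb 2
/dev/sdc 3
/dev/sdd 4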

Formatting/Labeling relevant disks

Now that the input list is ready, and you have looked through it carefully and cross-checked it against what is already mounted, it is time to actually act on it. In general all disks will have the same size, so adjust the hardcoded disk size in the script below if your hardware differs; the same script is also available on the T2 at:

/share/apps/misc/storage/prepare-disks.sh

# Loop over whole-disk devices (names without a partition number)
for disk in $(ls /dev/sd* | grep -v -P '.*[0-9]{1,3}') ; do
        # Look up the /dataX index assigned to this disk in /tmp/disks.txt
        # (anchored match, so e.g. sda does not also match sdaa)
        INDEX=$(grep "^${disk} " /tmp/disks.txt | awk '{print $2}')
        echo "$disk   $INDEX"
        # Only act on disks that have an index assigned
        if [ -n "$INDEX" ] ; then
                parted -s "$disk" rm 1
                parted -s "$disk" mklabel gpt
                parted -s "$disk" mkpart primary xfs 0% 100%
                mkfs -t xfs "${disk}1"
                xfs_admin -L "/data$INDEX" "${disk}1"
        fi
done

If something weird seems to be happening, don't hesitate to Ctrl+C the script: as you can see, it can be run over and over on the same machine, and the results should not be harmful, just some re-work for disks that were already done.

When this is done successfully, a pretty cool feature will be in place: even if the server is reinstalled, Foreman will take care of adding the partitions to fstab, and they will be included in Hadoop automatically. All it takes is having the partitions created and labeled correctly with their mountpoints.
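To double-check the labels before moving on, something like this works (a quick sketch; the exact output format varies between distributions):

# Show LABEL, UUID and filesystem type for all labeled /dataX partitions
blkid | grep /data

# Or query a single partition with the XFS tools
xfs_admin -l /dev/sde1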

Including prepared disks into fstab

This step adds the freshly prepared partitions to the system, so that they get picked up by the automatic Hadoop configuration scripts. Use the script below, also available on the T2 at:

/share/apps/misc/storage/insert-fstab.sh

for i in /dev/sd?? ; do
    # Read the XFS label of the partition (empty if it is unlabeled or not XFS)
    label=`xfs_admin -l "$i" 2>/dev/null | grep -v '(null)' | sed -e 's/^.*"\(.*\)"$/\1/'`
    if [ "$label" != "" ] ; then
        # Match the mountpoint column exactly, so e.g. /data2 does not also match /data20
        if ! grep -q " ${label} " /etc/fstab ; then
            mkdir -p "${label}"
            echo "LABEL=${label}                ${label}                  xfs      defaults        0 0" >> /etc/fstab
        fi
    fi
done

'''WARNING:''' This will probably not work on machines that exceed the [a-z] alphabet of disks (this has been seen before). The culprit is the glob in the for loop, for i in /dev/sd??, which only matches single-letter device names; it needs to be improved. A possible improvement is sketched below.
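One possible sketch, using the same logic as above but not yet battle-tested on such machines, is to loop over anything that looks like a partition, whatever the length of the device name:

# Iterate over every partition device, including multi-letter names such as /dev/sdaa1
for i in /dev/sd*[0-9] ; do
    label=`xfs_admin -l "$i" 2>/dev/null | grep -v '(null)' | sed -e 's/^.*"\(.*\)"$/\1/'`
    # Match the mountpoint column exactly, so e.g. /data2 does not match /data27
    if [ "$label" != "" ] && ! grep -q " ${label} " /etc/fstab ; then
        mkdir -p "${label}"
        echo "LABEL=${label}                ${label}                  xfs      defaults        0 0" >> /etc/fstab
    fi
done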

After this runs, go ahead and have a look at /etc/fstab. It should look like this:

UUID=a70a72b5-c513-458e-93ca-e4eba00c060c /                       ext4    defaults        1 1
UUID=4be01269-3098-4af3-b4ec-7cb99e0a474b /data1                  xfs     defaults        1 2
UUID=385ad3ee-4c09-4914-8b15-dd9af89a8cf0 /var                    ext4    defaults        1 2
UUID=1a0c679f-cc11-40de-a18a-6461a7d2fbab swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
LABEL=/data2            /data2                  xfs     defaults        0 0
LABEL=/data3            /data3                  xfs     defaults        0 0
LABEL=/data4            /data4                  xfs     defaults        0 0
LABEL=/data5            /data5                  xfs     defaults        0 0
LABEL=/data6            /data6                  xfs     defaults        0 0
LABEL=/data7            /data7                  xfs     defaults        0 0
LABEL=/data8            /data8                  xfs     defaults        0 0
LABEL=/data9            /data9                  xfs     defaults        0 0
LABEL=/data10           /data10                 xfs     defaults        0 0
LABEL=/data11           /data11                 xfs     defaults        0 0
LABEL=/data12           /data12                 xfs     defaults        0 0
LABEL=/data13           /data13                 xfs     defaults        0 0
LABEL=/data14           /data14                 xfs     defaults        0 0
LABEL=/data15           /data15                 xfs     defaults        0 0
LABEL=/data16           /data16                 xfs     defaults        0 0
LABEL=/data17           /data17                 xfs     defaults        0 0
LABEL=/data18           /data18                 xfs     defaults        0 0
LABEL=/data19           /data19                 xfs     defaults        0 0
LABEL=/data20           /data20                 xfs     defaults        0 0
LABEL=/data21           /data21                 xfs     defaults        0 0
LABEL=/data22           /data22                 xfs     defaults        0 0
LABEL=/data23           /data23                 xfs     defaults        0 0
LABEL=/data24           /data24                 xfs     defaults        0 0
LABEL=/data25           /data25                 xfs     defaults        0 0

Grand finale

Mount everything and make sure all looks good afterwards:

mount -a


[root@datanode-15-2 ~]# df -kh
Filesystem                Size  Used Avail Use% Mounted on
/dev/sda2                  20G  1.2G   18G   7% /
tmpfs                     3.9G     0  3.9G   0% /dev/shm
/dev/sda5                 7.8G   33M  7.7G   1% /data1
/dev/sda1                  34G  274M   32G   1% /var
puppet.tier2:/share/apps   50G  3.0G   44G   7% /share/apps
/dev/sdb1                 1.9T   33M  1.9T   1% /data2
/dev/sdc1                 1.9T   33M  1.9T   1% /data3
/dev/sdd1                 1.9T   33M  1.9T   1% /data4
/dev/sde1                 1.9T   33M  1.9T   1% /data5
/dev/sdf1                 1.9T   33M  1.9T   1% /data6
/dev/sdg1                 1.9T   33M  1.9T   1% /data7
/dev/sdh1                 1.9T   33M  1.9T   1% /data8
/dev/sdi1                 1.9T   33M  1.9T   1% /data9
/dev/sdj1                 1.9T   33M  1.9T   1% /data10
/dev/sdk1                 1.9T   33M  1.9T   1% /data11
/dev/sdl1                 1.9T   33M  1.9T   1% /data12
/dev/sdm1                 1.9T   33M  1.9T   1% /data13
/dev/sdn1                 1.9T   33M  1.9T   1% /data14
/dev/sdo1                 1.9T   33M  1.9T   1% /data15
/dev/sdp1                 1.9T   33M  1.9T   1% /data16
/dev/sdq1                 1.9T   33M  1.9T   1% /data17
/dev/sdr1                 1.9T   33M  1.9T   1% /data18
/dev/sds1                 1.9T   33M  1.9T   1% /data19
/dev/sdt1                 1.9T   33M  1.9T   1% /data20
/dev/sdu1                 1.9T   33M  1.9T   1% /data21
/dev/sdv1                 1.9T   33M  1.9T   1% /data22
/dev/sdw1                 1.9T   33M  1.9T   1% /data23
/dev/sdx1                 1.9T   33M  1.9T   1% /data24
/dev/sdy1                 1.9T   33M  1.9T   1% /data25
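As a final sanity check, you can compare the number of mounted /dataX filesystems with the number of disks you prepared (a rough sketch; remember that any /data partition already on the system disk, such as /data1 above, is counted by df but not listed in /tmp/disks.txt):

# Number of mounted /dataX filesystems
df -kh | grep -c ' /data'

# Number of disks prepared in this run
wc -l < /tmp/disks.txt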

HDFS Configurations

This is just the very basics. I put some scripts in place to ease everyone's lives. So, once everything is fine in terms of mounted partitions, run:

/share/apps/hadoop/migration20/datanode.sh

Then use this script to generate the configuration:

/share/apps/hadoop/generate-conf.sh

And copy/paste its output into /etc/hadoop/conf/hdfs-site.xml.
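The relevant part of the resulting hdfs-site.xml should look roughly like the excerpt below. This is only a sketch: the property name dfs.datanode.data.dir applies to Hadoop 2.x (older releases call it dfs.data.dir), and whether the value lists the bare /dataX mountpoints or a subdirectory under each one depends on what generate-conf.sh actually produces.

<!-- Hypothetical excerpt: one comma-separated entry per mounted /dataX (list shortened here) -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data1,/data2,/data3,/data4,/data5</value>
</property>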

From this point on, it is the standard Hadoop/Puppet procedure, which is out of scope for this documentation. Enjoy!

-- Main.samir - 2014-08-07
