Creating an NFS CentOS 6 Linux target server


We have some old kit, I have a free Friday, and we need a development environment to get a few things working.

So I decided to embark on a VMware server and storage build. All of course without spending any money 🙂 (well, on software anyway).

So I have 2 HP GL-something servers for the VMware environment, a DL server to act as the iSCSI storage server, and a 14-disk array that can be connected via a direct HBA fibre connection. All sounds good.

The idea is to software-RAID the disk array and present it to the storage server, which will serve it out as an iSCSI connection via directly connected, bonded Ethernet cards.

The first thing, which I thought would be the easiest bit, was to create the RAID array. This turned out not to be the case, but when you see the solution: DOH!!!!!

The Linux OS is CentOS 6.3.

I know you may wonder why I did not use FreeNAS or NAS4Free. I did try, but the hardware was not compatible; they would not even boot, so I could not go chasing fixes.

So onto the install.

First find out what the disks are called.

fdisk -l

This will display a long list; look for entries like

/dev/sd*

Armed with this information

fdisk -l | grep /dev/sd

This will list all of the disks; in my case /dev/sda through /dev/sdm.

To make sure none of the disks have a partition on them you can use various tools such as fdisk. There is a good tutorial on how to delete a partition here. I used cfdisk; there is a menu with it, so you do not have to remember the switches 🙂

cfdisk /dev/sdX

Delete any partitions and save and exit. Do this for all the disks that you want to put into the array.
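
If you would rather script this than run cfdisk thirteen times, here is a quick sketch that zeroes the partition table on each disk. It is destructive and assumes sda through sdm are all array members and none of them is the OS disk, so double-check the device names first:

for d in /dev/sd[a-m]; do
    # wipe the MBR/partition table on each array disk
    dd if=/dev/zero of=$d bs=512 count=1
done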

Now to create the array. I am creating a RAID 6 array with 2 spare disks; they are old disks, so I want a bit of insurance regarding their condition.

A good mdadm write-up is actually on Wikipedia of all places: Mdadm

mdadm --create --verbose /dev/md0 --level=6 --raid-devices=11 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk --spare-devices=2 /dev/sdl /dev/sdm

I got this error:

mdadm: super1.x cannot open /dev/sdb1: Device or resource busy
mdadm: ddf: Cannot use /dev/sdb1: Device or resource busy

Not sure why you get this, but basically it looks like some of the disks already have a RAID configuration on them and are being held by the OS. To discover if this is the case:

cat /proc/mdstat

This will list what RAID has got hold of the disks. A good chance it will be md127. To get rid of this:

mdadm --stop /dev/<mdxxx>

Where xxx is the RAID number found in the cat command above. Once you have stopped the RAID, issue the mdadm --create command again.
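
If the stale array keeps coming back after a stop, you can also wipe the old RAID metadata from each member disk before re-running the create (again destructive, so check the device names first):

mdadm --zero-superblock /dev/sdb

Repeat for each disk that was part of the old array.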

The create command will start the build of the RAID array. Depending on the size and speed of the disks this could take hours. To check on the progress of the build:

cat /proc/mdstat

You will get something along the lines of

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdm[12](S) sdl[11](S) sdk[10] sdj[9] sdi[8] sdh[7] sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1] sda[0]
1290359808 blocks super 1.2 level 6, 512k chunk, algorithm 2 [11/11] [UUUUUUUUUUU]
[======>..............]  resync = 32.7% (46917760/143373312) finish=175.3min speed=9168K/sec

unused devices: <none>
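
Rather than re-running the cat by hand, you can have the progress refresh itself:

watch -n 60 cat /proc/mdstat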

Once you have the RAID array created, it's time to create a file system and mount it, and also to make sure that the RAID array is assembled when the system is rebooted:

mdadm --detail --scan >> /etc/mdadm.conf
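
The appended line should look roughly like this (the name and UUID are machine-specific; the values here are just placeholders):

ARRAY /dev/md0 metadata=1.2 name=storage:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx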

OK, another area of investigation: what is the best Linux file system? Well, as with all things, it depends on what you need it for. The storage is for VMware files; these will be 10GB–50GB flat files, so I am going to try out the XFS file system, which is supposed to be good and fast with larger files.

OK, something strange happened: I had to reboot the server and /dev/md0 became /dev/md127. This usually happens when the array is not listed in /etc/mdadm.conf (which is why the --detail --scan step above matters), but at the time I just wanted to get it finished.

You now need to create a logical volume; this will allow you to mount the disk.

# create a volume group named "vg_target00"

vgcreate -s 32M vg_target00 /dev/md0
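
Depending on the LVM version, you may need to initialise the md device as a physical volume first. If vgcreate complains that /dev/md0 is not a physical volume, run:

pvcreate /dev/md0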

# create a logical volume named "lv_target00"

lvcreate -L 130G -n lv_target00 vg_target00

Alternatively, this creates it across the whole of the volume group:

lvcreate -l 100%VG -n lv_target00 vg_target00
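
To sanity-check what LVM has built, the usual summary commands are:

pvs    # physical volumes (should show /dev/md0)
vgs    # volume groups (vg_target00 and its size)
lvs    # logical volumes (lv_target00)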

This area got a bit fuzzy and the exact order may need to change; I tried to keep notes around this element, but I tried many different bits. One thing is for sure, though: if you use the XFS file system, then to avoid the following error you need to install the correct packages.

You may get the following error

bash: mkfs.xfs: command not found

Some bits and pieces were needed

yum install xfsdump xfsprogs
mkdir /mnt/storage

To make the file system and then mount it (the file system must be created before the mount):

mkfs -t xfs -f /dev/vg_target00/lv_target00

mount -t xfs /dev/vg_target00/lv_target00 /mnt/storage

The volume is now mounted. To make this a permanent mount, you need to add a line to fstab:

vi /etc/fstab
/dev/vg_target00/lv_target00 /mnt/storage       xfs     defaults        0 0
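
You can test the fstab entry without rebooting: unmount the volume and let mount -a pick it back up from fstab.

umount /mnt/storage
mount -a
df -h /mnt/storage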

OK, we now have the disk array prepared; next we need to sort out the connectivity. I installed a 4-port NIC in the server and will bond two ports together for each of the direct connections to the two servers. Bonding the ports together will, I hope, give enough throughput for some decent performance. To determine what NICs you have:

ls /etc/sysconfig/network-scripts/

This will list something along the lines of

ifcfg-eth0 ifcfg-eth5 ifdown-ippp ifdown-routes ifup-bnep ifup-plip ifup-sit network-functions
ifcfg-eth1 ifcfg-lo ifdown-ipv6 ifdown-sit ifup-eth ifup-plusb ifup-tunnel network-functions-ipv6
ifcfg-eth2 ifdown ifdown-isdn ifdown-tunnel ifup-ippp ifup-post ifup-wireless
ifcfg-eth3 ifdown-bnep ifdown-post ifup ifup-ipv6 ifup-ppp init.ipv6-global
ifcfg-eth4 ifdown-eth ifdown-ppp ifup-aliases ifup-isdn ifup-routes net.hotplug

The NIC config files I am interested in are ifcfg-eth2, ifcfg-eth3, ifcfg-eth4 and ifcfg-eth5. One note: many of the tutorials that I read specified changing the modprobe.conf file. This appears to have been deprecated; all the .conf files should now be created in /etc/modprobe.d.

So let's start. We will have bond0 and bond1.

Just a note on the IP addresses that we will use. As we are directly connecting the servers, I am going to configure each connection as its own two-host network (a 255.255.255.252 netmask, i.e. a /30, gives exactly two usable host addresses per link).

Therefore

bond0  details (server)

IPADDR=192.168.100.1
NETWORK=192.168.100.0
NETMASK=255.255.255.252

bond0  details (client)

IPADDR=192.168.100.2
NETWORK=192.168.100.0
NETMASK=255.255.255.252

bond1  details (server)

IPADDR=192.168.101.1
NETWORK=192.168.101.0
NETMASK=255.255.255.252

bond1  details (client)

IPADDR=192.168.101.2
NETWORK=192.168.101.0
NETMASK=255.255.255.252

You need ethtool installed for NIC bonding to work:

yum install ethtool

Bond0 creation

 echo "DEVICE=bond0"  >/etc/sysconfig/network-scripts/ifcfg-bond0
 echo "IPADDR=192.168.100.1"  >> /etc/sysconfig/network-scripts/ifcfg-bond0
 echo "NETWORK=192.168.100.0"  >> /etc/sysconfig/network-scripts/ifcfg-bond0
 echo "NETMASK=255.255.255.252"  >> /etc/sysconfig/network-scripts/ifcfg-bond0
 echo "USERCTL=no"  >> /etc/sysconfig/network-scripts/ifcfg-bond0
 echo "BOOTPROTO=none" >> /etc/sysconfig/network-scripts/ifcfg-bond0
 echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-bond0

Bond1 creation

echo "DEVICE=bond1"  >/etc/sysconfig/network-scripts/ifcfg-bond1
echo "IPADDR=192.168.101.1"  >> /etc/sysconfig/network-scripts/ifcfg-bond1
echo "NETWORK=192.168.101.0"  >> /etc/sysconfig/network-scripts/ifcfg-bond1
echo "NETMASK=255.255.255.252"  >> /etc/sysconfig/network-scripts/ifcfg-bond1
echo "USERCTL=no"  >> /etc/sysconfig/network-scripts/ifcfg-bond1
echo "BOOTPROTO=none" >> /etc/sysconfig/network-scripts/ifcfg-bond1
echo "ONBOOT=yes" >> /etc/sysconfig/network-scripts/ifcfg-bond1

OK, we now need to tell the NIC ports to belong to the bonds:

vi /etc/sysconfig/network-scripts/ifcfg-eth3

My current config looks like this

DEVICE="eth3"
BOOTPROTO="dhcp"
HWADDR="00:0E:0C:C5:1C:25"
NM_CONTROLLED="yes"
ONBOOT="no"
TYPE="Ethernet"
UUID="13378c1e-98aa-4e18-9aee-b973b0e7b49e"

We need to change it to the following. One note: NetworkManager on CentOS 6 does not support bonded interfaces, so NM_CONTROLLED needs to be set to "no":

DEVICE="eth3"
USERCTL="no"
BOOTPROTO="none"
MASTER=bond0
SLAVE="yes"
HWADDR="00:0E:0C:C5:1C:25"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
UUID="13378c1e-98aa-4e18-9aee-b973b0e7b49e"

This needs to be done for all of the ports and bonds.
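
For example, eth4 (one of the bond1 slaves in my layout) ends up like the following, keeping the HWADDR and UUID lines that are already in its file:

DEVICE="eth4"
USERCTL="no"
BOOTPROTO="none"
MASTER=bond1
SLAVE="yes"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"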

Once this is done, you need to start the bonds; to do this, create a bonding.conf file. There are many different bonding modes; these are described in the Tips and tricks for bonding interfaces write-up.

I want to balance my NICs to get the best throughput, so I am using mode 5 (balance-tlb, adaptive transmit load balancing):

echo "alias bond0 bonding" > /etc/modprobe.d/bonding.conf
echo "options bond0 mode=5 miimon=100" >> /etc/modprobe.d/bonding.conf
echo "alias bond1 bonding" >> /etc/modprobe.d/bonding.conf
echo "options bond1 mode=5 miimon=100" >> /etc/modprobe.d/bonding.conf

We now need to load this into the kernel. All the tutorials I read do this; modprobe loads the bonding driver as a kernel module so the bond devices can be created:

modprobe bonding
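
To bring the bonds up and check that the slaves have been enslaved correctly:

service network restart
cat /proc/net/bonding/bond0

The /proc file should report the bonding mode (transmit load balancing for mode 5) and list the slave interfaces with their link status.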

Okay, the last thing that I want to do is present the disk so that the servers can use it. I tried iSCSI at the start but this really did not work in our environment, so I opted for NFS. So this is how I think I got this working; there was a mix of iSCSI and NFS going on, so this may be a little inaccurate, but it will get you most of the way there :).

yum -y install nfs-utils rpcbind

chkconfig nfs on
chkconfig rpcbind on
chkconfig nfslock on
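
chkconfig only enables the services at boot; to start them now without rebooting (rpcbind needs to be running before nfs):

service rpcbind start
service nfslock start
service nfs start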

The two lines below refer to the networks that servers will be allowed to access the NFS shares from. For security I would keep this as small as possible, and given the nature and amount of the traffic it should be on a separate network.

vi /etc/exports
/mnt/storage 192.168.100.0/255.255.255.252(rw,sync,no_root_squash)
/mnt/storage 192.168.101.0/255.255.255.252(rw,sync,no_root_squash)

exportfs -a
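
To confirm the shares are actually being exported:

exportfs -v
showmount -e localhost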

I have compiled this almost-how-to guide from the resources linked throughout.
