
iSCSI protocol in Linux environment – configuration of Initiator and Target
sharing storage server resources based on NFS and SMB

Thanks to this lab with a simple topology we will learn how to configure the iSCSI protocol in a Linux and SAN environment. We will go through iSCSI, Network File System (NFS) and Server Message Block (SMB), also known as CIFS, and at the end of the day we will be able to use resources in the SAN, connected to the file server over iSCSI, from both Linux and Windows hosts.




Before we go over the lab, let’s recall what iSCSI actually is and how we are going to use it in the lab.

iSCSI lets us exchange SCSI commands over Ethernet and deploy block-level storage, which makes the protocol a great fit for a Storage Area Network thanks to its efficiency. Just as important, we don’t need any additional expensive devices on the path between the Storage and the Server (as we do in Fibre Channel technology: FC switches, Host Bus Adapters and so on). We may use 10 Gb NICs if we need high throughput, or InfiniBand, which can also carry iSCSI commands over a serial link. Block-level storage is much more efficient than file-level storage: with file-level storage, the filesystem layer that handles communication between server and storage lives on the Storage, while with iSCSI it lives on the Server. In the iSCSI protocol we have 2 kinds of devices: the INITIATOR, which initiates the connection (the server), and the TARGET, which shares the disk space (the storage).

There are 2 main network filesystems in the Linux and Windows worlds, Network File System (NFS) and SMB (CIFS). Both of them will be installed on the Linux server, because we want both Linux and Windows hosts to get access to the disk space; moreover, we will share exactly the same resource with both kinds of clients.

Let’s have a look at the lab topology

For the sake of simplicity I turned off the SELinux and firewalld daemons on both the Initiator and the Target.

setenforce 0 – to turn off SELinux
systemctl stop firewalld – to turn off the firewall

Firstly we configure the TARGET (storage)

1. We have to install the iSCSI target package and enable the service so that it comes back after a server restart

yum install -y targetcli
systemctl enable target

2. Now we need to get into the ‘targetcli’ shell to do the configuration, just run

targetcli

3. We run the ‘backstores/block/ create’ command in order to create a block backstore: we give it a name (iscsisdb) and point it at the resource that we want to share with the Initiator. The resource might be a plain disk, an LVM volume or a RAID array. I used a single disk, /dev/sdb

backstores/block/ create iscsisdb /dev/sdb

4. Next we have to name the shared resource. We do that by giving an IQN (iSCSI Qualified Name) – ‘iqn.2018-05.itbundle.com’ (Type-Date-Authority) with the name of the target resource (‘disk1’) at the end. A TPG (Target Portal Group) is then created for the shared resource.

cd /iscsi
create iqn.2018-05.itbundle.com:disk1
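As a quick sanity check, the IQN shape can be verified with a short shell snippet. The value below is the one used in this lab; the regular expression is only a loose sketch of the iqn.&lt;yyyy-mm&gt;.&lt;authority&gt;[:&lt;name&gt;] layout, not a full RFC 3720 validator:

```shell
# Loose check of the IQN layout used in this lab; the regex is an
# approximation of iqn.<yyyy-mm>.<authority>[:<name>], not RFC 3720-complete
iqn='iqn.2018-05.itbundle.com:disk1'
if echo "$iqn" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+(:[A-Za-z0-9._-]+)?$'; then
    echo "IQN looks well-formed"
fi
```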

5. The TPG consists of 3 parts:

ACLs – restrict access to our resources (who has access to them)
LUNs – (Logical Unit Numbers) define the exported resources (what we want to share)
PORTALs – define the socket (IP address and port) that our resource will be accessible at

We have to configure each part of the TPG separately, from inside the TPG node (cd iqn.2018-05.itbundle.com:disk1/tpg1):

Creation of the LUN
luns/ create /backstores/block/iscsisdb

Applying an ACL with our IQN and the identifier ‘client’ (we may choose any)
acls/ create iqn.2018-05.itbundle.com:client

Inside the ACL we may set credentials (username and password) for the sake of security, but we don’t have to

cd acls/iqn.2018-05.itbundle.com:client
set auth userid=marcin
set auth password=itbundle

6. Now we may leave ‘targetcli’ with the ‘exit’ command

7. The entire configuration has been written to the /etc/target/saveconfig.json file
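For repeatable builds, the same session can also be scripted non-interactively, since targetcli accepts a path and a command as arguments. This is only a sketch using this lab’s names (iscsisdb, the two IQNs) and tpg1, the portal group targetcli creates by default:

```shell
# Non-interactive sketch of the interactive targetcli session above (run as root);
# iscsisdb and the IQNs are this lab's names, tpg1 is the default TPG
targetcli backstores/block create iscsisdb /dev/sdb
targetcli /iscsi create iqn.2018-05.itbundle.com:disk1
targetcli /iscsi/iqn.2018-05.itbundle.com:disk1/tpg1/luns create /backstores/block/iscsisdb
targetcli /iscsi/iqn.2018-05.itbundle.com:disk1/tpg1/acls create iqn.2018-05.itbundle.com:client
targetcli saveconfig
```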

Here is the whole configuration from my terminal

Now we will configure INITIATOR (server)

1. We have to install the ‘iscsi-initiator-utils’ package

yum install -y iscsi-initiator-utils

2. Next we edit the file /etc/iscsi/initiatorname.iscsi and put in the initiator name (IQN and identifier) that we configured before on the Target

InitiatorName=iqn.2018-05.itbundle.com:client

3. If we have configured a username and password in the ACLs, we have to put the following lines into the file /etc/iscsi/iscsid.conf:

node.session.auth.authmethod = CHAP
node.session.auth.username = marcin
node.session.auth.password = itbundle

4. We may start the iscsi service

systemctl start iscsi

5. Now we will use the ‘iscsiadm’ command twice.

The first time in “discovery mode”, where we specify the target IP address. The default port of the iSCSI protocol is 3260; if we didn’t set it up differently on the Target side, we don’t have to specify the port here.

iscsiadm --mode discovery --type sendtargets --portal 10.0.0.100

The second time in “node mode”, where we specify the target name and IP address. Again, we don’t have to change the default port 3260

iscsiadm --mode node --targetname iqn.2018-05.itbundle.com:disk1 --portal 10.0.0.100 --login

OK, we are ready to go. Let’s verify that the iSCSI protocol works fine with a couple of commands.
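A couple of commands that typically show the result on the Initiator (exact output varies by distribution):

```shell
# Confirm the session and the new disk on the Initiator
iscsiadm --mode session    # should list the session to 10.0.0.100
lsblk                      # the Target's disk shows up as a new block device
```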

It seems that the Server sees /dev/sdb from the Storage, so there is nothing left to do but format that resource. You may of course mount the /dev/sdb resource via the fstab file.
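A formatting and persistent-mount sketch could look like this; xfs is just an example filesystem, and the _netdev option makes the mount wait for the network, which matters for iSCSI-backed devices:

```shell
# Format the iSCSI-backed disk and mount it at boot (example values)
mkfs.xfs /dev/sdb
mkdir -p /srv/SHARE
echo '/dev/sdb  /srv/SHARE  xfs  _netdev  0 0' >> /etc/fstab
mount -a
```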

So far we’ve run the iSCSI protocol between the Storage and the Linux Server, but end hosts still don’t have access to that resource, because we didn’t run a file server service on the Server. Since we are going to share the same resource (/dev/sdb) with hosts running different operating systems, we have to run two file server instances, one for the Linux host and a second for Windows, because Linux uses the NFS protocol and Windows uses SMB/CIFS.

First let’s configure access from the Linux host. In order to do that we have to install an NFS server on the Server and an NFS client on the host. Let’s start with the NFS server.

Before we start we have to install NFS packages and start the service

yum install nfs-utils nfs-utils-lib
systemctl start nfs

Then we have to mount the Target’s /dev/sdb on the local file system. We will mount it under /srv, the conventional location for data served by the system (NFS itself can export any path).

mkdir /srv/SHARE
mount /dev/sdb /srv/SHARE

Next we have to edit below configuration file to point out resources that we want to export

nano /etc/exports

and add below line  

/srv/SHARE 192.168.0.100(rw,sync,no_root_squash)

and we restart the service

systemctl restart nfs
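Instead of restarting the whole service after every edit of /etc/exports, the export table can also be refreshed and inspected with exportfs:

```shell
# Re-read /etc/exports without a full service restart, then list active exports
exportfs -ra
exportfs -v
```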


Now we may go over the NFS Client configuration on the host

Let’s create the folder where we will mount the Server’s resource in a while

mkdir /mnt/share

Let’s check if we see the list of exported resources from the Server

showmount -e 192.168.0.1
we should get the output

Export list for 192.168.0.1:
/srv/SHARE 192.168.0.100

and finally we will mount /srv/SHARE from the Server on the Client under /mnt/share

mount -t nfs 192.168.0.1:/srv/SHARE /mnt/share
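To make the client-side mount survive a reboot, an fstab entry can be added; the options shown are common defaults, adjust to taste:

```shell
# Persistent NFS mount on the client (addresses and paths from this lab)
echo '192.168.0.1:/srv/SHARE  /mnt/share  nfs  defaults,_netdev  0 0' >> /etc/fstab
mount -a
df -h /mnt/share
```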


Now let’s configure access for the Windows host. In order to do that we have to install the Samba (SMB) server on the Server

yum install samba

Let’s run the daemon

systemctl start smb

For any user that has an account in the system we may create a Samba account; the password doesn’t have to match, as the Samba account is separate. I assigned the password itbundle to the user konix

smbpasswd -a konix
(type the password itbundle twice at the prompt)

The most important things happen in the Samba configuration file, where we set the name our resource will be visible at, the path to it, and a couple of other things

nano /etc/samba/smb.conf

[Share]
path = /srv/SHARE
available = yes
valid users = konix
read only = no
browsable = yes
public = yes
writable = yes

Finally we restart the service

systemctl restart smb
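Before moving to the Windows machine it is worth validating the configuration locally; testparm ships with Samba, while smbclient may need to be installed separately on some distributions:

```shell
# Validate smb.conf and list the shares as user konix
testparm -s                        # parse /etc/samba/smb.conf and print the effective config
smbclient -L localhost -U konix    # prompts for the Samba password, should list [Share]
```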

We don’t have to install anything on the Windows host, just map a network drive with the UNC path \\192.168.0.1\Share , type the credentials konix/itbundle and that’s it!

From now on we should be able to get to the resources placed on the Target from Windows host.
