Prerequisites

Servers

To test replication across servers, 3 VMs are used:

Hostname                         IP Address
ebdp-po-dkr10d.sys.comcast.net   147.191.72.175
ebdp-po-dkr11d.sys.comcast.net   147.191.72.176
ebdp-po-dkr12d.sys.comcast.net   147.191.74.184

Installation

All the VMs should have the following packages installed.
# rpm -qa | grep gluster
glusterfs-3.8.15-2.el7.x86_64
glusterfs-server-3.8.15-2.el7.x86_64
glusterfs-libs-3.8.15-2.el7.x86_64
glusterfs-api-3.8.15-2.el7.x86_64
glusterfs-cli-3.8.15-2.el7.x86_64
glusterfs-client-xlators-3.8.15-2.el7.x86_64
glusterfs-fuse-3.8.15-2.el7.x86_64
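If any of these packages are missing, they can be installed with yum. This is a sketch assuming the CentOS storage SIG repository (centos-release-gluster) carries the desired 3.8 series; adjust the repository for your environment.
# yum install -y centos-release-gluster   # assumption: SIG repo provides the 3.8 packages
# yum install -y glusterfs-server
# systemctl enable glusterd
# systemctl start glusterd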


Other Settings

To simplify the test, SELinux and firewalld are disabled.
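For reference, they can be turned off as below. Note that setenforce 0 only makes SELinux permissive until reboot; a persistent change requires editing /etc/selinux/config. In production, opening the GlusterFS ports in firewalld is preferable to disabling it.
# setenforce 0                 # SELinux permissive for the current boot only
# systemctl stop firewalld
# systemctl disable firewalld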

Setup

Prepare Partitions

Since the VMs cannot be given additional partitions, loopback devices are used instead.

Please refer to http://nunojun.tistory.com/17 for more details on loopback devices.
Three image files of 20 GB each were created.
# losetup -l
NAME         SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop101         0      0         0  0 /app/work/glusterfs/disks/001.img
/dev/loop102         0      0         0  0 /app/work/glusterfs/disks/002.img
/dev/loop103         0      0         0  0 /app/work/glusterfs/disks/003.img
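For reference, the image files and loop devices can be prepared roughly as below. This is a sketch matching the listing above; if /dev/loop101 does not exist yet, it can be created with mknod (the loop driver uses major number 7), or losetup -f can be used to take the next free device instead.
# mkdir -p /app/work/glusterfs/disks
# truncate -s 20G /app/work/glusterfs/disks/001.img   # sparse 20 GB image file
# mknod /dev/loop101 b 7 101                          # only if the device node is missing
# losetup /dev/loop101 /app/work/glusterfs/disks/001.img

Repeat for 002.img/loop102 and 003.img/loop103.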

Mount Partitions

After formatting and mounting, all 3 VMs should show the same result:
# df -h
Filesystem                          Size  Used Avail Use% Mounted on
....
/dev/loop101                         20G   33M   20G   1% /app/work/glusterfs/bricks/brick1
/dev/loop102                         20G   33M   20G   1% /app/work/glusterfs/bricks/brick2
/dev/loop103                         20G   33M   20G   1% /app/work/glusterfs/bricks/brick3
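The formatting and mount steps are not shown above; a minimal sketch (assuming XFS, the file system commonly recommended for GlusterFS bricks) would be:
# mkfs.xfs /dev/loop101
# mkdir -p /app/work/glusterfs/bricks/brick1
# mount /dev/loop101 /app/work/glusterfs/bricks/brick1

Repeat for loop102/brick2 and loop103/brick3.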

Connect

ebdp-po-dkr10d.sys.comcast.net is used as the main server.
The following commands are run on the dkr10d VM only.
# gluster peer probe ebdp-po-dkr11d.sys.comcast.net
peer probe: success.
 
# gluster peer probe ebdp-po-dkr12d.sys.comcast.net
peer probe: success.
 
# gluster peer status
Number of Peers: 2
 
Hostname: ebdp-po-dkr11d.sys.comcast.net
Uuid: 868b4330-5667-46ba-9dad-ec4181b4c623
State: Peer in Cluster (Connected)
 
Hostname: ebdp-po-dkr12d.sys.comcast.net
Uuid: 55c36364-0d44-4359-9e58-a23f5b89c79e
State: Peer in Cluster (Connected)


Create Volume

Note that this is also done on the dkr10d server only.
# gluster volume create gluster-volume-001 \
    replica 3 \
    transport tcp \
    ebdp-po-dkr10d.sys.comcast.net:/app/work/glusterfs/bricks/brick1/brick \
    ebdp-po-dkr11d.sys.comcast.net:/app/work/glusterfs/bricks/brick1/brick \
    ebdp-po-dkr12d.sys.comcast.net:/app/work/glusterfs/bricks/brick1/brick
volume create: gluster-volume-001: success: please start the volume to access data
 
# gluster volume start gluster-volume-001
volume start: gluster-volume-001: success
 
# gluster volume status
Status of volume: gluster-volume-001
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ebdp-po-dkr10d.sys.comcast.net:/app/w
ork/glusterfs/bricks/brick1/brick           49152     0          Y       4169
Brick ebdp-po-dkr11d.sys.comcast.net:/app/w
ork/glusterfs/bricks/brick1/brick           49152     0          Y       4657
Brick ebdp-po-dkr12d.sys.comcast.net:/app/w
ork/glusterfs/bricks/brick1/brick           49152     0          Y       4588
Self-heal Daemon on localhost               N/A       N/A        Y       4189
Self-heal Daemon on ebdp-po-dkr12d.sys.comc
ast.net                                     N/A       N/A        Y       4611
Self-heal Daemon on ebdp-po-dkr11d.sys.comc
ast.net                                     N/A       N/A        Y       4678
 
Task Status of Volume gluster-volume-001
------------------------------------------------------------------------------
There are no active volume tasks
 
# gluster volume info all
Volume Name: gluster-volume-001
Type: Replicate
Volume ID: 5f00bb5a-b977-4cad-8afe-df4abfbd1f35
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ebdp-po-dkr10d.sys.comcast.net:/app/work/glusterfs/bricks/brick1/brick
Brick2: ebdp-po-dkr11d.sys.comcast.net:/app/work/glusterfs/bricks/brick1/brick
Brick3: ebdp-po-dkr12d.sys.comcast.net:/app/work/glusterfs/bricks/brick1/brick
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on


Mount GlusterFS Volume from Another Server

The client server should have the following packages installed.
# rpm -qa | grep gluster
glusterfs-client-xlators-3.8.4-18.4.el7.centos.x86_64
glusterfs-fuse-3.8.4-18.4.el7.centos.x86_64
glusterfs-libs-3.8.4-18.4.el7.centos.x86_64
glusterfs-3.8.4-18.4.el7.centos.x86_64
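If the client packages are missing, installing glusterfs-fuse should pull in the rest as dependencies (a sketch, assuming the same repository setup as on the servers):
# yum install -y glusterfs-fuse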


Then, the GlusterFS volume can be mounted as below.

# mount -t glusterfs ebdp-po-dkr10d.sys.comcast.net:/gluster-volume-001 /mnt/gluster-volume/
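To make the mount persistent across reboots, an /etc/fstab entry can be added. The sketch below uses the backupvolfile-server mount option so the client can fetch the volume file from another peer when dkr10d is down; /mnt/gluster-volume is the mount point used above.
ebdp-po-dkr10d.sys.comcast.net:/gluster-volume-001 /mnt/gluster-volume glusterfs defaults,_netdev,backupvolfile-server=ebdp-po-dkr11d.sys.comcast.net 0 0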


Once a file is created in the mounted directory, it appears in the brick directories on all 3 VMs.
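For example (the test file name is arbitrary):
# echo 'hello gluster' > /mnt/gluster-volume/test.txt   # on the client

# cat /app/work/glusterfs/bricks/brick1/brick/test.txt  # on each of the 3 VMs
hello gluster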


Tear Down

# gluster volume stop gluster-volume-001
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: gluster-volume-001: success
 
# gluster volume delete gluster-volume-001
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: gluster-volume-001: success
 
# gluster peer detach ebdp-po-dkr11d.sys.comcast.net
peer detach: success
# gluster peer detach ebdp-po-dkr12d.sys.comcast.net
peer detach: success
# gluster peer status
Number of Peers: 0
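If the loopback devices are no longer needed either, they can be unmounted and released (a sketch matching the devices created above):
# umount /app/work/glusterfs/bricks/brick1
# losetup -d /dev/loop101

Repeat for loop102/brick2 and loop103/brick3.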

