
Building GlusterFS

by journes 2019. 3. 15.







System Information


OS : CentOS 6.3 64bit
VMs :
 glusterFS1 : 192.168.0.184
 glusterFS2 : 192.168.0.185
 glusterFS-Client : 192.168.0.183



Deployment Configuration


Server nodes : 192.168.0.184-185
* Each server contributes three bricks (/brick1/vol0, /brick1/vol1, /brick1/vol2),
  so the six bricks form a 3 x 2 distributed-replicate layout under replica 2.
--------------------------------------

pool : b

brick : /brick1/vol0  -  create directory
        /brick1/vol1  -  create directory
        /brick1/vol2  -  create directory

volume name : vol_dep

volume type : replicated 2

client volume mount : glusterfs



glusterFS1 (192.168.0.184) - Configuration

Time synchronization (required)

rdate -s time.bora.net && /sbin/hwclock --systohc
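
Clock skew between the nodes can garble replication timestamps and logs, so it can also help to resync on a schedule. A minimal sketch using cron (the schedule and the reuse of time.bora.net are assumptions, not part of the original setup):

echo '0 4 * * * root rdate -s time.bora.net && /sbin/hwclock --systohc' >> /etc/crontab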
 

glusterFS repo setup

cd /etc/yum.repos.d/
wget http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/EPEL.repo/glusterfs-epel.repo
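
Before installing, a quick sanity check (not part of the original steps) confirms yum can actually see the new repository:

yum repolist enabled | grep -i gluster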


Package installation & service start

yum install glusterfs-server glusterfs-geo-replication -y
chkconfig --level 235 glusterd on
service glusterd start
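
To verify that the daemon is running and which version the repo delivered:

service glusterd status
glusterfs --version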



Create brick directories


mkdir -p /brick1/{vol0,vol1,vol2}  -> created as plain directories
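
Here the bricks are plain directories on the root filesystem, which is fine for a test bed. In production each brick usually sits on its own filesystem; a sketch assuming a spare disk /dev/sdb (the device name and mount point are assumptions, and xfsprogs must be installed on CentOS 6):

mkfs.xfs -i size=512 /dev/sdb
echo '/dev/sdb  /brick1  xfs  defaults  0 0' >> /etc/fstab
mount /brick1
mkdir -p /brick1/{vol0,vol1,vol2}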




Register the gluster pool


gluster peer probe 192.168.0.185  -> register every server node except glusterFS1 itself




Check glusterFS peer status


[root@glusterfs1 vol1]# gluster peer status
Number of Peers: 1

Hostname: 192.168.0.185
Uuid: 045ec7b0-241c-497b-9851-e2d75e49aa18
State: Peer in Cluster (Connected)



glusterFS volume setup - run on glusterfs1 only


# gluster volume create vol_dep replica 2 transport tcp 192.168.0.184:/brick1/vol0/ 192.168.0.185:/brick1/vol0/ 192.168.0.184:/brick1/vol1/ 192.168.0.185:/brick1/vol1/ 192.168.0.184:/brick1/vol2/ 192.168.0.185:/brick1/vol2/ force
  -> volume name : vol_dep  /  type : replica 2
  -> force overrides safety checks such as the warning about bricks on the root filesystem
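
Brick order matters here: with replica 2, each consecutive pair of bricks in the argument list becomes a mirror, which is why the two servers' directories are interleaved:

pair 1 : 192.168.0.184:/brick1/vol0  <->  192.168.0.185:/brick1/vol0
pair 2 : 192.168.0.184:/brick1/vol1  <->  192.168.0.185:/brick1/vol1
pair 3 : 192.168.0.184:/brick1/vol2  <->  192.168.0.185:/brick1/vol2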



Volume persistence after reboot

glusterd stores the volume definition under /var/lib/glusterd and brings started volumes back up at boot, so the create command above is a one-time operation; re-running it from rc.local would only fail with "volume vol_dep already exists". If you want an explicit safety net, register a start instead:

# vi /etc/rc.local
gluster volume start vol_dep force




Start the gluster volume

# gluster volume start vol_dep
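
After starting, you can confirm that a brick process came up for every brick:

# gluster volume status vol_dep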



Check glusterFS volume status



[root@glusterfs1 brick1]# gluster volume info all

Volume Name: vol_dep
Type: Distributed-Replicate
Volume ID: 8a8d0b9d-5403-4bba-a479-509ff0e8b961
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.0.184:/brick1/vol0
Brick2: 192.168.0.185:/brick1/vol0
Brick3: 192.168.0.184:/brick1/vol1
Brick4: 192.168.0.185:/brick1/vol1
Brick5: 192.168.0.184:/brick1/vol2
Brick6: 192.168.0.185:/brick1/vol2




glusterFS2 (192.168.0.185) - Configuration

Time synchronization (required)

rdate -s time.bora.net && /sbin/hwclock --systohc



glusterFS repo setup


cd /etc/yum.repos.d/
wget http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.0/EPEL.repo/glusterfs-epel.repo



Package installation & service start


yum install glusterfs-server glusterfs-geo-replication -y
chkconfig --level 235 glusterd on
service glusterd start



Create brick directories

mkdir -p /brick1/{vol0,vol1,vol2}  -> created as plain directories



Register the gluster pool


gluster peer probe 192.168.0.184  -> register every server node except glusterFS2 itself;
                                     since glusterFS1 already probed this node, the command
                                     only confirms the host is already in the peer list



Check glusterFS peer status


[root@glusterfs2 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.0.184
Uuid: 5a0c8ad9-8aae-418a-998c-b7298fa7fc6d
State: Peer in Cluster (Connected)



Check glusterFS volume status


[root@glusterfs2 brick1]# gluster volume info all

Volume Name: vol_dep
Type: Distributed-Replicate
Volume ID: 8a8d0b9d-5403-4bba-a479-509ff0e8b961
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.0.184:/brick1/vol0
Brick2: 192.168.0.185:/brick1/vol0
Brick3: 192.168.0.184:/brick1/vol1
Brick4: 192.168.0.185:/brick1/vol1
Brick5: 192.168.0.184:/brick1/vol2
Brick6: 192.168.0.185:/brick1/vol2




glusterFS-Client (192.168.0.183) - Configuration

Time synchronization (required)

rdate -s time.bora.net && /sbin/hwclock --systohc




Package installation

# yum install glusterfs-fuse
  -> if yum cannot find the package, add the same glusterfs-epel.repo used on the servers first




Mount the volume

# mkdir /mnt/gluster
# mount -t glusterfs 192.168.0.184:vol_dep /mnt/gluster/
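
The server address in the mount command is only used to fetch the volume layout; after that the client talks to every brick directly. To avoid a failed mount when 192.168.0.184 happens to be down, the FUSE client can be given a fallback (option name as shipped with the 3.4-era mount.glusterfs):

# mount -t glusterfs -o backupvolfile-server=192.168.0.185 192.168.0.184:vol_dep /mnt/gluster/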




Check the mount


[root@glusterfs-client gluster]# mount
/dev/sda3 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
192.168.0.184:vol_dep on /mnt/gluster type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)



Register in rc.local so the volume is mounted at boot

# vi /etc/rc.local
mount -t glusterfs 192.168.0.184:vol_dep /mnt/gluster/
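
An /etc/fstab entry is an alternative to rc.local; _netdev delays the mount until networking is up (the line below is a sketch for this setup):

192.168.0.184:/vol_dep  /mnt/gluster  glusterfs  defaults,_netdev  0 0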




Miscellaneous

How to delete a volume

# gluster volume stop <volume_name>
# gluster volume delete <volume_name>
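
Deleting a volume removes only its definition; the file data and gluster metadata remain on the bricks. To reuse a brick path in a new volume, clear that metadata first (the path below is an example):

# setfattr -x trusted.glusterfs.volume-id /brick1/vol0
# setfattr -x trusted.gfid /brick1/vol0
# rm -rf /brick1/vol0/.glusterfs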
 


How to add a node to the pool

# gluster peer probe <hostname_or_ip>




How to add a brick to a volume (see the example below)

# gluster volume add-brick <volume_name> <server>:<directory>
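
For a replicated volume, bricks must be added in multiples of the replica count; for vol_dep (replica 2) that means one brick from each server per addition (the paths are an example):

# gluster volume add-brick vol_dep 192.168.0.184:/brick1/vol3 192.168.0.185:/brick1/vol3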




Rebalance capacity after adding bricks

# gluster volume rebalance <vol_name> start
# gluster volume rebalance <vol_name> status



How to replace a brick

ex) migrate the brick server3:/exp3 to server5:/exp5

# gluster volume replace-brick <volume_name> server3:/exp3 server5:/exp5 start
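
start only begins the data migration; on the 3.x CLI you then poll it and commit once it completes:

# gluster volume replace-brick <volume_name> server3:/exp3 server5:/exp5 status
# gluster volume replace-brick <volume_name> server3:/exp3 server5:/exp5 commit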
 


How to remove a brick

# gluster volume remove-brick <vol_name> <server>:<directory>
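
To drain data off a brick before removing it, the staged form can be used (available since the 3.3 series):

# gluster volume remove-brick <vol_name> <server>:<directory> start
# gluster volume remove-brick <vol_name> <server>:<directory> status
# gluster volume remove-brick <vol_name> <server>:<directory> commit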



Performance tuning

# gluster volume set <volume_name> <option> <parameter>
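
Two commonly tuned options as a sketch (the values are illustrative, not recommendations for this workload):

# gluster volume set vol_dep performance.cache-size 256MB
# gluster volume set vol_dep performance.io-thread-count 16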


* Disk performance was measured with iozone.
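
For reference, an iozone run against the mounted volume might look like this (the flags are an assumption: -a runs auto mode, -g caps the file size, -f names the test file):

# iozone -a -g 1g -f /mnt/gluster/iozone.tmp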