GlusterFS Performance Tuning
A while ago our ops team set up a GlusterFS cluster on four servers.
With the default configuration and no replicate, read performance was decent, roughly 100 MB/s,
but write performance was quite poor, only about 5 MB/s, nearly an order of magnitude slower than NFS.
The official GlusterFS documentation is frustrating: the homepage has no link to the concrete configuration docs, and it took quite some effort to find
the Translators reference (see the link under Reference below); it is worth a read.
After tuning with a few Translators, write performance improved considerably, to roughly 25-30 MB/s, which is basically acceptable.
Read performance dropped somewhat after tuning, though not by much, staying between 70 and 100 MB/s, which is still fine.
Tuning is needed on both the client side and the server side, mainly by adding the io-cache, write-behind, quick-read, and io-threads translators.
The biggest single win should be write-behind, which effectively makes writes asynchronous (see the writebehind volume in the client configuration below).
With cluster/replicate plus cluster/distribute we can build a distributed, highly available, and scalable storage system.
Part of the server-side configuration:
volume posix
  type storage/posix
  option directory /data-b
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8            # default is 16
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  subvolumes brick
  option auth.addr.brick.allow *
end-volume
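Assuming the server volfile above is saved as /etc/glusterfs/server.vol (the paths here are my own choice, not part of the original setup), each server would start glusterfsd against it with something like:

glusterfsd -f /etc/glusterfs/server.vol -l /var/log/glusterfs/server.log   # -f volfile, -l log file; paths assumed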
Part of the client-side configuration:
volume client1
  type protocol/client
  option transport-type tcp
  option remote-host 10.0.0.1
  option remote-subvolume brick    # name of the remote volume
end-volume

......

volume replicate1
  type cluster/replicate
  subvolumes client1 client2
end-volume

volume distribute
  type cluster/distribute
  subvolumes replicate1 replicate2
end-volume

volume iocache
  type performance/io-cache
  option cache-size 1024MB         # default is 32MB
  option cache-timeout 1           # default is 1 second
  subvolumes distribute
end-volume

volume readahead
  type performance/read-ahead
  option page-count 16             # cache per file = (page-count x page-size)
  subvolumes iocache
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 512MB          # default is equal to aggregate-size
  option flush-behind on           # default is 'off'
  subvolumes readahead
end-volume

volume quickread
  type performance/quick-read
  option cache-timeout 1           # default is 1 second
  option max-file-size 256KB       # default is 64KB
  subvolumes writebehind
end-volume

volume iothreads
  type performance/io-threads
  option thread-count 8            # default is 16
  subvolumes quickread
end-volume
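The client volfile is then handed to the glusterfs FUSE client together with a mountpoint; a minimal sketch, with both paths assumed:

glusterfs -f /etc/glusterfs/client.vol /mnt/glusterfs   # volfile and mountpoint are assumed paths
df -h /mnt/glusterfs                                    # quick check that the volume is mounted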
Have you tested small files? With 4 KB or 40 KB files, GlusterFS is simply a tragedy.
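(A rough way to reproduce such a small-file test, with the mountpoint assumed, is to time a batch of 4 KB writes:)

time sh -c 'for i in $(seq 1 1000); do dd if=/dev/zero of=/mnt/glusterfs/small_$i bs=4k count=1 2>/dev/null; done'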
May I ask, are you using GlusterFS 3.3.x? Is this appended at the end of /var/lib/glusterd/glustershd/glustershd-server.vol?
Thanks
Hi, I have run into a problem where the client fails to mount. My server is CentOS 6.0 and the client is CentOS 5.8.
A one-to-one client mount succeeds, but when testing aggregation of multiple storage spaces, the client mount fails. Log below:
[2013-01-23 17:45:07.433922] I [glusterfsd.c:1666:main] 0-glusterfs: Started running glusterfs version 3.3.1
[2013-01-23 17:45:07.577579] W [xlator.c:202:xlator_dynload] 0-xlator: /usr/local/lib/glusterfs/3.3.1/xlator/cluster/unify.so: cannot open shared object file: No such file or directory
[2013-01-23 17:45:07.577632] E [graph.y:221:volume_type] 0-parser: Volume 'bricks', line 30: type 'cluster/unify' is not valid or not found on this machine
[2013-01-23 17:45:07.577655] E [graph.y:330:volume_end] 0-parser: "type" not specified for volume bricks
[2013-01-23 17:45:07.577691] E [glusterfsd.c:1551:glusterfs_process_volfp] 0-: failed to construct the graph
[2013-01-23 17:45:07.577888] W [glusterfsd.c:831:cleanup_and_exit] (-->glusterfs(main+0x4bc) [0x4065ac] (-->glusterfs(glusterfs_volumes_init+0x18b) [0x40502b] (-->glusterfs(glusterfs_process_volfp+0x17a) [0x404e8a]))) 0-: received signum (0), shutting down
[2013-01-23 17:45:07.577935] I [fuse-bridge.c:4648:fini] 0-fuse: Unmounting '/mnt'.
[2013-01-23 18:15:38.853272] I [glusterfsd.c:1666:main] 0-glusterfs: Started running glusterfs version 3.3.1
[2013-01-23 18:15:38.853619] E [fuse-bridge.c:4436:init] 0-fuse: Mountpoint /mnt seems to have a stale mount, run 'umount /mnt' and try again.
[2013-01-23 18:15:38.853638] E [xlator.c:385:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
[2013-01-23 18:17:23.631652] I [glusterfsd.c:1666:main] 0-glusterfs: Started running glusterfs version 3.3.1
[2013-01-23 18:17:23.631990] E [fuse-bridge.c:4436:init] 0-fuse: Mountpoint /mnt seems to have a stale mount, run 'umount /mnt' and try again.
[2013-01-23 18:17:23.632009] E [xlator.c:385:xlator_init] 0-fuse: Initialization of volume 'fuse' failed, review your volfile again
[2013-01-23 18:17:36.384016] I [glusterfsd.c:1666:main] 0-glusterfs: Started running glusterfs version 3.3.1
[2013-01-23 18:17:36.389887] W [xlator.c:202:xlator_dynload] 0-xlator: /usr/local/lib/glusterfs/3.3.1/xlator//legacy/cluster/unify.so: cannot open shared object file: No such file or directory
[2013-01-23 18:17:36.389922] E [graph.y:221:volume_type] 0-parser: Volume 'bricks', line 30: type '/legacy/cluster/unify' is not valid or not found on this machine
[2013-01-23 18:17:36.389945] E [graph.y:330:volume_end] 0-parser: "type" not specified for volume bricks
[2013-01-23 18:17:36.389980] E [glusterfsd.c:1551:glusterfs_process_volfp] 0-: failed to construct the graph
[2013-01-23 18:17:36.390117] W [glusterfsd.c:831:cleanup_and_exit] (-->glusterfs(main+0x4bc) [0x4065ac] (-->glusterfs(glusterfs_volumes_init+0x18b) [0x40502b] (-->glusterfs(glusterfs_process_volfp+0x17a) [0x404e8a]))) 0-: received signum (0), shutting down
[2013-01-23 18:17:36.390155] I [fuse-bridge.c:4648:fini] 0-fuse: Unmounting '/mnt'.
[2013-01-23 18:22:20.932637] I [glusterfsd.c:1666:main] 0-glusterfs: Started running glusterfs version 3.3.1
[2013-01-23 18:22:20.939303] W [xlator.c:202:xlator_dynload] 0-xlator: /usr/local/lib/glusterfs/3.3.1/xlator//legacy/cluster/unify.so: cannot open shared object file: No such file or directory
[2013-01-23 18:22:20.939339] E [graph.y:221:volume_type] 0-parser: Volume 'bricks', line 30: type '/legacy/cluster/unify' is not valid or not found on this machine
[2013-01-23 18:22:20.939367] E [graph.y:330:volume_end] 0-parser: "type" not specified for volume bricks
[2013-01-23 18:22:20.939400] E [glusterfsd.c:1551:glusterfs_process_volfp] 0-: failed to construct the graph
[2013-01-23 18:22:20.939541] W [glusterfsd.c:831:cleanup_and_exit] (-->glusterfs(main+0x4bc) [0x4065ac] (-->glusterfs(glusterfs_volumes_init+0x18b) [0x40502b] (-->glusterfs(glusterfs_process_volfp+0x17a) [0x404e8a]))) 0-: received signum (0), shutting down
[2013-01-23 18:22:20.939574] I [fuse-bridge.c:4648:fini] 0-fuse: Unmounting '/mnt'.
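The key line in the log is that cluster/unify cannot be loaded: the legacy unify translator is gone in GlusterFS 3.3.x, so a volfile that references it (line 30 of yours) cannot be parsed. The later "stale mount" errors only need an 'umount /mnt' before retrying, as the log itself says. For aggregating multiple bricks on 3.3.x, the volume definition would be rewritten to use cluster/distribute instead; a sketch, with the subvolume names assumed:

volume bricks
  type cluster/distribute    # unify was removed; distribute aggregates subvolumes and needs no namespace volume
  subvolumes client1 client2
end-volume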
To sum up:
A performance tuning example:
gluster volume start file-backup
gluster volume quota file-backup enable
gluster volume quota file-backup limit-usage / 800TB
gluster volume set file-backup auth.allow 192.168.10.31,192.168.12.27
gluster volume set file-backup performance.cache-size 4GB
gluster volume set file-backup performance.flush-behind on
gluster volume info file-backup
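(These commands assume the file-backup volume already exists; a hypothetical creation step, with hostnames and brick paths invented for illustration, would look like:)

gluster peer probe server2
gluster volume create file-backup replica 2 server1:/data/brick1 server2:/data/brick1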
Reference:
http://gluster.org/community/documentation/index.php/Translators
No need to modify the config files at all; they are tiring to look at!
# gluster volume set v3_upload performance.cache-size 4GB
volume set: success
# gluster volume set v3_upload performance.io-thread-count 32
volume set: success
# gluster volume info
Volume Name: v3_upload
Type: Striped-Replicate
Volume ID: 401b5343-df8f-4c5d-a1c2-0363fa9d4591
Status: Started
Number of Bricks: 1 x 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.16.1.103:/data/dsrv1/v1
Brick2: 10.16.1.97:/data/dsrv1/v2
Brick3: 10.16.1.103:/data/dsrv2/v3
Brick4: 10.16.1.97:/data/dsrv2/v4
Options Reconfigured:
performance.io-thread-count: 32
performance.cache-size: 4GB
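With the options applied, clients mount the managed volume by name from any of the servers; a sketch, with the mountpoint assumed:

mount -t glusterfs 10.16.1.103:/v3_upload /mnt/v3_upload   # native FUSE mount; mountpoint is assumed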