MFS High-Availability Distributed Storage File System


MFS Distributed File System

OS: Red Hat

Machines: 192.168.1.248 (Master)

  1. 192.168.1.249 (Backup)
  2. 192.168.1.250 (Chunkserver 1)
  3. 192.168.1.238 (Chunkserver 2)
  4. 192.168.1.251 (Client)

Installing MFS on the Master:

Before configuring, make sure SELinux and iptables are disabled on all five machines.
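A minimal sketch of disabling both (standard commands on RHEL 5-era systems; run on each machine):

# setenforce 0

# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# service iptables stop

# chkconfig iptables off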

# useradd mfs

# tar zvxf mfs-1.6.17.tar.gz

# cd mfs-1.6.17

# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount

# make

# make install

# cd /usr/local/mfs/etc

# mv mfsexports.cfg.dist mfsexports.cfg

# mv mfsmaster.cfg.dist mfsmaster.cfg

# mv mfsmetalogger.cfg.dist mfsmetalogger.cfg

# cd /usr/local/mfs/var/mfs/

# mv metadata.mfs.empty metadata.mfs

# echo "192.168.1.248 mfsmaster" >> /etc/hosts

The mfsmaster.cfg configuration file contains the settings for the master (control) server.

mfsexports.cfg specifies which client hosts may remotely mount the MooseFS file system and what access rights the mounting clients are granted. By default, / is shared with all hosts.
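For reference, an entry in mfsexports.cfg takes the form "address directory options"; a line roughly equivalent to the shipped default, sharing / read-write with every host, looks like:

*       /       rw,alldirs,maproot=0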

Try starting the master service (it will run as the user given to configure at build time, i.e. mfs):

# /usr/local/mfs/sbin/mfsmaster start

working directory: /usr/local/mfs/var/mfs

lockfile created and locked

initializing mfsmaster modules …

loading sessions … ok

sessions file has been loaded

exports file has been loaded

loading metadata …

create new empty filesystem

metadata file has been loaded

no charts data file - initializing empty charts

master <-> metaloggers module: listen on *:9419

master <-> chunkservers module: listen on *:9420

main master server module: listen on *:9421

mfsmaster daemon initialized properly

To monitor the current running state of MooseFS, we can start the CGI monitoring service, which lets us view the whole MooseFS system in a browser:

# /usr/local/mfs/sbin/mfscgiserv

starting simple cgi server (host: any , port: 9425 , rootpath: /usr/local/mfs/share/mfscgi)

Now we can enter http://192.168.1.248:9425 in the browser address bar to view the master's status (at this point, no chunk server data is visible yet).

Backup server configuration (its role is to replace the Master on failure):

# useradd mfs

# tar zvxf mfs-1.6.17.tar.gz

# cd mfs-1.6.17

# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfschunkserver --disable-mfsmount

# make

# make install

# cd /usr/local/mfs/etc

# cp mfsmetalogger.cfg.dist mfsmetalogger.cfg

# cp mfsexports.cfg.dist mfsexports.cfg

# cp mfsmaster.cfg.dist mfsmaster.cfg

# echo "192.168.1.248 mfsmaster" >> /etc/hosts
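The metalogger locates the master through mfsmetalogger.cfg. Thanks to the /etc/hosts entry above, the stock defaults (as shipped in the 1.6.x sample config) already resolve correctly, so nothing needs editing:

MASTER_HOST = mfsmaster

MASTER_PORT = 9419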

# /usr/local/mfs/sbin/mfsmetalogger start

working directory: /usr/local/mfs/var/mfs

lockfile created and locked

initializing mfsmetalogger modules …

mfsmetalogger daemon initialized properly

Chunkserver configuration (chunkservers store the data chunks; every chunkserver is configured the same way):

# useradd mfs

# tar zvxf mfs-1.6.17.tar.gz

# cd mfs-1.6.17

# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster

# make

# make install

# cd /usr/local/mfs/etc

# cp mfschunkserver.cfg.dist mfschunkserver.cfg

# cp mfshdd.cfg.dist mfshdd.cfg

# echo "192.168.1.248 mfsmaster" >> /etc/hosts

It is recommended to set aside dedicated space on each chunk server for MooseFS; this makes it easier to manage the free space. The share points used here are /mfs1 and /mfs2.

In the configuration file mfshdd.cfg we specify the shared storage locations used for the root partition of the MooseFS distributed file system that clients mount.

# vi /usr/local/mfs/etc/mfshdd.cfg

# add the following 2 lines

/mfs1

/mfs2
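If you do carve out dedicated partitions for these two paths, the preparation might look like the sketch below (the device names /dev/sdb1 and /dev/sdb2 are assumptions; substitute your own):

# mkfs.ext3 /dev/sdb1

# mkfs.ext3 /dev/sdb2

# mkdir -p /mfs1 /mfs2

# mount /dev/sdb1 /mfs1

# mount /dev/sdb2 /mfs2

# echo "/dev/sdb1 /mfs1 ext3 defaults 0 0" >> /etc/fstab

# echo "/dev/sdb2 /mfs2 ext3 defaults 0 0" >> /etc/fstab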

# chown -R mfs:mfs /mfs*

Start the chunk server:

# /usr/local/mfs/sbin/mfschunkserver start

working directory: /usr/local/mfs/var/mfs

lockfile created and locked

initializing mfschunkserver modules …

hdd space manager: scanning folder /mfs2/ …

hdd space manager: scanning folder /mfs1/ …

hdd space manager: /mfs1/: 0 chunks found

hdd space manager: /mfs2/: 0 chunks found

hdd space manager: scanning complete

main server module: listen on *:9422

no charts data file - initializing empty charts

mfschunkserver daemon initialized properly

Visiting http://192.168.1.248:9425 in a browser now shows the complete picture of the MooseFS system, including the master and the chunkserver storage services.

Client configuration (mounting the MFS share on clients):

Prerequisites:

Every client needs FUSE installed. For kernel 2.6.18-128.el5 install fuse-2.7.6.tar.gz; for kernel 2.6.18-194.11.3.el5 install fuse-2.8.4 instead, otherwise it will fail with errors.

Append the following line to the end of /etc/profile: export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH

Then run source /etc/profile to make it take effect.

# tar xzvf fuse-2.7.6.tar.gz

# cd fuse-2.7.6

# ./configure --enable-kernel-module

# make && make install

If the installation succeeded you will find the kernel module /lib/modules/2.6.18-128.el5/kernel/fs/fuse/fuse.ko.

Then run modprobe fuse.

Check that it loaded successfully: lsmod | grep fuse

# useradd mfs

# tar zvxf mfs-1.6.17.tar.gz

# cd mfs-1.6.17

# ./configure --prefix=/usr/local/mfs --with-default-user=mfs --with-default-group=mfs --disable-mfsmaster --disable-mfschunkserver

# make

# make install

# echo "192.168.1.248 mfsmaster" >> /etc/hosts

Mount operation:

# mkdir -p /data/mfs

# /usr/local/mfs/bin/mfsmount /data/mfs -H mfsmaster

mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root

Now set up the replica goals; this only needs to be done on a single client.

# cd /data/mfs/

Replica count 1:

# mkdir floder1

Replica count 2:

# mkdir floder2

Replica count 3:

# mkdir floder3

Use the command mfssetgoal -r to set the replica goal for the files in each directory:

# /usr/local/mfs/bin/mfssetgoal -r 1 /data/mfs/floder1

/data/mfs/floder1:

inodes with goal changed: 0

inodes with goal not changed: 1

inodes with permission denied: 0

# /usr/local/mfs/bin/mfssetgoal -r 2 /data/mfs/floder2

/data/mfs/floder2:

inodes with goal changed: 1

inodes with goal not changed: 0

inodes with permission denied: 0

# /usr/local/mfs/bin/mfssetgoal -r 3 /data/mfs/floder3

/data/mfs/floder3:

inodes with goal changed: 1

inodes with goal not changed: 0

inodes with permission denied: 0
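Before copying any data in, you can confirm the goals took effect with the companion query command mfsgetgoal (add -r to check recursively):

# /usr/local/mfs/bin/mfsgetgoal /data/mfs/floder1

# /usr/local/mfs/bin/mfsgetgoal /data/mfs/floder2

# /usr/local/mfs/bin/mfsgetgoal /data/mfs/floder3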

Copy a file in to test:

# cp /root/mfs-1.6.17.tar.gz /data/mfs/floder1

# cp /root/mfs-1.6.17.tar.gz /data/mfs/floder2

# cp /root/mfs-1.6.17.tar.gz /data/mfs/floder3

The mfscheckfile command reports how many copies a given file is stored with.

The file in directory floder1 is stored as a single copy in one chunk:

# /usr/local/mfs/bin/mfscheckfile /data/mfs/floder1/mfs-1.6.17.tar.gz

/data/mfs/floder1/mfs-1.6.17.tar.gz:

1 copies: 1 chunks

In directory floder2 it is stored as two copies:

# /usr/local/mfs/bin/mfscheckfile /data/mfs/floder2/mfs-1.6.17.tar.gz

/data/mfs/floder2/mfs-1.6.17.tar.gz:

2 copies: 1 chunks

In directory floder3 it is stored as three copies:

# /usr/local/mfs/bin/mfscheckfile /data/mfs/floder3/mfs-1.6.17.tar.gz

/data/mfs/floder3/mfs-1.6.17.tar.gz:

3 copies: 1 chunks

Stopping MooseFS

To stop the MooseFS cluster safely, the following steps are recommended:

· Unmount the file system on all clients first with the umount command (in this example: umount /data/mfs)

· Stop the chunk server processes: /usr/local/mfs/sbin/mfschunkserver stop

· Stop the metalogger process: /usr/local/mfs/sbin/mfsmetalogger stop

· Stop the master server process: /usr/local/mfs/sbin/mfsmaster stop

Notes:

1. Back up the file /usr/local/mfs/var/mfs/metadata.mfs.back on a regular schedule.
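A minimal sketch of such a scheduled backup via cron (the destination /backup/mfs is an assumption; ideally keep it on a different disk or host):

# mkdir -p /backup/mfs

# crontab -e

0 2 * * * cp /usr/local/mfs/var/mfs/metadata.mfs.back /backup/mfs/metadata.mfs.back.$(date +\%Y\%m\%d)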

Failure recovery

Copy the backup file to the Backup server (metalogger):

# scp metadata.mfs.back root@192.168.1.249:/usr/local/mfs/var/mfs/

On the Backup server (metalogger), run:

# /usr/local/mfs/sbin/mfsmetarestore -a

loading objects (files,directories,etc.) … ok

loading names … ok

loading deletion timestamps … ok

checking filesystem consistency … ok

loading chunks data … ok

connecting files and chunks … ok

applying changes from file: /usr/local/mfs/var/mfs/changelog_ml.1.mfs

meta data version: 23574

version after applying changelog: 23574

applying changes from file: /usr/local/mfs/var/mfs/changelog_ml.0.mfs

meta data version: 23574

version after applying changelog: 23583

store metadata into file: /usr/local/mfs/var/mfs/metadata.mfs

Edit the hosts file, changing the IP that mfsmaster points to:

192.168.1.249 mfsmaster

Start the master:

# /usr/local/mfs/sbin/mfsmaster start

working directory: /usr/local/mfs/var/mfs

lockfile created and locked

initializing mfsmaster modules …

loading sessions … ok

sessions file has been loaded

exports file has been loaded

loading metadata …

loading objects (files,directories,etc.) … ok

loading names … ok

loading deletion timestamps … ok

checking filesystem consistency … ok

loading chunks data … ok

connecting files and chunks … ok

all inodes: 2381

directory inodes: 104

file inodes: 2277

chunks: 2185

metadata file has been loaded

no charts data file - initializing empty charts

master <-> metaloggers module: listen on *:9419

master <-> chunkservers module: listen on *:9420

main master server module: listen on *:9421

mfsmaster daemon initialized properly

On all clients, run:

# umount /data/mfs

On all storage nodes, stop the mfschunkserver service:

# /usr/local/mfs/sbin/mfschunkserver stop

On all clients and chunkservers, edit the hosts file, changing the IP for mfsmaster:

192.168.1.249 mfsmaster

On all storage nodes, start the service:

# /usr/local/mfs/sbin/mfschunkserver start

On all clients, mount again:

# /usr/local/mfs/bin/mfsmount /data/mfs -H mfsmaster

MFS aggregates the shared-directory capacity of all chunkservers into one total storage pool; the directory size a client sees is the sum of the directory space shared by all chunkservers.
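You can see this from any client: df shows the aggregated pool, and the mfsdirinfo tool reports per-directory usage (both are read-only checks):

# df -h /data/mfs

# /usr/local/mfs/bin/mfsdirinfo /data/mfs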

As for Master/Backup high availability, DRBD can be used to remove the Master single point of failure; I will publish a separate document on that later.

For mount permissions, you can restrict the IP ranges allowed to mount on the Master (mfsexports.cfg), or control access with iptables.
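For example, a hypothetical mfsexports.cfg entry that limits read-write mounts to the 192.168.1.0/24 subnet could look like:

192.168.1.0/24       /       rw,alldirs,maproot=0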

Recovering accidentally deleted files:

Mount the MFS meta file system (create the /data/reco directory first as the recovery mount point; if a password prompt appears, enter the password configured on the mfsmaster), then enter /data/reco/trash:

# mkdir -p /data/reco

# /usr/local/mfs/bin/mfsmount /data/reco -H mfsmaster -p -m

# cd /data/reco/trash

# mv 00* ./undel/

Then check the original mounted directory; the deleted files have all been restored.
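If the trash holds many entries, moving everything matching 00* is heavy-handed. A hedged sketch for restoring a single file by name (the pattern mfs-1.6.17.tar.gz is just an example; -maxdepth 1 keeps find out of the undel directory itself):

# cd /data/reco/trash

# find . -maxdepth 1 -name '*mfs-1.6.17.tar.gz*' -exec mv {} ./undel/ \;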
