I. Concepts:
https://github.com/happyfish100/fastdfs
FastDFS is an open source high performance distributed file system (DFS). Its major functions include file storing, file syncing and file accessing, and it is designed for high capacity and load balancing.
https://github.com/happyfish100/fastdfs-client-java
https://github.com/happyfish100/fastdfs-nginx-module
FastDFS (fast distributed file system) is an open-source, lightweight distributed FS written in pure C. It supports Linux, FreeBSD and other Unix-like systems. Like GoogleFS, it is not a general-purpose FS and can only be accessed through its dedicated API; APIs are currently provided for C, Java, PHP and .NET. It is tailor-made for Internet applications and aims at high performance and high scalability. It can be viewed as a file-based key-value pair store, so "distributed file storage service" is a more accurate name, and it is best suited to small and medium files, roughly 4KB-500MB. It was first written at Ali by Yu Qing, architect at Yidao Yongche, and is now used at JD, Taobao, 58.com, UC and 51CTO;
Distributed FS can be classified by FUSE support (supports FUSE; does not support FUSE);
Distributed FS can also be classified by storage layout (files stored split into chunks; files stored as-is);
Note:
FUSE (Filesystem in Userspace) is the Linux module used to mount certain network resources, such as SSH, into the local file system; related material can be found on SourceForge. Three modes: kernel mode, user-space mode, and the mount tool;
FastDFS has two roles (tracker and storage):
tracker server (the tracking/scheduling server; it acts as the load balancer for access, keeps the status of every group and storage server in the cluster in memory, and is the hub between clients and storage servers. Because all of this information lives in memory, a tracker server's performance is very high and its own load is small; even a fairly large cluster, e.g. hundreds of groups, needs only three of them);
storage server (the storage server; files and file attributes such as metadata are kept on this server);
Note:
The servers within one storage group hold exactly the same content (comparable to RAID 1);
In the storage cluster, groups (also called volumes) are independent of one another and do not talk to each other; storage servers actively report their status to the tracker. The capacity of the whole system is the sum of all groups' capacities. A group consists of one or more storage servers, and every storage server in a group holds the same files, so the servers in a group provide redundant backup and load balancing. When a server is added to a group, the existing files are synchronized automatically by the system, and once synchronization finishes the new server is switched online to serve requests. When storage space runs low or is about to be exhausted, groups can be added dynamically: add one or more servers and configure them as a new group, and the capacity of the storage system grows;
The client contacts the tracker, and the tracker assigns a storage server to the client (similar to an HTTP 301 redirect);
To expand the capacity of the whole FastDFS cluster, just add groups to the storage tier;
FastDFS upload mechanism:
When a file is uploaded, the file_id is generated by the storage server and returned to the client; the file_id contains the group name, disk, directory and file name, so the storage server can locate the file directly from the file_id;
FastDFS file location (file_id):
the file_id contains the group name, disk, directory and file name; the storage server can locate the file directly from the file_id;
FastDFS download mechanism:
FastDFS sync mechanism:
FastDFS file lookup:
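The upload and download flows above are what every client library wraps: the client asks a tracker, the tracker assigns a storage server, and the storage server generates the file_id that is later used for download and delete. Below is a minimal Java sketch using fastdfs-client-java, assuming the usual ClientGlobal/TrackerClient/StorageClient1 API and a /etc/fdfs/client.conf listing the tracker servers (as configured later in section II); treat it as an illustration rather than a verified program:

import org.csource.fastdfs.*;

public class UploadDownloadDemo {
    public static void main(String[] args) throws Exception {
        // client.conf only needs the tracker addresses; the tracker picks the storage server.
        ClientGlobal.init("/etc/fdfs/client.conf");
        TrackerServer tracker = new TrackerClient().getConnection();
        StorageClient1 client = new StorageClient1(tracker, null);

        // Upload: the storage server generates and returns the file_id.
        String fileId = client.upload_file1("/etc/hosts", "txt", null);
        System.out.println(fileId);              // e.g. group1/M00/00/00/xxxx

        // Download: any client can fetch the file by file_id alone.
        byte[] data = client.download_file1(fileId);
        System.out.println(data.length + " bytes downloaded");

        tracker.close();
    }
}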
II. Operations:
node1 (test5; 192.168.23.133);
node2 (test6; 192.168.23.134);
node1 and node2 each act as both tracker server and storage server;
https://github.com/happyfish100/libfastcommon
[[email protected] ~]# git clone https://github.com/happyfish100/libfastcommon #(or download a release tarball, e.g. libfastcommon-1.0.7.tar.gz)
Initialized empty Git repository in /root/libfastcommon/.git/
remote: Counting objects: 1839, done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 1839 (delta 4), reused 0 (delta 0), pack-reused 1827
Receiving objects: 100% (1839/1839), 559.00 KiB | 250 KiB/s, done.
Resolving deltas: 100% (1299/1299), done.
[[email protected] ~]# cd libfastcommon/
[[email protected] libfastcommon]# ./make.sh
[[email protected] libfastcommon]# ./make.sh install
https://github.com/happyfish100/fastdfs/
fastdfs-5.08.tar.gz
[[email protected] libfastcommon]# cd
[[email protected] ~]# tar xf fastdfs-5.08.tar.gz
[[email protected] ~]# cd fastdfs-5.08
[[email protected] fastdfs-5.08]# ./make.sh
[[email protected] fastdfs-5.08]# ./make.sh install
[[email protected] fastdfs-5.08]# cd
[[email protected] ~]# ll /etc/fdfs/ #(three sample config files: client.conf.sample, storage.conf.sample, tracker.conf.sample)
total 20
-rw-r--r--. 1 root root 1461 Jan 12 16:57 client.conf.sample
-rw-r--r--. 1 root root 7927 Jan 12 16:57 storage.conf.sample
-rw-r--r--. 1 root root 7200 Jan 12 16:57 tracker.conf.sample
[[email protected] ~]# ls /usr/bin/fdfs_* #(the many fdfs command-line tools)
/usr/bin/fdfs_appender_test /usr/bin/fdfs_delete_file /usr/bin/fdfs_storaged /usr/bin/fdfs_upload_appender
/usr/bin/fdfs_appender_test1 /usr/bin/fdfs_download_file /usr/bin/fdfs_test /usr/bin/fdfs_upload_file
/usr/bin/fdfs_append_file /usr/bin/fdfs_file_info /usr/bin/fdfs_test1
/usr/bin/fdfs_crc32 /usr/bin/fdfs_monitor /usr/bin/fdfs_trackerd
[[email protected] ~]# ll /etc/init.d/{fdfs_storaged,fdfs_trackerd} #(init scripts)
-rwxr-xr-x. 1 root root 918 Jan 12 16:57 /etc/init.d/fdfs_storaged
-rwxr-xr-x. 1 root root 920 Jan 12 16:57 /etc/init.d/fdfs_trackerd
[[email protected] ~]# mkdir -pv /data/{fdfs_tracker,fdfs_storage/{base,store}}
mkdir: created directory `/data'
mkdir: created directory `/data/fdfs_tracker'
mkdir: created directory `/data/fdfs_storage'
mkdir: created directory `/data/fdfs_storage/base'
mkdir: created directory `/data/fdfs_storage/store'
Configure the tracker server:
[[email protected] ~]# cd /etc/fdfs
[[email protected] fdfs]# cp tracker.conf.sample tracker.conf
[[email protected] fdfs]# vim tracker.conf
disabled=false
bind_addr=
port=22122
connect_timeout=30
network_timeout=60
base_path=/data/fdfs_tracker
max_connections=256
accept_threads=1
store_lookup=2 #(0: round robin; 1:specify group; 2: load balance, select the max free space group to upload file)
store_server=0 #(0: round robin (default);1: the first server order by ip address; 2: the first server order by priority(the minimal))
store_path=0 #(0: round robin; 2: load balance, select the max free space path to upload file)
download_server=0 #(0: round robin (default);1: the source storage server which the current file uploaded to)
reserved_storage_space =10%
run_by_group=
run_by_user= #(if left empty, the daemon runs as whichever user started it)
allow_hosts=*
sync_log_buff_interval = 10
check_active_interval = 120
thread_stack_size = 64KB
storage_ip_changed_auto_adjust = true
[[email protected] ~]# /etc/init.d/fdfs_trackerd start
Starting FastDFS tracker server:
[[email protected] ~]# lsof -i:22122
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
fdfs_trac 2379 root 5u IPv4 16322 0t0 TCP *:22122 (LISTEN)
Configure the storage server:
[[email protected] fdfs]# cp storage.conf.sample storage.conf
[[email protected] fdfs]# vim storage.conf
group_name=group1
port=23000
network_timeout=60
heart_beat_interval=30
stat_report_interval=60
base_path=/data/fdfs_storage/base
max_connections=256
buff_size = 256KB
accept_threads=1
work_threads=4
disk_rw_separated = true
disk_reader_threads = 1
disk_writer_threads = 1
store_path_count=1 #(path (disk or mount point) count, default value is 1)
store_path0=/data/fdfs_storage/store
subdir_count_per_path=256
tracker_server=192.168.23.133:22122
tracker_server=192.168.23.134:22122
allow_hosts=*
[[email protected] ~]# /etc/init.d/fdfs_storaged start
Starting FastDFS storage server:
[[email protected] ~]# lsof -i:23000
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
fdfs_stor 2435 root 5u IPv4 85931 0t0 TCP *:inovaport1 (LISTEN)
fdfs_stor 2435 root 22u IPv4 85947 0t0 TCP test5:38563->test6:inovaport1 (ESTABLISHED)
Do all of the above on both node1 and node2;
Configure the client on node1:
[[email protected] fdfs]# cp client.conf.sample client.conf
[[email protected] fdfs]# vim client.conf
base_path=/tmp
tracker_server=192.168.23.133:22122
tracker_server=192.168.23.134:22122
Upload a file:
[[email protected] fdfs]# fdfs_upload_file --help
Usage: fdfs_upload_file <config_file> <local_filename> [storage_ip:port] [store_path_index]
[[email protected] fdfs]# fdfs_upload_file /etc/fdfs/client.conf /etc/hosts #(returns a file_id)
group1/M00/00/00/wKgXhVh4i9qAbesQAAAAyL1v7g09358326
[[email protected] fdfs]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.23.134 test6
192.168.23.133 test5
[[email protected] fdfs]# cat /data/fdfs_storage/store/data/00/00/wKgXhVh4i9qAbesQAAAAyL1v7g09358326
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.23.134 test6
192.168.23.133 test5
[[email protected] fdfs]# md5sum /etc/hosts
dec92ea815e2828dde8b607c599b54d5 /etc/hosts
[[email protected] fdfs]# md5sum /data/fdfs_storage/store/data/00/00/wKgXhVh4i9qAbesQAAAAyL1v7g09358326
dec92ea815e2828dde8b607c599b54d5 /data/fdfs_storage/store/data/00/00/wKgXhVh4i9qAbesQAAAAyL1v7g09358326
[[email protected] fdfs]# fdfs_file_info --help
Usage: fdfs_file_info <config_file> <file_id>
[[email protected] fdfs]# fdfs_file_info /etc/fdfs/client.conf group1/M00/00/00/wKgXhVh4i9qAbesQAAAAyL1v7g09358326
source storage id: 0
source ip address: 192.168.23.133
file create timestamp: 2017-01-13 00:12:10
file size: 200
file crc32: 3178229261 (0xBD6FEE0D)
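The same attributes can also be read programmatically. A hedged Java sketch with fastdfs-client-java, assuming StorageClient1.get_file_info1() and the FileInfo getters below exist in the client version you build (they mirror what fdfs_file_info prints); treat the method names as an approximation:

import org.csource.fastdfs.*;

public class FileInfoDemo {
    public static void main(String[] args) throws Exception {
        ClientGlobal.init("/etc/fdfs/client.conf");
        TrackerServer tracker = new TrackerClient().getConnection();
        StorageClient1 client = new StorageClient1(tracker, null);

        // args[0] is a file_id such as group1/M00/00/00/xxxx
        FileInfo info = client.get_file_info1(args[0]);
        System.out.println("source ip : " + info.getSourceIpAddr());
        System.out.println("file size : " + info.getFileSize());
        System.out.println("created   : " + info.getCreateTimestamp());
        System.out.println("crc32     : " + info.getCrc32());

        tracker.close();
    }
}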
[[email protected] fdfs]# fdfs_delete_file --help
Usage: fdfs_delete_file <config_file> <file_id>
[[email protected] fdfs]# fdfs_delete_file /etc/fdfs/client.conf group1/M00/00/00/wKgXhVh4i9qAbesQAAAAyL1v7g09358326
[[email protected] fdfs]# fdfs_upload_appender --help
Usage: fdfs_upload_appender <config_file> <local_filename>
[[email protected] fdfs]# vim /tmp/append.txt
1
2
3
[[email protected] fdfs]# fdfs_upload_appender /etc/fdfs/client.conf /tmp/append.txt
group1/M00/00/00/wKgXhlh4llOET8jHAAAAAMmGRic964.txt
[[email protected] fdfs]# vim /tmp/append2.txt
4
5
6
[[email protected] fdfs]# fdfs_append_file /etc/fdfs/client.conf group1/M00/00/00/wKgXhlh4llOET8jHAAAAAMmGRic964.txt /tmp/append2.txt
[[email protected] fdfs]# cat /data/fdfs_storage/store/data/00/00/wKgXhlh4llOET8jHAAAAAMmGRic964.txt
1
2
3
4
5
6
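The appender workflow above (create an appendable file, then append more data to it) is also exposed by the client libraries. A sketch with fastdfs-client-java, assuming upload_appender_file1()/append_file1() exist as in recent releases of the client; verify the names against the version you built:

import org.csource.fastdfs.*;

public class AppenderDemo {
    public static void main(String[] args) throws Exception {
        ClientGlobal.init("/etc/fdfs/client.conf");
        TrackerServer tracker = new TrackerClient().getConnection();
        StorageClient1 client = new StorageClient1(tracker, null);

        // Upload the base appender file, then append a second local file to it.
        String fileId = client.upload_appender_file1("/tmp/append.txt", "txt", null);
        client.append_file1(fileId, "/tmp/append2.txt");
        System.out.println(fileId);

        tracker.close();
    }
}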
[[email protected] ~]# fdfs_download_file --help
Usage: fdfs_download_file <config_file> <file_id> [local_filename] [<download_offset> <download_bytes>]
[[email protected] ~]# fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/wKgXhlh4llOET8jHAAAAAMmGRic964.txt
[[email protected] ~]# cat wKgXhlh4llOET8jHAAAAAMmGRic964.txt
1
2
3
4
5
6
[[email protected] ~]# fdfs_monitor /etc/fdfs/client.conf
[2017-01-13 01:01:57] DEBUG - base_path=/tmp, connect_timeout=30, network_timeout=60, tracker_server_count=2, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
server_count=2, server_index=0
tracker server is 192.168.23.133:22122
group count: 1
Group 1:
group name = group1
disk total space = 17909 MB
disk free space = 5493 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 8888
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0
Storage1:
id= 192.168.23.133
ip_addr= 192.168.23.133 (test5) ACTIVE
http domain =
version= 5.08
join time = 2017-01-12 23:21:24
up time = 2017-01-12 23:44:11
total storage = 17909 MB
free storage = 5493 MB
upload priority = 10
store_path_count= 1
subdir_count_per_path= 256
storage_port= 23000
storage_http_port= 8888
current_write_path= 0
source storage id = 192.168.23.134
if_trunk_server= 0
connection.alloc_count= 256
connection.current_count= 1
connection.max_count= 2
total_upload_count= 1
success_upload_count= 1
total_append_count= 0
success_append_count= 0
total_modify_count= 0
success_modify_count= 0
total_truncate_count= 0
success_truncate_count= 0
total_set_meta_count= 0
success_set_meta_count= 0
total_delete_count= 1
success_delete_count= 1
total_download_count= 0
success_download_count= 0
total_get_meta_count= 0
success_get_meta_count= 0
total_create_link_count= 0
success_create_link_count= 0
total_delete_link_count= 0
success_delete_link_count= 0
total_upload_bytes= 200
success_upload_bytes= 200
total_append_bytes= 0
success_append_bytes= 0
total_modify_bytes= 0
success_modify_bytes= 0
total_download_bytes= 0
success_download_bytes= 0
total_sync_in_bytes= 12
success_sync_in_bytes= 12
total_sync_out_bytes= 0
success_sync_out_bytes= 0
total_file_open_count= 3
success_file_open_count= 3
total_file_read_count= 0
success_file_read_count= 0
total_file_write_count= 3
success_file_write_count= 3
last_heart_beat_time= 2017-01-13 01:01:56
last_source_update= 2017-01-13 00:29:41
last_sync_update= 2017-01-13 00:58:39
last_synced_timestamp= 2017-01-13 00:58:39 (0s delay)
Storage2:
id= 192.168.23.134
ip_addr= 192.168.23.134 (test6) ACTIVE
……
[[email protected] ~]# /etc/init.d/fdfs_storaged stop #(stop the storage server on node2)
waiting for pid [2759] exit ...
pid [2759] exit.
[[email protected] ~]# fdfs_monitor /etc/fdfs/client.conf
Storage2:
id= 192.168.23.134
ip_addr= 192.168.23.134 (test6) OFFLINE
[[email protected] ~]# fdfs_monitor /etc/fdfs/client.conf delete group1 192.168.23.134 #(to delete a problematic node, first stop the service on that node)
[2017-01-13 01:12:28] DEBUG - base_path=/tmp, connect_timeout=30, network_timeout=60, tracker_server_count=2, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
server_count=2, server_index=1
tracker server is 192.168.23.134:22122
delete storage server group1::192.168.23.134 success
[[email protected] ~]# fdfs_monitor /etc/fdfs/client.conf
Storage2:
id= 192.168.23.134
ip_addr= 192.168.23.134 (test6) DELETED
[[email protected] ~]# /etc/init.d/fdfs_storaged start #(start the storaged service on node2 again)
Starting FastDFS storage server:
[[email protected] ~]# fdfs_monitor /etc/fdfs/client.conf
Storage2:
id= 192.168.23.134
ip_addr= 192.168.23.134 (test6) ACTIVE
III. Using php_client, fastdfs-client-java and fastdfs-nginx-module:
1. php_client:
Set up an LNMP environment first;
[[email protected] ~]# cd fastdfs-5.08/php_client/
[[email protected] php_client]# /usr/local/php/bin/phpize
Configuring for:
PHP Api Version: 20131106
Zend Module Api No: 20131226
Zend Extension Api No: 220131226
[[email protected] php_client]# ./configure --with-php-config=/usr/local/php/bin/php-config
[[email protected] php_client]# make && make install
Build complete.
Don't forget to run 'make test'.
Installing shared extensions: /usr/local/php/lib/php/extensions/no-debug-non-zts-20131226/
[[email protected] php_client]# ll /usr/local/php/lib/php/extensions/no-debug-non-zts-20131226/
total 2004
-rwxr-xr-x. 1 root root 346393 Jan 13 22:40 fastdfs_client.so
-rwxr-xr-x. 1 root root 1112480 Jan 13 22:15 opcache.a
-rwxr-xr-x. 1 root root 589060 Jan 13 22:15 opcache.so
[[email protected] php_client]# cat fastdfs_client.ini >> /etc/php.ini #(run /usr/local/php/bin/php -i | grep php.ini to find the location of php.ini)
[[email protected] php_client]# vim fastdfs_test.php #(sample PHP code exercising the FastDFS extension)
[[email protected] php_client]# /usr/local/php/bin/php fastdfs_test.php
5.08
fastdfs_tracker_make_all_connections result: 1
……
delete file group1/M00/00/00/wKgXhlh5yi-AZL9sAAAAD61kmgs188.bin return: 1
bool(true)
tracker_close_all_connections result: 1
2. fastdfs-client-java:
https://github.com/happyfish100/fastdfs-client-java
[[email protected] ~]# java -version #(a Java environment must be installed)
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
[[email protected] ~]# git clone https://github.com/happyfish100/fastdfs-client-java
Initialized empty Git repository in /root/fastdfs-client-java/.git/
remote: Counting objects: 60, done.
remote: Total 60 (delta 0), reused 0 (delta 0), pack-reused 60
Unpacking objects: 100% (60/60), done.
[[email protected] ~]# cd fastdfs-client-java/src
[[email protected] src]# yum -y install ant
[[email protected] src]# which ant
/usr/bin/ant
[[email protected] src]# ant
Buildfile: build.xml
init:
compile:
[mkdir] Created dir: /root/fastdfs-client-java/src/build/classes
[javac] Compiling 32 source files to /root/fastdfs-client-java/src/build/classes
[javac] This version of java does not support the classic compiler; upgrading to modern
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
jar:
[jar] Building jar: /root/fastdfs-client-java/src/build/fastdfs_client.jar
BUILD SUCCESSFUL
Total time: 4 seconds
[[email protected] src]# ll build #(produces build/fastdfs_client.jar)
total 100
drwxr-xr-x. 3 root root 4096 Jan 13 23:12 classes
-rw-r--r--. 1 root root 94216 Jan 13 23:12 fastdfs_client.jar
[[email protected] src]# cd build
[[email protected] build]# java -cp fastdfs_client.jar org.csource.fastdfs.test.TestClient /etc/fdfs/client.conf /etc/resolv.conf #(-cp classpath)
java.version=1.8.0_111
network_timeout=60000ms
charset=ISO8859-1
file length: 14
store storage servers count: 2
1. 192.168.23.133:23000
2. 192.168.23.134:23000
upload_file time used: 75 ms
group_name: group1, remote_filename: M00/00/00/wKgXhlh50OaAIv-2AAAADv4ZzcQ659.txt
source_ip_addr = 192.168.23.134, file_size = 14, create_timestamp = 2017-01-13 23:19:02, crc32 = -31863356
storage servers count: 1
1. 192.168.23.134:23000
set_metadata time used: 6 ms
set_metadata success
author Mike
bgcolor #000000
heigth 768
title Untitle
width 1024
download_file time used: 2 ms
file length:14
this is a test
upload_file time used: 46 ms
slave file group_name: group1, remote_filename: M00/00/00/wKgXhlh50OaAIv-2AAAADv4ZzcQ659-part1.txt
source_ip_addr = 192.168.23.134, file_size = 20, create_timestamp = 2017-01-13 23:19:02, crc32 = -31863356
delete_file time used: 2 ms
Delete file success
group_name: group1, remote_filename: M00/00/00/wKgXhVh50MmAQEs9AAAAXJXZQnk07.conf
source_ip_addr = 192.168.23.133, file_size = 92, create_timestamp = 2017-01-13 23:18:33, crc32 = -1780923783
file url: http://192.168.23.134/group1/M00/00/00/wKgXhVh50MmAQEs9AAAAXJXZQnk07.conf
Download file success
Download file success
upload_file time used: 57 ms
slave file group_name: group1, remote_filename: M00/00/00/wKgXhVh50MmAQEs9AAAAXJXZQnk07-part2.conf
source_ip_addr = 192.168.23.133, file_size = 92, create_timestamp = 2017-01-13 23:18:33, crc32 = -1780923783
group name: group1, remote filename: M00/00/00/wKgXhlh50OeAH48FAAAAXJXZQnk39.conf
source_ip_addr = 192.168.23.134, file_size = 92, create_timestamp = 2017-01-13 23:19:03, crc32 = -1780923783
upload_file time used: 59 ms
slave file group_name: group1, remote_filename: M00/00/00/wKgXhlh50OeAH48FAAAAXJXZQnk39-part3.conf
source_ip_addr = 192.168.23.134, file_size = 92, create_timestamp = 2017-01-13 23:19:03, crc32 = -1780923783
active test to storage server: true
active test to tracker server: true
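TestClient also exercises per-file metadata (the author/width/height pairs above). A minimal sketch of doing the same directly, assuming set_metadata1()/get_metadata1() and ProtoCommon.STORAGE_SET_METADATA_FLAG_OVERWRITE are available as in current fastdfs-client-java sources; treat it as illustrative:

import org.csource.common.NameValuePair;
import org.csource.fastdfs.*;

public class MetadataDemo {
    public static void main(String[] args) throws Exception {
        ClientGlobal.init("/etc/fdfs/client.conf");
        TrackerServer tracker = new TrackerClient().getConnection();
        StorageClient1 client = new StorageClient1(tracker, null);

        String fileId = client.upload_file1("/etc/resolv.conf", "conf", null);

        // Attach key/value attributes to the file; the overwrite flag replaces existing metadata.
        NameValuePair[] meta = {
            new NameValuePair("width", "1024"),
            new NameValuePair("author", "Mike")
        };
        client.set_metadata1(fileId, meta, ProtoCommon.STORAGE_SET_METADATA_FLAG_OVERWRITE);

        for (NameValuePair nv : client.get_metadata1(fileId)) {
            System.out.println(nv.getName() + " = " + nv.getValue());
        }
        tracker.close();
    }
}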
[[email protected] build]# java -cp fastdfs_client.jar org.csource.fastdfs.test.Monitor /etc/fdfs/client.conf
java.version=1.8.0_111
network_timeout=60000ms
charset=ISO8859-1
group count: 1
Group 1:
group name = group1
disk total space = 17909 MB
disk free space = 3229 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 8888
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0
Storage1:
storage id = 192.168.23.133
ip_addr= 192.168.23.133 ACTIVE
http domain =
version= 5.08
join time = 2017-01-12 23:21:24
up time = 2017-01-12 23:44:11
total storage = 17909 MB
free storage = 3229 MB
upload priority = 10
store_path_count= 1
subdir_count_per_path= 256
storage_port= 23000
storage_http_port= 8888
current_write_path= 0
source ip_addr =
if_trunk_server= false
connection.alloc_count = 256
connection.current_count = 1
connection.max_count = 3
total_upload_count= 13
success_upload_count= 13
total_append_count= 0
success_append_count= 0
total_modify_count= 0
success_modify_count= 0
total_truncate_count= 0
success_truncate_count= 0
total_set_meta_count= 6
success_set_meta_count= 6
total_delete_count= 11
success_delete_count= 11
total_download_count= 6
success_download_count= 6
total_get_meta_count= 2
success_get_meta_count= 2
total_create_link_count= 0
success_create_link_count= 0
total_delete_link_count= 0
success_delete_link_count= 0
total_upload_bytes= 189858
success_upload_bytes= 189858
total_append_bytes= 0
success_append_bytes= 0
total_modify_bytes= 0
success_modify_bytes= 0
total_download_bytes= 214
success_download_bytes= 214
total_sync_in_bytes= 405
success_sync_in_bytes= 405
total_sync_out_bytes= 0
success_sync_out_bytes= 0
total_file_open_count= 29
success_file_open_count= 29
total_file_read_count= 8
success_file_read_count= 8
total_file_write_count= 21
success_file_write_count= 21
last_heart_beat_time= 2017-01-13 23:22:26
last_source_update= 2017-01-13 23:18:32
last_sync_update= 2017-01-13 23:18:40
last_synced_timestamp= 2017-01-13 23:19:02 (0s delay)
Storage2:
storage id = 192.168.23.134
ip_addr= 192.168.23.134 ACTIVE
……
3. fastdfs-nginx-module:
https://github.com/happyfish100/fastdfs-nginx-module
Note: in production, fastdfs-nginx-module must be installed on every storage server;
[[email protected] ~]# git clone https://github.com/happyfish100/fastdfs-nginx-module
Initialized empty Git repository in /root/fastdfs-nginx-module/.git/
remote: Counting objects: 52, done.
remote: Total 52 (delta 0), reused 0 (delta 0), pack-reused 52
Unpacking objects: 100% (52/52), done.
Work on the host with the LNMP environment:
[[email protected] nginx-1.8.0]# ./configure --prefix=/usr --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx/nginx.pid --lock-path=/var/lock/nginx.lock --user=nginx --group=nginx --with-http_ssl_module --with-http_flv_module --with-http_stub_status_module --with-http_gzip_static_module --http-client-body-temp-path=/var/tmp/nginx/client/ --http-proxy-temp-path=/var/tmp/nginx/proxy/ --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi --http-scgi-temp-path=/var/tmp/nginx/scgi --with-pcre --add-module=../fastdfs-nginx-module/src/
[[email protected] nginx-1.8.0]# make #(only run make, not make install; make install would overwrite the previous installation, so instead just replace the old /usr/sbin/nginx binary with the newly built nginx-1.8.0/objs/nginx)
[[email protected] nginx-1.8.0]# ll -h objs/nginx
-rwxr-xr-x. 1 root root 5.4M Jan 14 00:10 objs/nginx
[[email protected] nginx-1.8.0]# ll -h /usr/sbin/nginx
-rwxr-xr-x. 1 root root 5.3M Jan 13 21:33 /usr/sbin/nginx
[[email protected] nginx-1.8.0]# cp objs/nginx /usr/sbin/nginx
cp: overwrite `/usr/sbin/nginx'? y
[[email protected] nginx-1.8.0]# cd ../fastdfs-nginx-module/src
[[email protected] src]# cp mod_fastdfs.conf /etc/fdfs/
[[email protected] src]# cp /root/fastdfs-5.08/conf/{anti-steal.jpg,http.conf,mime.types} /etc/fdfs/
[[email protected] src]# touch /var/log/mod_fastdfs.log
[[email protected] src]# chown nginx.nginx /var/log/mod_fastdfs.log
[[email protected] src]# vim /etc/nginx/nginx.conf #(location /group1/M00 corresponds to url_have_group_name = true in mod_fastdfs.conf; the default is false)
location / {
root html;
index index.php index.html index.htm;
}
location /group1/M00 {
root /data/fdfs_storage/store;
ngx_fastdfs_module;
}
[[email protected] src]# vim /etc/fdfs/mod_fastdfs.conf #(url_have_group_name = true corresponds to location /group1/M00 in nginx.conf)
tracker_server=192.168.23.133:22122
tracker_server=192.168.23.134:22122
url_have_group_name = true
store_path0=/data/fdfs_storage/store
log_filename=/var/log/mod_fastdfs.log
response_mode=proxy #(response mode when the file does not exist in the local file system. proxy: fetch the content from another storage server and send it to the client. redirect: redirect to the original storage server (HTTP Location header))
[[email protected] src]# /etc/init.d/nginx restart
ngx_http_fastdfs_set pid=112824
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Stopping nginx: [ OK ]
Starting nginx: ngx_http_fastdfs_set pid=112918
[ OK ]
[ OK ]
[[email protected] src]# cd
[[email protected] ~]# fdfs_upload_file /etc/fdfs/client.conf DSC_0171.JPG
group1/M00/00/00/wKgXhVh545iAP6ETADyH-kmtQ2U888.JPG
http://192.168.23.133/group1/M00/00/00/wKgXhVh545iAP6ETADyH-kmtQ2U888.JPG
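For clarity, the returned file_id maps to the nginx URL and, for store path M00, to a path under store_path0 roughly as sketched below. The actual resolution is done by fastdfs-nginx-module; the constants are just this lab's values and the code is illustrative only:

public class FdfsUrlMapping {
    public static void main(String[] args) {
        String fileId = "group1/M00/00/00/wKgXhVh545iAP6ETADyH-kmtQ2U888.JPG";
        String storageHost = "192.168.23.133";           // a storage node fronted by nginx
        String storePath0 = "/data/fdfs_storage/store";  // store_path0 in storage.conf / mod_fastdfs.conf

        // url_have_group_name = true keeps the group name in the URL path.
        String url = "http://" + storageHost + "/" + fileId;

        // group1/M00/00/00/xxx -> <store_path0>/data/00/00/xxx (M00 means store_path0).
        String rel = fileId.substring(fileId.indexOf("/M00/") + "/M00/".length());
        String diskPath = storePath0 + "/data/" + rel;

        System.out.println(url);       // what clients request through nginx
        System.out.println(diskPath);  // where the file actually lives on the storage server
    }
}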
Appendix:
Copyright (C) 2008 Happy Fish / YuQing
FastDFS may be copied only under the terms of the GNU General Public License V3, which may be found in the FastDFS source kit. Please visit the FastDFS Home Page for more detail. English language: http://english.csource.org/ Chinese language: http://www.csource.org/
FastDFS is an open source high performance distributed file system. Its major functions include: file storing, file syncing and file accessing (file uploading and file downloading), and it can resolve the high capacity and load balancing problem. FastDFS should meet the requirements of websites whose services are based on files, such as photo sharing sites and video sharing sites.
FastDFS has two roles: tracker and storage. The tracker takes charge of scheduling and load balancing for file access. The storage stores files, and its function is file management, including file storing, file syncing and providing the file access interface. It also manages the metadata, which are attributes of the file represented as key-value pairs. For example: width=1024, where the key is "width" and the value is "1024".
The tracker and storage contain one or more servers. The servers in the tracker or storage cluster can be added to or removed from the cluster at any time without affecting the online services. The servers in the tracker cluster are peers.
The storage servers are organized by file volume (group) to obtain high capacity. The storage system contains one or more volumes whose files are independent among these volumes. The capacity of the whole storage system equals the sum of all volumes' capacities. A file volume contains one or more storage servers whose files are the same among these servers. The servers in a file volume back up each other, and all of these servers are load balanced. When a storage server is added to a volume, files already existing in this volume are replicated to the new server automatically, and when this replication is done, the system switches the server online to provide storage services.
When the whole storage capacity is insufficient, you can add one or more volumes to expand the storage capacity. To do this, you need to add one or more storage servers.
The identification of a file is composed of two parts: the volume name and the file name.
For client test code using the client library, please refer to the directory client/test.