LeoFS: a dark horse among object storage systems

Building and testing a LeoFS cluster on CentOS (Part 2)

Configuring LeoFS on CentOS

Basic cluster layout

Manager

IP: 10.39.1.23, 10.39.1.24

Name: manager_0@10.39.1.23, manager_1@10.39.1.24

Gateway

IP: 10.39.1.28

Name: gateway_0@10.39.1.28

Storage

IP: 10.39.1.25, 10.39.1.26, 10.39.1.27

Name: storage_0@10.39.1.25, storage_0@10.39.1.26, storage_0@10.39.1.27

LeoFS is a highly available, distributed, eventually consistent storage system for unstructured web objects.

On 10.39.1.23:

yum install epel-release -y

yum install ansible -y

Configure the Ansible inventory (a minimal sketch follows), then run Ansible to install the build dependencies on every node:
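The host groups used throughout this post (leofs, leofs_storage) have to exist in the inventory first. A minimal sketch of /etc/ansible/hosts, assuming leofs covers all six nodes and leofs_storage only the three storage nodes:

[leofs]
10.39.1.23
10.39.1.24
10.39.1.25
10.39.1.26
10.39.1.27
10.39.1.28

[leofs_storage]
10.39.1.25
10.39.1.26
10.39.1.27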

ansible leofs -m shell -a "yum install gcc gcc-c++ glibc-devel make ncurses-devel openssl-devel autoconf  libuuid-devel cmake check check-devel   wget curl git gcc* vim nc -y "

Set the hostname and /etc/hosts entries on every machine:

10.39.1.23       leofs_01

10.39.1.24       leofs_02

10.39.1.25       storage_01

10.39.1.26       storage_02

10.39.1.27       storage_03

10.39.1.28       gw_01

Generate an SSH key:

ssh-keygen

Push the key to the other servers with Ansible:

ansible all -m authorized_key -a "user=root key=\"{{ lookup('file', '~/.ssh/id_rsa.pub') }}\""

Edit the hostname and hosts file on each server:

vim /etc/hostname

vim /etc/hosts

Verify:

ansible leofs -m shell  -a "hostname"

ansible leofs -m shell  -a "cat /etc/hosts"

Download the LeoFS package from:

http://leo-project.net/leofs/download.html

Grab the latest CentOS 7 RPM (1.3.2.1).

Copy the RPM to all servers and install it:

ansible all -m copy -a 'src=~/leofs-1.3.2.1-1.erl-18.3.el7.x86_64.rpm dest=/opt'

ansible all -m shell -a 'ls -l /opt'

ansible all -m shell -a 'cd /opt/; yum install leofs-1.3.2.1-1.erl-18.3.el7.x86_64.rpm -y'

# On the storage nodes, install the XFS tools

ansible leofs_storage -m shell -a "yum --enablerepo=centosplus install kmod-xfs xfsprogs xfsprogs-devel -y"

Check that LeoFS is installed on every node:

ansible all -m shell -a 'ls -l /usr/local/leofs'

Configure LeoFS

Configure manager_0 on 10.39.1.23

manager_0 carries the consistency level, the cluster ID, and the DC ID.

For the consistency-level settings, see: http://leo-project.net/leofs/docs/configuration/configuration_1.html#system-configuration-label

A reference consistency level:

Level  | Configuration
-------+--------------------------------------
Low    | n = 3, r = 1, w = 1, d = 1
Middle | n = 3, [r = 1 | r = 2], w = 2, d = 2
High   | n = 3, [r = 2 | r = 3], w = 3, d = 3

Here n is the total number of replicas, and r/w/d are how many replica acknowledgements a read/write/delete needs before it is reported successful; with n = 3 and w = 1, for example, a PUT returns as soon as one of the three replicas has acknowledged the write.

cp /usr/local/leofs/1.3.2.1/leo_manager_0/etc/leo_manager.conf /usr/local/leofs/1.3.2.1/leo_manager_0/etc/leo_manager.conf.bak

vim  /usr/local/leofs/1.3.2.1/leo_manager_0/etc/leo_manager.conf

manager.partner = manager_1@10.39.1.24

system.dc_id = dc_1

system.cluster_id = leofs_cluster

consistency.num_of_replicas = 3

consistency.write = 1

consistency.read = 1

consistency.delete = 1

consistency.rack_aware_replicas = 0

nodename = manager_0@10.39.1.23

Configure manager_1 on 10.39.1.24:

cp /usr/local/leofs/1.3.2.1/leo_manager_1/etc/leo_manager.conf /usr/local/leofs/1.3.2.1/leo_manager_1/etc/leo_manager.conf.bak

vim  /usr/local/leofs/1.3.2.1/leo_manager_1/etc/leo_manager.conf

manager.partner = manager_0@10.39.1.23

nodename = manager_1@10.39.1.24

Configure storage_01 (10.39.1.25)

Back on 10.39.1.23, back up the storage configuration file on every storage node with Ansible:

ansible leofs_storage -m shell -a "cp /usr/local/leofs/1.3.2.1/leo_storage/etc/leo_storage.conf /usr/local/leofs/1.3.2.1/leo_storage/etc/leo_storage.conf.bak "

LeoFS recommends XFS, since XFS handles large-file I/O well. Add a disk and format it as XFS:

fdisk /dev/vdc

# interactive input: n (new partition), p (primary), w (write the table and exit)

n

p

w

mkfs.xfs /dev/vdc1

vim   /usr/local/leofs/1.3.2.1/leo_storage/etc/leo_storage.conf

managers = [manager_0@10.39.1.23, manager_1@10.39.1.24]

obj_containers.path = [/data/leofs]

obj_containers.num_of_containers = [8]
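With this layout, leo_storage creates eight AVS container files (0.avs through 7.avs) under /data/leofs/object; these are exactly the files that show up in the du detail output later, and data compaction operates per container.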

Read/write tuning and disk-watchdog settings:

##  Watchdog.DISK

##

## Is disk-watchdog enabled - default:false

watchdog.disk.is_enabled = false

## disk - raised error times

watchdog.disk.raised_error_times = 5

## disk - watch interval - default:1sec

watchdog.disk.interval = 10

## Threshold use(%) of a target disk's capacity - default:85%

watchdog.disk.threshold_disk_use = 85

## Threshold disk utilization(%) - default:90%

watchdog.disk.threshold_disk_util = 90

## Threshold disk read kb/sec - default:98304(KB) = 96MB

#watchdog.disk.threshold_disk_rkb = 98304

# 131072 KB = 128 MB

watchdog.disk.threshold_disk_rkb = 131072

## Threshold disk write kb/sec - default:98304(KB) = 96MB

#watchdog.disk.threshold_disk_wkb = 98304

# 131072 KB = 128 MB

watchdog.disk.threshold_disk_wkb = 131072

nodename = storage_0@10.39.1.25

Multiple storage paths can also be configured, for reference:

## e.g. Case of plural paths

## obj_containers.path = [/var/leofs/avs/1, /var/leofs/avs/2]

## obj_containers.num_of_containers = [32, 64]

Configure storage_02 (10.39.1.26):

mkfs.xfs /dev/vdc1

vim  /usr/local/leofs/1.3.2.1/leo_storage/etc/leo_storage.conf

managers = [manager_0@10.39.1.23, manager_1@10.39.1.24]

obj_containers.path = [/data/leofs]

obj_containers.num_of_containers = [8]

nodename = storage_0@10.39.1.26

Configure storage_03 (10.39.1.27):

mkfs.xfs /dev/vdc1

vim   /usr/local/leofs/1.3.2.1/leo_storage/etc/leo_storage.conf

managers = [manager_0@10.39.1.23, manager_1@10.39.1.24]

obj_containers.path = [/data/leofs]

obj_containers.num_of_containers = [8]

nodename = storage_0@10.39.1.27

Back on 10.39.1.23, run the rest in batch with Ansible:

ansible leofs_storage -m shell -a "mkdir /data/leofs -p "

ansible leofs_storage -m shell -a "ls /data/"

ansible leofs_storage -m shell -a "mount /dev/vdc1 /data/leofs"

ansible leofs_storage -m shell -a "df -h"

Add the mount to /etc/fstab on every storage node (running the echo locally would only edit the manager's own fstab, so push it through Ansible):

ansible leofs_storage -m shell -a 'echo "/dev/vdc1   /data/leofs   xfs   noatime,nodiratime,osyncisdsync 0 0"   >> /etc/fstab'
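As a sanity check before rebooting (assuming the fstab lines were appended correctly), mount -a should succeed without errors on every storage node:

ansible leofs_storage -m shell -a "mount -a && df -h /data/leofs"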

Reboot storage_01/02/03, then verify the mounts came back:

ansible leofs_storage -m shell -a "df -h"

Configure the gateway gw_01 (10.39.1.28)

The gateway configuration sets the protocol, the listening port, and the cache sizes:

vim   /usr/local/leofs/1.3.2.1/leo_gateway/etc/leo_gateway.conf

managers = [manager_0@10.39.1.23, manager_1@10.39.1.24]

protocol = s3

http.port = 8080

cache.cache_ram_capacity = 268435456

cache.cache_disc_capacity = 0

cache.cache_expire = 300

cache.cache_max_content_len = 1048576

nodename = gateway_0@10.39.1.28
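For scale: cache.cache_ram_capacity = 268435456 bytes is a 256 MB RAM cache, cache.cache_max_content_len = 1048576 means objects larger than 1 MB are never cached, and cache.cache_expire = 300 expires cached entries after 300 seconds.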

# Gateway worker-pool settings

## Large Object Handler - put worker pool size

large_object.put_worker_pool_size = 64

## Large Object Handler - put worker buffer size

large_object.put_worker_buffer_size = 32

## Memory cache capacity in bytes

cache.cache_ram_capacity = 0

## Disk cache capacity in bytes

cache.cache_disc_capacity = 0

## When the length of the object exceeds this value, store the object on disk

cache.cache_disc_threshold_len = 1048576

## Directory for the disk cache data

cache.cache_disc_dir_data = ./cache/data

## Directory for the disk cache journal

cache.cache_disc_dir_journal = ./cache/journal

Order of server launch (an Ansible batch sketch for the storage tier follows the list):

Manager-master

Manager-slave

Storage nodes

Gateway(s)
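The storage tier can also be started and pinged in one sweep from 10.39.1.23; a sketch reusing the leofs_storage group from the inventory above:

ansible leofs_storage -m shell -a "/usr/local/leofs/1.3.2.1/leo_storage/bin/leo_storage start"

ansible leofs_storage -m shell -a "/usr/local/leofs/1.3.2.1/leo_storage/bin/leo_storage ping"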

10.39.1.23  manager0

/usr/local/leofs/1.3.2.1/leo_manager_0/bin/leo_manager start

/usr/local/leofs/1.3.2.1/leo_manager_0/bin/leo_manager ping

pong

10.39.1.24  manager1

/usr/local/leofs/1.3.2.1/leo_manager_1/bin/leo_manager start

/usr/local/leofs/1.3.2.1/leo_manager_1/bin/leo_manager ping

pong

10.39.1.25  storage01

/usr/local/leofs/1.3.2.1/leo_storage/bin/leo_storage start

/usr/local/leofs/1.3.2.1/leo_storage/bin/leo_storage ping

pong

10.39.1.26  storage02

/usr/local/leofs/1.3.2.1/leo_storage/bin/leo_storage start

/usr/local/leofs/1.3.2.1/leo_storage/bin/leo_storage ping

pong

10.39.1.27  storage03

/usr/local/leofs/1.3.2.1/leo_storage/bin/leo_storage start

/usr/local/leofs/1.3.2.1/leo_storage/bin/leo_storage ping

pong

Inspect what the storage nodes created on disk:

ansible leofs_storage -m shell -a "ls -l /data/leofs"

10.39.1.26 | SUCCESS | rc=0 >>

total 12

drwxr-xr-x  2 root root 4096 Mar 30 16:00 log

drwxr-xr-x 10 root root 4096 Mar 30 15:24 metadata

drwxr-xr-x  2 root root  310 Mar 30 15:18 object

drwxr-xr-x  2 root root 4096 Mar 30 15:24 state

10.39.1.25 | SUCCESS | rc=0 >>

total 12

drwxr-xr-x  2 root root 4096 Mar 30 16:00 log

drwxr-xr-x 10 root root 4096 Mar 30 15:23 metadata

drwxr-xr-x  2 root root  310 Mar 30 15:04 object

drwxr-xr-x  2 root root 4096 Mar 30 15:23 state

10.39.1.27 | SUCCESS | rc=0 >>

total 4

drwxr-xr-x  2 root root 4096 Mar 30 16:00 log

drwxr-xr-x 10 root root  246 Mar 30 15:19 metadata

drwxr-xr-x  2 root root  310 Mar 30 15:19 object

On 10.39.1.23, check the cluster status.

The leofs-adm script queries the manager's console port through nc (netcat), so install nc on CentOS first:

ansible leofs -m shell -a "yum install nc -y "

/usr/local/leofs/1.3.2.1/leofs-adm status

[System Confiuration]

-----------------------------------+----------

Item                              | Value

-----------------------------------+----------

Basic/Consistency level

-----------------------------------+----------

system version | 1.3.2

cluster Id | leofs_cluster

DC Id | dc_1

Total replicas | 3

number of successes of R | 1

number of successes of W | 1

number of successes of D | 1

number of rack-awareness replicas | 0

ring size | 2^128

-----------------------------------+----------

Multi DC replication settings

-----------------------------------+----------

max number of joinable DCs | 2

number of replicas a DC | 1

-----------------------------------+----------

Manager RING hash

-----------------------------------+----------

current ring-hash |

previous ring-hash |

-----------------------------------+----------

[State of Node(s)]

-------+----------------------------+--------------+----------------+----------------+----------------------------

type  |            node            |    state     |  current ring  |   prev ring    |          updated at

-------+----------------------------+--------------+----------------+----------------+----------------------------

S    | storage_0@10.39.1.25       | attached     |                |                | 2017-03-30 15:24:56 +0800

S    | storage_0@10.39.1.26       | attached     |                |                | 2017-03-30 15:24:43 +0800

S    | storage_0@10.39.1.27       | attached     |                |                | 2017-03-30 15:19:22 +0800

-------+----------------------------+--------------+----------------+----------------+----------------------------

Start the gateway on 10.39.1.28:

/usr/local/leofs/1.3.2.1/leo_gateway/bin/leo_gateway start

/usr/local/leofs/1.3.2.1/leo_gateway/bin/leo_gateway ping

pong

On 10.39.1.23, bring the storage cluster online:

/usr/local/leofs/1.3.2.1/leofs-adm start

Generating RING...

Generated RING

OK  33% - storage_0@10.39.1.25

OK  67% - storage_0@10.39.1.26

OK 100% - storage_0@10.39.1.27

OK

/usr/local/leofs/1.3.2.1/leofs-adm status

[System Confiuration]

-----------------------------------+----------

Item                              | Value

-----------------------------------+----------

Basic/Consistency level

-----------------------------------+----------

system version | 1.3.2

cluster Id | leofs_cluster

DC Id | dc_1

Total replicas | 3

number of successes of R | 1

number of successes of W | 1

number of successes of D | 1

number of rack-awareness replicas | 0

ring size | 2^128

-----------------------------------+----------

Multi DC replication settings

-----------------------------------+----------

max number of joinable DCs | 2

number of replicas a DC | 1

-----------------------------------+----------

Manager RING hash

-----------------------------------+----------

current ring-hash |

previous ring-hash |

-----------------------------------+----------

[State of Node(s)]

-------+----------------------------+--------------+----------------+----------------+----------------------------

type  |            node            |    state     |  current ring  |   prev ring    |          updated at

-------+----------------------------+--------------+----------------+----------------+----------------------------

S    | storage_0@10.39.1.25       | running      | 79e0dbc4       | 79e0dbc4       | 2017-03-30 15:29:17 +0800

S    | storage_0@10.39.1.26       | running      | 79e0dbc4       | 79e0dbc4       | 2017-03-30 15:29:17 +0800

S    | storage_0@10.39.1.27       | running      | 79e0dbc4       | 79e0dbc4       | 2017-03-30 15:29:17 +0800

G    | gateway_0@10.39.1.28       | running      | 79e0dbc4       | 79e0dbc4       | 2017-03-30 15:29:18 +0800

-------+----------------------------+--------------+----------------+----------------+----------------------------

Configuring and querying the object store

Using the S3-API admin commands:

http://leo-project.net/leofs/docs/admin_guide/admin_guide_8.html

List users:

get-users

/usr/local/leofs/1.3.2.1/leofs-adm get-users

user_id     | role_id | access_key_id          | created_at

------------+---------+------------------------+---------------------------

_test_leofs | 9       | 05236                  | 2017-03-30 15:03:54 +0800

Delete the default user:

delete-user <user-id>

/usr/local/leofs/1.3.2.1/leofs-adm delete-user _test_leofs

OK

Create a user:

create-user <user-id> <password>

/usr/local/leofs/1.3.2.1/leofs-adm create-user test test

access-key-id: 919ca38e3fb34085b94a

secret-access-key: 387d7f32546982131e41355e1adbcae1a9b08bec

/usr/local/leofs/1.3.2.1/leofs-adm get-users

user_id | role_id | access_key_id          | created_at

--------+---------+------------------------+---------------------------

test    | 1       | 919ca38e3fb34085b94a   | 2017-03-30 15:45:41 +0800

Endpoints:

/usr/local/leofs/1.3.2.1/leofs-adm add-endpoint 10.39.1.28

OK

/usr/local/leofs/1.3.2.1/leofs-adm get-endpoints

endpoint         | created at

-----------------+---------------------------

10.39.1.28       | 2017-03-30 15:49:14 +0800

localhost        | 2017-03-30 15:03:54 +0800

s3.amazonaws.com | 2017-03-30 15:03:54 +0800

add-bucket <bucket> <access-key-id>

/usr/local/leofs/1.3.2.1/leofs-adm add-bucket abc 919ca38e3fb34085b94a

OK

/usr/local/leofs/1.3.2.1/leofs-adm get-buckets

cluster id    | bucket   | owner  | permissions      | redundancy method            | created at

--------------+----------+--------+------------------+------------------------------+---------------------------

leofs_cluster | abc      | test   | Me(full_control) | copy, {n:3, w:1, r:1, d:1}   | 2017-03-30 15:51:43 +0800

get-bucket <access-key-id>

Set bucket permissions:

/usr/local/leofs/1.3.2.1/leofs-adm  update-acl abc 919ca38e3fb34085b94a public-read-write

OK
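At this point the bucket can be exercised with any S3-compatible client. A minimal sketch using s3cmd (assumptions: s3cmd is installed on a client machine; LeoFS 1.3 expects v2 signatures; and because s3cmd issues virtual-host-style requests, the bucket hostname abc.10.39.1.28 must resolve to the gateway, e.g. via an /etc/hosts entry on the client):

# ~/.s3cfg - keys are the ones created for the test user above
access_key = 919ca38e3fb34085b94a
secret_key = 387d7f32546982131e41355e1adbcae1a9b08bec
host_base = 10.39.1.28:8080
host_bucket = %(bucket)s.10.39.1.28:8080
use_https = False
signature_v2 = True

# write an object into the abc bucket and read it back
s3cmd put /etc/hosts s3://abc/hosts
s3cmd ls s3://abc
s3cmd get s3://abc/hosts /tmp/hosts.check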

Inspect per-container disk stats on a storage node:

/usr/local/leofs/1.3.2.1/leofs-adm  du detail storage_0@10.39.1.25

[du(storage stats)]

file path: /data/leofs/object/0.avs

active number of objects: 0

total number of objects: 0

active size of objects: 0

total size of objects: 0

ratio of active size: 0%

last compaction start: ____-__-__ __:__:__

last compaction end: ____-__-__ __:__:__

duration: 0s

result: ''

file path: /data/leofs/object/1.avs

active number of objects: 0

total number of objects: 0

active size of objects: 0

total size of objects: 0

ratio of active size: 0%

last compaction start: ____-__-__ __:__:__

last compaction end: ____-__-__ __:__:__

duration: 0s

result: ''

file path: /data/leofs/object/2.avs

active number of objects: 0

total number of objects: 0

active size of objects: 0

total size of objects: 0

ratio of active size: 0%

last compaction start: ____-__-__ __:__:__

last compaction end: ____-__-__ __:__:__

duration: 0s

result: ''

file path: /data/leofs/object/3.avs

active number of objects: 0

total number of objects: 0

active size of objects: 0

total size of objects: 0

ratio of active size: 0%

last compaction start: ____-__-__ __:__:__

last compaction end: ____-__-__ __:__:__

duration: 0s

result: ''

file path: /data/leofs/object/4.avs

active number of objects: 0

total number of objects: 0

active size of objects: 0

total size of objects: 0

ratio of active size: 0%

last compaction start: ____-__-__ __:__:__

last compaction end: ____-__-__ __:__:__

duration: 0s

result: ''

file path: /data/leofs/object/5.avs

active number of objects: 0

total number of objects: 0

active size of objects: 0

total size of objects: 0

ratio of active size: 0%

last compaction start: ____-__-__ __:__:__

last compaction end: ____-__-__ __:__:__

duration: 0s

result: ''

file path: /data/leofs/object/6.avs

active number of objects: 0

total number of objects: 0

active size of objects: 0

total size of objects: 0

ratio of active size: 0%

last compaction start: ____-__-__ __:__:__

last compaction end: ____-__-__ __:__:__

duration: 0s

result: ''

file path: /data/leofs/object/7.avs

active number of objects: 0

total number of objects: 0

active size of objects: 0

total size of objects: 0

ratio of active size: 0%

last compaction start: ____-__-__ __:__:__

last compaction end: ____-__-__ __:__:__

duration: 0s

result: ''

/usr/local/leofs/1.3.2.1/leofs-adm  status  storage_0@10.39.1.25

--------------------------------------+--------------------------------------

Item                  |                 Value

--------------------------------------+--------------------------------------

Config-1: basic

--------------------------------------+--------------------------------------

version | 1.3.1

number of vnodes | 168

object containers | - path:[/data/leofs], # of containers:8

log directory | ./log/erlang

log level | info

--------------------------------------+--------------------------------------

Config-2: watchdog

--------------------------------------+--------------------------------------

[rex(rpc-proc)]                      |

check interval(s) | 10

threshold mem capacity | 33554432

--------------------------------------+--------------------------------------

[cpu]                                |

enabled/disabled | disabled

check interval(s) | 10

threshold cpu load avg | 5.0

threshold cpu util(%) | 100

--------------------------------------+--------------------------------------

[disk]                               |

enabled/disalbed | disabled

check interval(s) | 10

threshold disk use(%) | 85

threshold disk util(%) | 90

threshold rkb(kb) | 131072

threshold wkb(kb) | 131072

--------------------------------------+--------------------------------------

Config-3: message-queue

--------------------------------------+--------------------------------------

number of procs/mq | 8

number of batch-procs of msgs | max:3000, regular:1600

interval between batch-procs (ms)  | max:3000, regular:500

--------------------------------------+--------------------------------------

Config-4: autonomic operation

--------------------------------------+--------------------------------------

[auto-compaction]                    |

enabled/disabled | disabled

warning active size ratio (%) | 70

threshold active size ratio (%) | 60

number of parallel procs | 1

exec interval | 3600

--------------------------------------+--------------------------------------

Config-5: data-compaction

--------------------------------------+--------------------------------------

limit of number of compaction procs | 4

number of batch-procs of objs | max:1500, regular:1000

interval between batch-procs (ms)  | max:3000, regular:500

--------------------------------------+--------------------------------------

Status-1: RING hash

--------------------------------------+--------------------------------------

current ring hash | 79e0dbc4

previous ring hash | 79e0dbc4

--------------------------------------+--------------------------------------

Status-2: Erlang VM

--------------------------------------+--------------------------------------

vm version | 7.3.1.2

total mem usage | 37035408

system mem usage | 20786136

procs mem usage | 16258552

ets mem usage | 5229136

procs | 560/1048576

kernel_poll | true

thread_pool_size | 32

--------------------------------------+--------------------------------------

Status-3: Number of messages in MQ

--------------------------------------+--------------------------------------

replication messages | 0

vnode-sync messages | 0

rebalance messages | 0

--------------------------------------+--------------------------------------

Benchmarking

basho_bench is used to load-test LeoFS:

https://github.com/basho/basho_bench

Install it on 10.39.1.23, dependencies first:

##

## 1. Install libatomic

##

$ wget http://www.ivmaisoft.com/_bin/atomic_ops/libatomic_ops-7.4.4.tar.gz

$ tar xzvf libatomic_ops-7.4.4.tar.gz

$ cd libatomic_ops-7.4.4

$ ./configure --prefix=/usr/local

$ make

$ sudo make install

##

## 2. Install Erlang (18.3)

##

$ wget http://erlang.org/download/otp_src_18.3.tar.gz

$ tar xzf otp_src_18.3.tar.gz

$ cd otp_src_18.3

$ ./configure --prefix=/usr/local/erlang/18.3 \

--enable-smp-support \

--enable-m64-build \

--enable-halfword-emulator \

--enable-kernel-poll \

--without-javac \

--disable-native-libs \

--disable-hipe \

--disable-sctp \

--enable-threads \

--with-libatomic_ops=/usr/local

$ make

$ sudo make install

##

## 3. Set PATH

##

$ vi ~/.profile

## append the follows:

export ERL_HOME=/usr/local/erlang/18.3

export PATH=$PATH:$ERL_HOME/bin

$ source ~/.profile

Build the benchmark tool:

git clone https://github.com/leo-project/basho_bench.git

cd basho_bench/

make all

# make all sometimes fails to fetch dependencies over the git:// protocol; switching to https fixes it

Batch-replace git:// with https:// in every rebar.config under the deps directory:

find deps -name rebar.config -type f -exec sed -i 's/git:\/\//https:\/\//g' {} +

make all

Benchmark configuration files

16 KB object write test:

vim   16file.conf

{mode,      max}.

{duration,   10}.

{concurrent, 50}.

{driver, basho_bench_driver_leofs}.

{code_paths, ["deps/ibrowse"]}.

{http_raw_ips, ["10.39.1.28"]}. %% able to set plural nodes

{http_raw_port, 8080}. %% default: 8080

{http_raw_path, "/abc"}.

%% {http_raw_path, "/${BUCKET}"}.

{key_generator,   {partitioned_sequential_int, 660000}}. %% number of keys (requests)

{value_generator, {fixed_bin, 16384}}. %% 16KB

{operations, [{put,1}]}.               %% PUT:100%

%%{operations, [{put,1}, {get, 4}]}.   %% PUT:20%, GET:80%

{check_integrity, false}.

660,000 keys × 16 KB ≈ 10 GB of data.
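To run the test (a sketch: basho_bench drops raw measurements under tests/current/, and make results renders the summary graphs, which requires R to be installed):

./basho_bench 16file.conf

make results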

Install nmon on every node to collect system metrics during the runs:

ansible leofs -m shell -a "yum install wget https://raw.githubusercontent.com/hambuergaer/nmon-packages/master/nmon-rhel7-1.0-1.x86_64.rpm -y "

ansible leofs -m shell -a "mkdir /nmon "

The 16 KB run lasts 10 minutes ({duration, 10} is in minutes), so size the nmon capture accordingly.

nmon flags:

-s  seconds between samples

-c  number of samples to collect

-m  directory for the output files

-f  write spreadsheet-format output to a file named with the host and timestamp

nmon  -f  -s 1 -c 360 -m /nmon

One sample per second for 360 samples: 1 s × 360 = 360 s = 6 minutes of data.
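After a run, the per-host .nmon files (named with host and timestamp) can be located from the control node before copying them off for analysis, e.g. with Ansible's stock find module:

ansible leofs -m shell -a "ls -l /nmon"

ansible leofs -m find -a "paths=/nmon patterns=*.nmon"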

First check the on-disk data size:

[root@leofs_01 basho_bench]# ansible leofs_storage -m shell -a  "du -sh  /data/leofs "

10.39.1.26 | SUCCESS | rc=0 >>

332K    /data/leofs

10.39.1.25 | SUCCESS | rc=0 >>

332K    /data/leofs

10.39.1.27 | SUCCESS | rc=0 >>

132K    /data/leofs

Writing 9.7 GB of 16 KB objects took about 5 minutes:

ansible leofs_storage -m shell -a  "du -sh /data/leofs "

10.39.1.26 | SUCCESS | rc=0 >>

9.7G    /data/leofs

10.39.1.25 | SUCCESS | rc=0 >>

9.7G    /data/leofs

10.39.1.27 | SUCCESS | rc=0 >>

9.7G    /data/leofs

Next, tune the gateway's large-object worker pool; the settings used for this run:

# Gateway worker-pool settings

## Large Object Handler - put worker pool size

large_object.put_worker_pool_size = 16

## Large Object Handler - put worker buffer size

large_object.put_worker_buffer_size = 32

## Memory cache capacity in bytes

cache.cache_ram_capacity = 0

## Disk cache capacity in bytes

cache.cache_disc_capacity = 0

## When the length of the object exceeds this value, store the object on disk

cache.cache_disc_threshold_len = 1048576

## Directory for the disk cache data

cache.cache_disc_dir_data = ./cache/data

## Directory for the disk cache journal

cache.cache_disc_dir_journal = ./cache/journal

Writing 100,000 files took about one minute, with object sizes drawn from the following distribution (byte range: share of objects):

4096.. 8192: 15%

8192.. 16384: 25%

16384.. 32768: 23%

32768.. 65536: 22%

65536.. 131072: 15%

{mode, max}.

{duration, 1000}.

{concurrent, 64}.

{driver, basho_bench_driver_leofs}.

{code_paths, ["deps/ibrowse"]}.

{http_raw_ips, ["10.39.1.28"]}.

{http_raw_port, 8080}.

{http_raw_path, "/abc"}.

{retry_on_overload, true}.

{key_generator, {partitioned_sequential_int, 1000000}}.

{value_generator, {fixed_bin, 262144}}.

{operations, [{put,1}]}.

{value_generator_source_size, 1048576}.

{http_raw_request_timeout, 30000}. % 30seconds

{value_size_groups, [{15, 4096, 8192},{25, 8192, 16384}, {23, 16384, 32768}, {22, 32768, 65536}, {15, 65536, 131072}]}.

Check storage usage:

ansible leofs_storage -m shell -a  "du -sh /data/leofs "

10.39.1.25 | SUCCESS | rc=0 >>

132K    /data/leofs

10.39.1.26 | SUCCESS | rc=0 >>

132K    /data/leofs

10.39.1.27 | SUCCESS | rc=0 >>

132K    /data/leofs

[root@leofs_01 basho_bench]# ansible leofs_storage -m shell -a  "ls  /data/leofs "

10.39.1.26 | SUCCESS | rc=0 >>

log

metadata

object

10.39.1.27 | SUCCESS | rc=0 >>

log

metadata

object

10.39.1.25 | SUCCESS | rc=0 >>

log

metadata

object

ansible leofs_storage -m shell -a  "du -sh /data/leofs "

10.39.1.25 | SUCCESS | rc=0 >>

4.1G    /data/leofs

10.39.1.26 | SUCCESS | rc=0 >>

4.1G    /data/leofs

10.39.1.27 | SUCCESS | rc=0 >>

4.1G    /data/leofs

ansible leofs_storage -m shell -a  "ls   /data/leofs "

10.39.1.26 | SUCCESS | rc=0 >>

log

metadata

object

10.39.1.27 | SUCCESS | rc=0 >>

log

metadata

object

10.39.1.25 | SUCCESS | rc=0 >>

log

metadata

object

state

ansible leofs_storage -m shell -a  "nmon  -f  -s 1 -c 1440 -m /nmon "

This captures 24 minutes of data (1 s × 1440 samples).

1,000,000 keys, 5% writes / 95% reads:

{mode, max}.

{duration,   30}.

{concurrent, 64}.

{driver, basho_bench_driver_leofs}.

{code_paths, ["deps/ibrowse"]}.

{http_raw_ips, ["192.168.100.35"]}.

{http_raw_port, 8080}.

{http_raw_path, "/test"}.

{retry_on_overload, true}.

{key_generator, {uniform_int,1000000}}.

{value_generator, {fixed_bin, 262144}}.

{operations, [{put,5}, {get, 95}]}.

{value_generator_source_size, 1048576}.

{http_raw_request_timeout, 30000}. % 30seconds

{value_size_groups, [{15, 4096, 8192},{25, 8192, 16384}, {23, 16384, 32768}, {22, 32768, 65536}, {15, 65536, 131072}]}.

Install LeoCenter on 10.39.1.23:

git clone https://github.com/leo-project/leo_center.git

yum install ruby-devel -y

cd leo_center/

gem install bundler

bundle install

Edit the configuration:

config.yml

:managers:

- "localhost:10020" # master

- "localhost:10021" # slave

:credential:

:access_key_id: "YOUR_ACCESS_KEY_ID"

:secret_access_key: "YOUR_SECRET_ACCESS_KEY"

Start the service:

thin start -a ${HOST} -p ${PORT} > /dev/null 2>&1  &
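For example (an assumption: bind to all interfaces on port 3000, since 8080 is already taken by the gateway):

thin start -a 0.0.0.0 -p 3000 > /dev/null 2>&1 &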

Create an administrator (role_id 9 is the administrator role; a regular user, like test above, has role_id 1):

You need to create an administrator user from LeoFS-Manager’s console.

$ leofs-adm create-user leo_admin password

access-key-id: ab96d56258e0e9d3621a

secret-access-key: 5c3d9c188d3e4c4372e414dbd325da86ecaa8068

$ leofs-adm update-user-role leo_admin 9

OK

leofs-adm create-user leo_admin password

access-key-id: 8d9de6fdfb35f837e9ed

secret-access-key: 7dcc2631493865c7fb7ec7f96dda627f1cbb21eb

leofs-adm update-user-role leo_admin 9

OK

[root@leofs_01 leo_center]# leofs-adm get-users

user_id   | role_id | access_key_id          | created_at

----------+---------+------------------------+---------------------------

leo_admin | 9       | 8d9de6fdfb35f837e9ed   | 2017-03-31 11:40:14 +0800

test      | 1       | 919ca38e3fb34085b94a   | 2017-03-30 15:45:41 +0800

One thing observed in testing: deleting a user leaves that user's buckets in place, and the data is only truly removed once the buckets themselves are deleted.
