Deploying an Elasticsearch Cluster with Authentication and Creating Snapshots

1. Environment preparation
① Disable the firewall and SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
systemctl stop firewalld
systemctl disable firewalld

② Raise the limits on open files and processes
cat <<EOF >> /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 20480
* hard nproc 40960
EOF
echo vm.max_map_count=655360 >> /etc/sysctl.conf
sysctl -p
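A quick check that the new values are in effect (the limits.conf changes apply to new login sessions):
ulimit -n                 # expect 65536
ulimit -u                 # expect 20480 (soft nproc)
sysctl vm.max_map_count   # expect 655360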

③ Configure hostnames and passwordless SSH trust between the nodes
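A minimal sketch for this step, assuming three nodes named es-node1/es-node2/es-node3 (the node names used in elasticsearch.yml below); replace the example addresses with your own:
hostnamectl set-hostname es-node1        # es-node2 / es-node3 on the other machines
cat <<EOF >> /etc/hosts
192.168.27.157 es-node1
192.168.27.X   es-node2    # replace X/Y with the real addresses
192.168.27.Y   es-node3
EOF
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in es-node1 es-node2 es-node3; do ssh-copy-id root@$h; done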
④ Configure the yum repositories
yum -y install wget vim git wget unzip telnet lsof
cd /etc/yum.repos.d/
mkdir backup
mv *.repo backup
# Aliyun base repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# EPEL repo
yum -y install epel-release
yum clean all
yum makecache
# Elastic (ELK) repo
cat <<EOF > /etc/yum.repos.d/elk.repo
[elk]
name=elk
baseurl=https://mirrors.tuna.tsinghua.edu.cn/elasticstack/yum/elastic-7.x/
enabled=1
gpgcheck=0
EOF

⑤ Install Java from the tarball # optional: recent ES releases bundle their own JDK
mkdir -p /data/apps/
tar -xf jdk-8u11-linux-x64.tar.gz
mv jdk1.8.0_11/ /data/apps/jdk
cat <<'EOF' > /etc/profile.d/jdk.sh
JAVA_HOME=/data/apps/jdk
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
EOF
source /etc/profile
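A quick verification of the JDK setup:
source /etc/profile.d/jdk.sh
java -version
echo $JAVA_HOME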

2. Install the Elasticsearch cluster

① Download
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.0-linux-x86_64.tar.gz
tar -xf elasticsearch-7.3.0-linux-x86_64.tar.gz
mv elasticsearch-7.3.0 /data/apps/elasticsearch
cd /data/apps
useradd es
chown -R es.es elasticsearch
su - es
mkdir -pv /home/es/data/{es1,es2} /home/es/logs/es # matches path.data and path.logs below

② Configure config/elasticsearch.yml
cluster.name: dev-es # cluster name
node.name: es-node1 # node name
path.data: /home/es/data/es1,/home/es/data/es2 # data directories for the index stores
path.logs: /home/es/logs/es # directory for the ES process logs
path.repo: /data/es_bk # snapshot (backup) repository path
network.host: 0.0.0.0
# allow cross-origin requests
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-credentials: true
#discovery.seed_hosts: ["es-node1"]
cluster.initial_master_nodes: ["es-node1"]
transport.tcp.port: 9300 # TCP port for inter-node communication, default 9300
discovery.zen.minimum_master_nodes: 2 # split-brain protection: at least 2 master-eligible nodes must be reachable; formula: nodes / 2 + 1
discovery.zen.ping.unicast.hosts: ['es-node1','es-node2','es-node3']
xpack.security.enabled: true # enable authentication
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12 # the .p12 files are created in the TLS section below
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
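The file above is for es-node1; on the other nodes only the node-specific values change (a sketch, using the same node names as discovery.zen.ping.unicast.hosts):
# config/elasticsearch.yml on es-node2 (es-node3 is analogous)
cluster.name: dev-es     # identical on every node
node.name: es-node2      # unique per node
# all other settings (paths, discovery, cors, x-pack) stay the same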

*** Adjust the ES heap: roughly 50% of physical memory is recommended, but no more than 32 GB
vim config/jvm.options
-Xms30g
-Xmx30g

*** ES parameter tuning
index.merge.scheduler.max_thread_count: 1 # max merge threads per index
indices.memory.index_buffer_size: 30% # indexing buffer size
index.translog.durability: async # fsync the translog asynchronously to speed up writes
index.translog.sync_interval: 120s # translog sync interval
discovery.zen.ping_timeout: 120s # ping (heartbeat) timeout
discovery.zen.fd.ping_interval: 120s # fault-detection ping interval
discovery.zen.fd.ping_timeout: 120s # fault-detection ping timeout
discovery.zen.fd.ping_retries: 6 # fault-detection retries
thread_pool.bulk.size: 20 # number of write threads (only the write side is tuned here; query threads are set in application code)
thread_pool.bulk.queue_size: 1000 # write thread-pool queue size
index.refresh_interval: 300s # index refresh interval
bootstrap.memory_lock: true
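Note that the index.* entries above are index-level settings, and recent ES versions refuse to start if they appear in elasticsearch.yml; a sketch of applying them through the index settings API instead (my_index is a placeholder; depending on the version, some of these may only be changeable on a closed index):
# add -u elastic:<password> once X-Pack security is enabled
curl -H 'Content-Type: application/json' -XPUT 'http://127.0.0.1:9200/my_index/_settings' -d '{
  "index.refresh_interval": "300s",
  "index.translog.durability": "async",
  "index.translog.sync_interval": "120s"
}'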

③ Start
#Running as a daemon
./bin/elasticsearch -d -p pid_file
# shut down Elasticsearch
pkill -F pid_file
# Checking that Elasticsearch is running
curl -XGET 'http://127.0.0.1:9200'

**** Managing the ES cluster with systemd
vim /usr/lib/systemd/system/es.service
[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
Restart=always
Type=simple
PrivateTmp=true
Environment=ES_HOME=/data/apps/elasticsearch
Environment=ES_PATH_CONF=/data/apps/elasticsearch/config
Environment=PID_DIR=/data/apps/elasticsearch
Environment=ES_SD_NOTIFY=true
#EnvironmentFile=/etc/sysconfig/elasticsearch

WorkingDirectory=/data/apps/elasticsearch

User=es
Group=es

ExecStart=/data/apps/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet

# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65535

# Specifies the maximum number of processes
LimitNPROC=20480

LimitMEMLOCK=infinity

# Specifies the maximum size of virtual memory
LimitAS=infinity

# Specifies the maximum file size
LimitFSIZE=infinity

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0

# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM

# Send the signal only to the JVM rather than its control group
KillMode=process

# Java process is never killed
SendSIGKILL=no

# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target


systemctl daemon-reload
systemctl enable es
systemctl start es
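A quick check once the unit is running (add -u elastic:<password> after security is enabled in the next section):
systemctl status es --no-pager
curl -s 'http://127.0.0.1:9200/_cat/health?v'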

# Configure TLS and authentication

1. Create the certificate files (run on the master node)
./bin/elasticsearch-certutil ca # press Enter through the prompts
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 # press Enter through the prompts

mkdir config/certs
mv elastic-*.p12 config/certs/
chown -R es.es config/certs/
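The same elastic-certificates.p12 must be placed under config/certs on every node; a sketch of copying it out, assuming the es-node2/es-node3 hostnames and the same install path:
for h in es-node2 es-node3; do
  ssh $h "mkdir -p /data/apps/elasticsearch/config/certs"
  scp config/certs/elastic-certificates.p12 $h:/data/apps/elasticsearch/config/certs/
  ssh $h "chown -R es:es /data/apps/elasticsearch/config/certs"
done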

2. Update the configuration and restart
cat >> config/elasticsearch.yml <<EOF
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
EOF

3. Generate a client certificate
bin/elasticsearch-certutil cert --ca \
config/certs/elastic-stack-ca.p12 \
--name "CN=esuser,OU=dev,DC=weqhealth,DC=com"
# press Enter at the prompts; when asked for the output file, enter:
client.p12
# press Enter again to leave the password empty

mv client.p12 config/certs/
cd config/certs/
openssl pkcs12 -in client.p12 -nocerts -nodes > client-key.pem
openssl pkcs12 -in client.p12 -clcerts -nokeys > client.crt
openssl pkcs12 -in client.p12 -cacerts -nokeys -chain > client-ca.crt

chown es.es client*
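A quick sanity check of the extracted PEM files (optional sketch):
openssl x509 -in client.crt -noout -subject -issuer -dates
openssl rsa -in client-key.pem -check -noout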

4. Set passwords for the built-in users
bin/elasticsearch-setup-passwords interactive # or use "auto" to generate random passwords

Changed password for user apm_system
PASSWORD apm_system = ktfrkXe3aA2qz1UgLoBR

Changed password for user kibana
PASSWORD kibana = HQuZIBunJgTRuAnXdXga

Changed password for user logstash_system
PASSWORD logstash_system = BclvBlUd378SSBlJ832x

Changed password for user beats_system
PASSWORD beats_system = gYiAWtiHdMBMsY8Nj86L

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = jaF3jzw08GKFuVBh78Ri

Changed password for user elastic
PASSWORD elastic = IIti4qJDEi6X2LX2iNmd

# Safely restarting ES

① Disable shard allocation
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
② Restart
③ Re-enable shard allocation
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
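The two PUT requests above are in Kibana/console form; the equivalent curl calls with authentication:
# ① disable replica allocation (keep primaries)
curl -u elastic:IIti4qJDEi6X2LX2iNmd -H 'Content-Type: application/json' -XPUT 'http://192.168.27.157:9200/_cluster/settings' -d '{"persistent": {"cluster.routing.allocation.enable": "primaries"}}'
# ③ re-enable allocation after the restart
curl -u elastic:IIti4qJDEi6X2LX2iNmd -H 'Content-Type: application/json' -XPUT 'http://192.168.27.157:9200/_cluster/settings' -d '{"persistent": {"cluster.routing.allocation.enable": null}}'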

# Check the cluster status
http://192.168.27.157:9200/_cat/nodes?
http://192.168.27.157:9100/?auth_user=elastic&auth_password=IIti4qJDEi6X2LX2iNmd

====> ES-Head plugin: a client tool that makes it easy to work with ES
https://github.com/mobz/elasticsearch-head
*** the plugin must not be installed inside the ES plugins directory
git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
yum -y install nodejs npm
npm init -f # fixes: npm WARN enoent ENOENT: no such file or directory, open '/soft/elasticsearch/plugins/package.json'
npm install -g grunt-cli
npm install grunt --save
npm install grunt-contrib-clean
npm install grunt-contrib-concat
npm install grunt-contrib-watch
npm install grunt-contrib-connect
npm install grunt-contrib-copy
npm install [email protected] --ignore-scripts
npm install grunt-contrib-jasmine

# In Gruntfile.js in the elasticsearch-head directory, add hostname to the connect options and set it to 0.0.0.0
connect: {
    server: {
        options: {
            hostname: '0.0.0.0',
            port: 9100,
            base: '.',
            keepalive: true
        }
    }
}
# Edit elasticsearch-head/_site/app.js and point base_uri at one of the ES nodes
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://node-1:9200";

# Start elasticsearch-head
nohup grunt server > /dev/null 2>&1 &

====> Kibana: reads the index and type information from the ES cluster and presents it visually
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.3.2-linux-x86_64.tar.gz
shasum -a 512 kibana-7.3.2-linux-x86_64.tar.gz
tar -xzf kibana-7.3.2-linux-x86_64.tar.gz
mv kibana-7.3.2-linux-x86_64 kibana
mv kibana /data/apps/es-plugin
cd /data/apps/es-plugin/kibana

vim config/kibana.yml
i18n.locale: "zh-CN"
server.port: 5601 # listening port
server.host: "192.168.27.157" # listening address; use an internal IP
elasticsearch.hosts: ["http://192.168.27.157:9200"] # URL of any node in the ES cluster
elasticsearch.username: "kibana"
elasticsearch.password: "HQuZIBunJgTRuAnXdXga"

useradd -M kibana
chown -R kibana.kibana /data/apps/es-plugin/kibana
su - kibana
cd /data/apps/es-plugin/kibana
nohup ./bin/kibana &
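A quick check that Kibana is up (sketch):
ss -lntp | grep 5601
curl -s -I http://192.168.27.157:5601/ # expect an HTTP response (a redirect or 401 with security enabled)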

ES performance tuning
Increase segment-merge throughput (SSDs)

PUT /_cluster/settings
{
  "persistent" : {
    "indices.store.throttle.max_bytes_per_sec" : "100mb"
  }
}
# note: store throttling was removed in later ES releases, so 7.x may reject this setting

# Deploy NFS shared storage
Server:
yum -y install nfs-utils
systemctl enable rpcbind
systemctl enable nfs
systemctl start rpcbind
systemctl start nfs

#firewall-cmd --zone=public --permanent --add-service={rpc-bind,mountd,nfs}
#firewall-cmd --reload

echo '/path/ 192.168.1.0/24(rw,sync,root_squash,no_all_squash)' > /etc/exports
systemctl restart nfs

# Verify the exports
showmount -e localhost

Client:
yum -y install nfs-utils
systemctl enable rpcbind
systemctl restart rpcbind
# List the server's exported directories
showmount -e 192.168.27.158
# Mount
mount -t nfs 192.168.27.158:/path /path
# Verify
mount

# Mount automatically at boot
vim /etc/fstab
192.168.27.158:/path /path nfs defaults 0 0
systemctl daemon-reload
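For snapshots, the shared directory has to be mounted at the same path on every ES node, and that path must match path.repo in elasticsearch.yml (/data/es_bk above). A sketch, assuming the NFS server exports /data/es_bk:
mkdir -p /data/es_bk
mount -t nfs 192.168.27.158:/data/es_bk /data/es_bk
chown es:es /data/es_bk
echo '192.168.27.158:/data/es_bk /data/es_bk nfs defaults 0 0' >> /etc/fstab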

# Create the snapshot repository (the location must be under a path listed in path.repo and reachable from every node, e.g. the shared /data/es_bk)
curl -XPUT -u elastic:IIti4qJDEi6X2LX2iNmd http://192.168.27.157:9200/_snapshot/my_backup -H 'Content-Type: application/json' -d '{"type": "fs", "settings": {"location": "/data/es_bk", "compress": true}}'
# Create a snapshot
curl -XPUT -u elastic:IIti4qJDEi6X2LX2iNmd "http://192.168.27.157:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"
# Restore a snapshot
curl -XPOST -u elastic:IIti4qJDEi6X2LX2iNmd http://127.0.0.1:9200/_snapshot/my_backup/snapshot_1/_restore
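The restore call above restores every index in the snapshot, and open indices with the same names must be closed or deleted first. A sketch of a selective restore with a rename (my_index is a placeholder):
curl -XPOST -u elastic:IIti4qJDEi6X2LX2iNmd -H 'Content-Type: application/json' 'http://192.168.27.157:9200/_snapshot/my_backup/snapshot_1/_restore' -d '{
  "indices": "my_index",
  "rename_pattern": "my_index",
  "rename_replacement": "restored_my_index"
}'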

# View the repository
curl -XGET -u elastic:IIti4qJDEi6X2LX2iNmd "http://192.168.27.157:9200/_snapshot/my_backup?pretty"
# List snapshots
curl -XGET -u elastic:IIti4qJDEi6X2LX2iNmd "http://192.168.27.157:9200/_snapshot/my_backup/_all?pretty"
# Delete a snapshot
curl -XDELETE -u elastic:IIti4qJDEi6X2LX2iNmd "http://192.168.27.157:9200/_snapshot/my_backup/snapshot_1"

Original article: https://www.cnblogs.com/ray-mmss/p/12127383.html
