Elasticsearch Installation and Deployment

Environment Setup

1. Server preparation: CentOS 7.4 (check with cat /etc/redhat-release), JDK 1.8, Elasticsearch 6.x.

If the system ships with a bundled OpenJDK, uninstall it first:

rpm -qa|grep java

rpm -e --nodeps <package-name>
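The two commands above can be combined into a single loop that removes every matching OpenJDK package; a sketch, assuming you run it as root (the package names are whatever `rpm -qa` actually reports on your machine):

```shell
# Remove every installed OpenJDK package in one pass.
# Note --nodeps takes a double dash; it skips dependency checks.
for pkg in $(rpm -qa | grep -i openjdk); do
    rpm -e --nodeps "$pkg"
done
# Verify nothing is left:
rpm -qa | grep java || echo "no java packages remain"
```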

2. Install Elasticsearch (version 6.6.0 is used here; downloads: https://elasticsearch.cn/download/)

(1) Download: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.6.0.tar.gz

(2) Extract: tar -zxvf elasticsearch-6.6.0.tar.gz

(3) Adjust system settings:

  • Set kernel parameters: vim /etc/sysctl.conf

     Add the following:
                fs.file-max=65536
                vm.max_map_count=262144

     Run sysctl -p to reload the configuration, then sysctl -a to confirm it took effect.
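The bullet above can be done in one shot; a sketch, assuming you run it as root:

```shell
# Append the required kernel parameters to /etc/sysctl.conf.
cat >> /etc/sysctl.conf <<'EOF'
fs.file-max=65536
vm.max_map_count=262144
EOF

sysctl -p                    # reload the file
sysctl -n vm.max_map_count   # prints 262144 once applied
```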

If that does not help (Elasticsearch still fails to start; not everyone hits this, it seems to occur on 7.6), one workaround is to replace modprobe and sysctl with /bin/true:

    rm -f /sbin/modprobe
    ln -s /bin/true /sbin/modprobe
    rm -f /sbin/sysctl
    ln -s /bin/true /sbin/sysctl

  • Set resource limits

                 vi /etc/security/limits.conf
                 # Add the following:
                 * soft nofile 65536
                 * hard nofile 131072
                 * soft nproc  2048
                 * hard nproc  4096
  • Raise the process limit

                 vi /etc/security/limits.d/20-nproc.conf
                    *          soft    nproc     4096
    After these changes, close the terminal session and open a new one so the limits take effect.
    Elasticsearch must not be started as root.
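Since root cannot start Elasticsearch, create a dedicated user before continuing; a sketch, where the user name `es`, the install path, and the data/log paths are assumptions to adjust to your layout:

```shell
# Create a dedicated, unprivileged user for Elasticsearch.
useradd es
# Create the data/log directories used in the config below (paths are assumptions).
mkdir -p /home/elasticsearch/data /home/elasticsearch/logs
# Hand both the install directory and the data/log paths to the new user.
chown -R es:es /usr/local/elasticsearch-6.6.0 /home/elasticsearch
# Switch to the new user before starting Elasticsearch.
su - es
```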
  • Edit the Elasticsearch config file: vim <elasticsearch install dir>/config/elasticsearch.yml
    # Cluster name: Elasticsearch auto-discovers other nodes on the same network segment,
    # so if several clusters share a segment, this property tells them apart
    cluster.name: bi-cluster
    # Node name
    node.name: node-master
    # Data directory
    path.data: /home/elasticsearch/data
    # Log directory
    path.logs: /home/elasticsearch/logs
    # Bind address, also the address other nodes use to reach this node
    network.host: 0.0.0.0
    # HTTP port, used by plugins such as head and kopf
    http.port: 9200
    # TCP port for inter-node transport, default 9300
    transport.tcp.port: 9300
    # Initial list of master-eligible nodes, used to discover new nodes joining the cluster
    discovery.zen.ping.unicast.hosts: ["10.108.4.203:9300", "10.108.4.204:9300", "10.108.4.205:9300"]
    # Without this setting, a network failure can split the cluster into two independent
    # clusters ("split brain"), which leads to data loss; with 3 master-eligible nodes
    # the majority is 3/2 + 1 = 2
    discovery.zen.minimum_master_nodes: 2
    http.cors.enabled: true        # required on ES 5.x and later for head access
    http.cors.allow-origin: "*"
    bootstrap.memory_lock: false   # needed on some systems: CentOS 6.x does not support SecComp, and elasticsearch 5.5.2 defaults bootstrap.system_call_filter to true, so the check fails and ES refuses to start
    My full configuration file:
    # ======================== Elasticsearch Configuration =========================
    #
    # NOTE: Elasticsearch comes with reasonable defaults for most settings.
    #       Before you set out to tweak and tune the configuration, make sure you
    #       understand what are you trying to accomplish and the consequences.
    #
    # The primary way of configuring a node is via this file. This template lists
    # the most important settings you may want to configure for a production cluster.
    #
    # Please consult the documentation for further information on configuration options:
    # https://www.elastic.co/guide/en/elasticsearch/reference/index.html
    #
    # ---------------------------------- Cluster -----------------------------------
    #
    # Use a descriptive name for your cluster:
    #
    #cluster.name: my-application
    #
    # ------------------------------------ Node ------------------------------------
    #
    # Use a descriptive name for the node:
    #
    #node.name: node-1
    #
    # Add custom attributes to the node:
    #
    #node.attr.rack: r1
    #
    # ----------------------------------- Paths ------------------------------------
    #
    # Path to directory where to store the data (separate multiple locations by comma):
    #
    #path.data: /path/to/data
    #
    # Path to log files:
    #
    #path.logs: /path/to/logs
    #
    # ----------------------------------- Memory -----------------------------------
    #
    # Lock the memory on startup:
    #
    #bootstrap.memory_lock: true
    #
    # Make sure that the heap size is set to about half the memory available
    # on the system and that the owner of the process is allowed to use this
    # limit.
    #
    # Elasticsearch performs poorly when the system is swapping the memory.
    #
    # ---------------------------------- Network -----------------------------------
    #
    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    #network.host: 192.168.0.1
    #
    # Set a custom port for HTTP:
    #
    #http.port: 9200
    #
    # For more information, consult the network module documentation.
    #
    # --------------------------------- Discovery ----------------------------------
    #
    # Pass an initial list of hosts to perform discovery when new node is started:
    # The default list of hosts is ["127.0.0.1", "[::1]"]
    #
    #discovery.zen.ping.unicast.hosts: ["host1", "host2"]
    #
    # Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
    #
    #discovery.zen.minimum_master_nodes:
    #
    # For more information, consult the zen discovery module documentation.
    #
    # ---------------------------------- Gateway -----------------------------------
    #
    # Block initial recovery after a full cluster restart until N nodes are started:
    #
    #gateway.recover_after_nodes: 3
    #
    # For more information, consult the gateway module documentation.
    #
    # ---------------------------------- Various -----------------------------------
    #
    # Require explicit names when deleting indices:
    #
    #action.destructive_requires_name: true
    cluster.name: my-es
    node.name: node-128
    network.host: 0.0.0.0
    http.port: 19200
    transport.tcp.port: 19300
    #discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300"]
    #discovery.zen.minimum_master_nodes: 3
    http.cors.enabled: true
    http.cors.allow-origin: "*"
    bootstrap.memory_lock: false
    bootstrap.system_call_filter: false
    

      

  • Start Elasticsearch: ./bin/elasticsearch (add the -d flag to run it in the background)
  • Open http://<your-ip>:19200/ in a browser; if the node responds, startup succeeded.
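You can also verify from the shell instead of a browser; a sketch, assuming the HTTP port 19200 from the sample configuration above:

```shell
# The root endpoint returns a JSON banner with the cluster name and version.
curl -s http://localhost:19200/
# One-line cluster health; "green" or "yellow" means the node is serving.
curl -s http://localhost:19200/_cat/health?v
```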

If access from an external network fails, work through the following steps:

  1. Check that the following three settings in elasticsearch.yml are correct:

    network.host: 0.0.0.0
    bootstrap.memory_lock: false
    bootstrap.system_call_filter: false
  2. Check whether the firewall is off (it must be off, or the ports opened, for external access)

    Start, restart, or stop the firewalld.service:

    # start
    service firewalld start
    # restart
    service firewalld restart
    # stop
    service firewalld stop
    View the current firewall rules:

    firewall-cmd --list-all
    Query, open, and remove ports:

    # check whether a port is open
    firewall-cmd --query-port=8080/tcp
    # open port 80
    firewall-cmd --permanent --add-port=80/tcp
    # remove a port
    firewall-cmd --permanent --remove-port=8080/tcp
    # reload the firewall (required after changing its configuration)
    firewall-cmd --reload
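Instead of disabling the firewall entirely, you can open only the ports this walkthrough uses; a sketch (19200 and 15601 come from the sample configurations in this guide, adjust as needed):

```shell
# Open the Elasticsearch HTTP port and the Kibana port permanently.
firewall-cmd --permanent --add-port=19200/tcp
firewall-cmd --permanent --add-port=15601/tcp
# Reload so the permanent rules become active.
firewall-cmd --reload
# Confirm; prints "yes" when the port is open.
firewall-cmd --query-port=19200/tcp
```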

    After these steps, external access should work. Below are the startup errors I ran into and how to fix them:

  3. (1) max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

      The per-process open-file limit is too small. Check the current values with:

    ulimit -Hn
    ulimit -Sn

      Edit /etc/security/limits.conf and add the lines below; the change takes effect after the user logs out and back in:

    *               soft    nofile          65536
    *               hard    nofile          65536
    (2) max number of threads [3818] for user [es] is too low, increase to at least [4096]

      Similar to the previous problem: the maximum thread count is too low. Edit /etc/security/limits.conf (the same file as above) and add:

    *               soft    nproc           4096
    *               hard    nproc           4096

      Check the values with:

    ulimit -Hu
    ulimit -Su

    If `ulimit -Su` still shows the old value after the change, also add the following line to the file (admin is my own user name; substitute your actual user):

    admin - nproc 4096
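After logging out and back in as the Elasticsearch user, all four limits can be checked together; a sketch:

```shell
# Soft/hard open-file limits; both should report 65536 after the change.
ulimit -Sn
ulimit -Hn
# Soft/hard process (thread) limits; expect at least 4096.
ulimit -Su
ulimit -Hu
```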

    (3) max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

      Edit /etc/sysctl.conf and add vm.max_map_count=262144:

    vi /etc/sysctl.conf

      Then run sysctl -p for the change to take effect.

    (4) Exception in thread "main" java.nio.file.AccessDeniedException: /usr/local/elasticsearch/elasticsearch-6.2.2-1/config/jvm.options

      The elasticsearch user lacks permission on that directory; fix it with:

    chown -R es:es /usr/local/elasticsearch/

Installing Kibana:

1. Extract the downloaded Kibana package: tar -zxvf kibana-6.6.0-linux-x86_64.tar.gz

2. Edit the Kibana configuration (config/kibana.yml):

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 15601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.hosts: ["http://localhost:19200"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
#i18n.locale: "en"

3. Start Kibana: ./bin/kibana

4. Access it from the external network by IP address:

Note three points:
(1) Point elasticsearch.hosts at the address of the Elasticsearch node you started
(2) Change the server.host binding, otherwise Kibana is reachable only locally
(3) The Kibana version must match the Elasticsearch version, e.g. both 6.6
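Kibana also exposes a status API that can confirm it is up without a browser; a sketch, assuming the port 15601 set in the configuration above:

```shell
# The status JSON reports state "green" when Kibana and its
# Elasticsearch connection are healthy.
curl -s http://localhost:15601/api/status
```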

Original article: https://www.cnblogs.com/yatou-blog/p/12113909.html

