Kafka Single-Node and Cluster Installation and Configuration

I. Single Node

  1. Upload the Kafka installation package to the Linux system (CentOS 7 in this example).

  2. Extract the package, then edit config/server.properties.
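
    For example (a sketch assuming the archive is kafka_2.11-2.1.1.tgz and /usr/local/soft is the install root, which matches the paths in the startup log below):

      # Unpack Kafka into the install directory
      tar -zxvf kafka_2.11-2.1.1.tgz -C /usr/local/soft/
      # The broker configuration lives in the config directory
      cd /usr/local/soft/kafka_2.11-2.1.1/config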

    2.1 Set broker.id

      
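      A unique integer ID for this broker; the sample value below matches the broker.id shown in the startup log:

        broker.id=0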

    2.2 Set log.dirs

      
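      The data directory for Kafka's message logs; the sample value below matches log.dirs in the startup log:

        log.dirs=/usr/local/soft/kafka_2.11-2.1.1/logs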

    2.3 Set zookeeper.connect

      
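      The host:port of the ZooKeeper node (a comma-separated list for a multi-node ensemble); the sample value below matches zookeeper.connect in the startup log:

        zookeeper.connect=master:2181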

  3. Start the ZooKeeper cluster

    

    

    
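    A typical start sequence (a sketch assuming ZooKeeper 3.4.x, as shown in the startup log, with zkServer.sh available on each node; run it on every node in turn):

      # Start the ZooKeeper daemon on this node
      ./zkServer.sh start
      # After all nodes are up, check whether this node is leader or follower
      ./zkServer.sh status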

    Note: when starting a ZooKeeper cluster, nodes started first may report "not running" simply because too few nodes are up yet to form a quorum. This is normal and the message disappears once all nodes have been started.

  4. Start the Kafka service

    Run: ./kafka-server-start.sh ../config/server.properties &

    Startup log:

[root@master bin]# ./kafka-server-start.sh ../config/server.properties &
[1] 2190
[root@master bin]# [2019-04-07 10:47:29,730] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2019-04-07 10:47:32,457] INFO starting (kafka.server.KafkaServer)
[2019-04-07 10:47:32,464] INFO Connecting to zookeeper on master:2181 (kafka.server.KafkaServer)
[2019-04-07 10:47:32,571] INFO [ZooKeeperClient] Initializing a new session to master:2181. (kafka.zookeeper.ZooKeeperClient)
[2019-04-07 10:47:32,603] INFO Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 00:39 GMT (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,603] INFO Client environment:host.name=master (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,603] INFO Client environment:java.version=1.8.0_172 (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,604] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,604] INFO Client environment:java.home=/usr/local/soft/jdk1.8.0_172/jre (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,604] INFO Client environment:java.class.path=.:/usr/local/soft/jdk1.8.0_172/jre/lib/rt.jar:/usr/local/soft/jdk1.8.0_172/lib/dt.jar:/usr/local/soft/jdk1.8.0_172/lib/tools.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/activation-1.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/aopalliance-repackaged-2.5.0-b42.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/argparse4j-0.7.0.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/audience-annotations-0.5.0.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/commons-lang3-3.8.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/connect-api-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/connect-basic-auth-extension-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/connect-file-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/connect-json-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/connect-runtime-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/connect-transforms-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/guava-20.0.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/hk2-api-2.5.0-b42.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/hk2-locator-2.5.0-b42.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/hk2-utils-2.5.0-b42.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jackson-annotations-2.9.8.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jackson-core-2.9.8.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jackson-databind-2.9.8.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jackson-jaxrs-base-2.9.8.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jackson-jaxrs-json-provider-2.9.8.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jackson-module-jaxb-annotations-2.9.8.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/javassist-3.22.0-CR2.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/javax.annotation-api-1.2.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/javax.inject-1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/javax.inject-2.5.0-b42.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/javax.ws.rs-api-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/javax.ws.rs-api-2.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jaxb-api-2.3.0.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jersey-client-2.27.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jersey-common-2.27.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jersey-container-servlet-2.27.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jersey-container-servlet-core-2.27.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jersey-hk2-2.27.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jersey-media-jaxb-2.27.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jersey-server-2.27.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jetty-client-9.4.12.v20180830.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jetty-continuation-9.4.12.v20180830.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jetty-http-9.4.12.v20180830.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jetty-io-9.4.12.v20180830.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jetty-security-9.4.12.v20180830.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jetty-server-9.4.12.v20180830.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jetty-servlet-9.4.12.v20180830.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jetty-servlets-9.4.12.v20180830.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jetty-util-9.4.12.v20180830.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/jopt-simple-5.0.4.jar:/
usr/local/soft/kafka_2.11-2.1.1/bin/../libs/kafka_2.11-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/kafka_2.11-2.1.1-sources.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/kafka-clients-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/kafka-log4j-appender-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/kafka-streams-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/kafka-streams-examples-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/kafka-streams-scala_2.11-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/kafka-streams-test-utils-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/kafka-tools-2.1.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/log4j-1.2.17.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/lz4-java-1.5.0.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/maven-artifact-3.6.0.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/metrics-core-2.2.0.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/plexus-utils-3.1.0.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/reflections-0.9.11.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/rocksdbjni-5.14.2.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/scala-library-2.11.12.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/scala-logging_2.11-3.9.0.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/scala-reflect-2.11.12.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/slf4j-api-1.7.25.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/slf4j-log4j12-1.7.25.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/snappy-java-1.1.7.2.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/validation-api-1.1.0.Final.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/zkclient-0.11.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/zookeeper-3.4.13.jar:/usr/local/soft/kafka_2.11-2.1.1/bin/../libs/zstd-jni-1.3.7-1.jar (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,605] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,606] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,606] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,606] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,606] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,607] INFO Client environment:os.version=3.10.0-693.el7.x86_64 (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,607] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,607] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,607] INFO Client environment:user.dir=/usr/local/soft/kafka_2.11-2.1.1/bin (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,630] INFO Initiating client connection, connectString=master:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@... (org.apache.zookeeper.ZooKeeper)
[2019-04-07 10:47:32,712] INFO Opening socket connection to server master/192.168.245.136:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2019-04-07 10:47:32,720] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-04-07 10:47:32,775] INFO Socket connection established to master/192.168.245.136:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-04-07 10:47:32,885] INFO Session establishment complete on server master/192.168.245.136:2181, sessionid = 0x1000023437d0000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-04-07 10:47:32,911] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
[2019-04-07 10:47:35,633] INFO Cluster ID = 1AkrnNRhRiW9PWHA77R9lA (kafka.server.KafkaServer)
[2019-04-07 10:47:35,648] WARN No meta.properties file under dir /usr/local/soft/kafka_2.11-2.1.1/logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-04-07 10:47:36,138] INFO KafkaConfig values:
    advertised.host.name = null
    advertised.listeners = null
    advertised.port = null
    alter.config.policy.class.name = null
    alter.log.dirs.replication.quota.window.num = 11
    alter.log.dirs.replication.quota.window.size.seconds = 1
    authorizer.class.name =
    auto.create.topics.enable = true
    auto.leader.rebalance.enable = true
    background.threads = 10
    broker.id = 0
    broker.id.generation.enable = true
    broker.rack = null
    client.quota.callback.class = null
    compression.type = producer
    connection.failed.authentication.delay.ms = 100
    connections.max.idle.ms = 600000
    controlled.shutdown.enable = true
    controlled.shutdown.max.retries = 3
    controlled.shutdown.retry.backoff.ms = 5000
    controller.socket.timeout.ms = 30000
    create.topic.policy.class.name = null
    default.replication.factor = 1
    delegation.token.expiry.check.interval.ms = 3600000
    delegation.token.expiry.time.ms = 86400000
    delegation.token.master.key = null
    delegation.token.max.lifetime.ms = 604800000
    delete.records.purgatory.purge.interval.requests = 1
    delete.topic.enable = true
    fetch.purgatory.purge.interval.requests = 1000
    group.initial.rebalance.delay.ms = 0
    group.max.session.timeout.ms = 300000
    group.min.session.timeout.ms = 6000
    host.name =
    inter.broker.listener.name = null
    inter.broker.protocol.version = 2.1-IV2
    kafka.metrics.polling.interval.secs = 10
    kafka.metrics.reporters = []
    leader.imbalance.check.interval.seconds = 300
    leader.imbalance.per.broker.percentage = 10
    listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    listeners = null
    log.cleaner.backoff.ms = 15000
    log.cleaner.dedupe.buffer.size = 134217728
    log.cleaner.delete.retention.ms = 86400000
    log.cleaner.enable = true
    log.cleaner.io.buffer.load.factor = 0.9
    log.cleaner.io.buffer.size = 524288
    log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
    log.cleaner.min.cleanable.ratio = 0.5
    log.cleaner.min.compaction.lag.ms = 0
    log.cleaner.threads = 1
    log.cleanup.policy = [delete]
    log.dir = /tmp/kafka-logs
    log.dirs = /usr/local/soft/kafka_2.11-2.1.1/logs
    log.flush.interval.messages = 9223372036854775807
    log.flush.interval.ms = null
    log.flush.offset.checkpoint.interval.ms = 60000
    log.flush.scheduler.interval.ms = 9223372036854775807
    log.flush.start.offset.checkpoint.interval.ms = 60000
    log.index.interval.bytes = 4096
    log.index.size.max.bytes = 10485760
    log.message.downconversion.enable = true
    log.message.format.version = 2.1-IV2
    log.message.timestamp.difference.max.ms = 9223372036854775807
    log.message.timestamp.type = CreateTime
    log.preallocate = false
    log.retention.bytes = -1
    log.retention.check.interval.ms = 300000
    log.retention.hours = 168
    log.retention.minutes = null
    log.retention.ms = null
    log.roll.hours = 168
    log.roll.jitter.hours = 0
    log.roll.jitter.ms = null
    log.roll.ms = null
    log.segment.bytes = 1073741824
    log.segment.delete.delay.ms = 60000
    max.connections.per.ip = 2147483647
    max.connections.per.ip.overrides =
    max.incremental.fetch.session.cache.slots = 1000
    message.max.bytes = 1000012
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    min.insync.replicas = 1
    num.io.threads = 8
    num.network.threads = 3
    num.partitions = 1
    num.recovery.threads.per.data.dir = 1
    num.replica.alter.log.dirs.threads = null
    num.replica.fetchers = 1
    offset.metadata.max.bytes = 4096
    offsets.commit.required.acks = -1
    offsets.commit.timeout.ms = 5000
    offsets.load.buffer.size = 5242880
    offsets.retention.check.interval.ms = 600000
    offsets.retention.minutes = 10080
    offsets.topic.compression.codec = 0
    offsets.topic.num.partitions = 50
    offsets.topic.replication.factor = 1
    offsets.topic.segment.bytes = 104857600
    password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
    password.encoder.iterations = 4096
    password.encoder.key.length = 128
    password.encoder.keyfactory.algorithm = null
    password.encoder.old.secret = null
    password.encoder.secret = null
    port = 9092
    principal.builder.class = null
    producer.purgatory.purge.interval.requests = 1000
    queued.max.request.bytes = -1
    queued.max.requests = 500
    quota.consumer.default = 9223372036854775807
    quota.producer.default = 9223372036854775807
    quota.window.num = 11
    quota.window.size.seconds = 1
    replica.fetch.backoff.ms = 1000
    replica.fetch.max.bytes = 1048576
    replica.fetch.min.bytes = 1
    replica.fetch.response.max.bytes = 10485760
    replica.fetch.wait.max.ms = 500
    replica.high.watermark.checkpoint.interval.ms = 5000
    replica.lag.time.max.ms = 10000
    replica.socket.receive.buffer.bytes = 65536
    replica.socket.timeout.ms = 30000
    replication.quota.window.num = 11
    replication.quota.window.size.seconds = 1
    request.timeout.ms = 30000
    reserved.broker.max.id = 1000
    sasl.client.callback.handler.class = null
    sasl.enabled.mechanisms = [GSSAPI]
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.principal.to.local.rules = [DEFAULT]
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism.inter.broker.protocol = GSSAPI
    sasl.server.callback.handler.class = null
    security.inter.broker.protocol = PLAINTEXT
    socket.receive.buffer.bytes = 102400
    socket.request.max.bytes = 104857600
    socket.send.buffer.bytes = 102400
    ssl.cipher.suites = []
    ssl.client.auth = none
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = https
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
    transaction.max.timeout.ms = 900000
    transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
    transaction.state.log.load.buffer.size = 5242880
    transaction.state.log.min.isr = 1
    transaction.state.log.num.partitions = 50
    transaction.state.log.replication.factor = 1
    transaction.state.log.segment.bytes = 104857600
    transactional.id.expiration.ms = 604800000
    unclean.leader.election.enable = false
    zookeeper.connect = master:2181
    zookeeper.connection.timeout.ms = 6000
    zookeeper.max.in.flight.requests = 10
    zookeeper.session.timeout.ms = 6000
    zookeeper.set.acl = false
    zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2019-04-07 10:47:36,717] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-04-07 10:47:36,717] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-04-07 10:47:36,748] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2019-04-07 10:47:36,980] INFO Loading logs. (kafka.log.LogManager)
[2019-04-07 10:47:37,027] INFO Logs loading complete in 46 ms. (kafka.log.LogManager)
[2019-04-07 10:47:37,097] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2019-04-07 10:47:37,115] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2019-04-07 10:47:43,023] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2019-04-07 10:47:43,297] INFO [SocketServer brokerId=0] Started 1 acceptor threads (kafka.network.SocketServer)
[2019-04-07 10:47:43,468] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-07 10:47:43,469] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-07 10:47:43,491] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-07 10:47:43,617] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2019-04-07 10:47:43,763] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2019-04-07 10:47:43,793] INFO Result of znode creation at /brokers/ids/0 is: OK (kafka.zk.KafkaZkClient)
[2019-04-07 10:47:43,799] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(master,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
[2019-04-07 10:47:43,821] WARN No meta.properties file under dir /usr/local/soft/kafka_2.11-2.1.1/logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2019-04-07 10:47:44,255] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-07 10:47:44,255] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-07 10:47:44,275] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-04-07 10:47:44,374] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2019-04-07 10:47:44,424] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2019-04-07 10:47:44,470] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2019-04-07 10:47:44,636] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 184 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-04-07 10:47:44,805] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2019-04-07 10:47:45,012] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-04-07 10:47:45,048] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-04-07 10:47:45,096] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2019-04-07 10:47:45,480] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2019-04-07 10:47:45,536] INFO [SocketServer brokerId=0] Started processors for 1 acceptors (kafka.network.SocketServer)
[2019-04-07 10:47:45,541] INFO Kafka version : 2.1.1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-04-07 10:47:45,542] INFO Kafka commitId : 21234bee31165527 (org.apache.kafka.common.utils.AppInfoParser)
[2019-04-07 10:47:45,572] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

    Port check:

    
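    For example (standard CentOS 7 tools; run on the Kafka host):

      # Kafka broker listening on 9092
      netstat -anp | grep 9092
      # ZooKeeper listening on 2181
      netstat -anp | grep 2181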

    Here 9092 is Kafka's listening port, and 2181 is the port used by ZooKeeper.

  5. Single-node connectivity test

    5.1 Start a producer

      
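      For example, from the bin directory (the topic name test is illustrative; with auto.create.topics.enable = true, as in the config dump above, the topic is created automatically on first use):

        ./kafka-console-producer.sh --broker-list master:9092 --topic test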

    5.2 Start a consumer

      
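      A matching console-consumer invocation (same illustrative topic; --from-beginning also replays messages sent before the consumer started):

        ./kafka-console-consumer.sh --bootstrap-server master:9092 --topic test --from-beginning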

    5.3 Test

      The producer sends data:

      
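      For example, type a line at the producer prompt (a hypothetical sample message):

        >hello kafka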

      The consumer receives the data:

      
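      The consumer terminal should then print the same line (hypothetical output, matching the sample message above):

        hello kafka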

II. Kafka Cluster Setup

  To be continued...

Source: https://www.cnblogs.com/yszd/p/10664405.html
