Authentication using SASL/Kerberos

  1. Prerequisites
    1. Kerberos
      If your organization is already using a Kerberos server (for example, by using Active Directory), there is no need to install a new server just for Kafka. Otherwise you will need to install one; your Linux vendor likely has packages for Kerberos and a short guide on how to install and configure it (Ubuntu, Redhat). Note that if you are using Oracle Java, you will need to download JCE policy files for your Java version and copy them to $JAVA_HOME/jre/lib/security.
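      For reference, a typical installation on the two distributions mentioned above looks like the following (package names are the common ones, but consult your vendor's guide):

        # Debian/Ubuntu: KDC, admin server, and client tools
        sudo apt-get install krb5-kdc krb5-admin-server krb5-user

        # RHEL/CentOS equivalents
        sudo yum install krb5-server krb5-workstation krb5-libs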
    2. Create Kerberos Principals
      If you are using the organization's Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
      If you have installed your own Kerberos, you will need to create these principals yourself using the following commands:

      sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
      sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"
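      You can verify that the keytab contains the expected entries with klist (part of the standard Kerberos client tools):

        sudo klist -kt /etc/security/keytabs/{keytabname}.keytab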

    3. Make sure all hosts are reachable using hostnames - it is a Kerberos requirement that all your hosts can be resolved with their FQDNs.
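      A quick sanity check on each host (the hostname and IP below are placeholders):

        hostname -f                      # should print this host's FQDN
        getent hosts kafka1.hostname.com # forward lookup
        getent hosts 192.0.2.10          # reverse lookup for an example IP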
  2. Configuring Kafka Brokers
      1. Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example (note that each broker should have its own keytab):


        KafkaServer {
            com.sun.security.auth.module.Krb5LoginModule required
            useKeyTab=true
            storeKey=true
            keyTab="/etc/security/keytabs/kafka_server.keytab"
            principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
        };

        // Zookeeper client authentication
        Client {
            com.sun.security.auth.module.Krb5LoginModule required
            useKeyTab=true
            storeKey=true
            keyTab="/etc/security/keytabs/kafka_server.keytab"
            principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
        };

        The KafkaServer section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It allows the broker to login using the keytab specified in this section. See the notes for more details on Zookeeper SASL configuration.

      2. Pass the JAAS and optionally the krb5 file locations as JVM parameters to each Kafka broker (see here for more details):

            -Djava.security.krb5.conf=/etc/kafka/krb5.conf
            -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
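         One common way to pass these flags is the KAFKA_OPTS environment variable, which Kafka's startup scripts forward to the JVM; a sketch using the paths above:

            export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/kafka/krb5.conf \
                -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
            bin/kafka-server-start.sh config/server.properties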
      3. Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting the Kafka broker.
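         For example, assuming the broker runs as an OS user named kafka (an assumption; substitute your own service account), you can restrict the keytab to that user:

            sudo chown kafka:kafka /etc/security/keytabs/kafka_server.keytab
            sudo chmod 400 /etc/security/keytabs/kafka_server.keytab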
      4. Configure SASL port and SASL mechanisms in server.properties as described here. For example:

            listeners=SASL_PLAINTEXT://host.name:port
            security.inter.broker.protocol=SASL_PLAINTEXT
            sasl.mechanism.inter.broker.protocol=GSSAPI
            sasl.enabled.mechanisms=GSSAPI
      5. We must also configure the service name in server.properties, which should match the principal name of the Kafka brokers. In the above example, the principal is "kafka/kafka1.hostname.com@EXAMPLE.COM", so:

            sasl.kerberos.service.name=kafka
  3. Configuring Kafka Clients

    To configure SASL authentication on the clients:

    1. Clients (producers, consumers, connect workers, etc) will authenticate to the cluster with their own principal (usually with the same name as the user running the client), so obtain or create these principals as needed. Then configure the JAAS configuration property for each client. Different clients within a JVM may run as different users by specifying different principals. The property sasl.jaas.config in producer.properties or consumer.properties describes how clients like the producer and consumer can connect to the Kafka broker. The following is an example configuration for a client using a keytab (recommended for long-running processes):

          sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
              useKeyTab=true \
              storeKey=true \
              keyTab="/etc/security/keytabs/kafka_client.keytab" \
              principal="kafka-client-1@EXAMPLE.COM";

      For command-line utilities like kafka-console-consumer or kafka-console-producer, kinit can be used along with "useTicketCache=true" as in:

          sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true;
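      For example, assuming the client properties from step 4 below are saved in client.properties and a topic named test exists (both names are placeholders), a console consumer can be run as:

          kinit kafka-client-1@EXAMPLE.COM
          bin/kafka-console-consumer.sh --bootstrap-server host.name:port \
              --topic test --from-beginning --consumer.config client.properties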

      JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.
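      A sketch of such a JAAS file, say kafka_client_jaas.conf, reusing the client keytab and principal from the example above:

          KafkaClient {
              com.sun.security.auth.module.Krb5LoginModule required
              useKeyTab=true
              storeKey=true
              keyTab="/etc/security/keytabs/kafka_client.keytab"
              principal="kafka-client-1@EXAMPLE.COM";
          };

      It would then be passed to the client JVM with -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf.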

    2. Make sure the keytabs configured in the JAAS configuration are readable by the operating system user who is starting the Kafka client.
    3. Optionally pass the krb5 file locations as JVM parameters to each client JVM (see here for more details):
          -Djava.security.krb5.conf=/etc/kafka/krb5.conf
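      A minimal krb5.conf sketch for the EXAMPLE.COM realm used throughout this section (the KDC hostname is an assumption):

          [libdefaults]
              default_realm = EXAMPLE.COM

          [realms]
              EXAMPLE.COM = {
                  kdc = kdc.example.com
                  admin_server = kdc.example.com
              }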
    4. Configure the following properties in producer.properties or consumer.properties:
          security.protocol=SASL_PLAINTEXT (or SASL_SSL)
          sasl.mechanism=GSSAPI
          sasl.kerberos.service.name=kafka
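      Putting the client side together, a complete client.properties sketch using the ticket-cache JAAS line from above (host name and port are placeholders):

          security.protocol=SASL_PLAINTEXT
          sasl.mechanism=GSSAPI
          sasl.kerberos.service.name=kafka
          sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required useTicketCache=true;

      This file can then be passed to the console tools, e.g. bin/kafka-console-producer.sh --bootstrap-server host.name:port --topic test --producer.config client.properties.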
