Kerberos: How to Kerberize a Hadoop Cluster

Most Hadoop clusters adopt Kerberos as the authentication protocol.

Installing the KDC

  • Enabling Kerberos authentication requires installing the KDC server and the necessary packages. The command to install the KDC can be executed on any machine (whichever host you choose to run the KDC):
yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation
  • Next, install the Kerberos client and command-line tools on the other nodes in the cluster:
yum -y install krb5-libs krb5-auth-dialog krb5-workstation
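
To confirm the client tools are in place on each node (a quick sanity check, not part of the original steps):

# Verify the packages installed and the basic client commands resolve.
rpm -q krb5-libs krb5-workstation
which kinit klist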
  • Edit the realms in the KDC configuration, including the AD (Active Directory) realm.

  The krb5.conf file contains the addresses of the KDCs and admin servers, the defaults for the current realm and for Kerberos applications, and the mapping of hostnames onto Kerberos realms. krb5.conf is normally located at /etc/krb5.conf.

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = HADOOP.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 HADOOP.COM = {
  kdc = node1.hadoop.com
  admin_server = node1.hadoop.com
 }

 AD.COM = {
  kdc = windc.ad.com
  admin_server = windc.ad.com
 }

[domain_realm]
 .hadoop.com = HADOOP.COM
 hadoop.com = HADOOP.COM
 .ad.com = AD.COM
 ad.com = AD.COM

[capaths]
 AD.COM = {
  HADOOP.COM = .
 }

realms: The kdc and admin_server entries under HADOOP.COM are the address of the host where we installed the KDC; the entries under AD.COM are the address of the Domain Controller.

domain_realm: Provides the translation from domain names or hostnames to Kerberos realm names. Both must be lowercase.

capaths: In cross-realm authentication, a database is needed to construct the authentication paths between the different realms. This section defines that database.
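
The "." under AD.COM means clients from AD.COM authenticate to HADOOP.COM directly rather than through an intermediate realm. As a hedged sketch of how that trust is wired up (the exact procedure depends on your Active Directory setup and is not covered in the original text): a cross-realm principal krbtgt/HADOOP.COM@AD.COM must exist on both the Domain Controller and the MIT KDC, with matching passwords and encryption types.

# On the MIT KDC: create the incoming cross-realm trust principal
# (the password must match the trust password configured on the AD side,
# e.g. via ksetup/netdom on the Domain Controller).
kadmin.local -q "addprinc krbtgt/HADOOP.COM@AD.COM"

# From any client: verify an AD.COM user can reach HADOOP.COM through the capath.
# "someuser" is a placeholder principal.
kinit someuser@AD.COM
kvno krbtgt/HADOOP.COM@AD.COM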

  • Edit kdc.conf, which is located at /var/kerberos/krb5kdc/kdc.conf by default. It contains KDC configuration information, including the default values used when issuing Kerberos tickets.
[realms]
 HADOOP.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
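
With krb5.conf and kdc.conf in place, the acl_file referenced above still needs to grant admin rights, and the KDC database must be initialized before the services can start. A minimal sketch of these remaining bootstrap steps, assuming CentOS/RHEL with systemd (they are not shown in the walkthrough above):

# /var/kerberos/krb5kdc/kadm5.acl -- give every */admin principal full rights:
*/admin@HADOOP.COM *

# Initialize the KDC database for the realm and stash the master key:
kdb5_util create -s -r HADOOP.COM

# Create an admin principal matching the ACL above:
kadmin.local -q "addprinc admin/admin@HADOOP.COM"

# Start and enable the KDC and admin services:
systemctl start krb5kdc kadmin
systemctl enable krb5kdc kadmin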