7: HDFS Permissions Guide (Permissions)

1. Permission modes

    Simple: the operating-system user that starts the HDFS daemons is the super-user, and the client-side identity can be specified with the HADOOP_USER_NAME environment variable (see the sketch after this list).

    Kerberos: the user identity is established from the client's Kerberos credentials.
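
In simple mode the NameNode trusts whatever user name the client presents. A minimal Java sketch of acting as a specific user from client code, assuming a reachable cluster (the URI hdfs://namenode:8020 and the user name hdfs are placeholders); this has roughly the same effect as exporting HADOOP_USER_NAME before running the client:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SimpleAuthUserDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // With simple authentication the NameNode accepts the identity the
        // client reports; this overload opens the file system as that user.
        FileSystem fs = FileSystem.get(new URI("hdfs://namenode:8020"), conf, "hdfs");
        // Print the owner of the root directory as seen by this identity.
        System.out.println(fs.getFileStatus(new Path("/")).getOwner());
        fs.close();
    }
}
```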

2. Group mapping

The group list is resolved by a group mapping service, selected by the hadoop.security.group.mapping parameter. The default is org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback, which uses JNI. If JNI is not available, it falls back to org.apache.hadoop.security.ShellBasedUnixGroupsMapping, which determines groups by running the shell command bash -c groups. Group mapping is performed on the NameNode.
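
A small sketch of selecting and querying a group mapping implementation through the Groups helper in hadoop-common (the user name alice is a placeholder). In a real deployment the property must be set on the NameNode, since that is where group resolution takes place:

```java
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Groups;

public class GroupMappingDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Explicitly pick the shell-based resolver instead of the JNI default.
        conf.set("hadoop.security.group.mapping",
                 "org.apache.hadoop.security.ShellBasedUnixGroupsMapping");
        // Groups wraps the configured mapping service and caches its results.
        Groups groups = Groups.getUserToGroupsMappingService(conf);
        List<String> memberships = groups.getGroups("alice"); // placeholder user
        System.out.println(memberships);
    }
}
```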

3. How permissions are checked

Permissions are checked before every operation. The client sends the user identity to the NameNode, which performs the check.
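
When the NameNode denies an operation, the client sees an AccessControlException. A rough sketch, assuming a directory the current user cannot write to (the path is a placeholder):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.AccessControlException;

public class PermissionCheckDemo {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path protectedDir = new Path("/user/otheruser/private"); // placeholder path
        try {
            // The NameNode checks the caller's identity against the inode's
            // owner/group/other bits before the operation proceeds.
            fs.mkdirs(protectedDir);
        } catch (AccessControlException e) {
            // Raised on the client when the NameNode denies the operation.
            System.err.println("Permission denied: " + e.getMessage());
        } finally {
            fs.close();
        }
    }
}
```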

4. File system APIs for changing permissions

  • public FSDataOutputStream create(Path f, FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, Progressable progress) throws IOException;
  • public boolean mkdirs(Path f, FsPermission permission) throws IOException;
  • public void setPermission(Path p, FsPermission permission) throws IOException;
  • public void setOwner(Path p, String username, String groupname) throws IOException;
  • public FileStatus getFileStatus(Path f) throws IOException;

    Source: http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html
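
A minimal sketch that exercises the methods listed above (paths, owner and group names are placeholders; setOwner will only succeed when run as the super-user):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PermissionApiDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // mkdirs with an explicit permission (masked by fs.permissions.umask-mode).
        Path dir = new Path("/tmp/perm-demo");               // placeholder path
        fs.mkdirs(dir, new FsPermission((short) 0755));

        // create with an explicit permission for the new file.
        Path file = new Path(dir, "data.txt");
        FSDataOutputStream out = fs.create(file, new FsPermission((short) 0644),
                true, 4096, fs.getDefaultReplication(file),
                fs.getDefaultBlockSize(file), null);
        out.writeUTF("hello");
        out.close();

        // Tighten the mode and hand the file to another owner/group
        // (setOwner normally requires super-user privileges).
        fs.setPermission(file, new FsPermission((short) 0600));
        fs.setOwner(file, "alice", "analysts");              // placeholder owner/group

        // getFileStatus reports the resulting owner, group and permission.
        FileStatus st = fs.getFileStatus(file);
        System.out.println(st.getOwner() + ":" + st.getGroup() + " " + st.getPermission());
        fs.close();
    }
}
```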

5. Shell commands for changing permissions

  • chmod [-R] mode file …

    Only the owner of a file or the super-user is permitted to change the mode of a file.

  • chgrp [-R] group file …

    The user invoking chgrp must belong to the specified group and be the owner of the file, or be the super-user.
  • chown [-R] [owner][:[group]] file …

    The owner of a file may only be altered by a super-user.
  • ls file …
  • lsr file …

The commands above are invoked as bin/hdfs dfs -<command>; a programmatic equivalent is sketched below.
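
The same commands can be driven from Java via FsShell and ToolRunner, e.g. as a rough equivalent of bin/hdfs dfs -chmod -R 750 /tmp/perm-demo (the path and mode are placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;
import org.apache.hadoop.util.ToolRunner;

public class FsShellChmodDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Equivalent to: bin/hdfs dfs -chmod -R 750 /tmp/perm-demo
        int rc = ToolRunner.run(new FsShell(conf),
                new String[] {"-chmod", "-R", "750", "/tmp/perm-demo"});
        System.exit(rc);
    }
}
```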

6. Configuration parameters

  • dfs.permissions.enabled = true   whether permission checking is enabled

    If yes use the permissions system as described here. If no, permission checking is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories. Regardless of whether permissions are on or off, chmod, chgrp, chown and setfacl always check permissions. These functions are only useful in the permissions context, and so there is no backwards compatibility issue. Furthermore, this allows administrators to reliably set owners and permissions in advance of turning on regular permissions checking.
  • dfs.web.ugi = webuser,webgroup   

    The user name to be used by the web server. Setting this to the name of the super-user allows any web client to see everything. Changing this to an otherwise unused identity allows web clients to see only those things visible using "other" permissions. Additional groups may be added to the comma-separated list.
  • dfs.permissions.superusergroup = supergroup   the super-user group

    The name of the group of super-users.
  • fs.permissions.umask-mode = 0022    

    The umask used when creating files and directories. For configuration files, the decimal value 18 may be used.
  • dfs.cluster.administrators = ACL-for-admins

    The administrators for the cluster specified as an ACL. This controls who can access the default servlets, etc. in the HDFS.
  • dfs.namenode.acls.enabled = true  

    Set to true to enable support for HDFS ACLs (Access Control Lists). By default, ACLs are disabled. When ACLs are disabled, the NameNode rejects all attempts to set an ACL.
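
These properties are normally set in hdfs-site.xml on the NameNode; the short Java sketch below only illustrates the key names and typical values and does not change what the NameNode actually enforces:

```java
import org.apache.hadoop.conf.Configuration;

public class PermissionConfigDemo {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Illustrative values only; on a real cluster these belong in
        // hdfs-site.xml on the NameNode.
        conf.setBoolean("dfs.permissions.enabled", true);
        conf.set("dfs.permissions.superusergroup", "supergroup");
        conf.set("fs.permissions.umask-mode", "022");
        conf.setBoolean("dfs.namenode.acls.enabled", true);

        System.out.println("permissions enabled: "
                + conf.getBoolean("dfs.permissions.enabled", true));
        System.out.println("umask: " + conf.get("fs.permissions.umask-mode"));
    }
}
```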

