
Hadoop mapred-queue-acls Configuration (repost)

Source: www.5588net.com | Author: 网学之家 | Date: 2014-05-05

When submitting a Hadoop job, you can specify the target queue, for example: -Dmapred.job.queue.name=queue2
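As a quick sketch of what that looks like on the command line (the example jar name and the input/output paths are placeholders, not from this article):

```shell
# Submit the bundled wordcount example to queue2.
# hadoop-examples.jar and the HDFS paths are illustrative placeholders.
hadoop jar hadoop-examples.jar wordcount \
  -Dmapred.job.queue.name=queue2 \
  /user/hadoop/in /user/hadoop/out
```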

By configuring mapred-queue-acls.xml and mapred-site.xml, you can grant different users submission permissions on different queues.

First edit mapred-site.xml and add four extra queues:

<property>
  <name>mapred.queue.names</name>
  <value>default,queue1,queue2,queue3,queue4</value>
</property>

After the change takes effect, the configured queues are visible in the JobTracker web UI.
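Besides the JobTracker UI, the queue list can also be checked from the command line (a sketch, assuming the same Hadoop 0.20.x MRv1 binary used elsewhere in this article):

```shell
# Print all configured queues along with their scheduling information.
hadoop queue -list
```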

To control access to a queue, you also need to edit mapred-queue-acls.xml:

<property>
  <name>mapred.queue.queue1.acl-submit-job</name>
  <value> </value>
  <description>Comma separated list of user and group names that are allowed
    to submit jobs to the 'default' queue. The user list and the group list
    are separated by a blank. For e.g. user1,user2 group1,group2.
    If set to the special value '*', it means all users are allowed to
    submit jobs. If set to ' '(i.e. space), no user will be allowed to submit
    jobs.
    It is only used if authorization is enabled in Map/Reduce by setting the
    configuration property mapred.acls.enabled to true.
    Irrespective of this ACL configuration, the user who started the cluster and
    cluster administrators configured via
    mapreduce.cluster.administrators can submit jobs.
  </description>
</property>

To configure additional queues, simply repeat this property, changing the queue name and the value. To make testing easy, queue1 forbids all users from submitting jobs (its value is a single space).
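For instance, a sketch of a more permissive queue, allowing only user1, user2 and members of group1 to submit to queue2 (the user and group names are placeholders):

```xml
<property>
  <name>mapred.queue.queue2.acl-submit-job</name>
  <value>user1,user2 group1</value>
</property>
```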

For the ACLs to take effect, you must also set mapred.acls.enabled to true in mapred-site.xml:

<property>
  <name>mapred.acls.enabled</name>
  <value>true</value>
</property>

Restart Hadoop so the configuration takes effect, then test with Hive.
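Since only MapReduce settings changed, restarting just the MapReduce daemons is enough. A sketch, assuming the stock start/stop scripts shipped with Hadoop 0.20.x under $HADOOP_HOME/bin:

```shell
# Restart the JobTracker and TaskTrackers so the new ACLs are loaded.
stop-mapred.sh
start-mapred.sh
```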

First, submit through queue2:

hive> set mapred.job.queue.name=queue2;
hive> select count(*) from t_aa_pc_log;

Total MapReduce jobs = 1 

Launching Job 1 out of 1 

Number of reduce tasks determined at compile time: 1 

In order to change the average load for a reducer (in bytes): 

  set hive.exec.reducers.bytes.per.reducer= 

In order to limit the maximum number of reducers: 

  set hive.exec.reducers.max= 

In order to set a constant number of reducers: 

  set mapred.reduce.tasks= 

Starting Job = job_201205211843_0002, Tracking URL = http://192.168.189.128:50030/jobdetails.jsp?jobid=job_201205211843_0002 

Kill Command = /opt/app/hadoop-0.20.2-cdh3u3/bin/hadoop job  -Dmapred.job.tracker=192.168.189.128:9020 -kill job_201205211843_0002 

2012-05-21 18:45:01,593 Stage-1 map = 0%,  reduce = 0% 

2012-05-21 18:45:04,613 Stage-1 map = 100%,  reduce = 0% 

2012-05-21 18:45:12,695 Stage-1 map = 100%,  reduce = 100% 

Ended Job = job_201205211843_0002 

OK 

136003 

Time taken: 14.674 seconds 

hive>  

The job completed successfully.

Now submit a job to queue1:

hive> set mapred.job.queue.name=queue1;
hive> select count(*) from t_aa_pc_log;

Total MapReduce jobs = 1 

Launching Job 1 out of 1 

Number of reduce tasks determined at compile time: 1 

In order to change the average load for a reducer (in bytes): 

set hive.exec.reducers.bytes.per.reducer= 

In order to limit the maximum number of reducers: 

set hive.exec.reducers.max= 

In order to set a constant number of reducers: 

set mapred.reduce.tasks= 

org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.security.AccessControlException: User p_sdo_data_01 cannot perform operation SUBMIT_JOB on queue queue1. 

Please run "hadoop queue -showacls" command to find the queues you have access to . 

at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:179) 

at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:136) 

at org.apache.hadoop.mapred.ACLsManager.checkAccess(ACLsManager.java:113) 

at org.apache.hadoop.mapred.JobTracker.submitJob(JobTracker.java:3781) 

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 

at java.lang.reflect.Method.invoke(Method.java:597) 

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557) 

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434) 

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430) 

at java.security.AccessController.doPrivileged(Native Method) 

at javax.security.auth.Subject.doAs(Subject.java:396) 

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157) 

    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428) 

Job submission failed, as expected.

Finally, you can use the hadoop queue -showacls command to view which queues the current user may access:

[hadoop@localhost conf]$ hadoop queue -showacls 

Queue acls for user :  hadoop 

Queue  Operations 

===================== 

queue1  administer-jobs 

queue2  submit-job,administer-jobs 

queue3  submit-job,administer-jobs 

queue4  submit-job,administer-jobs 
