Configure Hadoop Security with Cloudera Manager (version less than 5) using Kerberos

By | September 2, 2014
If you are using Cloudera Manager version 5 or later, check out the other blog here

Kerberos is a network authentication protocol created at MIT. It uses symmetric-key cryptography to authenticate users to network services, which means passwords are never actually sent over the network. Rather than authenticating each user to each network service separately, as with simple password authentication, Kerberos uses symmetric encryption and a trusted third party (a key distribution center, or KDC) to authenticate users to a suite of network services. The computers managed by that KDC and any secondary KDCs constitute a realm.

When a user authenticates to the KDC, the KDC sends a set of credentials (a ticket) specific to that session back to the user's machine, and any Kerberos-aware service looks for the ticket on the user's machine rather than requiring the user to authenticate with a password. To enable security in Hadoop, we integrate Kerberos authentication.

If you want to know more about Kerberos, check out this link:

For this example, let's say our cluster has 3 nodes, managed by Cloudera Manager:
host1 –> Kerberos server & client (KDC) (you can also make a remote node the server)
host2 –> Kerberos client
host3 –> Kerberos client
Our realm name –> PUNEETHA.COM

Cloudera Manager version less than 5
1. You have a Hadoop cluster managed by Cloudera Manager. If you don't have one, check this link to create a cluster managed by Cloudera Manager –>
2. Install Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy File (on all nodes)

  1. Download from the locations below according to your Java version:
    For Java 6 >

    For Java 7 >

  2. Uncompress and extract the downloaded file.
    (Note: For more information on JCE policy file installation instructions, see the README.txt file included in the UnlimitedJCEPolicyJDK7.zip file.)
  3. Make a copy of the original JCE policy files (US_export_policy.jar and local_policy.jar).
  4. Replace the strong policy files with the unlimited-strength versions extracted from the zip file
    (i.e. US_export_policy.jar and local_policy.jar).
  5. Place the JCE jurisdiction policy JAR files in the location below (whichever location your Java points to),
    i.e. copy US_export_policy.jar and local_policy.jar to /usr/java/latest/jre/lib/security/
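Steps 3 to 5 boil down to a backup-and-replace of two jars. The sketch below simulates the sequence in temporary directories so it can run anywhere; on a real node, JCE_DIR would be /usr/java/latest/jre/lib/security and UNZIP_DIR the directory extracted from the downloaded zip (both paths here are stand-ins).

```shell
# Simulated backup-and-replace of the JCE policy jars.
# On a real node: JCE_DIR=/usr/java/latest/jre/lib/security
JCE_DIR=$(mktemp -d)
UNZIP_DIR=$(mktemp -d)
# Stand-ins for the default (strong) and unlimited-strength jars:
echo strong > "$JCE_DIR/US_export_policy.jar"
echo strong > "$JCE_DIR/local_policy.jar"
echo unlimited > "$UNZIP_DIR/US_export_policy.jar"
echo unlimited > "$UNZIP_DIR/local_policy.jar"

for jar in US_export_policy.jar local_policy.jar; do
  cp "$JCE_DIR/$jar" "$JCE_DIR/$jar.orig"   # step 3: keep a copy of the original
  cp "$UNZIP_DIR/$jar" "$JCE_DIR/$jar"      # steps 4-5: drop in the unlimited-strength version
done
```

Keeping the `.orig` copies makes it trivial to roll back to the default-strength policy if needed.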

Note: Stop all services before proceeding.
Step 1:
To install packages for a Kerberos server:

# yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation

To install packages for a Kerberos client:

# yum -y install krb5-workstation krb5-libs krb5-auth-dialog

Step 2:
–> Change the realm name to PUNEETHA.COM
–> Add the parameters max_life = 1d and max_renewable_life = 7d

# vim /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 PUNEETHA.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  max_life = 1d
  max_renewable_life = 7d
 }

Step 3:
Add the properties below on all clients:
> udp_preference_limit = 1
> default_tgs_enctypes = arcfour-hmac
> default_tkt_enctypes = arcfour-hmac

# vim /etc/krb5.conf
[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = PUNEETHA.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 udp_preference_limit = 1
 default_tgs_enctypes = arcfour-hmac
 default_tkt_enctypes = arcfour-hmac

[realms]
 PUNEETHA.COM = {
  kdc =
  admin_server =
 }

[domain_realm]
 .puneetha.com = PUNEETHA.COM
 puneetha.com = PUNEETHA.COM

Step 4:
Create the database using the kdb5_util utility. (Server)

# /usr/sbin/kdb5_util create -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'PUNEETHA.COM',
master key name 'K/M@PUNEETHA.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:

Step 5:
On the server, add the cloudera-scm principal; Cloudera Manager will use it later to manage Hadoop principals.

# kadmin.local
kadmin.local:  addprinc cloudera-scm@PUNEETHA.COM
WARNING: no policy specified for cloudera-scm@PUNEETHA.COM; defaulting to no policy
Enter password for principal "cloudera-scm@PUNEETHA.COM":
Re-enter password for principal "cloudera-scm@PUNEETHA.COM":
Principal "cloudera-scm@PUNEETHA.COM" created.

Step 6:
Add */admin and cloudera-scm to the ACL (Access Control List), which grants admin principals and the cloudera-scm principal the privilege to manage principals. The * gives */admin all privileges; the string admilc grants add, delete, modify, inquire, list, and change-password privileges.

# vim /var/kerberos/krb5kdc/kadm5.acl 
*/admin@PUNEETHA.COM *
cloudera-scm@PUNEETHA.COM admilc

Step 7:
Add password policies to the database.

# kadmin.local
kadmin.local:  addpol admin
kadmin.local:  addpol users
kadmin.local:  addpol hosts
kadmin.local:  exit

Step 8:
Generate the cmf.keytab file:

# kadmin.local
kadmin.local:  xst -k cmf.keytab cloudera-scm@PUNEETHA.COM
kadmin.local: exit

Step 9:
Move the keytab file to the cloudera-scm-server location and set appropriate permissions.

# mv cmf.keytab /etc/cloudera-scm-server/
# chown cloudera-scm:cloudera-scm /etc/cloudera-scm-server/cmf.keytab 
# chmod 600 /etc/cloudera-scm-server/cmf.keytab

Step 10:
Create a file called cmf.principal, add the cloudera-scm principal name in that file as shown below, and set appropriate permissions:

#vim /etc/cloudera-scm-server/cmf.principal
cloudera-scm@PUNEETHA.COM

# chown cloudera-scm:cloudera-scm /etc/cloudera-scm-server/cmf.principal 
# chmod 600 /etc/cloudera-scm-server/cmf.principal 

Step 11:
Start Kerberos using the following commands:

# service krb5kdc start
# service kadmin start

Step 12:
In Cloudera Manager:
Administration -> Settings -> Security ->Kerberos Security Realm -> PUNEETHA.COM

Note: Configure security only for those services that are present on your cluster, as below:

Zookeeper Security:

Zookeeper Service -> Configuration -> Service-wide ->  Enable Kerberos Authentication -> Check

HDFS Security:

HDFS Service -> Configuration -> Service-wide -> Security -> Hadoop Secure Authentication -> Click and Select "kerberos"

HDFS Service -> Configuration -> Service-wide -> Security -> Hadoop Secure Authorization  -> Select the checkbox

HDFS Service -> Configuration -> Datanode(Default) -> Security -> DataNode Data Directory Permissions -> 700

For every DataNode Role Config Group:
HDFS Service -> Configuration -> Datanode(Default) -> Ports and Addresses -> Datanode Transceiver Port ->  1004
HDFS Service -> Configuration -> Datanode(Default) -> Ports and Addresses -> Datanode HTTP Web UI Port -> 1006

Hue Security:

Hue Service -> Add -> Instances -> Assign the Kerberos Ticket Renewer role instance to the same host as the Hue server

Hive Security:
Hive Service -> Configuration -> Service-wide -> Advanced -> Hive Service Configuration Safety Valve for hive-site.xml
Add the below 3 property tags there:
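The property tags themselves did not survive in the post. Based on Cloudera's CDH 4 security documentation for the Hive metastore, the three properties are likely the following; treat the keytab file name and path as assumptions and verify against your CDH version's documentation:

```xml
<!-- Likely reconstruction of the three stripped properties (verify before use) -->
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.kerberos.keytab.file</name>
  <!-- assumed path; point this at your actual hive keytab -->
  <value>/etc/hive/conf/hive.keytab</value>
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>hive/_HOST@PUNEETHA.COM</value>
</property>
```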


Solr Security

Solr Service -> Configuration -> Service-wide -> Security -> Solr Secure Authentication -> Kerberos 

Solr Service -> Configuration -> Service-wide -> Security -> Trusted Kerberos Realms -> PUNEETHA.COM

Then go to Actions -> Deploy Client Configuration

Start the whole cluster, or if you want to start only a few services, start each service manually in hierarchy as below:
1) ZooKeeper
2) HDFS
3) MapReduce
4) Hive
5) Rest of the services

You have a Kerberized Cluster now 🙂

Comment below if you find this blog useful.

A few more useful things (FYI):
Let's go one step ahead. Now that we have a Kerberized cluster, users won't be able to access the cluster with a plain 'hadoop fs -ls'; they have to be Kerberos users. Only the hdfs user can add users to the cluster, e.g.: hadoop fs -mkdir /user/puneetha

Generate a keytab for the hdfs principal
If we want to use the keytab from node host2, we generate the hdfs keytab for the host2 principals as below:

kadmin.local: xst -norandkey -k hdfs.keytab hdfs/ HTTP/

kadmin.local: addprinc hdfs@PUNEETHA.COM
kadmin.local: exit

If you have the hdfs keytab file >> $ kinit hdfs -k -t /unix-path/hdfs.keytab
If you are the hdfs user >> $ kinit hdfs

Create a Kerberos user
Ex: I want to create a Kerberos user called 'puneetha'.
Add the user 'puneetha' to all nodes (the user puneetha should be present as a UNIX user on all Hadoop nodes).
On all nodes of the cluster:

# useradd puneetha -u 1000

Set a UNIX password for the user:

# passwd puneetha

Create the HDFS home directory for 'puneetha' using hdfs.keytab:

$ kinit hdfs -k -t /unix-path/hdfs.keytab
$ hadoop fs -mkdir /user/puneetha
$ hadoop fs -chown puneetha:puneetha /user/puneetha

In Kerberos, add a principal for the user 'puneetha':

kadmin.local: addprinc puneetha@PUNEETHA.COM
kadmin.local:  exit

To access the cluster, you need to issue the kinit command and obtain a ticket (either form works):

$ kinit puneetha@PUNEETHA.COM
$ kinit puneetha

and then start accessing the Hadoop cluster.
Ex: $ hadoop fs -ls /user/puneetha

Other commands:
To list all principals:

kadmin.local: getprincs
kadmin.local: exit

To set a password for the principal at creation time:

kadmin.local: addprinc -pw <password> puneetha@PUNEETHA.COM
kadmin.local: exit

To add user from command line:

# kadmin.local -q "addprinc dummyuser"

To enter the Impala shell:

$ impala-shell -k

To refresh metadata when entering the Impala shell:

$ impala-shell -k -r

2 thoughts on “Configure Hadoop Security with Cloudera Manager (version less than 5) using Kerberos”

  1. Sameer

    Hello Puneetha,

    I tried to Kerberize a single-node Hadoop cluster and I get the below error when I do an hdfs dfs -ls /.
    Could you please help?
    Java config name: null
    Native config name: /etc/krb5.conf
    Loaded from native config
    >>>KinitOptions cache name is /tmp/krb5cc_0
    15/07/01 12:54:42 WARN ipc.Client: Exception encountered while connecting to the server : GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    ls: Failed on local exception: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]; Host Details : local host is: “localhost/”; destination host is: “localhost”:9000;

    1. Guna

      I believe you do not have a TGT, so follow this step:
      execute kinit and it will prompt you for a password.
      Once that is done,
      you should be able to execute the hadoop fs -ls command.

      Note: if you are annoyed by the password prompt every time, you can create a keytab file and make the password entry on that particular node

