
This document explains the procedure to set up a single-node Hadoop cluster on
CentOS. You are expected to know basic UNIX commands and vi editor commands. If
you are not familiar with UNIX and vi, brush up on your UNIX basics before
proceeding.

1. Download and install VMware player


Download the latest version of VMware Player and install it on your laptop/desktop.
You should also install VMware Tools, which makes working with the guest OS much
easier.
1.1 Install VMware Tools
Install VMware Tools so that you can share data between the host system and the
guest system. You also get the full-screen view of the guest only after VMware Tools
is installed.
# Attach the VMware Tools image from the VMware Player menu:
Go to Player -> Manage -> Install VMware Tools
# Make a mount point if needed:
sudo mkdir /media/cdrom
# Mount the CD:
sudo mount /dev/cdrom /media/cdrom
# Copy and extract VMware Tools:
sudo cp /media/cdrom/VMwareTools*.tar.gz ~/Desktop
# You can extract with the archive manager (right-click on the archive and extract), or:
cd ~/Desktop
sudo tar -xvf VMwareTools*.tar.gz
# Install as below. Open a terminal window and run the following commands:
cd ~/Desktop/vmware-tools-distrib
sudo ./vmware-install.pl
#Enable sharing between the host OS and guest OS as follows:
Go to Virtual Machine -> Virtual Machine Settings -> Options -> Shared Folders and
check whether sharing is enabled. If it is, you can add a new share location and
enjoy working on Hadoop.
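Once sharing is enabled, the share normally shows up under /mnt/hgfs on the guest
(assuming a default VMware Tools installation), so you can verify it with:
ls /mnt/hgfs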

2. Download the CentOS DVD iso, version 6.3, which is the stable release as of Feb 2013

http://mirrors.hns.net.in/centos/6.3/isos/x86_64/.
If you are using a 32-bit Windows OS, then you may download 32-bit CentOS from
http://wiki.centos.org/Download
3. Run the VMware player and click on Create New Virtual Machine.
Browse to the iso downloaded in the previous step.
CentOS will be installed on the local system. We will be using the root user only for
installing and running Hadoop.
4. Install Java

Hadoop needs Java installed on your CentOS system, but CentOS does not come
with Oracle Java because of licensing issues. So, please use the commands below to
install Java.
Download the latest Java from
http://www.oracle.com/technetwork/java/javase/downloads/index.html
It gets downloaded to the Downloads folder of the root user; then run the following
commands.
a) rpm -Uvh /root/Downloads/jdk-7u13-linux-x64.rpm
b) alternatives --install /usr/bin/java java
/usr/java/latest/jre/bin/java 20000
c) export JAVA_HOME="/usr/java/latest"
d) Confirm the Java path by running
javac -version
java -version
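If more than one JDK ends up installed, you can check and switch the active one
with the alternatives tool (standard on CentOS):
alternatives --config java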
5. Confirm your machine name
When you create a new CentOS machine, the default host name is
localhost.localdomain.
On CentOS 6 the host name is set in /etc/sysconfig/network; open that file with the
vi editor and change the HOSTNAME line if needed.
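For example, the relevant line looks like the following (hadoop1 is just an
illustrative name); running the hostname command with the new name applies it
without a reboot:
HOSTNAME=hadoop1
hostname hadoop1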
6. Configuring SSH
Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your
local machine if you want to use Hadoop on it (which is what we want to do in this
short tutorial). For our single-node setup of Hadoop, we therefore need to configure
SSH access to localhost for the root user we are working with.
#Install the ssh server and client tools on your computer
yum install openssh-server openssh-clients
ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
9b:82:ea:58:b4:e0:35:d7:ff:19:66:a6:ef:ae:0e:d2 root@localhost
The key's randomart image is:
[...snipp...]

The final step is to test the SSH setup by connecting to your local machine as the
root user. This step is also needed to save your local machine's host key fingerprint
to the root user's known_hosts file. If you have any special SSH configuration for
your local machine, like a non-standard SSH port, you can define host-specific SSH
options in $HOME/.ssh/config (see man ssh_config for more information).

#Now copy the public key to the authorized_keys file, so that ssh does not require a
password every time

root@localhost:~$cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
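sshd is strict about key file permissions; if passwordless login still prompts for a
password, tightening them usually fixes it:
root@localhost:~$ chmod 700 ~/.ssh
root@localhost:~$ chmod 600 ~/.ssh/authorized_keys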


If sshd is not running, start it by giving the below command
root@localhost:~$ service sshd start
root@localhost:~$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is d7:87:25:47:ae:02:00:eb:1d:75:4f:bb:44:f9:36:26.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
[...your CentOS login banner...]
[...snipp...]
root@localhost:~$
You should get connected to localhost; please exit from ssh by typing exit at the
command prompt.
The next sections describe how to set up a single-node Hadoop cluster.
7. Download Hadoop
For this tutorial, I am using Hadoop version hadoop-1.0.4, but it should work with
any other 1.x version.
Go to the URL http://archive.apache.org/dist/hadoop/core/, click on hadoop-1.0.4/,
and then select hadoop-1.0.4.tar.gz. The file will be saved to /root/Downloads if you
choose the defaults. Now perform the following steps to install Hadoop on your
CentOS.
Copy the downloaded file from the Downloads folder to the /usr/local folder:
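# assuming the archive was saved to the default /root/Downloads location:
$ cp /root/Downloads/hadoop-1.0.4.tar.gz /usr/local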
$ cd /usr/local
$ gunzip hadoop-1.0.4.tar.gz
$ tar -xvf hadoop-1.0.4.tar
$ ln -s hadoop-1.0.4 hadoop
8. Add Java location to Hadoop so that it can recognize Java

Add the following to /usr/local/hadoop/conf/hadoop-env.sh


export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true

export HADOOP_HOME_WARN_SUPPRESS="TRUE"
export JAVA_HOME=/usr/java/default
9. Update $HOME/.bashrc
Add the following lines to the end of the root user's $HOME/.bashrc file. If you use a
shell other than bash, you should of course update its appropriate configuration files
instead of .bashrc.
# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop
# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/java/default
# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"
# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
hadoop fs -cat $1 | lzop -dc | head -1000 | less
}
# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin:$JAVA_HOME/bin

Close this terminal and open a new one.
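To confirm that the new environment is picked up in the fresh terminal, you can run
the standard version command:
$ hadoop version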


10. Create a temporary directory which will be used as the base location for DFS.
Now we create the directory and set the required ownerships and permissions:
$ mkdir -p /app/hadoop/tmp
(If you forget to set the required ownerships and permissions, you will see a
java.io.IOException when you try to format the namenode in a later step.)
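Since everything in this tutorial runs as root, the ownership is already correct. If you
ever run Hadoop as a dedicated user instead, something like the following would be
needed (hduser is a hypothetical user name):
$ chown hduser:hduser /app/hadoop/tmp
$ chmod 750 /app/hadoop/tmp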
11. Update core-site.xml file

Add the following snippets between the <configuration> ... </configuration> tags in
/usr/local/hadoop/conf/core-site.xml:
<!-- In: conf/core-site.xml -->
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
12. Update mapred-site.xml file

Add the following to /usr/local/hadoop/conf/mapred-site.xml between


<configuration> ... </configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
13. Update hdfs-site.xml file
Add the following to /usr/local/hadoop/conf/hdfs-site.xml between <configuration>
... </configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>

14. Format your namenode

Format the namenode with
root@localhost:~$ hadoop namenode -format
15. Starting your single-node cluster
Congratulations, your Hadoop single node cluster is ready to use. Test your cluster
by running the following commands.
root@localhost:~$ start-dfs.sh
root@localhost:~$ start-mapred.sh
Check if the Hadoop services are running by opening a browser and hitting the
below URLs:
http://localhost:50030 - for MapReduce (JobTracker)
http://localhost:50070 - for HDFS (NameNode)
A command-line utility to check whether all the Hadoop daemons are running is
jps.
Run jps at the command prompt and you should see something like this.

root@localhost:~$ jps
9168 Jps
9127 TaskTracker
8824 DataNode
8714 NameNode
8935 SecondaryNameNode
9017 JobTracker
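
When you are done, you can stop the daemons with the companion stop scripts that
ship with Hadoop 1.x:
root@localhost:~$ stop-mapred.sh
root@localhost:~$ stop-dfs.sh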
