You are expected to know basic UNIX commands and the vi editor. If you are not
familiar with UNIX and vi, you should brush up on your UNIX basics before
proceeding.
9b:82:ea:58:b4:e0:35:d7:ff:19:66:a6:ef:ae:0e:d2 anil@ubuntu
The key's randomart image is:
[...snipp...]
The final step is to test the SSH setup by connecting to your local machine as the
anil user. This step is also needed to save your local machine's host key fingerprint to
the anil user's known_hosts file. If you have any special SSH configuration for your
local machine, such as a non-standard SSH port, you can define host-specific SSH options
in $HOME/.ssh/config (see man ssh_config for more information).
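For example, a host-specific entry in $HOME/.ssh/config for a machine listening on a non-standard port could look like the following (the port number 2222 here is only an illustration, not something this setup requires):

```
Host localhost
    Port 2222
    User anil
```

With this in place, plain `ssh localhost` picks up the port and user automatically.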
# Now append the public key to the authorized_keys file, so that SSH does not
# prompt for a password every time
anil@ubuntu:~$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
anil@ubuntu:~$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is d7:87:25:47:ae:02:00:eb:1d:75:4f:bb:44:f9:36:26.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Linux ubuntu 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28 13:27:30 UTC 2010
i686 GNU/Linux
Ubuntu 10.04 LTS
[...snipp...]
anil@ubuntu:~$
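If SSH still asks for a password at this point, overly permissive file modes are the usual culprit: sshd ignores authorized_keys unless ~/.ssh is mode 700 and the key files are mode 600. The fix can be sketched as below (demonstrated on a scratch directory rather than your real ~/.ssh, so it is safe to try):

```shell
# sshd requires strict permissions on the .ssh directory and authorized_keys.
# Shown here on a temporary directory standing in for ~/.ssh:
demo="$(mktemp -d)/.ssh"
mkdir -p "$demo"
touch "$demo/authorized_keys"
chmod 700 "$demo"                    # directory: owner-only access
chmod 600 "$demo/authorized_keys"    # key file: owner read/write only
stat -c '%a %n' "$demo" "$demo/authorized_keys"
```

On your real setup the same two chmod commands would target ~/.ssh and ~/.ssh/authorized_keys.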
cd /usr/local
sudo gunzip hadoop-0.20.1.tar.gz
sudo tar -xvf hadoop-0.20.1.tar
sudo ln -s hadoop-0.20.1 hadoop
sudo chown -R anil:root hadoop-0.20.1
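The symlink in the step above gives you a stable /usr/local/hadoop path while keeping the versioned directory; a later upgrade only needs the link repointed. The pattern can be sketched on a scratch directory (so nothing under /usr/local is touched):

```shell
# Versioned-directory-plus-symlink layout, demonstrated in a temp directory.
base="$(mktemp -d)"
mkdir "$base/hadoop-0.20.1"
ln -s hadoop-0.20.1 "$base/hadoop"   # stable name -> versioned directory
readlink "$base/hadoop"              # prints: hadoop-0.20.1

# Upgrading later is just repointing the link:
mkdir "$base/hadoop-0.20.2"
ln -sfn hadoop-0.20.2 "$base/hadoop"
readlink "$base/hadoop"              # prints: hadoop-0.20.2
```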
10. Create a temporary directory which will be used as the base location for DFS.
Now we create the directory and set the required ownership and permissions:
$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown anil:root /app/hadoop/tmp
$ sudo chmod 750 /app/hadoop/tmp
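This directory is typically wired into Hadoop through the hadoop.tmp.dir property; in Hadoop 0.20 that property lives in conf/core-site.xml. A minimal entry, assuming the /app/hadoop/tmp path created above, would look like:

```
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
```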
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified at create time.
</description>
</property>
14. Format your NameNode
Format the NameNode with:
anil@ubuntu:~$ /usr/local/hadoop/bin/hadoop namenode -format
15. Start your single-node cluster
Congratulations, your Hadoop single-node cluster is ready to use. Test your cluster
by running the following commands.
anil@ubuntu:~$ /usr/local/hadoop/bin/start-dfs.sh
anil@ubuntu:~$ /usr/local/hadoop/bin/start-mapred.sh
Check whether the Hadoop services are running with the command below:
anil@ubuntu:~$ jps
# You should see all five Hadoop daemons running.
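On a 0.20-era single-node setup, the five daemons are NameNode, SecondaryNameNode, DataNode, JobTracker, and TaskTracker. A quick check could grep the jps output for each name; the jps output below is a hypothetical sample (your PIDs will differ), used here so the check itself can be demonstrated:

```shell
# Hypothetical jps output from a healthy single-node cluster (illustrative PIDs).
jps_out='4825 NameNode
4933 DataNode
5050 SecondaryNameNode
5136 JobTracker
5244 TaskTracker
5310 Jps'

# -w matches whole words, so "NameNode" does not falsely match "SecondaryNameNode".
for daemon in NameNode SecondaryNameNode DataNode JobTracker TaskTracker; do
  if echo "$jps_out" | grep -qw "$daemon"; then
    echo "$daemon: running"
  else
    echo "$daemon: MISSING"
  fi
done
```

On a live cluster you would replace the sample variable with `jps_out="$(jps)"`.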
16. Install VMware Tools
You can install VMware Tools so that you can share data between the host system and
the guest system. You will also get the full-screen view only if VMware Tools is installed.
# In the VMware Player menu, choose Install VMware Tools so the Tools CD image
# is attached to the guest.
# Make a mount point if needed:
sudo mkdir /media/cdrom
# Mount the CD:
sudo mount /dev/cdrom /media/cdrom
# Copy and extract VMware Tools
sudo cp /media/cdrom/VMwareTools*.tar.gz ~/Desktop
cd ~/Desktop
# You can extract with Archive Manager (right-click the archive and choose Extract), or:
sudo tar -xzvf VMwareTools*.tar.gz
# Install by running the installer script from the extracted directory:
cd vmware-tools-distrib
sudo ./vmware-install.pl
Go to Virtual Machine > Virtual Machine Settings > Options > Shared Folders and check
whether sharing is enabled. If it is, you can add a new shared folder
and enjoy working on Hadoop.