@tomasdelvechio
Last active June 29, 2016 02:53
Hadoop cluster installation

Hadoop Installation (Cluster Mode)

The following tutorial shows how to install Hadoop 2.5.0 step by step on Ubuntu Linux 14.04.

Each node where Hadoop is installed should have at least 1 GB of RAM.

Download the Hadoop release on each node of the cluster

wget http://apache.dattatec.com/hadoop/common/hadoop-2.5.0/hadoop-2.5.0.tar.gz

Installation

tar xfz hadoop-2.5.0.tar.gz
sudo su
mkdir /usr/local/hadoop
cp -R hadoop-2.5.0/* /usr/local/hadoop

Create a user and group for Hadoop

addgroup hadoop
adduser --ingroup hadoop hduser # Password: hadoop

It is advisable that the user name and the group name NOT be identical.

Change the ownership of the installation directory to the newly created user and group

chown -R hduser:hadoop /usr/local/hadoop

Install the Java JDK

aptitude update
aptitude install default-jdk

Passwordless login

Using the 'hduser' account, generate an SSH key

su hduser
ssh-keygen -t rsa -P '' # If prompted for anything, leave it blank and press Enter
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
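If passwordless login still fails later on, a common cause is loose permissions on ~/.ssh. The following hardening step is not part of the original tutorial, but sshd silently ignores authorized_keys when it or ~/.ssh is group- or world-writable:

```shell
# sshd refuses to honor authorized_keys unless permissions are strict.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```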

Hadoop environment variables

JAVA_HOME

To set the JAVA_HOME variable, assuming the JDK is installed, we can do the following

ll /usr/lib/jvm/
total 96
drwxr-xr-x   5 root root  4096 Jun  9 21:03 ./
drwxr-xr-x 216 root root 69632 Jul 12 12:03 ../
lrwxrwxrwx   1 root root    24 Nov 30  2012 default-java -> java-1.7.0-openjdk-amd64/
lrwxrwxrwx   1 root root    20 Apr 16 03:09 java-1.6.0-openjdk-amd64 -> java-6-openjdk-amd64/
-rw-r--r--   1 root root  2387 Apr 16 03:09 .java-1.6.0-openjdk-amd64.jinfo
lrwxrwxrwx   1 root root    20 Jul  3  2013 java-1.7.0-openjdk-amd64 -> java-7-openjdk-amd64/
-rw-r--r--   1 root root  2439 Apr 17 22:03 .java-1.7.0-openjdk-amd64.jinfo
drwxr-xr-x   5 root root  4096 Jun  9 21:03 java-6-openjdk-amd64/
drwxr-xr-x   3 root root  4096 Jun  9 21:03 java-6-openjdk-common/
drwxr-xr-x   7 root root  4096 Jun  8 19:29 java-7-openjdk-amd64/

Given the listing above, the variable can be set as follows.

JAVA_HOME=/usr/lib/jvm/default-java

Append the following content to the end of ~/.bashrc

#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/default-java
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END

Load the variables into the current environment

source ~/.bashrc
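As a quick sanity check, you can source the fragment in isolation and confirm the variables resolve. The sketch below writes it to a hypothetical temp file (/tmp/hadoop-env.sh) just for the check; the paths are the tutorial's and need not exist yet:

```shell
# Write the profile fragment to a temp file, source it, and print the result.
cat > /tmp/hadoop-env.sh <<'EOF'
export JAVA_HOME=/usr/lib/jvm/default-java
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
EOF
. /tmp/hadoop-env.sh
echo "JAVA_HOME=$JAVA_HOME"
echo "HADOOP_INSTALL=$HADOOP_INSTALL"
```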

Hadoop configuration files

hadoop-env.sh

Edit the hadoop-env.sh file

nano /usr/local/hadoop/etc/hadoop/hadoop-env.sh

Find the line

export JAVA_HOME=${JAVA_HOME}

And replace it with the following

# export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/lib/jvm/default-java

core-site.xml

nano /usr/local/hadoop/etc/hadoop/core-site.xml

Add the following content inside the <configuration> element

<property>
   <name>fs.default.name</name>
   <value>hdfs://master:54310</value>
</property>
<property>
   <name>hadoop.tmp.dir</name>
   <value>/usr/local/hadoop/hadoop_store/tmp</value>
</property>

Create the directory referenced above

mkdir -p /usr/local/hadoop/hadoop_store/tmp

yarn-site.xml

nano /usr/local/hadoop/etc/hadoop/yarn-site.xml

Add the following properties inside the <configuration> element

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
   <name>yarn.nodemanager.resource.memory-mb</name>
   <value>4096</value>
</property>
<property>
   <name>yarn.resourcemanager.hostname</name>
   <value>master</value>
</property>
<property>
   <name>yarn.scheduler.minimum-allocation-mb</name>
   <value>500</value>
</property>

mapred-site.xml

cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
nano /usr/local/hadoop/etc/hadoop/mapred-site.xml

Add the following property inside the <configuration> element

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

hdfs-site.xml

nano /usr/local/hadoop/etc/hadoop/hdfs-site.xml

Add the following content inside the <configuration> element

<property>
   <name>dfs.replication</name>
   <value>3</value>
</property>
<property>
   <name>dfs.namenode.name.dir</name>
   <value>file:///usr/local/hadoop/hadoop_store/hdfs/namenode</value>
</property>
<property>
   <name>dfs.datanode.data.dir</name>
   <value>file:///usr/local/hadoop/hadoop_store/hdfs/datanode</value>
</property>

Create the directories referenced above

mkdir -p /usr/local/hadoop/hadoop_store/hdfs/namenode
mkdir -p /usr/local/hadoop/hadoop_store/hdfs/datanode

Role assignment (master only)

We must configure which node will act as master and which ones as slaves. First, create the masters file

nano /usr/local/hadoop/etc/hadoop/masters

Add a single line

master

Next, create the slaves file

nano /usr/local/hadoop/etc/hadoop/slaves

Add the lines

slave1
slave2
slave3
slave4

(IMPORTANT: every name must match the hostname of the corresponding node, as defined in /etc/hosts)
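For example, /etc/hosts on every node might contain entries like the following (the 192.168.0.x addresses are hypothetical; substitute the real IPs of your nodes):

```
192.168.0.10    master
192.168.0.11    slave1
192.168.0.12    slave2
192.168.0.13    slave3
192.168.0.14    slave4
```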

SSH key exchange

Run on every node:

ssh-copy-id hduser@master
ssh-copy-id hduser@slave1
ssh-copy-id hduser@slave2
ssh-copy-id hduser@slave3
ssh-copy-id hduser@slave4

Format HDFS

hdfs namenode -format

Start Hadoop

HDFS and YARN must be started separately for Hadoop to work. The commands are shown below.

start-dfs.sh
start-yarn.sh

Verify the services

master

jps
27443 Jps
26634 NameNode
27056 ResourceManager

slaves

jps
27443 Jps
26758 DataNode
27150 NodeManager

Final Test

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar pi 4 100

Hadoop Installation (Single Node)

The following tutorial shows how to install Hadoop 2.7.2 step by step on Ubuntu Linux 16.04.

The machine where Hadoop is installed should have at least 1 GB of RAM.

Download the Hadoop release

wget http://apache.dattatec.com/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz

If the link does not work, check the following link

Installation

tar xfz hadoop-2.7.2.tar.gz
sudo su
mkdir /usr/local/hadoop
cp -R hadoop-2.7.2/* /usr/local/hadoop

Create a user and group for Hadoop

addgroup hadoop
adduser --ingroup hadoop hduser # Password: hadoop

It is advisable that the user name and the group name NOT be identical.

Change the ownership of the installation directory to the newly created user and group

chown -R hduser:hadoop /usr/local/hadoop

Install the Java JDK

apt update
apt install default-jdk

Passwordless login

Using the 'hduser' account, generate an SSH key

su hduser
ssh-keygen -t rsa -P '' # If prompted for anything, leave it blank and press Enter
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Hadoop environment variables

JAVA_HOME

To set the JAVA_HOME variable, assuming the JDK is installed, we can do the following

ll /usr/lib/jvm/
total 96
drwxr-xr-x   5 root root  4096 Jun  9 21:03 ./
drwxr-xr-x 216 root root 69632 Jul 12 12:03 ../
lrwxrwxrwx   1 root root    24 Nov 30  2012 default-java -> java-1.7.0-openjdk-amd64/
lrwxrwxrwx   1 root root    20 Apr 16 03:09 java-1.6.0-openjdk-amd64 -> java-6-openjdk-amd64/
-rw-r--r--   1 root root  2387 Apr 16 03:09 .java-1.6.0-openjdk-amd64.jinfo
lrwxrwxrwx   1 root root    20 Jul  3  2013 java-1.7.0-openjdk-amd64 -> java-7-openjdk-amd64/
-rw-r--r--   1 root root  2439 Apr 17 22:03 .java-1.7.0-openjdk-amd64.jinfo
drwxr-xr-x   5 root root  4096 Jun  9 21:03 java-6-openjdk-amd64/
drwxr-xr-x   3 root root  4096 Jun  9 21:03 java-6-openjdk-common/
drwxr-xr-x   7 root root  4096 Jun  8 19:29 java-7-openjdk-amd64/

Given the listing above, the variable can be set as follows.

JAVA_HOME=/usr/lib/jvm/default-java

Append the following content to the end of ~/.bashrc

#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/default-java
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END

Load the variables into the current environment

source ~/.bashrc

Hadoop configuration files

hadoop-env.sh

Edit the hadoop-env.sh file

nano /usr/local/hadoop/etc/hadoop/hadoop-env.sh

Find the line

export JAVA_HOME=${JAVA_HOME}

And replace it with the following

# export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/lib/jvm/default-java

core-site.xml

nano /usr/local/hadoop/etc/hadoop/core-site.xml

Add the following content inside the <configuration> element

<property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:54310</value>
</property>
<property>
   <name>hadoop.tmp.dir</name>
   <value>/usr/local/hadoop/hadoop_store/tmp</value>
</property>

Create the directory referenced above

mkdir -p /usr/local/hadoop/hadoop_store/tmp

yarn-site.xml

nano /usr/local/hadoop/etc/hadoop/yarn-site.xml

Add the following properties inside the <configuration> element

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
   <name>yarn.nodemanager.resource.memory-mb</name>
   <value>4096</value>
</property>
<property>
   <name>yarn.resourcemanager.hostname</name>
   <value>localhost</value>
</property>
<property>
   <name>yarn.scheduler.minimum-allocation-mb</name>
   <value>500</value>
</property>

mapred-site.xml

cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
nano /usr/local/hadoop/etc/hadoop/mapred-site.xml

Add the following property inside the <configuration> element

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

hdfs-site.xml

nano /usr/local/hadoop/etc/hadoop/hdfs-site.xml

Add the following content inside the <configuration> element

<property>
   <name>dfs.replication</name>
   <value>1</value>
</property>
<property>
   <name>dfs.namenode.name.dir</name>
   <value>file:///usr/local/hadoop/hadoop_store/hdfs/namenode</value>
</property>
<property>
   <name>dfs.datanode.data.dir</name>
   <value>file:///usr/local/hadoop/hadoop_store/hdfs/datanode</value>
</property>
<property>       
    <name>dfs.secondary.http.address</name>       
    <value>localhost:50090</value>
</property>

Create the directories referenced above

mkdir -p /usr/local/hadoop/hadoop_store/hdfs/namenode
mkdir -p /usr/local/hadoop/hadoop_store/hdfs/datanode

Role assignment

We must configure which node will act as master and which as slave. First, create the masters file

nano /usr/local/hadoop/etc/hadoop/masters

Add a single line

localhost

Next, create the slaves file

nano /usr/local/hadoop/etc/hadoop/slaves

Add the line

localhost

The /etc/hosts file

Find out the hostname of the machine

hostname

If the output were precise64, then edit /etc/hosts as root so that the first two lines read as follows

127.0.0.1       localhost
127.0.0.1       precise64

SSH key verification

Try to log in via SSH with the following commands

su hduser
ssh hduser@localhost
ssh hduser@`hostname`

It should log in without asking for a password; it may ask you to accept a fingerprint, like this

The authenticity of host 'precise64 (127.0.0.1)' can't be established.
ECDSA key fingerprint is 11:5d:55:29:8a:77:d8:08:b4:00:9b:a3:61:93:fe:e5.
Are you sure you want to continue connecting (yes/no)?

Answer yes. It will only ask the first time. Type exit if the login succeeded.

Format HDFS

hdfs namenode -format

The output should look similar to:

14/09/05 22:15:44 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = precise64/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.5.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:... (long classpath truncated)
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common -r 1616291; compiled by 'jenkins' on 2014-08-06T17:31Z
STARTUP_MSG:   java = 1.6.0_32
************************************************************/
14/09/05 22:15:44 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/09/05 22:15:44 INFO namenode.NameNode: createNameNode [-format]
14/09/05 22:15:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/09/05 22:15:45 WARN common.Util: Path /usr/local/hadoop/hadoop_store/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
14/09/05 22:15:45 WARN common.Util: Path /usr/local/hadoop/hadoop_store/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-f21b3661-2348-40a3-81f0-de352c36a9ad
14/09/05 22:15:45 INFO namenode.FSNamesystem: fsLock is fair:true
14/09/05 22:15:45 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/09/05 22:15:45 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/09/05 22:15:45 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
14/09/05 22:15:45 INFO blockmanagement.BlockManager: The block deletion will start around 2014 Sep 05 22:15:45
14/09/05 22:15:45 INFO util.GSet: Computing capacity for map BlocksMap
14/09/05 22:15:45 INFO util.GSet: VM type       = 64-bit
14/09/05 22:15:45 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
14/09/05 22:15:45 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/09/05 22:15:45 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/09/05 22:15:45 INFO blockmanagement.BlockManager: defaultReplication         = 1
14/09/05 22:15:45 INFO blockmanagement.BlockManager: maxReplication             = 512
14/09/05 22:15:45 INFO blockmanagement.BlockManager: minReplication             = 1
14/09/05 22:15:45 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/09/05 22:15:45 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/09/05 22:15:45 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/09/05 22:15:45 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/09/05 22:15:45 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
14/09/05 22:15:45 INFO namenode.FSNamesystem: fsOwner             = hduser (auth:SIMPLE)
14/09/05 22:15:45 INFO namenode.FSNamesystem: supergroup          = supergroup
14/09/05 22:15:45 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/09/05 22:15:45 INFO namenode.FSNamesystem: HA Enabled: false
14/09/05 22:15:45 INFO namenode.FSNamesystem: Append Enabled: true
14/09/05 22:15:46 INFO util.GSet: Computing capacity for map INodeMap
14/09/05 22:15:46 INFO util.GSet: VM type       = 64-bit
14/09/05 22:15:46 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
14/09/05 22:15:46 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/09/05 22:15:46 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/09/05 22:15:46 INFO util.GSet: Computing capacity for map cachedBlocks
14/09/05 22:15:46 INFO util.GSet: VM type       = 64-bit
14/09/05 22:15:46 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
14/09/05 22:15:46 INFO util.GSet: capacity      = 2^18 = 262144 entries
14/09/05 22:15:46 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/09/05 22:15:46 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/09/05 22:15:46 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/09/05 22:15:46 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/09/05 22:15:46 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/09/05 22:15:46 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/09/05 22:15:46 INFO util.GSet: VM type       = 64-bit
14/09/05 22:15:46 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
14/09/05 22:15:46 INFO util.GSet: capacity      = 2^15 = 32768 entries
14/09/05 22:15:46 INFO namenode.NNConf: ACLs enabled? false
14/09/05 22:15:46 INFO namenode.NNConf: XAttrs enabled? true
14/09/05 22:15:46 INFO namenode.NNConf: Maximum size of an xattr: 16384
14/09/05 22:15:46 INFO namenode.FSImage: Allocated new BlockPoolId: BP-2031094325-127.0.0.1-1409955346126
14/09/05 22:15:46 INFO common.Storage: Storage directory /usr/local/hadoop/hadoop_store/hdfs/namenode has been successfully formatted.
14/09/05 22:15:46 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/09/05 22:15:46 INFO util.ExitUtil: Exiting with status 0
14/09/05 22:15:46 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at precise64/127.0.0.1
************************************************************/

The important thing is that the lines contain only INFO or WARN messages. If there is any error, something failed in the previous steps.

Start Hadoop

HDFS and YARN must be started separately for Hadoop to work. The commands and their output are shown below.

$ start-dfs.sh
14/09/05 22:31:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-precise64.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-precise64.out
Starting secondary namenodes [localhost]
localhost: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-precise64.out
14/09/05 22:31:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-precise64.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-precise64.out

Verify the services

$ jps
27443 Jps
26914 SecondaryNameNode
26758 DataNode
26634 NameNode
27150 NodeManager
27056 ResourceManager

If all of the services above are listed, Hadoop can be considered fully installed.

Final Test

Hadoop ships with a set of example programs that can be run right away, with no programming required, to verify that everything was installed and configured correctly.

hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 4 10000

If the command above returns an approximate value of pi, then Hadoop was installed correctly
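For intuition, the pi example estimates pi by sampling points in the unit square and counting how many fall inside the quarter circle. The awk sketch below (local only, no Hadoop involved) runs the same idea with the same 4 × 10000 sample count:

```shell
# Monte Carlo estimate of pi: fraction of random points inside the
# quarter circle, times 4. Fixed seed so reruns are repeatable.
awk 'BEGIN {
  srand(1); n = 40000; inside = 0
  for (i = 0; i < n; i++) {
    x = rand(); y = rand()
    if (x*x + y*y <= 1) inside++
  }
  printf "Estimated value of Pi is %.4f\n", 4 * inside / n
}'
```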

One possible output is the following

Number of Maps  = 4
Samples per Map = 10000
14/09/05 22:38:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Starting Job
14/09/05 22:38:32 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
14/09/05 22:38:34 INFO input.FileInputFormat: Total input paths to process : 4
14/09/05 22:38:34 INFO mapreduce.JobSubmitter: number of splits:4
14/09/05 22:38:35 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1409956545680_0001
14/09/05 22:38:36 INFO impl.YarnClientImpl: Submitted application application_1409956545680_0001
14/09/05 22:38:37 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1409956545680_0001/
14/09/05 22:38:37 INFO mapreduce.Job: Running job: job_1409956545680_0001
14/09/05 22:39:28 INFO mapreduce.Job: Job job_1409956545680_0001 running in uber mode : false
14/09/05 22:39:28 INFO mapreduce.Job:  map 0% reduce 0%
14/09/05 22:40:12 INFO mapreduce.Job:  map 25% reduce 0%
14/09/05 22:40:45 INFO mapreduce.Job:  map 50% reduce 0%
14/09/05 22:41:26 INFO mapreduce.Job:  map 75% reduce 0%
14/09/05 22:42:04 INFO mapreduce.Job:  map 100% reduce 0%
14/09/05 22:42:23 INFO mapreduce.Job:  map 100% reduce 100%
14/09/05 22:42:48 INFO mapreduce.Job: Job job_1409956545680_0001 completed successfully
14/09/05 22:42:49 INFO mapreduce.Job: Counters: 49
    File System Counters
		FILE: Number of bytes read=94
		FILE: Number of bytes written=487696
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=1064
		HDFS: Number of bytes written=215
		HDFS: Number of read operations=19
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=3
	Job Counters 
		Launched map tasks=4
		Launched reduce tasks=1
		Data-local map tasks=4
		Total time spent by all maps in occupied slots (ms)=433134
		Total time spent by all reduces in occupied slots (ms)=95571
		Total time spent by all map tasks (ms)=144378
		Total time spent by all reduce tasks (ms)=31857
		Total vcore-seconds taken by all map tasks=144378
		Total vcore-seconds taken by all reduce tasks=31857
		Total megabyte-seconds taken by all map tasks=147843072
		Total megabyte-seconds taken by all reduce tasks=32621568
	Map-Reduce Framework
		Map input records=4
		Map output records=8
		Map output bytes=72
		Map output materialized bytes=112
		Input split bytes=592
		Combine input records=0
		Combine output records=0
		Reduce input groups=2
		Reduce shuffle bytes=112
		Reduce input records=8
		Reduce output records=0
		Spilled Records=16
		Shuffled Maps =4
		Failed Shuffles=0
		Merged Map outputs=4
		GC time elapsed (ms)=2968
		CPU time spent (ms)=16280
		Physical memory (bytes) snapshot=416526336
		Virtual memory (bytes) snapshot=5519654912
		Total committed heap usage (bytes)=463572992
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=472
	File Output Format Counters 
		Bytes Written=97
Job Finished in 257.978 seconds
Estimated value of Pi is 3.14140000000000000000

How long it takes will depend on the resources and speed of the machine where Hadoop is installed.


Proxy

Apt

Edit the /etc/apt/apt.conf file as root, adding the following

Acquire::http::Proxy "http://proxyw.unlu.edu.ar:8080";

wget

Edit the /etc/wgetrc file and look for the following variables (all commented out)

https_proxy
http_proxy
ftp_proxy
use_proxy

And set them appropriately
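For example, uncommented and pointed at the same proxy used in the Apt section above, the lines in /etc/wgetrc would look like this (adjust the URL to your network):

```
use_proxy = on
http_proxy = http://proxyw.unlu.edu.ar:8080/
https_proxy = http://proxyw.unlu.edu.ar:8080/
ftp_proxy = http://proxyw.unlu.edu.ar:8080/
```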


Files to download

wget http://www.gutenberg.org/cache/epub/135/pg135.txt
wget http://www.gutenberg.org/cache/epub/1661/pg1661.txt
wget http://www.gutenberg.org/cache/epub/4300/pg4300.txt


Web UI:

ResourceManager: http://master:8088/
Namenode: http://master:50070/


It is recommended that the machine have at least 2 GB of RAM available.
