
Compiling and Installing hadoop-2.2.0 on CentOS 6.5



I have been tinkering with Hadoop these past few days, starting with the installation, which ran into quite a few problems. Here I summarize the whole process. There is already plenty of material about this online, but I still think it is worth recording the important parts of the installation, both to make future troubleshooting easier and as a reference for anyone who needs it.

 

   My system is CentOS 6.5 64-bit, and I compile and install hadoop-2.2.0 configured as a single node. The hadoop-2.2.0 downloads from Apache Hadoop come in two flavors: 1) a binary release, hadoop-2.2.0.tar.gz, and 2) a source release, hadoop-2.2.0-src.tar.gz. The binary release only needs to be unpacked and configured; the source release must be compiled first and then configured.

 

    The first time I installed hadoop-2.2.0 I used the binary release, but because my system is 64-bit, the bundled native-hadoop library was unusable (it is built for 32-bit systems, not 64-bit ones), and I kept getting the warning: "WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable" (see problem 3.2 below). I therefore recompiled and installed from source.

 

 

  1. Compiling the hadoop-2.2.0 source code

 

    1.1 Preparing the build environment

    While compiling the source I followed this blog post: hadoop2.2.0 centos编译安装详解. Quite a lot of software and packages need to be installed; they fall into two groups:

  • Installed via yum: java, gcc, gcc-c++, make, lzo-devel, zlib-devel, autoconf, automake, libtool, ncurses-devel, openssl-devel
  • Installed manually: Maven, Protocol Buffers

      Most of the yum packages may already be preinstalled on CentOS 6.5. First check whether a package is installed or has an update available with yum info package; install it with yum -y install package, or update it with yum -y update package. For example:
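
A minimal check-then-install sequence for one of the packages above (the package name is just an illustration; output wording varies with your mirrors):

yum info gcc-c++          # shows "Installed Packages" or "Available Packages"
yum -y install gcc-c++    # install it if it is missing
yum -y update gcc-c++     # pull in an update if one is available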

    The manually installed software must be downloaded first and then installed. The versions I used are protobuf-2.5.0.tar.gz (http://download.csdn.net/detail/erli11/7408809 — a mirror you can use if the official site is blocked) and apache-maven-3.0.5-bin.zip (mirror.bit.edu.cn/apache/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.zip). protobuf has to be built from source; maven is a binary release and only needs its environment variables configured. Note: do not use Maven 3.1.1. It has compatibility problems with Maven 3.0.x and cannot download plugins successfully, failing with a maven "ServiceUnavailable" error. I also recommend the oschina maven mirror, since some foreign sites may be blocked. Both installs are covered in the blog post above and sketched below.
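
A rough sketch of the two manual installs (the install prefixes below are chosen to match the environment variables configured in the next step; adjust them to wherever you unpacked the archives):

# Protocol Buffers 2.5.0: a classic configure/make source build
tar -zxf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/usr/local/protobuf
make && make install

# Maven 3.0.5 is a binary release: unpack it and add its bin/ to PATH (next step)
unzip apache-maven-3.0.5-bin.zip -d /usr/local/src/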

 

    After installing the software and packages listed above, configure the system environment variables so the shell can find the corresponding commands. This is what I added to /root/.bashrc:

 

export JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64"
export CLASSPATH=.:${JAVA_HOME}/lib/:${JAVA_HOME}/jre/lib/
export PATH=${JAVA_HOME}/bin:${JAVA_HOME}/jre/bin:$PATH
export MAVEN_HOME="/usr/local/src/apache-maven-3.0.5"
export PATH=$PATH:$MAVEN_HOME/bin
export PROTOBUF_HOME="/usr/local/protobuf"
export PATH=$PATH:$PROTOBUF_HOME/bin

    After editing /root/.bashrc, load the configuration with source /root/.bashrc. Then check that java and maven are installed correctly; output like the following indicates success.

 

 

[root@lls ~]# java -version
java version "1.7.0_71"
OpenJDK Runtime Environment (rhel-2.5.3.1.el6-x86_64 u71-b14)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)
[root@lls ~]# mvn -version
Apache Maven 3.0.5 (r01de14724cdef164cd33c7c8c2fe155faf9602da; 2013-02-19 21:51:28+0800)
Maven home: /usr/local/src/apache-maven-3.0.5
Java version: 1.7.0_71, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-431.29.2.el6.x86_64", arch: "amd64", family: "unix"

 

 

    1.2 Compiling hadoop

    Download the hadoop-2.2.0 source (http://apache.fastbull.org/hadoop/common/hadoop-2.2.0/hadoop-2.2.0-src.tar.gz — the official site no longer hosts this release, but it can be downloaded there). The source tree unpacked from the hadoop-2.2.0 source tarball has a bug that must be patched before it will compile; see https://issues.apache.org/jira/browse/HADOOP-10110.
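
The change recorded in HADOOP-10110 adds a missing org.mortbay.jetty jetty-util test dependency to the hadoop-auth module. A sketch of the edit, applied inside the unpacked source tree before building (check the JIRA issue for the authoritative patch):

# Open hadoop-common-project/hadoop-auth/pom.xml and add, next to the other
# org.mortbay.jetty dependencies:
#
#   <dependency>
#     <groupId>org.mortbay.jetty</groupId>
#     <artifactId>jetty-util</artifactId>
#     <scope>test</scope>
#   </dependency>
vi hadoop-common-project/hadoop-auth/pom.xml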

    With everything in place, start the build:

 

cd /home/xxx/softwares/hadoop/hadoop-2.2.0-src
mvn package -Pdist,native -DskipTests -Dtar

   The build takes a while. A successful run (maven using its default mirrors) ends like this:

 

[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main................................ SUCCESS [2.109s]
[INFO] Apache Hadoop Project POM......................... SUCCESS [1.828s]
[INFO] Apache Hadoop Annotations......................... SUCCESS [5.266s]
[INFO] Apache Hadoop Assemblies.......................... SUCCESS [0.228s]
[INFO] Apache Hadoop Project Dist POM.................... SUCCESS [2.184s]
[INFO] Apache Hadoop Maven Plugins....................... SUCCESS [3.562s]
[INFO] Apache Hadoop Auth................................ SUCCESS [3.128s]
[INFO] Apache Hadoop Auth Examples....................... SUCCESS [2.444s]
[INFO] Apache Hadoop Common.............................. SUCCESS [1:17.748s]
[INFO] Apache Hadoop NFS................................. SUCCESS [16.455s]
[INFO] Apache Hadoop Common Project...................... SUCCESS [0.056s]
[INFO] Apache Hadoop HDFS................................ SUCCESS [2:18.736s]
[INFO] Apache Hadoop HttpFS.............................. SUCCESS [18.687s]
[INFO] Apache Hadoop HDFS BookKeeper Journal............. SUCCESS [23.553s]
[INFO] Apache Hadoop HDFS-NFS............................ SUCCESS [3.453s]
[INFO] Apache Hadoop HDFS Project........................ SUCCESS [0.046s]
[INFO] hadoop-yarn....................................... SUCCESS [48.652s]
[INFO] hadoop-yarn-api................................... SUCCESS [44.591s]
[INFO] hadoop-yarn-common................................ SUCCESS [30.677s]
[INFO] hadoop-yarn-server................................ SUCCESS [0.096s]
[INFO] hadoop-yarn-server-common......................... SUCCESS [9.340s]
[INFO] hadoop-yarn-server-nodemanager.................... SUCCESS [16.656s]
[INFO] hadoop-yarn-server-web-proxy...................... SUCCESS [3.115s]
[INFO] hadoop-yarn-server-resourcemanager................ SUCCESS [13.133s]
[INFO] hadoop-yarn-server-tests.......................... SUCCESS [0.614s]
[INFO] hadoop-yarn-client................................ SUCCESS [4.646s]
[INFO] hadoop-yarn-applications.......................... SUCCESS [0.100s]
[INFO] hadoop-yarn-applications-distributedshell......... SUCCESS [2.815s]
[INFO] hadoop-mapreduce-client........................... SUCCESS [0.096s]
[INFO] hadoop-mapreduce-client-core...................... SUCCESS [23.624s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher ... SUCCESS [2.056s]
[INFO] hadoop-yarn-site.................................. SUCCESS [0.099s]
[INFO] hadoop-yarn-project............................... SUCCESS [11.009s]
[INFO] hadoop-mapreduce-client-common.................... SUCCESS [20.053s]
[INFO] hadoop-mapreduce-client-shuffle................... SUCCESS [3.310s]
[INFO] hadoop-mapreduce-client-app....................... SUCCESS [9.819s]
[INFO] hadoop-mapreduce-client-hs........................ SUCCESS [4.843s]
[INFO] hadoop-mapreduce-client-jobclient................. SUCCESS [6.115s]
[INFO] hadoop-mapreduce-client-hs-plugins................ SUCCESS [1.682s]
[INFO] Apache Hadoop MapReduce Examples.................. SUCCESS [6.336s]
[INFO] hadoop-mapreduce.................................. SUCCESS [3.946s]
[INFO] Apache Hadoop MapReduce Streaming................. SUCCESS [4.788s]
[INFO] Apache Hadoop Distributed Copy.................... SUCCESS [8.510s]
[INFO] Apache Hadoop Archives............................ SUCCESS [2.061s]
[INFO] Apache Hadoop Rumen............................... SUCCESS [7.269s]
[INFO] Apache Hadoop Gridmix............................. SUCCESS [4.815s]
[INFO] Apache Hadoop Data Join........................... SUCCESS [3.659s]
[INFO] Apache Hadoop Extras.............................. SUCCESS [3.132s]
[INFO] Apache Hadoop Pipes............................... SUCCESS [9.350s]
[INFO] Apache Hadoop Tools Dist.......................... SUCCESS [1.850s]
[INFO] Apache Hadoop Tools............................... SUCCESS [0.023s]
[INFO] Apache Hadoop Distribution........................ SUCCESS [19.184s]
[INFO] Apache Hadoop Client.............................. SUCCESS [6.730s]
[INFO] Apache Hadoop Mini-Cluster........................ SUCCESS [0.192s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10:40.193s
[INFO] Finished at: Fri Nov 21 14:43:06 CST 2014
[INFO] Final Memory: 131M/471M
[INFO] ------------------------------------------------------------------------
  The compiled distribution is located at hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0.

 

 

 

  2. Installing hadoop on a single node

   The following installs hadoop-2.2.0 in single-node mode on CentOS 6.5 64-bit.

 

    2.1 Creating a group and adding a user

 

[root@lls Desktop]# groupadd hadoopgroup
[root@lls Desktop]# useradd hadoopuser
[root@lls Desktop]# passwd hadoopuser
Changing password for user hadoopuser.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@lls Desktop]# usermod -g hadoopgroup hadoopuser

 

    2.2 Installing and configuring SSH

       Hadoop manages its nodes over SSH, which must be configured even in single-node mode; otherwise you will get "connection refused on port 22" errors. Before continuing, make sure SSH is installed; if not, install it with yum install openssh-server.
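
It also helps to confirm that sshd is running and enabled at boot (CentOS 6 service syntax; an extra check beyond the original steps):

service sshd status || service sshd start   # start sshd if it is not running
chkconfig sshd on                           # have it start automatically at boot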

    Generate an SSH key for the hadoop user, so it can later log in to the hadoop node without a password:

   Note: run this step after switching to hadoopuser.

 

[hadoopuser@lls ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoopuser/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoopuser/.ssh/id_rsa.
Your public key has been saved in /home/hadoopuser/.ssh/id_rsa.pub.
The key fingerprint is:
0b:6e:2f:89:a5:42:42:40:b2:69:fc:3f:4c:84:33:eb hadoopuser@lls.pc
The key's randomart image is:
+--[ RSA 2048]----+
|o.               |
|+o  .            |
|+o + .           |
|... =            |
|.  o .. S        |
|. o +... .       |
| o E Bo..        |
|  . o.+.         |
|   .   ..        |
+-----------------+
    Authorize the newly created key for logins to the local host (a permissions check and login test follow the command):

 

[hadoopuser@lls ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
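
sshd refuses keys stored in files with loose permissions, so it is worth tightening them and then verifying that passwordless login works (a precaution on top of the original steps):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh localhost    # answer "yes" to the first host-key prompt; it should not ask for a password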


 

    2.3 Setting permissions on the installation files

   I install hadoop under /usr/local: copy the compiled hadoop-2.2.0 directory there, rename it, and change its owner:

 

cp -R /home/xxx/softwares/hadoop/hadoop-2.2.0-src/hadoop-dist/target/hadoop-2.2.0 /usr/local/
cd /usr/local/
mv hadoop-2.2.0/ hadoop
chown -R hadoopuser:hadoopgroup hadoop/

 

    2.4 Creating the HDFS directories

 

cd /usr/local/hadoop/
mkdir -p data/namenode
mkdir -p data/datanode
mkdir -p data/secondarynamenode

 

  2.5 Configuring hadoop-env.sh

    Add the following to /usr/local/hadoop/etc/hadoop/hadoop-env.sh:

 

export JAVA_HOME="/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.71.x86_64"
export HADOOP_HOME="/usr/local/hadoop"
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/native"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

 

  Note: comment out or delete the following two lines that ship in the file, otherwise the native-hadoop library will not load successfully (a sed sketch follows the listing):

export JAVA_HOME=${JAVA_HOME}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
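
If you prefer to do this non-interactively, a sed sketch like the following comments out exactly those two stock lines (it writes a .bak backup; check the result before moving on):

cd /usr/local/hadoop/etc/hadoop
sed -i.bak \
    -e 's|^export JAVA_HOME=${JAVA_HOME}|#&|' \
    -e 's|^export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}|#&|' \
    hadoop-env.sh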

 

   2.6 Configuring core-site.xml

    Add the following inside the <configuration> tag of /usr/local/hadoop/etc/hadoop/core-site.xml:

 

<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
</property>

 

 

   2.7 Configuring hdfs-site.xml

    Add the following inside the <configuration> tag of /usr/local/hadoop/etc/hadoop/hdfs-site.xml:

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.name.dir</name>
    <value>file:///usr/local/hadoop/data/namenode</value>
</property>
<property>
    <name>fs.checkpoint.dir</name>
    <value>file:///usr/local/hadoop/data/secondarynamenode</value>
</property>
<property>
    <name>dfs.data.dir</name>
    <value>file:///usr/local/hadoop/data/datanode</value>
</property>


   2.8 Configuring yarn-site.xml

    Add the following inside the <configuration> tag of /usr/local/hadoop/etc/hadoop/yarn-site.xml:

 

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

  2.9 Configuring mapred-site.xml

    Create mapred-site.xml from the template:

 

cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

 

    Add the following inside the <configuration> tag of /usr/local/hadoop/etc/hadoop/mapred-site.xml:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.job.tracker</name>
    <value>localhost:8021</value>
</property>
<property>
    <name>mapreduce.local.dir</name>
    <value>file:///usr/local/hadoop/data/mapreduce</value>
</property>


  2.10 Adding hadoop to the executable path

    Append the hadoop executable paths to /home/hadoopuser/.bashrc (a quick verification follows the listing):

 

echo "export PATH=$PATH:/usr/local/hadoop/bin:/usr/local/hadoop/sbin" >> /home/hadoopuser/.bashrc
source ~/.bashrc
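
A quick way to confirm the PATH change took effect (optional):

hadoop version    # should report Hadoop 2.2.0 along with build details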

 


  2.11 Formatting HDFS


[hadoopuser@lls hadoop]$ hdfs namenode -format
14/11/22 13:00:18 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = lls.pc/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:... (long classpath listing omitted)
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2014-11-21T06:32Z
STARTUP_MSG:   java = 1.7.0_71
************************************************************/
14/11/22 13:00:18 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Formatting using clusterid: CID-bf1d252c-2710-45e6-af26-344debf86840
14/11/22 13:00:19 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/11/22 13:00:19 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/11/22 13:00:19 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/11/22 13:00:19 INFO util.GSet: Computing capacity for map BlocksMap
14/11/22 13:00:19 INFO util.GSet: VM type       = 64-bit
14/11/22 13:00:19 INFO util.GSet: 2.0% max memory = 889 MB
14/11/22 13:00:19 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/11/22 13:00:19 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/11/22 13:00:19 INFO blockmanagement.BlockManager: defaultReplication         = 1
14/11/22 13:00:19 INFO blockmanagement.BlockManager: maxReplication             = 512
14/11/22 13:00:19 INFO blockmanagement.BlockManager: minReplication             = 1
14/11/22 13:00:19 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
14/11/22 13:00:19 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
14/11/22 13:00:19 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/11/22 13:00:19 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
14/11/22 13:00:19 INFO namenode.FSNamesystem: fsOwner             = hadoopuser (auth:SIMPLE)
14/11/22 13:00:19 INFO namenode.FSNamesystem: supergroup          = supergroup
14/11/22 13:00:19 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/11/22 13:00:19 INFO namenode.FSNamesystem: HA Enabled: false
14/11/22 13:00:19 INFO namenode.FSNamesystem: Append Enabled: true
14/11/22 13:00:19 INFO util.GSet: Computing capacity for map INodeMap
14/11/22 13:00:19 INFO util.GSet: VM type       = 64-bit
14/11/22 13:00:19 INFO util.GSet: 1.0% max memory = 889 MB
14/11/22 13:00:19 INFO util.GSet: capacity      = 2^20 = 1048576 entries
14/11/22 13:00:19 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/11/22 13:00:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/11/22 13:00:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/11/22 13:00:19 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
14/11/22 13:00:19 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/11/22 13:00:19 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/11/22 13:00:19 INFO util.GSet: Computing capacity for map Namenode Retry Cache
14/11/22 13:00:19 INFO util.GSet: VM type       = 64-bit
14/11/22 13:00:19 INFO util.GSet: 0.029999999329447746% max memory = 889 MB
14/11/22 13:00:19 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /usr/local/hadoop/data/namenode ? (Y or N) y
14/11/22 13:00:39 INFO common.Storage: Storage directory /usr/local/hadoop/data/namenode has been successfully formatted.
14/11/22 13:00:39 INFO namenode.FSImage: Saving image file /usr/local/hadoop/data/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
14/11/22 13:00:39 INFO namenode.FSImage: Image file /usr/local/hadoop/data/namenode/current/fsimage.ckpt_0000000000000000000 of size 202 bytes saved in 0 seconds.
14/11/22 13:00:39 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/11/22 13:00:39 INFO util.ExitUtil: Exiting with status 0
14/11/22 13:00:39 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at lls.pc/127.0.0.1
************************************************************/

    2.12 Starting hadoop

 

[root@lls hadoop]# su hadoopuser
[hadoopuser@lls hadoop]$ start-dfs.sh && start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoopuser-namenode-lls.pc.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoopuser-datanode-lls.pc.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoopuser-secondarynamenode-lls.pc.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoopuser-resourcemanager-lls.pc.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoopuser-nodemanager-lls.pc.out
[hadoopuser@lls hadoop]$

 

 2.13 Checking hadoop's status

    Check the hadoop daemons:

 

[hadoopuser@lls data]$ jps
13466 Jps
18277 ResourceManager
17952 DataNode
18126 SecondaryNameNode
18394 NodeManager
17817 NameNode

    The number in the leftmost column is the PID of each java process (assigned dynamically when hadoop starts). DataNode, NameNode, NodeManager, SecondaryNameNode, and ResourceManager are hadoop's daemons.

     HDFS has a number of built-in web services that let you check its health through a browser. More detailed hadoop status is available at the pages below (a command-line check follows the list):

  • Cluster status: http://localhost:8088/cluster
  • HDFS status: http://localhost:50070/dfshealth.jsp
  • SecondaryNameNode status: http://localhost:50090/status.jsp
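
On a headless machine the same pages can be probed from the shell (assuming curl is installed; a 200 status code means the service is up):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8088/cluster
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/dfshealth.jsp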

 

  2.14 Testing hadoop

    The following runs the pi-estimation example that ships with hadoop (10 maps, 100 samples each):

 

[hadoopuser@lls data]$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 10 100
Number of Maps  = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
14/11/22 13:15:23 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/11/22 13:15:24 INFO input.FileInputFormat: Total input paths to process : 10
14/11/22 13:15:24 INFO mapreduce.JobSubmitter: number of splits:10
14/11/22 13:15:24 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/11/22 13:15:24 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/11/22 13:15:24 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/11/22 13:15:24 INFO Configuration.deprecation: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/11/22 13:15:24 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/11/22 13:15:24 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1416632942167_0003
14/11/22 13:15:24 INFO impl.YarnClientImpl: Submitted application application_1416632942167_0003 to ResourceManager at /0.0.0.0:8032
14/11/22 13:15:24 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1416632942167_0003/
14/11/22 13:15:24 INFO mapreduce.Job: Running job: job_1416632942167_0003
14/11/22 13:15:31 INFO mapreduce.Job: Job job_1416632942167_0003 running in uber mode : false
14/11/22 13:15:31 INFO mapreduce.Job:  map 0% reduce 0%
14/11/22 13:15:54 INFO mapreduce.Job:  map 10% reduce 0%
14/11/22 13:15:55 INFO mapreduce.Job:  map 50% reduce 0%
14/11/22 13:15:56 INFO mapreduce.Job:  map 60% reduce 0%
14/11/22 13:16:17 INFO mapreduce.Job:  map 90% reduce 0%
14/11/22 13:16:18 INFO mapreduce.Job:  map 100% reduce 0%
14/11/22 13:16:19 INFO mapreduce.Job:  map 100% reduce 100%
14/11/22 13:16:19 INFO mapreduce.Job: Job job_1416632942167_0003 completed successfully
14/11/22 13:16:19 INFO mapreduce.Job: Counters: 43
    File System Counters
        FILE: Number of bytes read=226
        FILE: Number of bytes written=879518
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=2700
        HDFS: Number of bytes written=215
        HDFS: Number of read operations=43
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Job Counters
        Launched map tasks=10
        Launched reduce tasks=1
        Data-local map tasks=10
        Total time spent by all maps in occupied slots (ms)=215911
        Total time spent by all reduces in occupied slots (ms)=20866
    Map-Reduce Framework
        Map input records=10
        Map output records=20
        Map output bytes=180
        Map output materialized bytes=280
        Input split bytes=1520
        Combine input records=0
        Combine output records=0
        Reduce input groups=2
        Reduce shuffle bytes=280
        Reduce input records=20
        Reduce output records=0
        Spilled Records=40
        Shuffled Maps =10
        Failed Shuffles=0
        Merged Map outputs=10
        GC time elapsed (ms)=3216
        CPU time spent (ms)=6420
        Physical memory (bytes) snapshot=2573750272
        Virtual memory (bytes) snapshot=10637529088
        Total committed heap usage (bytes)=2063073280
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=1180
    File Output Format Counters
        Bytes Written=97
Job Finished in 55.969 seconds
Estimated value of Pi is 3.14800000000000000000

    Output like the above means the whole installation has succeeded.

 

 

 3. Troubleshooting

 

    3.1 Hostname mapping error

 

STARTUP_MSG: host = java.net.UnknownHostException: lls.pc: lls.pc

 

    Solution: see http://www.linuxidc.com/Linux/2012-03/55663.htm; the usual fix is sketched below.
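
The common fix is to map the hostname shown in the exception (lls.pc in the logs above) to a local address in /etc/hosts; a sketch, substituting your own hostname:

# /etc/hosts
127.0.0.1   localhost localhost.localdomain lls.pc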

 

  3.2 Unable to load the native-hadoop library

 

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

    Solution: the native-hadoop library in the binary release downloaded from the official site is built for 32-bit systems and does not work on 64-bit ones; on a 64-bit system you have to compile it yourself. Once compiled, replace the bundled library with the 64-bit native-hadoop library (swap the whole native folder for the 64-bit one), and afterwards configure the environment variables carefully, following my configuration above or the links below.
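
A quick way to tell which architecture a native library was built for (a diagnostic check, not part of the original steps) is the file command:

file /usr/local/hadoop/lib/native/libhadoop.so.1.0.0
# 32-bit build: "ELF 32-bit LSB shared object, Intel 80386, ..."
# 64-bit build: "ELF 64-bit LSB shared object, x86-64, ..."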

    Reference: http://stackoverflow.com/questions/19943766/hadoop-unable-to-load-native-hadoop-library-for-your-platform-error-on-centos

 

References:

[1] http://www.ercoppa.org/Linux-Compile-Hadoop-220-fix-Unable-to-load-native-hadoop-library.htm
[2] http://www.ercoppa.org/Linux-Install-Hadoop-220-on-Ubuntu-Linux-1304-Single-Node-Cluster.htm
[3] http://blog.csdn.net/w13770269691/article/details/16883663
[4] http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
[5] http://tecadmin.net/steps-to-install-hadoop-on-centosrhel-6/
[6] http://alanxelsys.com/2014/02/01/hadoop-2-2-single-node-installation-on-centos-6-5/
[7] http://blog.csdn.net/zwj0403/article/details/16855555
[8] http://www.oschina.net/question/1177468_193584
[9] http://maven.oschina.net/help.html
[10] http://stackoverflow.com/questions/19943766/hadoop-unable-to-load-native-hadoop-library-for-your-platform-error-on-centos
[11] https://issues.apache.org/jira/browse/HADOOP-10110
[12] http://www.linuxidc.com/Linux/2012-03/55663.htm


