Hadoop Installation and the WordCount Example
Date: 2014-12-30 01:35  Source: linux.it.net.cn  Author: it.net.cn
1. Installing the JDK
Download URL:
http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html
If you already have the installer locally, connect to the Linux machine with SecureCRT and upload it with the rz command.
The download yields jdk-6u29-linux-i586-rpm.bin. Install it with sh jdk-6u29-linux-i586-rpm.bin and wait for the installation to finish; by default Java is installed under /usr/java.
At the command line, run vi /etc/profile and append the following:
export JAVA_HOME=/usr/java/jdk1.6.0_29
export JAVA_BIN=/usr/java/jdk1.6.0_29/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
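To apply the new variables in the current shell without logging out, reload the profile:
source /etc/profile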
Then switch to /usr/bin and create symlinks (with no link name given, ln names each link after its target, producing java and javac):
cd /usr/bin
ln -s -f /usr/java/jdk1.6.0_29/jre/bin/java
ln -s -f /usr/java/jdk1.6.0_29/bin/javac
Run java -version at the command line. Output similar to the following confirms that JDK 1.6 is installed:
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Java HotSpot(TM) Client VM (build 20.4-b02, mixed mode)
2. Installing Hadoop
Download URL: http://www.apache.org/dyn/closer.cgi/hadoop/common/
If you already have the package locally, connect to the Linux machine with SecureCRT and upload it with rz.
The download yields hadoop-0.21.0.tar.gz.
Extract it: tar zxvf hadoop-0.21.0.tar.gz
(For reference, the reverse operation is tar zcvf hadoop-0.21.0.tar.gz <directory>.)
Open /etc/profile again (vi /etc/profile) and add the following; note that shell assignments must not have spaces around the = sign:
export HADOOP_HOME=/usr/george/dev/install/hadoop-0.21.0
export JAVA_HOME=/usr/java/jdk1.6.0_29
export JAVA_BIN=/usr/java/jdk1.6.0_29/bin
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH
Log out and back in (or restart the VM), after which the hadoop command can be invoked directly.
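To confirm that the PATH change took effect, you can print the installed version (the exact output varies with the build):
hadoop version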
3. The WordCount example
3.1 Java code:
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {

    // Mapper: emits (word, 1) for every whitespace-separated token in a line.
    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sums the counts for each word.
    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        // args[0] = HDFS input directory, args[1] = HDFS output directory
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
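To trace the data flow: for an input line such as "i love china", map() emits (i, 1), (love, 1), (china, 1). The combiner pre-aggregates counts within each map task, and the reducer then sums across tasks, so a word that occurs in both input files, such as love, ends up with a final count of 2.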
3.2 Compiling:
javac -classpath /usr/george/dev/install/hadoop-0.21.0/hadoop-hdfs-0.21.0.jar:/usr/george/dev/install/hadoop-0.21.0/hadoop-mapred-0.21.0.jar:/usr/george/dev/install/hadoop-0.21.0/hadoop-common-0.21.0.jar WordCount.java -d /usr/george/dev/wkspace/hadoop/wordcount/classes
On Windows, multiple classpath entries are separated with ;, while on Linux they are separated with :.
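For example, with two placeholder jars a.jar and b.jar:
javac -classpath a.jar:b.jar Foo.java   (Linux)
javac -classpath a.jar;b.jar Foo.java   (Windows)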
After compilation, three class files appear under /usr/george/dev/wkspace/hadoop/wordcount/classes:
WordCount.class WordCount$Map.class WordCount$Reduce.class
3.3 Packaging the class files into a jar
Change to /usr/george/dev/wkspace/hadoop/wordcount/classes and run jar cvf WordCount.jar *.class, which leaves the directory with:
WordCount.class WordCount.jar WordCount$Map.class WordCount$Reduce.class
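As a side note, the jar tool's e option (available since JDK 6) records a Main-Class entry in the manifest, so the class name could then be omitted on the hadoop jar command line in section 3.6:
jar cvfe WordCount.jar WordCount *.class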
3.4 Creating the input data:
Create the directory /usr/george/dev/wkspace/hadoop/wordcount/datas and create input1.txt and input2.txt inside it:
touch input1.txt
vi input1.txt
Give the file the following content:
i love china
are you ok?
Create input2.txt the same way, with the following content:
hello,i love word
You are ok
Once created, verify the contents with cat input1.txt and cat input2.txt.
3.5 Creating the Hadoop input and output directories:
hadoop fs -mkdir wordcount/input
hadoop fs -mkdir wordcount/output
hadoop fs -put input1.txt wordcount/input/
hadoop fs -put input2.txt wordcount/input/
PS: There is no need to create the output directory up front; if it already exists when the job starts, WordCount fails with an "output directory already exists" error. That is why the command below writes to output1 instead.
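If a stale output directory from an earlier run is in the way, it can be deleted first (-rmr is the recursive delete on this 0.21-era release; later releases use -rm -r):
hadoop fs -rmr wordcount/output1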
3.6 Running the job
Change to /usr/george/dev/wkspace/hadoop/wordcount/classes and run the command below; hadoop jar takes the jar file, the main class, and then the program arguments, which arrive in main() as args[0] (input directory) and args[1] (output directory):
[root@localhost classes]# hadoop jar WordCount.jar WordCount wordcount/input wordcount/output1
11/12/02 05:53:59 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/12/02 05:53:59 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/12/02 05:53:59 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
11/12/02 05:53:59 INFO mapred.FileInputFormat: Total input paths to process : 2
11/12/02 05:54:00 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
11/12/02 05:54:00 INFO mapreduce.JobSubmitter: number of splits:2
11/12/02 05:54:00 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null
11/12/02 05:54:00 INFO mapreduce.Job: Running job: job_201112020429_0003
11/12/02 05:54:01 INFO mapreduce.Job: map 0% reduce 0%
11/12/02 05:54:20 INFO mapreduce.Job: map 50% reduce 0%
11/12/02 05:54:23 INFO mapreduce.Job: map 100% reduce 0%
11/12/02 05:54:29 INFO mapreduce.Job: map 100% reduce 100%
11/12/02 05:54:32 INFO mapreduce.Job: Job complete: job_201112020429_0003
11/12/02 05:54:32 INFO mapreduce.Job: Counters: 33
FileInputFormatCounters
    BYTES_READ=54
FileSystemCounters
    FILE_BYTES_READ=132
    FILE_BYTES_WRITTEN=334
    HDFS_BYTES_READ=274
    HDFS_BYTES_WRITTEN=65
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
Job Counters
    Data-local map tasks=2
    Total time spent by all maps waiting after reserving slots (ms)=0
    Total time spent by all reduces waiting after reserving slots (ms)=0
    SLOTS_MILLIS_MAPS=24824
    SLOTS_MILLIS_REDUCES=6870
    Launched map tasks=2
    Launched reduce tasks=1
Map-Reduce Framework
    Combine input records=12
    Combine output records=12
    Failed Shuffles=0
    GC time elapsed (ms)=291
    Map input records=4
    Map output bytes=102
    Map output records=12
    Merged Map outputs=2
    Reduce input groups=10
    Reduce input records=12
    Reduce output records=10
    Reduce shuffle bytes=138
    Shuffled Maps =2
    Spilled Records=24
    SPLIT_RAW_BYTES=220
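The counters line up with the inputs: Map input records=4 matches the four lines across the two files, Map output records=12 matches the twelve tokens emitted, and Reduce output records=10 matches the ten distinct words shown in section 3.7.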
3.7 Inspecting the output directory
[root@localhost classes]# hadoop fs -ls wordcount/output1
11/12/02 05:54:59 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/12/02 05:55:00 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Found 2 items
-rw-r--r-- 1 root supergroup 0 2011-12-02 05:54 /user/root/wordcount/output1/_SUCCESS
-rw-r--r-- 1 root supergroup 65 2011-12-02 05:54 /user/root/wordcount/output1/part-00000
[root@localhost classes]# hadoop fs -cat /user/root/wordcount/output1/part-00000
11/12/02 05:56:05 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/12/02 05:56:05 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
You 1
are 2
china 1
hello,i 1
i 1
love 2
ok 1
ok? 1
word 1
you 1
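The results reflect the mapper's whitespace-only tokenization: StringTokenizer neither strips punctuation nor normalizes case, so hello,i and ok? survive as single tokens, and You/you are counted separately. A minimal sketch of a normalizing variant of the map body (illustrative, not from the original article):
String[] tokens = value.toString().toLowerCase().split("[^a-z0-9]+");
for (String t : tokens) {
    if (!t.isEmpty()) {          // split can yield an empty leading token
        word.set(t);
        output.collect(word, one);
    }
}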