As is well known, Hadoop processes a single large file more efficiently than a large number of small files; in addition, every small file carries its own metadata overhead in HDFS, since the NameNode tracks each file regardless of size. For these reasons small files usually need to be merged.
1. getmerge
Hadoop provides a command-line tool, getmerge, that copies a group of files from HDFS to the local machine and concatenates them into a single file in the process.
Reference: http://hadoop.apache.org/common/docs/r0.19.2/cn/hdfs_shell.html
Usage: hadoop fs -getmerge <src> <localdst> [addnl]
It takes a source directory and a destination file as input and concatenates all files under the source directory into the local destination file. addnl is optional; when set, a newline is appended at the end of each file.
A side note: file system (FS) shell commands are invoked as bin/hadoop fs <args>, and every FS shell command takes URI paths as arguments, where a URI has the form scheme://authority/path.
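A concrete invocation might look like the following (the paths are placeholders for illustration, not from the original article); it concatenates every file under the given HDFS directory into one local file:

hadoop fs -getmerge /user/kqiao/test/manySmallFiles /tmp/merged.txt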
2. putmerge
Merge small local files and upload the result to the HDFS file system.
One approach is to write a local script that first concatenates the small files into one large file and then uploads that large file; this takes up a lot of local disk space.
Another approach, shown below, merges the files on the fly while they are copied to HDFS. Reference: Hadoop in Action.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class PutMerge {

    public static void putMergeFunc(String LocalDir, String fsFile) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);          // HDFS file system
        FileSystem local = FileSystem.getLocal(conf);  // local file system

        Path localDir = new Path(LocalDir);  // local directory holding the small files
        Path HDFSFile = new Path(fsFile);    // merged output file on HDFS

        FileStatus[] status = local.listStatus(localDir);  // entries in the local directory
        FSDataOutputStream out = fs.create(HDFSFile);      // single output stream on HDFS

        for (FileStatus st : status) {
            Path temp = st.getPath();
            FSDataInputStream in = local.open(temp);
            // 'false' keeps the HDFS output stream open so the next file is appended to it.
            IOUtils.copyBytes(in, out, 4096, false);
            in.close();
        }
        out.close();
    }

    public static void main(String[] args) throws IOException {
        String l = "/home/kqiao/hadoop/MyHadoopCodes/putmergeFiles";
        String f = "hdfs://ubuntu:9000/user/kqiao/test/PutMergeTest";
        putMergeFunc(l, f);
    }
}
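Note that the files are concatenated in whatever order listStatus returns them, which is not guaranteed to be sorted, and the sketch assumes the local directory contains only regular files, since every entry returned by listStatus is opened directly.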
3. A MapReduce job that packs the small files into a SequenceFile
From: Hadoop: The Definitive Guide
An InputFormat that treats an entire file as a single record:
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class WholeFileInputFormat
        extends FileInputFormat<NullWritable, BytesWritable> {

    // Never split the input: each whole file becomes a single split (and a single record).
    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        WholeFileRecordReader reader = new WholeFileRecordReader();
        reader.initialize(split, context);
        return reader;
    }
}
The custom RecordReader used by the class above:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// A custom RecordReader: the six methods below implement the abstract methods
// that every RecordReader subclass is required to provide.
public class WholeFileRecordReader extends RecordReader<NullWritable, BytesWritable> {

    private FileSplit fileSplit;
    private Configuration conf;
    private BytesWritable value = new BytesWritable();
    private boolean processed = false;  // true once the single record has been emitted

    @Override
    public void close() throws IOException {
        // nothing to close
    }

    @Override
    public NullWritable getCurrentKey() throws IOException, InterruptedException {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return value;
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return processed ? 1.0f : 0.0f;
    }

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        this.fileSplit = (FileSplit) split;
        this.conf = context.getConfiguration();
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (!processed) {
            // Read the whole file into memory and use it as the value of the single record.
            byte[] contents = new byte[(int) fileSplit.getLength()];
            Path file = fileSplit.getPath();
            FileSystem fs = file.getFileSystem(conf);
            FSDataInputStream in = null;
            try {
                in = fs.open(file);
                IOUtils.readFully(in, contents, 0, contents.length);
                value.set(contents, 0, contents.length);
            } finally {
                IOUtils.closeStream(in);
            }
            processed = true;
            return true;
        }
        return false;
    }
}
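Note that nextKeyValue buffers the entire file into a byte array, so this approach assumes each file is small enough to fit comfortably in the map task's memory; that is exactly the small-file scenario it is meant for.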
Packing the small files into a SequenceFile:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class SmallFilesToSequenceFileConverter extends Configured implements Tool {

    static class SequenceFileMapper
            extends Mapper<NullWritable, BytesWritable, Text, BytesWritable> {

        private Text filenameKey;

        @Override
        protected void setup(Context context) {
            // Use the path of the file backing this split as the output key.
            InputSplit split = context.getInputSplit();
            Path path = ((FileSplit) split).getPath();
            filenameKey = new Text(path.toString());
        }

        @Override
        public void map(NullWritable key, BytesWritable value, Context context)
                throws IOException, InterruptedException {
            // Emit one (file name, file contents) pair per input file.
            context.write(filenameKey, value);
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf);
        job.setJobName("SmallFilesToSequenceFileConverter");

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Each input file is read as a single record; the job's output is a SequenceFile.
        job.setInputFormatClass(WholeFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);

        job.setMapperClass(SequenceFileMapper.class);

        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new SmallFilesToSequenceFileConverter(), args);
        System.exit(exitCode);
    }
}
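To check the result, the packed SequenceFile can be read back with Hadoop's SequenceFile.Reader. The following is a minimal sketch; the class name DumpSequenceFile and the argument handling are illustrative additions, not code from the books cited above. It prints each stored file name together with the length of its contents:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

// Hypothetical helper: dumps the records of a SequenceFile produced by the job above.
public class DumpSequenceFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // args[0]: one output part file of the job, e.g. <output dir>/part-r-00000
        Path path = new Path(args[0]);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        try {
            Text key = new Text();                      // original file name
            BytesWritable value = new BytesWritable();  // original file contents
            while (reader.next(key, value)) {
                System.out.println(key + "\t" + value.getLength() + " bytes");
            }
        } finally {
            reader.close();
        }
    }
}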