Solutions to errors when running the wordcount example in pseudo-distributed mode
Date: 2016-06-09 23:55  Source: linux.it.net.cn  Author: IT
Problem 1: Cannot allocate memory. The error message is as follows:
FAILED
java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: Cannot run program "/bin/ls": java.io.IOException: error=12, Cannot allocate memory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:488)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:712)
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:448)
Solution:
The likely cause is that the machine has too little memory (2 GB) and no swap space was set up when Ubuntu was installed.
Edit hadoop-env.sh under the conf directory:
# The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=2000
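If adjusting the heap does not help, adding swap space may also resolve the error=12 failure. The commands below are a minimal sketch for creating a 2 GB swap file on Ubuntu; the size and the path /swapfile are assumptions for illustration and can be changed as needed.
sudo fallocate -l 2G /swapfile    # reserve 2 GB for the swap file (assumed size)
sudo chmod 600 /swapfile          # restrict access to root
sudo mkswap /swapfile             # format the file as swap
sudo swapon /swapfile             # enable it immediately
To keep the swap file across reboots, it also needs an entry in /etc/fstab.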
Problem 2: JobTracker is in safe mode
13/10/12 09:25:14 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.mapred.SafeModeException: JobTracker is in safe mode
at org.apache.hadoop.mapred.JobTracker.checkSafeMode(JobTracker.java:5178)
Solution:
Leave safe mode with the following command:
bin/hadoop dfsadmin -safemode leave
The -safemode option accepts the following arguments:
enter  enter safe mode
leave  force an exit from safe mode
get    report whether safe mode is on
wait   block until safe mode ends
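For example, you can check the current state first and only force an exit if necessary; a minimal sketch, assuming the commands are run from the Hadoop installation directory:
bin/hadoop dfsadmin -safemode get     # prints "Safe mode is ON" or "Safe mode is OFF"
bin/hadoop dfsadmin -safemode wait    # or block until safe mode ends on its own
bin/hadoop dfsadmin -safemode leave   # force an exit if waiting is not an option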
Problem 3: The output directory already exists
If the job's output path is output and that directory already exists, this error is reported.
ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory output already exists
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory output already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:137)
Solution: delete the directory with the following command:
bin/hadoop dfs -rmr output
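In newer Hadoop releases the dfs subcommand is deprecated in favor of fs, and -rmr is replaced by -rm -r. A typical sequence is sketched below; the examples jar name hadoop-examples-1.2.1.jar and the input directory input are assumptions for illustration and depend on your installation.
bin/hadoop fs -rm -r output                                        # remove the old output directory
bin/hadoop jar hadoop-examples-1.2.1.jar wordcount input output    # rerun the wordcount example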