Hadoop fails to start (1)

After running $ bin/start-all.sh, the daemons fail to come up.

Exception 1:

Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:214)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:135)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:119)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:481)

Solution: the NameNode address has not been configured. On 0.21.0 the properties go in conf/mapred-site.xml; on earlier releases they go in conf/core-site.xml. On 0.20.2 in particular, configuring mapred-site.xml has no effect, and only core-site.xml works:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
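A minimal sketch of applying the fix on a 0.20.2-style single-node install (paths are relative to the Hadoop home directory; the final smoke-test command is illustrative):

$ vi conf/core-site.xml    # paste the <configuration> block above
$ bin/stop-all.sh          # stop any half-started daemons
$ bin/start-all.sh
$ bin/hadoop fs -ls /      # should now list the HDFS root instead of failing on file:///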
Hadoop fails to start (2)

Exception 2:

starting namenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-namenode-aist.out
localhost: starting datanode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-datanode-aist.out
localhost: starting secondarynamenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-secondarynamenode-aist.out
localhost: Exception in thread "main" java.lang.NullPointerException
localhost: at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
localhost: at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
localhost: at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
starting jobtracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-jobtracker-aist.out
localhost: starting tasktracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-tasktracker-aist.out

Solution: the cause and the fix are the same as for Exception 1: the NameNode address is missing because conf/mapred-site.xml (on 0.21.0) or conf/core-site.xml (on earlier releases, including 0.20.2) has not been configured. Add the same <configuration> block shown above.
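When in doubt about which file is actually being read, you can dump the merged configuration the daemons will see. This relies on the debugging main method of org.apache.hadoop.conf.Configuration, which prints the configuration as XML in 0.20.x-era releases (treat its availability and output format as an assumption for other versions):

$ bin/hadoop org.apache.hadoop.conf.Configuration | tr '<' '\n' | grep -A 2 'fs.default.name'
# if the value shown is still file:/// (or the property is absent), the edit is not being picked up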
Hadoop fails to start (3)

Exception 3:

starting namenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-namenode-aist.out
localhost: starting datanode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-datanode-aist.out
localhost: Error: JAVA_HOME is not set.
localhost: starting secondarynamenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-secondarynamenode-aist.out
localhost: Error: JAVA_HOME is not set.
starting jobtracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-jobtracker-aist.out
localhost: starting tasktracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-tasktracker-aist.out
localhost: Error: JAVA_HOME is not set.
Solution: set the JDK environment variables in hadoop/conf/hadoop-env.sh:

JAVA_HOME=/home/xixitie/jdk
CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME CLASSPATH
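A quick sanity check after editing conf/hadoop-env.sh (the JDK path here is the one used in this article; substitute your own):

$ grep JAVA_HOME conf/hadoop-env.sh       # should show the line added above
$ /home/xixitie/jdk/bin/java -version     # the path must point at a real JDK
$ bin/stop-all.sh && bin/start-all.sh     # restart so every daemon picks up the new value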
Hadoop fails to start (4)

Exception 4: the configuration uses localhost:9000 and localhost:9001 instead of hdfs://localhost:9000 and hdfs://localhost:9001. The log shows:

11/04/20 23:33:25 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
11/04/20 23:33:25 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.

Solution: use hdfs://localhost:9000 rather than localhost:9000 in the configuration:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://localhost:9001</value>
</property>
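After switching the values to the hdfs:// form, the deprecation warnings should disappear. A small smoke test, assuming the daemons are running (the target file name is illustrative):

$ bin/start-all.sh                                   # restart with the corrected configuration
$ bin/hadoop fs -put conf/core-site.xml /probe.xml   # any "deprecated filesystem name" WARN here means the old value is still in effect
$ bin/hadoop fs -rm /probe.xml                       # clean up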
Hadoop fails to start (5)

Exception 5: "no namenode to stop". The log shows:

11/04/20 21:48:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
11/04/20 21:48:51 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
11/04/20 21:48:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
11/04/20 21:48:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
11/04/20 21:48:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
11/04/20 21:48:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
11/04/20 21:48:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
11/04/20 21:48:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
11/04/20 21:48:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).

Solution: the namenode never started, which is also why stopping the cluster reports "no namenode to stop". Data left over from earlier runs may be interfering with the namenode. Reformat it and restart:

$ bin/hadoop namenode -format
$ bin/start-all.sh
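Before reformatting, it is worth confirming that the namenode really is down, since -format erases all HDFS metadata. A small check, assuming the JDK's jps tool is on the PATH and the log file name follows the pattern printed at startup:

$ jps                                             # NameNode should be listed; if it is absent, the daemon never started
$ tail -n 50 logs/hadoop-root-namenode-aist.log   # look for the root cause before wiping anything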
Hadoop fails to start (6)

Exception 6: "no datanode to stop". Sometimes corrupted on-disk state prevents the datanode from starting, and rerunning hadoop namenode -format alone does not help, because the files under /tmp are not removed by it; the /tmp/hadoop* files must be cleared as well. Steps (consolidated into the sketch after this list):

1. Delete /tmp inside HDFS: hadoop fs -rmr /tmp
2. Stop Hadoop: stop-all.sh
3. Delete the local /tmp/hadoop* files: rm -rf /tmp/hadoop*
4. Format the namenode: hadoop namenode -format
5. Start Hadoop: start-all.sh

After these steps the datanode starts normally again.
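The five steps above as one sequence (single-node 0.20.x layout assumed; steps 3 and 4 destroy all HDFS data, so use this only on test clusters). dfsadmin -report at the end confirms the datanode registered:

$ bin/hadoop fs -rmr /tmp        # 1. remove /tmp inside HDFS
$ bin/stop-all.sh                # 2. stop all daemons
$ rm -rf /tmp/hadoop*            # 3. remove stale local state from the old namespace
$ bin/hadoop namenode -format    # 4. reformat the namenode
$ bin/start-all.sh               # 5. start the cluster again
$ bin/hadoop dfsadmin -report    # the datanode should now show up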