A Collection of Hadoop Startup Errors

김윤지 · March 19, 2024

[datanode]
2024-01-02 11:49:22,347 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.net.BindException: Problem binding to [0.0.0.0:9866] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException

[nodemanager]
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.net.BindException: Problem binding to [0.0.0.0:8040] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException

[resourcemanager]
2024-01-02 11:49:30,495 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: 0.0.0.0:8088

[secondarynamenode]
2024-01-02 11:49:25,338 ERROR org.apache.hadoop.hdfs.server.common.Storage: Unable to acquire file lock on path /hadoop/hadoop_tmp/dfs/namesecondary/in_use.lock
2024-01-02 11:49:25,339 ERROR org.apache.hadoop.hdfs.server.common.Storage: It appears that another node 46995@ds01 has already locked the storage directory: /hadoop/hadoop_tmp/dfs/namesecondary

2024-01-02 15:08:30,703 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Failed to start secondary namenode
java.net.BindException: Port in use: 0.0.0.0:9868

Every one of these services reports that its port is already in use.

netstat -tuln | grep <port>
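To see which process actually owns a port (and decide whether to kill it or move the Hadoop service), it helps to print the owning PID as well; a minimal sketch, using the datanode port 9866 from the log above:

    # Show the listening socket together with the PID/program that holds it
    sudo ss -tulnp | grep 9866
    sudo lsof -i :9866

    # If it is a leftover Hadoop daemon, jps lists it by name and PID
    jps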

yarn-site.xml

    <property>
            <name>yarn.resourcemanager.address</name>
            <value>127.0.0.1:8089</value>
    </property>
    <property>
            <name>yarn.resourcemanager.webapp.address</name>
            <value>127.0.0.1:8091</value>
    </property>
    <property>
            <name>yarn.nodemanager.address</name>
            <value>127.0.0.1:8044</value>
    </property>
    <property>
            <name>yarn.nodemanager.webapp.address</name>
            <value>127.0.0.1:8043</value>
    </property>
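The new ports only take effect after the YARN daemons are restarted; a quick sketch, assuming $HADOOP_HOME/sbin is on the PATH:

    # Restart YARN so yarn-site.xml is re-read
    stop-yarn.sh
    start-yarn.sh

    # The ResourceManager web UI should now answer on the new port
    curl -s http://127.0.0.1:8091/cluster > /dev/null && echo "RM web UI is up"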

hdfs-site.xml

    <property>
            <name>dfs.datanode.address</name>
            <value>127.0.0.1:50010</value>
    </property>
    <property>
            <name>dfs.datanode.http.address</name>
            <value>127.0.0.1:50075</value>
    </property>

After the change:

->> Even after the change, the same ports are still reported as unavailable.
Why?!

->> Adding a few more properties to hdfs-site.xml finally fixed the datanode port problem:

    <property>
            <name>dfs.datanode.address</name>
            <value>127.0.0.1:50010</value>
    </property>
    <property>
            <name>dfs.datanode.http.address</name>
            <value>127.0.0.1:50075</value>
    </property>
    <property>
            <name>dfs.datanode.ipc.address</name>
            <value>127.0.0.1:50076</value>
    </property>
    <property>
            <name>dfs.datanode.https.address</name>
            <value>127.0.0.1:50077</value>
    </property>
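To confirm the daemon will actually pick up these values rather than the defaults, hdfs getconf prints the effective setting for a key; a small sketch:

    # Print the effective value for each relocated datanode port
    hdfs getconf -confKey dfs.datanode.address
    hdfs getconf -confKey dfs.datanode.ipc.address
    hdfs getconf -confKey dfs.datanode.https.address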

->> After the port problem was fixed, the datanode failed again:

2024-01-02 05:13:12,286 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid 97f34879-ad92-4d07-af9f-2c9c030c906f) service to localhost/127.0.0.1:9000
2024-01-02 05:13:12,301 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid 97f34879-ad92-4d07-af9f-2c9c030c906f)

This happens because the ClusterIDs on the namenode and datanode don't match.

hadoop@ds01:/hadoop/hadoop_tmp/dfs/name/current$ cat VERSION

hadoop@ds01:/hadoop/hadoop_tmp/dfs/data/current$ cat VERSION

Copy the clusterID from the namenode's VERSION file into the datanode's VERSION file.
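Instead of editing VERSION by hand, the same fix can be scripted; a minimal sketch, assuming the dfs paths used in this post and that the datanode is stopped first:

    # Extract the clusterID the namenode recorded
    NN_CID=$(grep '^clusterID=' /hadoop/hadoop_tmp/dfs/name/current/VERSION | cut -d= -f2)

    # Overwrite the datanode's clusterID so it matches, then restart the datanode
    sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" /hadoop/hadoop_tmp/dfs/data/current/VERSION

On a throwaway test cluster, deleting the datanode's data directory and letting it re-register is another option, at the cost of any blocks stored there.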

Then, on the namenode:
2024-01-02 16:18:10,022 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on default port 9000, call Call#5030 Retry#0 org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.getTransactionId from localhost:54060 / 127.0.0.1:54060: org.apache.hadoop.security.AccessControlException: Access denied for user root. Superuser privilege is required

This permission error came up, so I added the following to mapred-site.xml:

    <property>
            <name>mapreduce.job.run-as-user</name>
            <value>hadoop</value>
    </property>
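Since the log says user root lacks superuser privilege, another angle (my assumption, not part of the original fix) is to run the offending HDFS command as the hadoop superuser rather than as root:

    # Run as the HDFS superuser (here assumed to be 'hadoop') instead of root
    sudo -u hadoop hdfs dfsadmin -report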

2024-01-02 23:46:52,107 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.io.IOException: Cannot lock storage /hadoop/hadoop_tmp/dfs/name. The directory is already locked
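This lock error usually means either a second namenode process is still running or a crashed one left a stale in_use.lock behind; a cautious sketch (verify before deleting anything):

    # Confirm no NameNode/SecondaryNameNode process is still alive
    jps

    # Only if nothing is running, remove the stale lock and start the namenode again
    rm /hadoop/hadoop_tmp/dfs/name/in_use.lock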

2024-01-03 01:57:19,361 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: There appears to be a gap in the edit log. We expected txid 1, but got txid 55.

2024-01-03 01:58:27,667 ERROR org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: finalize log segment 301, 302 failed for (journal JournalAndStream(mgr=FileJournalManager(root=/hadoop/hadoop_tmp/dfs/name), stream=null))

->> Solved by following this post: https://velog.io/@makengi/hadoop-Txid-%ED%8A%B8%EB%9F%AC%EB%B8%94-%EC%8A%88%ED%8C%85
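As a general fallback for txid gaps (not necessarily what the post above does), Hadoop ships a namenode recovery mode that can skip corrupt or missing edit-log segments, at the risk of losing recent metadata:

    # Interactive recovery: prompts before skipping unreadable edit-log entries
    hdfs namenode -recover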
