Problem: HDFS Disk Usage Error
Exception: 2017-02-09 06:15:41,946 [PigTezLauncher-0] INFO org.apache.pig.backend.hadoop.executionengine.tez.TezJob - DAG Status: status=FAILED, progress=TotalTasks: 34 Succeeded: 32 Running: 0 Failed: 1 Killed: 1 FailedTaskAttempts: 4, diagnostics=Vertex failed, vertexName=scope-426, vertexId=vertex_1486122208753_0502_1_13, diagnostics=[Task failed, taskId=task_1486122208753_0502_1_13_000000, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:org.apache.pig.backend.executionengine.ExecException: ERROR 2135: Received error from store function.org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/tsluc/loan_dataset/refined/_temporary/1/_temporary/attempt_148612220875313_0502_r_000000_0/part-v013-o000-r-00000 could only be replicated to 0 nodes instead of minReplication (=1). There are 4 datanode(s) running and no node(s) are excluded in this operation.
[Screenshot: Ambari monitoring screen]
[Screenshot: Tez execution engine UI with the error message]
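Before deleting anything, it is worth confirming that the datanodes really are out of space. The following are standard HDFS commands (not specific to this cluster) that show per-datanode capacity and per-directory usage:

sudo -u hdfs hdfs dfsadmin -report
sudo -u hdfs hdfs dfs -du -h /

If DFS Remaining is close to zero on all four datanodes, the NameNode has no valid target for the new block, which is exactly what the "could only be replicated to 0 nodes instead of minReplication (=1)" message means.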
Solutions: Free up the HDFS disk space by doing the following:
- Remove the data from .Trash.
- Archive or remove old log files.
- Remove the YARN local usercache and filecache directories (see the sketch at the end of this post).
Delete Trash files:
sudo -u hdfs hdfs dfs -expunge
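Note that -expunge deletes trash checkpoints older than fs.trash.interval and checkpoints the current trash contents, so it may not free all the space immediately. If the space is needed right away, the trash directory can also be removed directly; the path below is for the tsluc user from the error above and should be adjusted per user:

sudo -u hdfs hdfs dfs -rm -r -skipTrash /user/tsluc/.Trash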
Clean up log files:
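As a sketch for an Ambari/HDP-style cluster: the aggregated YARN application logs kept in HDFS are often a large consumer of space. The /app-logs path is the HDP default (stock Apache Hadoop uses /tmp/logs), so confirm yarn.nodemanager.remote-app-log-dir before deleting anything; the second command drops all aggregated logs for the tsluc user from the error above:

sudo -u hdfs hdfs dfs -du -h /app-logs
sudo -u hdfs hdfs dfs -rm -r -skipTrash /app-logs/tsluc/logs

Local daemon logs on each node (typically under /var/log/hadoop*) count against the Non DFS Used space shown by dfsadmin -report and are worth archiving or rotating as well.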
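Remove YARN local usercache and filecache:
These directories live on each NodeManager's local disks, not in HDFS, so run the commands on every worker node. The /hadoop/yarn/local path is a sketch based on the common Ambari default for yarn.nodemanager.local-dirs; check that property on your cluster first, stop the NodeManager (or make sure no applications are running) before deleting, and restart it afterwards.

sudo rm -rf /hadoop/yarn/local/usercache/*
sudo rm -rf /hadoop/yarn/local/filecache/*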