java - Hadoop MapReduce NoSuchElementException


I wanted to run a MapReduce job on a FreeBSD cluster with 2 nodes, but I get the following exception:

14/08/27 14:23:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/08/27 14:23:04 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/08/27 14:23:04 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/08/27 14:23:04 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
14/08/27 14:23:04 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
14/08/27 14:23:04 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/tmp/hadoop-otlam/mapred/staging/otlam968414084/.staging/job_local968414084_0001
Exception in thread "main" java.util.NoSuchElementException
    at java.util.StringTokenizer.nextToken(StringTokenizer.java:349)
    at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:565)
    at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:534)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.checkPermissionOfOther(ClientDistributedCacheManager.java:276)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.isPublic(ClientDistributedCacheManager.java:240)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineCacheVisibilities(ClientDistributedCacheManager.java:162)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:58)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
    ...

This happens when I try to run job.waitForCompletion(true); on a new MapReduce job. The NoSuchElementException is thrown because there are no more elements in the StringTokenizer when nextToken() is called on it. I took a look at the source and found the following code part in RawLocalFileSystem.java:

/// loads permissions, owner, and group from `ls -ld`
private void loadPermissionInfo() {
  IOException e = null;
  try {
    String output = FileUtil.execCommand(new File(getPath().toUri()),
        Shell.getGetPermissionCommand());
    StringTokenizer t =
        new StringTokenizer(output, Shell.TOKEN_SEPARATOR_REGEX);
    //expected format
    //-rw-------    1 username groupname ...
    String permission = t.nextToken();

As far as I can see, Hadoop tries to find out the permissions of a specific file with ls -ld, which works fine when I use it in the console. Unfortunately I haven't found out yet which file's permissions it is looking for.
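To illustrate the failure mode: StringTokenizer.nextToken() throws NoSuchElementException as soon as the string yields no token at all, e.g. when the shell command returns an empty or otherwise unexpected string. A small standalone sketch (the delimiter string here is only a stand-in for Shell.TOKEN_SEPARATOR_REGEX, and the sample ls line is made up):

import java.util.NoSuchElementException;
import java.util.StringTokenizer;

public class TokenizerFailureDemo {
    public static void main(String[] args) {
        // Typical `ls -ld` output: the first token is the permission string.
        String ok = "-rw-------  1 otlam  wheel  0 Aug 27 14:23 somefile";
        StringTokenizer t = new StringTokenizer(ok, " \t\n\r\f");
        System.out.println("permission field: " + t.nextToken()); // prints -rw-------

        // If the command output is empty (or contains nothing but delimiters),
        // the very first nextToken() call fails with the same exception as above.
        try {
            new StringTokenizer("", " \t\n\r\f").nextToken();
        } catch (NoSuchElementException e) {
            System.out.println("NoSuchElementException, as in loadPermissionInfo()");
        }
    }
}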

The Hadoop version is 2.4.1 and the HBase version is 0.98.4, and I am using the Java API. Other operations, like creating a table, work fine. Did anyone experience similar problems or know what to do?

Edit: I found out that this is a Hadoop-related issue. Even the simplest MapReduce operation without using HDFS gives me the same exception (see the sketch below).
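For reference, the kind of minimal job I mean looks roughly like this. It is only a sketch: the mapper and reducer are trivial placeholders and the local file:// paths are made up.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MinimalJobDriver {

    // Trivial mapper: emits every input line with a count of 1.
    public static class LineMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(value, ONE);
        }
    }

    // Trivial reducer: sums the counts per line.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // no HDFS settings, local file system only
        Job job = Job.getInstance(conf, "minimal test job");
        job.setJarByClass(MinimalJobDriver.class);
        job.setMapperClass(LineMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("file:///tmp/mr-in"));
        FileOutputFormat.setOutputPath(job, new Path("file:///tmp/mr-out"));
        // The exception above is thrown from inside this call, during job
        // submission, before any map task is started.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}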

Can you please check if this solves the problem?

If yours is a permission issue, this works:

public static void main(String[] args) {
    // set user group information
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hdfs");
    // set privileged exception action
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
        public Void run() throws Exception {
            // create the configuration object
            Configuration config = new Configuration();
            config.set("fs.defaultFS", "hdfs://ip:port/");
            config.set("hadoop.job.ugi", "hdfs");
            FileSystem dfs = FileSystem.get(config);
            .
            .
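For the MapReduce case specifically, the same pattern can wrap the job submission itself. A minimal sketch, assuming the Job has already been configured as usual and that the user name passed in is the one you want to submit as:

import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.security.UserGroupInformation;

public class SubmitAsUser {
    public static boolean submitAs(String user, final Job job) throws Exception {
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser(user);
        // doAs() runs the submission under the given identity, so the permission
        // checks during job staging are performed as that user.
        return ugi.doAs(new PrivilegedExceptionAction<Boolean>() {
            @Override
            public Boolean run() throws Exception {
                return job.waitForCompletion(true);
            }
        });
    }
}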
