JobInProgress.finishedMaps

Best code snippets using org.apache.hadoop.mapred.JobInProgress.finishedMaps (showing top 15 results out of 315)

  • Common ways to obtain JobInProgress
    JobChangeEvent jobChangeEvent;
    JobInProgress j = jobChangeEvent.getJobInProgress();
origin: com.facebook.hadoop/hadoop-core

@Override
public boolean shouldSpeculateAllRemainingMaps() {
 if (speculativeMapUnfininshedThreshold == 0) {
  return false;
 }
 int unfinished = desiredMaps() - finishedMaps();
 // Speculate when only a few maps (or a single straggler) remain.
 return unfinished < desiredMaps() * speculativeMapUnfininshedThreshold ||
   unfinished == 1;
}
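The threshold test above can be exercised in isolation. The sketch below reimplements the same predicate as a standalone method with hypothetical inputs; the names are stand-ins, not the real JobInProgress fields:

```java
// Standalone sketch of the speculation predicate above.
// Names and values are illustrative, not the JobInProgress API.
public class SpeculationCheck {
    // unfinishedThreshold is the fraction of unfinished maps below which
    // all remaining maps become candidates for speculative execution.
    static boolean shouldSpeculateAll(int desiredMaps, int finishedMaps,
                                      double unfinishedThreshold) {
        if (unfinishedThreshold == 0) {
            return false;                       // feature disabled
        }
        int unfinished = desiredMaps - finishedMaps;
        // Speculate when few maps remain (below the threshold fraction)
        // or when exactly one straggler is left.
        return unfinished < desiredMaps * unfinishedThreshold
                || unfinished == 1;
    }
}
```

For example, with 100 maps, 95 finished, and a 10% threshold, 5 unfinished is below 10, so every remaining map may be speculated.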
origin: org.apache.hadoop/hadoop-core

/**
 * Check if we can schedule an off-switch task for this job.
 * We check the number of missed scheduling opportunities for the job;
 * if it has 'waited' long enough, we go ahead and schedule.
 * 
 * @param numTaskTrackers number of tasktrackers
 * @return <code>true</code> if we can schedule off-switch, 
 *         <code>false</code> otherwise
 */
public boolean scheduleOffSwitch(int numTaskTrackers) {
 long missedTaskTrackers = getNumSchedulingOpportunities();
 long requiredSlots = 
  Math.min((desiredMaps() - finishedMaps()), numTaskTrackers);
 
 return (requiredSlots * localityWaitFactor) < missedTaskTrackers;
}
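The decision reduces to one inequality: the job may run off-switch once its missed opportunities exceed its required slots scaled by the locality-wait factor. A self-contained sketch of that arithmetic, with hypothetical inputs in place of the JobInProgress state:

```java
// Illustrative sketch of the off-switch scheduling test above.
// All parameters are hypothetical stand-ins for JobInProgress state
// and JobTracker configuration.
public class OffSwitchCheck {
    static boolean scheduleOffSwitch(int desiredMaps, int finishedMaps,
                                     int numTaskTrackers,
                                     long missedOpportunities,
                                     double localityWaitFactor) {
        // A job never needs more slots than it has pending maps,
        // nor more than there are tasktrackers in the cluster.
        long requiredSlots =
            Math.min(desiredMaps - finishedMaps, numTaskTrackers);
        // Schedule off-switch once the job has been skipped more times
        // than its (scaled) outstanding demand.
        return requiredSlots * localityWaitFactor < missedOpportunities;
    }
}
```

With 8 pending maps on a 5-tracker cluster and a wait factor of 1.0, the job needs 5 slots, so it schedules off-switch only after being passed over at least 6 times.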

origin: org.apache.hadoop/hadoop-test

/**
 * Wait till noOfTasksToFinish tasks of the type specified by isMap
 * are finished. This currently takes a JobInProgress object and uses its
 * API directly to determine the number of finished tasks.
 * 
 * <p>
 * 
 * TODO: It should eventually take a JobID and get the information from
 * the JobTracker to check the number of finished tasks.
 * 
 * @param jip the job whose tasks are being waited on
 * @param isMap true to count map tasks, false to count reduce tasks
 * @param noOfTasksToFinish number of tasks that must finish before returning
 * @throws InterruptedException if the polling sleep is interrupted
 */
static void waitTillNTotalTasksFinish(JobInProgress jip, boolean isMap,
  int noOfTasksToFinish)
  throws InterruptedException {
 int noOfTasksAlreadyFinished = 0;
 while (noOfTasksAlreadyFinished < noOfTasksToFinish) {
  Thread.sleep(1000);
  noOfTasksAlreadyFinished =
    (isMap ? jip.finishedMaps() : jip.finishedReduces());
  LOG.info("Waiting till " + noOfTasksToFinish
    + (isMap ? " map" : " reduce") + " tasks of job "
    + jip.getJobID() + " finish; " + noOfTasksAlreadyFinished
    + " tasks have finished so far.");
 }
}
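The helper above polls once a second with no upper bound, so a hung job blocks the caller forever. A bounded variant of the same poll-and-sleep pattern might look like the sketch below, where the IntSupplier stands in for jip.finishedMaps()/finishedReduces() and the timeout is an addition, not part of the original helper:

```java
import java.util.function.IntSupplier;

// Bounded variant of the poll-and-sleep pattern above. The supplier is
// a stand-in for the finished-task counter; the deadline is new here.
public class TaskWaiter {
    static boolean waitForTasks(IntSupplier finishedCount, int target,
                                long pollMillis, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (finishedCount.getAsInt() < target) {
            if (System.currentTimeMillis() >= deadline) {
                return false;                  // gave up before completion
            }
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // preserve status
                return false;
            }
        }
        return true;
    }
}
```

Returning a boolean instead of looping forever lets test code fail fast with a diagnostic rather than hanging the whole suite.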
origin: org.apache.hadoop/hadoop-core

/**
 * Writes an XML-formatted table of the jobs in the list to the given writer.
 * This is called repeatedly for different lists of jobs (e.g., running, completed, failed).
 */
public void generateJobTable(JspWriter out, String label, List<JobInProgress> jobs)
  throws IOException {
 if (jobs.size() > 0) {
  for (JobInProgress job : jobs) {
   JobProfile profile = job.getProfile();
   JobStatus status = job.getStatus();
   JobID jobid = profile.getJobID();
   int desiredMaps = job.desiredMaps();
   int desiredReduces = job.desiredReduces();
   int completedMaps = job.finishedMaps();
   int completedReduces = job.finishedReduces();
   String name = profile.getJobName();
   out.print("<" + label + "_job jobid=\"" + jobid + "\">\n");
   out.print("  <jobid>" + jobid + "</jobid>\n");
   out.print("  <user>" + profile.getUser() + "</user>\n");
   out.print("  <name>" + ("".equals(name) ? "&nbsp;" : name) + "</name>\n");
   out.print("  <map_complete>" + StringUtils.formatPercent(status.mapProgress(), 2) + "</map_complete>\n");
   out.print("  <map_total>" + desiredMaps + "</map_total>\n");
   out.print("  <maps_completed>" + completedMaps + "</maps_completed>\n");
   out.print("  <reduce_complete>" + StringUtils.formatPercent(status.reduceProgress(), 2) + "</reduce_complete>\n");
   out.print("  <reduce_total>" + desiredReduces + "</reduce_total>\n");
   out.print("  <reduces_completed>" + completedReduces + "</reduces_completed>\n");
   out.print("</" + label + "_job>\n");
  }
 }
}
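Note that the snippet interpolates the job name into XML unescaped, so a name containing <, & or quotes yields malformed output. A minimal escaping helper (illustrative only; not part of Hadoop's StringUtils) could be applied to such fields:

```java
// Minimal XML text escaping for values interpolated into the job table.
// Hypothetical helper; not part of the snippet's StringUtils.
public class XmlEscape {
    static String escapeXml(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '&':  sb.append("&amp;");  break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&apos;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```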
origin: org.apache.hadoop/hadoop-core

for (JobInProgress job : jobQueue) {
 if (job.getStatus().getRunState() == JobStatus.RUNNING) {
  neededMaps += job.desiredMaps() - job.finishedMaps();
  neededReduces += job.desiredReduces() - job.finishedReduces();
origin: org.apache.hadoop/hadoop-test

Integer.parseInt(finMaps) == jip.finishedMaps());

Popular methods of JobInProgress

  • getFinishTime
  • getJobID
  • getPriority
  • getStatus
  • getTasks
    Get all the tasks of the desired type in this job.
  • isComplete
  • runningMaps
  • runningReduces
  • <init>
  • convertTrackerNameToHostName
  • desiredMaps
  • desiredReduces
  • fail
  • finishedReduces
  • getBlackListedTrackers
  • getCounters
  • getJobConf
  • getJobCounters
  • getLaunchTime
