DataNode.getStartupOption

Best Java code snippets using org.apache.hadoop.hdfs.server.datanode.DataNode.getStartupOption (Showing top 9 results out of 315)

  • Common ways to obtain a DataNode:

  DataNode d = (DataNode) blockScannerServlet.getServletContext().getAttribute(str);
  DataNode d = bPOfferService.getDataNode();
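The snippets on this page all follow one pattern: `DataNode.parseArguments` stores the parsed `StartupOption` in the `Configuration`, and `getStartupOption` reads it back out. A minimal, self-contained sketch of that round trip, using a plain `Map` as a stand-in for Hadoop's `Configuration` (the key name, flag handling, and helper bodies here are illustrative, not Hadoop's actual internals):

```java
import java.util.HashMap;
import java.util.Map;

public class StartupOptionDemo {
    // Stand-in for HdfsServerConstants.StartupOption (only two values shown).
    enum StartupOption { REGULAR, ROLLBACK }

    // Illustrative key; the real DataNode stores the option under its own conf key.
    static final String STARTUP_KEY = "dfs.datanode.startup";

    // Parse command-line args into the conf; returns false on an unrecognized flag.
    static boolean parseArguments(String[] args, Map<String, String> conf) {
        StartupOption opt = StartupOption.REGULAR; // default when no flag is given
        for (String a : args) {
            if ("-regular".equalsIgnoreCase(a)) {
                opt = StartupOption.REGULAR;
            } else if ("-rollback".equalsIgnoreCase(a)) {
                opt = StartupOption.ROLLBACK;
            } else {
                return false; // unknown argument: parse fails
            }
        }
        conf.put(STARTUP_KEY, opt.name());
        return true;
    }

    // Read the option back out of the conf, defaulting to REGULAR.
    static StartupOption getStartupOption(Map<String, String> conf) {
        return StartupOption.valueOf(
            conf.getOrDefault(STARTUP_KEY, StartupOption.REGULAR.name()));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        boolean ok = parseArguments(new String[] {"-rollback"}, conf);
        System.out.println(ok + " " + getStartupOption(conf)); // prints: true ROLLBACK
    }
}
```

This is why callers below can treat a null or non-REGULAR result as a signal that the datanode was started in a special mode such as upgrade or rollback.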
origin: org.apache.hadoop/hadoop-hdfs

/**
 * Allows submission of a disk balancer job.
 * @param planID - Hash value of the plan.
 * @param planVersion - Plan version, reserved for future use. Only
 *                    version 1 is supported now.
 * @param planFile - Plan file name.
 * @param planData - Actual plan data in JSON format.
 * @param skipDateCheck - If true, skips the plan date check.
 * @throws IOException if the caller lacks superuser privilege or the
 *                     datanode is not in a regular state.
 */
@Override
public void submitDiskBalancerPlan(String planID, long planVersion,
  String planFile, String planData, boolean skipDateCheck)
  throws IOException {
 checkSuperuserPrivilege();
 if (getStartupOption(getConf()) != StartupOption.REGULAR) {
  throw new DiskBalancerException(
    "Datanode is in special state, e.g. Upgrade/Rollback etc."
      + " Disk balancing not permitted.",
    DiskBalancerException.Result.DATANODE_STATUS_NOT_REGULAR);
 }
 getDiskBalancer().submitPlan(planID, planVersion, planFile, planData,
     skipDateCheck);
}
origin: org.apache.hadoop/hadoop-hdfs

final StartupOption startOpt = getStartupOption(getConf());
if (startOpt == null) {
 throw new IOException("Startup option not set.");
}
origin: com.facebook.hadoop/hadoop-core

String nameserviceId = this.namespaceManager.get(namespaceId).getNameserviceId();
Collection<StorageDirectory> newStorageDirectories =
 storage.recoverTransitionAdditionalRead(nsInfo, newDirs, getStartupOption(conf));
storage.recoverTransitionRead(this, namespaceId, nsInfo, newDirs, 
 getStartupOption(conf), nameserviceId);
origin: ch.cern.hadoop/hadoop-hdfs

/**
 * Process the given arg list as command line arguments to the DataNode
 * to make sure we get the expected result. If the expected result is
 * success, further validate that the parsed startup option is the
 * same as what was expected.
 *
 * @param expectSuccess whether parsing is expected to succeed
 * @param expectedOption the startup option expected on success
 * @param conf the configuration the parsed option is written to
 * @param arg the command-line arguments to parse
 */
private static void checkExpected(boolean expectSuccess,
                 StartupOption expectedOption,
                 Configuration conf,
                 String ... arg) {
 String[] args = new String[arg.length];
 int i = 0;
 for (String currentArg : arg) {
  args[i++] = currentArg;
 }
 boolean returnValue = DataNode.parseArguments(args, conf);
 StartupOption option = DataNode.getStartupOption(conf);
 assertThat(returnValue, is(expectSuccess));
 if (expectSuccess) {
  assertThat(option, is(expectedOption));
 }
}
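A helper like `checkExpected` above is typically driven with a handful of flag combinations. A self-contained sketch of that idea, with a stub parser standing in for `DataNode.parseArguments` (the flag names mirror the DataNode CLI, but the stub logic and class names here are illustrative):

```java
public class CheckExpectedDemo {
    // Stand-in for HdfsServerConstants.StartupOption.
    enum StartupOption { REGULAR, ROLLBACK }

    // Stub for DataNode.parseArguments: returns the parsed option, or null on failure.
    static StartupOption parse(String... args) {
        if (args.length == 0) return StartupOption.REGULAR; // no flags: default
        if (args.length == 1 && "-regular".equalsIgnoreCase(args[0])) return StartupOption.REGULAR;
        if (args.length == 1 && "-rollback".equalsIgnoreCase(args[0])) return StartupOption.ROLLBACK;
        return null; // unknown flag or too many flags
    }

    // Mirrors checkExpected: parse, then verify success and the resulting option.
    static void checkExpected(boolean expectSuccess, StartupOption expected, String... args) {
        StartupOption actual = parse(args);
        if ((actual != null) != expectSuccess)
            throw new AssertionError("success mismatch for " + java.util.Arrays.toString(args));
        if (expectSuccess && actual != expected)
            throw new AssertionError("expected " + expected + " but got " + actual);
    }

    public static void main(String[] args) {
        checkExpected(true, StartupOption.REGULAR);               // no flags -> REGULAR
        checkExpected(true, StartupOption.ROLLBACK, "-rollback"); // explicit rollback
        checkExpected(false, null, "-bogus");                     // unknown flag fails
        System.out.println("all checks passed");
    }
}
```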
origin: org.jvnet.hudson.hadoop/hadoop-core

StartupOption startOpt = getStartupOption(conf);
assert startOpt != null : "Startup option must be set.";
origin: io.fabric8/fabric-hadoop

StartupOption startOpt = getStartupOption(conf);
assert startOpt != null : "Startup option must be set.";
origin: ch.cern.hadoop/hadoop-hdfs

final StartupOption startOpt = getStartupOption(conf);
if (startOpt == null) {
 throw new IOException("Startup option not set.");
}
origin: com.facebook.hadoop/hadoop-core

void setupNSStorage() throws IOException {
 StartupOption startOpt = getStartupOption(conf);
 assert startOpt != null : "Startup option must be set.";
 // ...
}
origin: io.prestosql.hadoop/hadoop-apache

final StartupOption startOpt = getStartupOption(conf);
if (startOpt == null) {
 throw new IOException("Startup option not set.");
}

org.apache.hadoop.hdfs.server.datanode.DataNode.getStartupOption

Popular methods of DataNode

  • shutdown
    Shut down this instance of the datanode. Returns only after shutdown is complete.
  • createDataNode
    Instantiate & start a single datanode daemon and wait for it to finish.
  • createInterDataNodeProtocolProxy
  • getConf
  • getMetrics
  • instantiateDataNode
    Instantiate a single datanode object, along with its secure resources.
  • runDatanodeDaemon
    Start a single datanode daemon and wait for it to finish.
  • <init>
    Create the DataNode given a configuration, an array of dataDirs, and a namenode proxy.
  • getXceiverCount
    Number of concurrent xceivers per node.
  • parseArguments
    Parse and verify command line arguments and set configuration parameters.
  • recoverBlocks
  • syncBlock
    Block synchronization
  • checkDiskError
  • handleDiskError
  • join
  • makeInstance
  • newSocket
  • notifyNamenodeDeletedBlock
  • notifyNamenodeReceivedBlock

