@Override
protected void reduce(BytesWritable key, Iterable<Text> values, Context context)
    throws IOException, InterruptedException
{
  // Combine the partial rows for this key before handing them to the actual reduce logic.
  SortableBytes keyBytes = SortableBytes.fromBytesWritable(key);
  final Iterable<DimValueCount> combinedIterable = combineRows(values);
  innerReduce(context, keyBytes, combinedIterable);
}
@Override
public boolean run()
{
  // Publish the segments produced by the index-generator job to the configured metadata segment table.
  final List<DataSegment> segments = IndexGeneratorJob.getPublishedSegments(config);
  final String segmentTable = config.getSchema().getIOConfig().getMetadataUpdateSpec().getSegmentTable();
  handler.publishSegments(segmentTable, segments, HadoopDruidIndexerConfig.JSON_MAPPER);
  return true;
}
public String getWorkingPath()
{
  // Fall back to the default working path when none is configured.
  final String workingPath = schema.getTuningConfig().getWorkingPath();
  return workingPath == null ? DEFAULT_WORKING_PATH : workingPath;
}
public void setShardSpecs(Map<Long, List<HadoopyShardSpec>> shardSpecs)
{
  this.schema = schema.withTuningConfig(schema.getTuningConfig().withShardSpecs(shardSpecs));
  // The schema changed, so re-deserialize the cached PathSpec from the new IO config.
  this.pathSpec = JSON_MAPPER.convertValue(schema.getIOConfig().getPathSpec(), PathSpec.class);
}
public boolean isUpdaterJobSpecSet()
{
  return schema.getIOConfig().getMetadataUpdateSpec() != null;
}

public PartitionsSpec getPartitionsSpec()
{
  return schema.getTuningConfig().getPartitionsSpec();
}

public boolean isForceExtendableShardSpecs()
{
  return schema.getTuningConfig().isForceExtendableShardSpecs();
}

public int getMaxParseExceptions()
{
  return schema.getTuningConfig().getMaxParseExceptions();
}

public IndexSpec getIndexSpec()
{
  return schema.getTuningConfig().getIndexSpec();
}

public boolean isOverwriteFiles()
{
  return schema.getTuningConfig().isOverwriteFiles();
}

public boolean isCombineText()
{
  return schema.getTuningConfig().isCombineText();
}

public boolean isLogParseExceptions()
{
  return schema.getTuningConfig().isLogParseExceptions();
}
public HadoopIngestionSpec withDataSchema(DataSchema schema)
{
  return new HadoopIngestionSpec(schema, ioConfig, tuningConfig, uniqueId);
}
@Override
protected void setup(Context context) throws IOException, InterruptedException
{
  super.setup(context);
  // Cache the rollup (query) granularity from the ingestion spec for use during reduce.
  rollupGranularity = getConfig().getGranularitySpec().getQueryGranularity();
}
@Override
public TaskLocation getLocation()
{
  return TaskLocation.create("testHost", 10000, 10000);
}
@Override
public String getErrorMessage()
{
  if (job == null) {
    return null;
  }
  // JSON_MAPPER is static; qualify it by class rather than through the config instance.
  return Utils.getFailureMessage(job, HadoopDruidIndexerConfig.JSON_MAPPER);
}
@Override
public Jobby getPartitionJob(HadoopDruidIndexerConfig config)
{
  return new DeterminePartitionsJob(config);
}
@Override
public ShardSpec apply(HadoopyShardSpec input)
{
  return input.getActualSpec();
}
@Override
public Map<String, Object> getStats()
{
  if (indexJob == null) {
    return null;
  }
  return indexJob.getStats();
}
@Override
public String getErrorMessage()
{
  if (job == null) {
    return null;
  }
  return job.getErrorMessage();
}