private static String withQueryId(String druidQuery, String queryId) throws IOException {
  Query<?> queryWithId =
      DruidStorageHandlerUtils.JSON_MAPPER.readValue(druidQuery, BaseQuery.class).withId(queryId);
  return DruidStorageHandlerUtils.JSON_MAPPER.writeValueAsString(queryWithId);
}
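// Illustrative usage sketch (the caller and the "hive.query.id" lookup below are assumptions,
// not part of this excerpt): tag the serialized Druid query with Hive's query id so the request
// can be correlated in the Druid broker's request logs.
//
//   String taggedQuery = withQueryId(druidQueryJson, conf.get("hive.query.id"));
//   // taggedQuery is then what gets embedded in the split / submitted to the broker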
@Override
public DruidWritable getCurrentValue() throws IOException, InterruptedException {
  // Create new value
  DruidWritable value = new DruidWritable(false);
  value.getValue().put(DruidConstants.EVENT_TIMESTAMP_COLUMN,
      current.getTimestamp() == null ? null : current.getTimestamp().getMillis());
  value.getValue().putAll(current.getValue().getBaseObject());
  return value;
}
private static HiveDruidSplit[] distributeScanQuery(String address, ScanQuery query, Path dummyPath)
    throws IOException {
  // If the query has a limit, it is executed as a single fetch and not distributed
  final boolean isFetch = query.getLimit() < Long.MAX_VALUE;
  if (isFetch) {
    return new HiveDruidSplit[] {
        new HiveDruidSplit(DruidStorageHandlerUtils.JSON_MAPPER.writeValueAsString(query),
            dummyPath, new String[] {address})};
  }

  final List<LocatedSegmentDescriptor> segmentDescriptors = fetchLocatedSegmentDescriptors(address, query);

  // Create one input split per segment
  final int numSplits = segmentDescriptors.size();
  final HiveDruidSplit[] splits = new HiveDruidSplit[numSplits];
  for (int i = 0; i < numSplits; i++) {
    final LocatedSegmentDescriptor locatedSD = segmentDescriptors.get(i);
    final String[] hosts = new String[locatedSD.getLocations().size() + 1];
    for (int j = 0; j < locatedSD.getLocations().size(); j++) {
      hosts[j] = locatedSD.getLocations().get(j).getHost();
    }
    // Fall back to the broker address if no segment location can serve the split
    hosts[locatedSD.getLocations().size()] = address;
    // Create a partial query scoped to this single segment
    final SegmentDescriptor newSD =
        new SegmentDescriptor(locatedSD.getInterval(), locatedSD.getVersion(), locatedSD.getPartitionNumber());
    final Query partialQuery =
        query.withQuerySegmentSpec(new MultipleSpecificSegmentSpec(Lists.newArrayList(newSD)));
    splits[i] = new HiveDruidSplit(DruidStorageHandlerUtils.JSON_MAPPER.writeValueAsString(partialQuery),
        dummyPath, hosts);
  }
  return splits;
}
@Override
public Sequence<T> run(final QueryPlus<T> queryPlus, Map<String, Object> responseContext) {
  DataSource dataSource = queryPlus.getQuery().getDataSource();
  if (dataSource instanceof QueryDataSource) {
    return run(queryPlus.withQuery((Query<T>) ((QueryDataSource) dataSource).getQuery()), responseContext);
  } else {
    return baseRunner.run(queryPlus, responseContext);
  }
}
private static HiveDruidSplit[] distributeSelectQuery(String address, SelectQuery query, Path dummyPath)
    throws IOException {
  // If the query is executed as a fetch operator, it is not distributed
  final boolean isFetch = query.getContextBoolean(DruidConstants.DRUID_QUERY_FETCH, false);
  if (isFetch) {
    return new HiveDruidSplit[] {
        new HiveDruidSplit(DruidStorageHandlerUtils.JSON_MAPPER.writeValueAsString(query),
            dummyPath, new String[] {address})};
  }

  final List<LocatedSegmentDescriptor> segmentDescriptors = fetchLocatedSegmentDescriptors(address, query);

  // Create one input split per segment
  final int numSplits = segmentDescriptors.size();
  final HiveDruidSplit[] splits = new HiveDruidSplit[numSplits];
  for (int i = 0; i < numSplits; i++) {
    final LocatedSegmentDescriptor locatedSD = segmentDescriptors.get(i);
    final String[] hosts = new String[locatedSD.getLocations().size()];
    for (int j = 0; j < locatedSD.getLocations().size(); j++) {
      hosts[j] = locatedSD.getLocations().get(j).getHost();
    }
    // Create a partial Select query scoped to this single segment, with a paging
    // threshold large enough to read the whole segment in one pass
    final SegmentDescriptor newSD =
        new SegmentDescriptor(locatedSD.getInterval(), locatedSD.getVersion(), locatedSD.getPartitionNumber());
    final SelectQuery partialQuery =
        query.withQuerySegmentSpec(new MultipleSpecificSegmentSpec(Lists.newArrayList(newSD)))
            .withPagingSpec(PagingSpec.newSpec(Integer.MAX_VALUE));
    splits[i] = new HiveDruidSplit(DruidStorageHandlerUtils.JSON_MAPPER.writeValueAsString(partialQuery),
        dummyPath, hosts);
  }
  return splits;
}
@Override
public Sequence<T> run(final Query<T> query) {
  DataSource dataSource = query.getDataSource();
  if (dataSource instanceof QueryDataSource) {
    return run((Query<T>) ((QueryDataSource) dataSource).getQuery());
  } else {
    return baseRunner.run(query);
  }
}
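// Note on the two run(...) overloads above (QueryPlus-based and plain Query-based): both
// unwrap a nested QueryDataSource recursively, so a query whose dataSource is itself a query
// keeps being re-run with the inner query until a concrete table datasource is reached and
// baseRunner can execute it.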
+ " in table properties"); SegmentMetadataQueryBuilder builder = new Druids.SegmentMetadataQueryBuilder(); builder.dataSource(dataSource); builder.merge(true); builder.analysisTypes(); SegmentMetadataQuery query = builder.build(); throw new SerDeException(e); for (Entry<String, ColumnAnalysis> columnInfo : schemaInfo.getColumns().entrySet()) { if (columnInfo.getKey().equals(DruidConstants.DEFAULT_TIMESTAMP_COLUMN)) { PrimitiveTypeInfo type = DruidSerDeUtils.convertDruidToHiveType(columnInfo.getValue().getType()); // field type columnTypes.add(type instanceof TimestampLocalTZTypeInfo ? tsTZTypeInfo : type); inspectors.add(PrimitiveObjectInspectorFactory.getPrimitiveWritableObjectInspector(type));
@Override
public Sequence<T> run(QueryPlus<T> queryPlus, Map<String, Object> responseContext) {
  if (QueryContexts.isBySegment(queryPlus.getQuery())) {
    return baseRunner.run(queryPlus, responseContext);
  }
  return doRun(baseRunner, queryPlus, responseContext);
}
@Override
public Sequence<T> run(Query<T> query) {
  if (query.getContextBySegment(false)) {
    return baseRunner.run(query);
  }
  return doRun(baseRunner, query);
}
@Override
public List<String> getNames() {
  return query.getDataSource().getNames();
}
@Override
public DruidWritable getCurrentValue() throws IOException, InterruptedException {
  // Create new value
  DruidWritable value = new DruidWritable(false);
  value.getValue().put("timestamp", current.getTimestamp().getMillis());
  if (values.hasNext()) {
    value.getValue().putAll(values.next().getBaseObject());
    return value;
  }
  return value;
}
@Override
public Sequence<T> apply(DataSource singleSource) {
  return baseRunner.run(
      queryPlus.withQuery(query.withDataSource(singleSource)),
      responseContext);
}
@Override
public void dataSource(QueryType query) {
  setDimension(DruidMetrics.DATASOURCE, DataSourceUtil.getMetricName(query.getDataSource()));
}
private static List<LocatedSegmentDescriptor> fetchLocatedSegmentDescriptors(String address, BaseQuery query)
    throws IOException {
  final String intervals = StringUtils.join(query.getIntervals(), ","); // Comma-separated intervals without brackets
  final String request = String.format(
      "http://%s/druid/v2/datasources/%s/candidates?intervals=%s",
      address, query.getDataSource().getNames().get(0), URLEncoder.encode(intervals, "UTF-8"));
  LOG.debug("sending request {} to query for segments", request);
  final InputStream response;
  try {
    response = DruidStorageHandlerUtils.submitRequest(DruidStorageHandler.getHttpClient(),
        new Request(HttpMethod.GET, new URL(request)));
  } catch (Exception e) {
    throw new IOException(org.apache.hadoop.util.StringUtils.stringifyException(e));
  }
  // Retrieve results
  final List<LocatedSegmentDescriptor> segmentDescriptors;
  try {
    segmentDescriptors = DruidStorageHandlerUtils.JSON_MAPPER.readValue(response,
        new TypeReference<List<LocatedSegmentDescriptor>>() {
        });
  } catch (Exception e) {
    response.close();
    throw new IOException(org.apache.hadoop.util.StringUtils.stringifyException(e));
  }
  return segmentDescriptors;
}
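// The candidates endpoint above returns, for each segment overlapping the query intervals, its
// descriptor plus the locations currently serving it; distributeScanQuery and distributeSelectQuery
// use that list to build one HiveDruidSplit per segment, with the serving hosts as the split's
// preferred locations.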
private QueryMetrics<? super Query<T>> acquireResponseMetrics() {
  if (queryMetrics == null) {
    queryMetrics = toolChest.makeMetrics(query);
    queryMetrics.server(host);
  }
  return queryMetrics;
}
@Override
public QueryRunner<Result<SelectResultValue>> preMergeQueryDecoration(
    QueryRunner<Result<SelectResultValue>> runner) {
  return new IntervalChunkingQueryRunner<Result<SelectResultValue>>(runner, config.getChunkPeriod());
}
@Override
public boolean next(NullWritable key, DruidWritable value) {
  if (nextKeyValue()) {
    // Update value
    value.getValue().clear();
    value.getValue().put(DruidConstants.EVENT_TIMESTAMP_COLUMN,
        current.getTimestamp() == null ? null : current.getTimestamp().getMillis());
    value.getValue().putAll(current.getValue().getBaseObject());
    return true;
  }
  return false;
}
@Override
public boolean next(NullWritable key, DruidWritable value) {
  if (nextKeyValue()) {
    // Update value
    value.getValue().clear();
    value.getValue().put("timestamp", current.getTimestamp().getMillis());
    if (values.hasNext()) {
      value.getValue().putAll(values.next().getBaseObject());
    }
    return true;
  }
  return false;
}