DocWriteRequest.OpType opType = action.opType();
try (XContentBuilder metadata = XContentBuilder.builder(bulkContentType.xContent())) {
    metadata.startObject();
    metadata.startObject(opType.getLowercase());
    if (Strings.hasLength(action.index())) {
        metadata.field("_index", action.index());
    }
    // …
}
// …
XContentType indexXContentType = indexRequest.getContentType();
try (XContentParser parser = XContentHelper.createParser(/* … */)) {
    // …
}
// …
source = XContentHelper.toXContent((UpdateRequest) action, bulkContentType, false).toBytesRef();
/**
 * Convert from XContent to a Map for easy reading.
 */
public Map<String, Object> toMap() {
    return convertToMap(status, false).v2();
}
@Override
public String toString() {
    String source = "_na_";
    try {
        source = XContentHelper.convertToJson(content, false, xContentType);
    } catch (Exception e) {
        // ignore
    }
    return "put stored script {id [" + id + "]" +
        (context != null ? ", context [" + context + "]" : "") +
        ", content [" + source + "]}";
}
public XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException {
    builder.field("completed", completed);
    builder.startObject("task");
    task.toXContent(builder, params);
    builder.endObject();
    if (error != null) {
        XContentHelper.writeRawField("error", error, builder, params);
    }
    if (response != null) {
        XContentHelper.writeRawField("response", response, builder, params);
    }
    return builder;
}
public static void toInnerXContent(IndexTemplateMetaData indexTemplateMetaData, XContentBuilder builder,
        ToXContent.Params params) throws IOException {
    builder.field("order", indexTemplateMetaData.order());
    if (indexTemplateMetaData.version() != null) {
        builder.field("version", indexTemplateMetaData.version());
    }
    builder.field("index_patterns", indexTemplateMetaData.patterns());
    builder.startObject("settings");
    // …
    for (ObjectObjectCursor<String, CompressedXContent> cursor : indexTemplateMetaData.mappings()) {
        byte[] mappingSource = cursor.value.uncompressed();
        Map<String, Object> mapping = XContentHelper.convertToMap(new BytesArray(mappingSource), true).v2();
        if (mapping.size() == 1 && mapping.containsKey(cursor.key)) {
            // …
        }
        // …
    }
    // …
    builder.startArray("mappings");
    for (ObjectObjectCursor<String, CompressedXContent> cursor : indexTemplateMetaData.mappings()) {
        byte[] data = cursor.value.uncompressed();
        builder.map(XContentHelper.convertToMap(new BytesArray(data), true).v2());
    }
    builder.endArray();
    // …
}
builder.field(Fields._SHARD, shard.getShardId());
builder.field(Fields._NODE, shard.getNodeIdText());
builder.field(Fields._INDEX, RemoteClusterAware.buildRemoteIndexName(clusterAlias, index));
XContentHelper.writeRawField(SourceFieldMapper.NAME, source, builder, params);
public static void toXContent(AliasMetaData aliasMetaData, XContentBuilder builder, ToXContent.Params params)
        throws IOException {
    builder.startObject(aliasMetaData.alias());
    boolean binary = params.paramAsBoolean("binary", false);
    if (aliasMetaData.filter() != null) {
        if (binary) {
            builder.field("filter", aliasMetaData.filter.compressed());
        } else {
            builder.field("filter",
                    XContentHelper.convertToMap(new BytesArray(aliasMetaData.filter().uncompressed()), true).v2());
        }
    }
    if (aliasMetaData.indexRouting() != null) {
        builder.field("index_routing", aliasMetaData.indexRouting());
    }
    if (aliasMetaData.searchRouting() != null) {
        builder.field("search_routing", aliasMetaData.searchRouting());
    }
    if (aliasMetaData.writeIndex() != null) {
        builder.field("is_write_index", aliasMetaData.writeIndex());
    }
    builder.endObject();
}
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
    builder.startObject();
    if (docAsUpsert) {
        builder.field("doc_as_upsert", docAsUpsert);
    }
    // …
    try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY,
            LoggingDeprecationHandler.INSTANCE, doc.source(), xContentType)) {
        builder.field("doc");
        builder.copyCurrentStructure(parser);
    }
    // …
    try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY,
            LoggingDeprecationHandler.INSTANCE, upsertRequest.source(), xContentType)) {
        builder.field("upsert");
        builder.copyCurrentStructure(parser);
    }
    // …
    builder.endObject();
    return builder;
}
public static Map<String, Object> convertToMap(ToXContent part) throws IOException {
    XContentBuilder builder = XContentFactory.jsonBuilder();
    builder.startObject();
    part.toXContent(builder, EMPTY_PARAMS);
    builder.endObject();
    return XContentHelper.convertToMap(builder.bytes(), false, builder.contentType()).v2();
}
parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, SearchGuardDeprecationHandler.INSTANCE,
        ref, XContentType.JSON);
parser.nextToken();
parser.nextToken();
if (!type.equals(parser.currentName())) {
    log.error("Cannot parse config for type {} because {}!={}", type, type, parser.currentName());
    return null;
}
parser.nextToken();
return new Tuple<Long, Settings>(version, Settings.builder()
        .loadFromStream("dummy.json", new ByteArrayInputStream(parser.binaryValue()), true).build());
} catch (final IOException e) {
    throw ExceptionsHelper.convertToElastic(e);
}
/**
 * Applies the defaults to the request that cannot be applied during construction. This is super inefficient because it
 * must deserialize and reserialize the request's source, but this is the only way to do it in 2.x.
 */
void applyDefaults() {
    if (searchRequest.source() == null) {
        searchRequest.source(DEFAULT_SOURCE);
    }
    try {
        Map<String, Object> newSource = XContentHelper.convertToMap(DEFAULT_SOURCE, true).v2();
        Tuple<XContentType, Map<String, Object>> sourceAndContent = XContentHelper.convertToMap(searchRequest.source(), true);
        XContentHelper.update(newSource, sourceAndContent.v2(), false);
        XContentBuilder builder = XContentFactory.contentBuilder(sourceAndContent.v1());
        builder.map(newSource);
        searchRequest.source(builder.bytes());
    } catch (IOException e) {
        throw new ElasticsearchException("Unexpected IOException while applying default source", e);
    }
}
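// The heavy lifting in applyDefaults is XContentHelper.update, which overlays the request's own
// source on top of the parsed defaults. The following is a rough, self-contained sketch of that
// overlay on plain java.util maps (the class and method names are hypothetical illustrations,
// not Elasticsearch's implementation):

```java
import java.util.Map;
import java.util.Objects;

public class SourceMergeSketch {

    /**
     * Recursively overlays {@code changes} onto {@code base}, mirroring the shape of a
     * defaults-plus-request merge. Returns true if {@code base} was modified.
     * Hypothetical helper for illustration only.
     */
    @SuppressWarnings("unchecked")
    public static boolean applyOverlay(Map<String, Object> base, Map<String, Object> changes) {
        boolean changed = false;
        for (Map.Entry<String, Object> entry : changes.entrySet()) {
            Object existing = base.get(entry.getKey());
            if (existing instanceof Map && entry.getValue() instanceof Map) {
                // both sides are objects: merge them field by field
                changed |= applyOverlay((Map<String, Object>) existing, (Map<String, Object>) entry.getValue());
            } else if (!Objects.equals(existing, entry.getValue())) {
                // scalar value (or type mismatch): the request value wins over the default
                base.put(entry.getKey(), entry.getValue());
                changed = true;
            }
        }
        return changed;
    }
}
```

// Starting from the default source as the base map and overlaying the request's map reproduces
// the "defaults unless overridden" behavior; the boolean return also shows why the same merge
// can double as noop detection when nothing changed.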
/**
 * Sets the aliases that will be associated with the index when it gets created
 */
public PutIndexTemplateRequest aliases(BytesReference source) {
    // EMPTY is safe here because we never call namedObject
    try (XContentParser parser = XContentHelper
            .createParser(NamedXContentRegistry.EMPTY, LoggingDeprecationHandler.INSTANCE, source)) {
        // move to the first alias
        parser.nextToken();
        while ((parser.nextToken()) != XContentParser.Token.END_OBJECT) {
            alias(Alias.fromXContent(parser));
        }
        return this;
    } catch (IOException e) {
        throw new ElasticsearchParseException("Failed to parse aliases", e);
    }
}
@Override
protected void parseCreateField(ParseContext context, List<IndexableField> fields) throws IOException {
    BytesReference originalSource = context.sourceToParse().source();
    BytesReference source = originalSource;
    if (enabled && fieldType().stored() && source != null) {
        // Percolate and tv APIs may not set the source and that is ok, because these APIs will not index any data
        if (filter != null) {
            // we don't update the context source if we filter, we want to keep it as is...
            Tuple<XContentType, Map<String, Object>> mapTuple =
                    XContentHelper.convertToMap(source, true, context.sourceToParse().getXContentType());
            Map<String, Object> filteredSource = filter.apply(mapTuple.v2());
            BytesStreamOutput bStream = new BytesStreamOutput();
            XContentType contentType = mapTuple.v1();
            XContentBuilder builder = XContentFactory.contentBuilder(contentType, bStream).map(filteredSource);
            builder.close();
            source = bStream.bytes();
        }
        BytesRef ref = source.toBytesRef();
        fields.add(new StoredField(fieldType().name(), ref.bytes, ref.offset, ref.length));
    } else {
        source = null;
    }
    if (originalSource != null && source != originalSource && context.indexSettings().isSoftDeleteEnabled()) {
        // if we omitted source or modified it we add the _recovery_source to ensure we have it for ops based recovery
        BytesRef ref = originalSource.toBytesRef();
        fields.add(new StoredField(RECOVERY_SOURCE_NAME, ref.bytes, ref.offset, ref.length));
        fields.add(new NumericDocValuesField(RECOVERY_SOURCE_NAME, 1));
    }
}
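// The filter.apply step above prunes the parsed source map before it is re-serialized and stored.
// A much simplified, include-only version of that pruning is sketched below (the class and method
// names are hypothetical; the real source filter also supports exclude rules, wildcards, and
// nested paths):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class SourceFilterSketch {

    /**
     * Returns a copy of {@code source} keeping only the top-level keys listed in {@code includes}.
     * Hypothetical stand-in for a mapper's source filter; no wildcard or nested-path support.
     */
    public static Map<String, Object> includeOnly(Map<String, Object> source, Set<String> includes) {
        Map<String, Object> filtered = new HashMap<>();
        for (Map.Entry<String, Object> entry : source.entrySet()) {
            if (includes.contains(entry.getKey())) {
                filtered.put(entry.getKey(), entry.getValue());
            }
        }
        return filtered;
    }
}
```

// The filtered map would then be written back out with a builder, as parseCreateField does,
// leaving the original bytes untouched so recovery can still see the full source.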
/**
 * Prepare the request for merging the existing document with a new one, can optionally detect a noop change. Returns a
 * {@code Result} containing a new {@code IndexRequest} to be executed on the primary and replicas.
 */
Result prepareUpdateIndexRequest(ShardId shardId, UpdateRequest request, GetResult getResult, boolean detectNoop) {
    final long updateVersion = calculateUpdateVersion(request, getResult);
    final IndexRequest currentRequest = request.doc();
    final String routing = calculateRouting(getResult, currentRequest);
    final String parent = calculateParent(getResult, currentRequest);
    final Tuple<XContentType, Map<String, Object>> sourceAndContent =
            XContentHelper.convertToMap(getResult.internalSourceRef(), true);
    final XContentType updateSourceContentType = sourceAndContent.v1();
    final Map<String, Object> updatedSourceAsMap = sourceAndContent.v2();
    final boolean noop = !XContentHelper.update(updatedSourceAsMap, currentRequest.sourceAsMap(), detectNoop);
    // We can only actually turn the update into a noop if detectNoop is true, to preserve backwards compatibility and to
    // handle cases where users are repopulating multi-fields or adding synonyms, etc.
    if (detectNoop && noop) {
        UpdateResponse update = new UpdateResponse(shardId, getResult.getType(), getResult.getId(), getResult.getVersion(),
                DocWriteResponse.Result.NOOP);
        update.setGetResult(extractGetResult(request, request.index(), getResult.getSeqNo(), getResult.getPrimaryTerm(),
                getResult.getVersion(), updatedSourceAsMap, updateSourceContentType, getResult.internalSourceRef()));
        return new Result(update, DocWriteResponse.Result.NOOP, updatedSourceAsMap, updateSourceContentType);
    } else {
        final IndexRequest finalIndexRequest = Requests.indexRequest(request.index())
                .type(request.type()).id(request.id()).routing(routing).parent(parent)
                .source(updatedSourceAsMap, updateSourceContentType).version(updateVersion).versionType(request.versionType())
                .waitForActiveShards(request.waitForActiveShards()).timeout(request.timeout())
                .setRefreshPolicy(request.getRefreshPolicy());
        return new Result(finalIndexRequest, DocWriteResponse.Result.UPDATED, updatedSourceAsMap, updateSourceContentType);
    }
}
@Override
public void readFrom(StreamInput in) throws IOException {
    super.readFrom(in);
    indices = in.readStringArray();
    indicesOptions = IndicesOptions.readIndicesOptions(in);
    type = in.readOptionalString();
    source = in.readString();
    if (in.getVersion().before(Version.V_5_3_0)) {
        // we do not know the format from earlier versions so convert if necessary
        source = XContentHelper.convertToJson(new BytesArray(source), false, false, XContentFactory.xContentType(source));
    }
    updateAllTypes = in.readBoolean();
    concreteIndex = in.readOptionalWriteable(Index::new);
}
static Map<String, List<ContextMapping.InternalQueryContext>> parseContextBytes(BytesReference contextBytes,
        NamedXContentRegistry xContentRegistry, ContextMappings contextMappings) throws IOException {
    try (XContentParser contextParser = XContentHelper.createParser(xContentRegistry,
            LoggingDeprecationHandler.INSTANCE, contextBytes, CONTEXT_BYTES_XCONTENT_TYPE)) {
        contextParser.nextToken();
        Map<String, List<ContextMapping.InternalQueryContext>> queryContexts = new HashMap<>(contextMappings.size());
        assert contextParser.currentToken() == XContentParser.Token.START_OBJECT;
        XContentParser.Token currentToken;
        String currentFieldName;
        while ((currentToken = contextParser.nextToken()) != XContentParser.Token.END_OBJECT) {
            if (currentToken == XContentParser.Token.FIELD_NAME) {
                currentFieldName = contextParser.currentName();
                final ContextMapping<?> mapping = contextMappings.get(currentFieldName);
                queryContexts.put(currentFieldName, mapping.parseQueryContext(contextParser));
            }
        }
        return queryContexts;
    }
}
try (XContentParser parser = XContentHelper
        .createParser(NamedXContentRegistry.EMPTY, LoggingDeprecationHandler.INSTANCE, response.getSourceAsBytesRef())) {
    XContentParser.Token currentToken;
    while ((currentToken = parser.nextToken()) != XContentParser.Token.END_OBJECT) {
        if (currentToken == XContentParser.Token.FIELD_NAME) {
            if (pathElements[currentPathSlot].equals(parser.currentName())) {
                parser.nextToken();
                if (++currentPathSlot == pathElements.length) {
                    listener.onResponse(ShapeParser.parse(parser));
                }
                // …
            }
            // …
        }
    }
}
public Builder filter(String filter) {
    if (!Strings.hasLength(filter)) {
        this.filter = null;
        return this;
    }
    return filter(XContentHelper.convertToMap(XContentFactory.xContent(filter), filter, true));
}
@Test
public void testComplianceLicenseMap() throws Exception {
    SearchGuardLicense license = new SearchGuardLicense(XContentHelper
            .convertToMap(new BytesArray(FileHelper.loadFile("license1.json")), false, JsonXContent.jsonXContent.type()).v2(), cs);
    Assert.assertFalse(license.hasFeature(Feature.COMPLIANCE));
    Assert.assertArrayEquals(license.getFeatures(), new Feature[0]);

    license = new SearchGuardLicense(XContentHelper
            .convertToMap(new BytesArray(FileHelper.loadFile("license3.json")), false, JsonXContent.jsonXContent.type()).v2(), cs);
    Assert.assertFalse(license.hasFeature(Feature.COMPLIANCE));
    Assert.assertArrayEquals(license.getFeatures(), new Feature[0]);

    license = new SearchGuardLicense(XContentHelper
            .convertToMap(new BytesArray(FileHelper.loadFile("license2.json")), false, JsonXContent.jsonXContent.type()).v2(), cs);
    Assert.assertTrue(license.hasFeature(Feature.COMPLIANCE));
    Assert.assertArrayEquals(license.getFeatures(), Feature.values());
}
/**
 * Converts the given bytes into a map that is optionally ordered. The provided {@link XContentType} must be non-null.
 */
public static Tuple<XContentType, Map<String, Object>> convertToMap(BytesReference bytes, boolean ordered,
        XContentType xContentType) throws ElasticsearchParseException {
    try {
        final XContentType contentType;
        InputStream input;
        Compressor compressor = CompressorFactory.compressor(bytes);
        if (compressor != null) {
            InputStream compressedStreamInput = compressor.streamInput(bytes.streamInput());
            if (compressedStreamInput.markSupported() == false) {
                compressedStreamInput = new BufferedInputStream(compressedStreamInput);
            }
            input = compressedStreamInput;
        } else {
            input = bytes.streamInput();
        }
        contentType = xContentType != null ? xContentType : XContentFactory.xContentType(input);
        try (InputStream stream = input) {
            return new Tuple<>(Objects.requireNonNull(contentType),
                    convertToMap(XContentFactory.xContent(contentType), stream, ordered));
        }
    } catch (IOException e) {
        throw new ElasticsearchParseException("Failed to parse content to map", e);
    }
}
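// The compressor branch above wraps the decompressing stream in a BufferedInputStream only when
// mark/reset is unsupported, so the content type can be sniffed from the first bytes without
// consuming them. A self-contained illustration of that guard using the JDK's GZIP streams
// follows (the class and helper names are hypothetical; CompressorFactory itself is
// Elasticsearch code and is not used here):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class MarkableStreamSketch {

    /** Ensures the stream supports mark/reset, wrapping it only when needed. */
    public static InputStream markable(InputStream in) {
        return in.markSupported() ? in : new BufferedInputStream(in);
    }

    /** Gzip-compresses the given bytes. */
    public static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    /** Decompresses through a markable stream, peeking at the first byte before reading it all. */
    public static String gunzip(byte[] compressed) throws IOException {
        // GZIPInputStream does not support mark, so markable() adds a BufferedInputStream
        InputStream in = markable(new GZIPInputStream(new ByteArrayInputStream(compressed)));
        in.mark(1);                 // peek at the first byte, e.g. to sniff '{' for JSON
        int first = in.read();
        in.reset();                 // rewind so the full stream is still available to a parser
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            out.write(b);
        }
        byte[] all = out.toByteArray();
        if (all.length == 0 || first != all[0]) {
            throw new IOException("mark/reset did not preserve the stream");
        }
        return new String(all, StandardCharsets.UTF_8);
    }
}
```

// The same peek-then-rewind trick is what makes the xContentType auto-detection fallback safe:
// detection reads a few leading bytes, and the buffered wrapper lets the parser start from zero.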