private Terms termsForField( String fieldName ) throws IOException
{
    List<Terms> terms = new ArrayList<>();
    List<ReaderSlice> readerSlices = new ArrayList<>();
    for ( LeafReader leafReader : allLeafReaders() )
    {
        Fields fields = leafReader.fields();
        Terms leafTerms = fields.terms( fieldName );
        if ( leafTerms != null )
        {
            ReaderSlice readerSlice = new ReaderSlice( 0, Math.toIntExact( leafTerms.size() ), 0 );
            terms.add( leafTerms );
            readerSlices.add( readerSlice );
        }
    }
    Terms[] termsArray = terms.toArray( new Terms[terms.size()] );
    ReaderSlice[] readerSlicesArray = readerSlices.toArray( new ReaderSlice[readerSlices.size()] );
    return new MultiTerms( termsArray, readerSlicesArray );
}
  @Override
  public String toString() {
    return slice.toString() + ":" + postingsEnum;
  }
}
/** This method may return null if the field does not exist or if it has no terms. */
public static Terms getTerms(IndexReader r, String field) throws IOException {
  final List<LeafReaderContext> leaves = r.leaves();
  if (leaves.size() == 1) {
    // Single segment: no merging needed, return the leaf's terms directly.
    return leaves.get(0).reader().terms(field);
  }

  final List<Terms> termsPerLeaf = new ArrayList<>(leaves.size());
  final List<ReaderSlice> slicePerLeaf = new ArrayList<>(leaves.size());

  for (int leafIdx = 0; leafIdx < leaves.size(); leafIdx++) {
    LeafReaderContext ctx = leaves.get(leafIdx);
    Terms subTerms = ctx.reader().terms(field);
    if (subTerms != null) {
      termsPerLeaf.add(subTerms);
      // The third argument is the slice's reader ordinal; using leafIdx - 1
      // would yield -1 for the first leaf, so pass the leaf index itself.
      slicePerLeaf.add(new ReaderSlice(ctx.docBase, r.maxDoc(), leafIdx));
    }
  }

  if (termsPerLeaf.isEmpty()) {
    return null;
  }
  return new MultiTerms(termsPerLeaf.toArray(Terms.EMPTY_ARRAY),
                        slicePerLeaf.toArray(ReaderSlice.EMPTY_ARRAY));
}
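The loop above builds two parallel lists, keeping only the leaves whose terms are non-null while recording each kept leaf's own ordinal. A minimal standalone sketch of that filter-and-align pattern (class and method names here are ours, not Lucene's):

```java
import java.util.ArrayList;
import java.util.List;

public class ParallelListsDemo {
    // Collect the ordinals of non-null entries, mirroring how getTerms
    // fills termsPerLeaf and slicePerLeaf in lockstep.
    public static List<Integer> keptOrdinals(String[] perLeaf) {
        List<String> kept = new ArrayList<>();
        List<Integer> ordinals = new ArrayList<>();
        for (int i = 0; i < perLeaf.length; i++) {
            if (perLeaf[i] != null) {
                kept.add(perLeaf[i]);
                ordinals.add(i); // the leaf's own ordinal, not i - 1
            }
        }
        return ordinals;
    }

    public static void main(String[] args) {
        // Leaf 1 has no terms for the field, so only ordinals 0 and 2 survive.
        System.out.println(keptOrdinals(new String[]{"a", null, "c"}));
    }
}
```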
  @Override
  public String toString() {
    return subSlice.toString() + ":" + terms;
  }
}
/** Merges in the fields from the readers in
 *  <code>mergeState</code>. The default implementation skips
 *  and maps around deleted documents, and calls {@link #write(Fields)}.
 *  Implementations can override this method for more sophisticated
 *  merging (bulk-byte copying, etc). */
public void merge(MergeState mergeState) throws IOException {
  final List<Fields> fields = new ArrayList<>();
  final List<ReaderSlice> slices = new ArrayList<>();

  int docBase = 0;
  for (int readerIndex = 0; readerIndex < mergeState.fieldsProducers.length; readerIndex++) {
    final FieldsProducer f = mergeState.fieldsProducers[readerIndex];
    final int maxDoc = mergeState.maxDocs[readerIndex];
    f.checkIntegrity();
    slices.add(new ReaderSlice(docBase, maxDoc, readerIndex));
    fields.add(f);
    docBase += maxDoc;
  }

  Fields mergedFields = new MappedMultiFields(mergeState,
      new MultiFields(fields.toArray(Fields.EMPTY_ARRAY),
                      slices.toArray(ReaderSlice.EMPTY_ARRAY)));
  write(mergedFields);
}
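The loop in merge() gives each incoming reader a contiguous doc-ID range by accumulating docBase: reader i starts where reader i-1 ended. A minimal standalone sketch of that offset computation (class and method names are ours, not Lucene's):

```java
import java.util.Arrays;

public class DocBaseDemo {
    // Compute each segment's starting global doc ID from the per-segment
    // maxDoc counts, exactly as the running docBase in merge() does.
    public static int[] docBases(int[] maxDocs) {
        int[] bases = new int[maxDocs.length];
        int docBase = 0;
        for (int i = 0; i < maxDocs.length; i++) {
            bases[i] = docBase;   // this segment's slice starts here
            docBase += maxDocs[i]; // next segment starts after our docs
        }
        return bases;
    }

    public static void main(String[] args) {
        // Three segments of 10, 5 and 7 docs start at 0, 10 and 15.
        System.out.println(Arrays.toString(docBases(new int[]{10, 5, 7})));
    }
}
```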
final Fields f = new LeafReaderFields(r);
fields.add(f);
slices.add(new ReaderSlice(ctx.docBase, r.maxDoc(), fields.size() - 1));
final Fields f = r.fields();
fields.add(f);
slices.add(new ReaderSlice(ctx.docBase, r.maxDoc(), fields.size() - 1));
TermsEnumIndex[] indexes = new TermsEnumIndex[slices.length];
for (int i = 0; i < slices.length; i++) {
  slices[i] = new ReaderSlice(0, 0, i);
  indexes[i] = new TermsEnumIndex(subs[segmentMap.newToOld(i)], i);
}