SolrHighlighter highlighter = rb.req.getCore().getHighlighter();
if (highlighter.isHighlightingEnabled(rb.req.getParams())) {
    // Highlighting needs the uniqueKey field, so make sure it gets fetched
    SchemaField keyField = rb.req.getSearcher().getSchema().getUniqueKeyField();
    if (null != keyField && !returnFields.contains(keyField)) {
        fieldFilter.add(ByteBufferUtil.bytes(keyField.getName()));
    }
}
// ...
rb.req.getSearcher().getReader().document(docIds.get(0), selector);
public synchronized SolrCore readSchema(String indexName) throws IOException, ParserConfigurationException, SAXException {
    SolrCore core = cache.get(indexName);

    if (core == null) {
        // get from cassandra
        if (logger.isDebugEnabled())
            logger.debug("loading index schema for: " + indexName);

        ByteBuffer buf = readCoreResource(indexName, CassandraUtils.schemaKey);

        // Schema resource not found for the core
        if (buf == null) {
            throw new IOException(String.format("invalid core '%s'", indexName));
        }

        InputStream stream = new ByteArrayInputStream(ByteBufferUtil.getArray(buf));

        SolrResourceLoader resourceLoader = new SolandraResourceLoader(indexName, null);
        SolrConfig solrConfig = new SolrConfig(resourceLoader, solrConfigFile, null);
        IndexSchema schema = new IndexSchema(solrConfig, indexName, new InputSource(stream));

        core = new SolrCore(indexName, "/tmp", solrConfig, schema, null);

        if (logger.isDebugEnabled())
            logger.debug("Loaded core from cassandra: " + indexName);

        cache.put(indexName, core);
    }

    return core;
}
SchemaField uniqueField = core.getSchema().getUniqueKeyField();
// ...
// update path: replaces any existing document with the same id term
writer.updateDocument(indexName, idTerm, cmd.getLuceneDocument(schema), schema.getAnalyzer(), shardedId, false);
// ...
// add path: appends a new document
writer.addDocument(indexName, cmd.getLuceneDocument(schema), schema.getAnalyzer(), shardedId, false, rms);
final String indexedField = req.getParams().get("field");
if (indexedField == null)
    throw new RuntimeException("required param 'field'");

chooseTagClusterReducer(req.getParams().get(OVERLAPS));
final int rows = req.getParams().getInt(CommonParams.ROWS, 10000);
final int tagsLimit = req.getParams().getInt(TAGS_LIMIT, 1000);
final boolean addMatchText = req.getParams().getBool(MATCH_TEXT, false);

final SchemaField idSchemaField = req.getSchema().getUniqueKeyField();
if (idSchemaField == null) {
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
        "The tagger requires a uniqueKey in the schema."); // TODO this could be relaxed
}

// ... guards from the request-body validation: exactly one ContentStream of posted text is expected
throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
    getClass().getSimpleName() + " does not support multiple ContentStreams");
throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
    getClass().getSimpleName() + " requires text to be POSTed to it");

// ...
Analyzer analyzer = req.getSchema().getField(indexedField).getType().getQueryAnalyzer();
try (TokenStream tokenStream = analyzer.tokenStream("", inputReader)) {
    Terms terms = searcher.getSlowAtomicReader().terms(indexedField);
    if (terms == null)
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
            "field " + indexedField + " has no indexed data");
    // ...
}
// used later to read the uniqueKey value for each matching document
ValueSource idValueSource = idSchemaField.getType().getValueSource(idSchemaField, null);
SolrParams params = req.getParams();
if (!params.getBool(COMPONENT_NAME, true)) {
    return;
}

SolrIndexSearcher searcher = req.getSearcher();
// ... guard from the paging logic: offsets must be non-negative
throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "'start' parameter cannot be negative");

long timeAllowed = (long) params.getInt(CommonParams.TIME_ALLOWED, -1);

// If this request asks for specific ids (a distributed-search refinement),
// resolve each uniqueKey value to its internal Lucene docId.
String ids = params.get(ShardParams.IDS);
if (ids != null) {
    SchemaField idField = req.getSchema().getUniqueKeyField();
    List<String> idArr = StrUtils.splitSmart(ids, ",", true);
    int[] luceneIds = new int[idArr.size()];
    int docs = 0;
    for (int i = 0; i < idArr.size(); i++) {
        int id = req.getSearcher().getFirstMatch(
            new Term(idField.getName(), idField.getType().toInternal(idArr.get(i))));
        if (id >= 0)
            luceneIds[docs++] = id;
    }
    // ...
}

List<Query> filters = rb.getFilters();
if (filters != null)
    queries.addAll(filters);
res.docSet = searcher.getDocSet(queries);
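// For context, a minimal standalone sketch of the uniqueKey-to-docId lookup that
// getFirstMatch() performs above, written against plain Lucene. The index path
// (args[0]), field name ("id"), and key value ("doc-42") are illustrative
// assumptions, not taken from the fragment.
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;
import java.nio.file.Paths;

public class FirstMatchSketch {
    public static void main(String[] args) throws Exception {
        try (IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(args[0])))) {
            IndexSearcher searcher = new IndexSearcher(reader);
            // A uniqueKey lookup is just a single-hit TermQuery on the key field
            TopDocs top = searcher.search(new TermQuery(new Term("id", "doc-42")), 1);
            int luceneId = top.scoreDocs.length > 0 ? top.scoreDocs[0].doc : -1;
            System.out.println("internal docId = " + luceneId);
        }
    }
}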
// The passage retriever only understands span queries, so reject anything else up front
if (!(origQuery instanceof SpanNearQuery)) {
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
        "Illegal query type. The incoming query must be a Lucene SpanNearQuery and it was a "
            + origQuery.getClass().getName());
}
SpanNearQuery sQuery = (SpanNearQuery) origQuery;

SolrIndexSearcher searcher = rb.req.getSearcher();
IndexReader reader = searcher.getIndexReader();
Spans spans = sQuery.getSpans(reader);
// ...
try {
    // ...
    addPassage(tvm.passage, rankedPassages, termWeights, bigramWeights,
        adjWeight, secondAdjWeight, bigramWeight);
} catch (CloneNotSupportedException e) {
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Internal error cloning Passage", e);
}
// ...
int rows = params.getInt(QA_ROWS, 5);
SchemaField uniqField = rb.req.getSchema().getUniqueKeyField();
// label results by uniqueKey when the schema has one, else by internal Lucene docId
if (uniqField != null) {
    idName = uniqField.getName();
} else {
    idName = "luceneDocId";
}
if (rankedPassages.size() > 0) {
    int size = Math.min(rows, rankedPassages.size());
    // ...
    idValue = searcher.doc(passage.lDocId, fields).get(idName);
    // ...
    passNL.add("field", passage.field);
    String fldValue = searcher.doc(passage.lDocId, fields).get(passage.field);
    if (fldValue != null) {
        // ...
    }
}
@Override
public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
    IndexSchema schema = req.getSchema();
    SolrIndexSearcher searcher = req.getSearcher();
    IndexReader reader = searcher.getReader();
    SolrParams params = req.getParams();
    int numTerms = params.getInt(NUMTERMS, DEFAULT_COUNT);

    // Accept either an internal Lucene docId or a uniqueKey value
    Integer docId = params.getInt(DOC_ID);
    if (docId == null && params.get(ID) != null) {
        SchemaField uniqueKey = schema.getUniqueKeyField();
        String v = uniqueKey.getType().toInternal(params.get(ID));
        Term t = new Term(uniqueKey.getName(), v);
        docId = searcher.getFirstMatch(t);
        if (docId < 0) {
            throw new SolrException(SolrException.ErrorCode.NOT_FOUND,
                "Can't find document: " + params.get(ID));
        }
    }
    // ... later, when the docId turns out not to exist in the reader:
    throw new SolrException(SolrException.ErrorCode.NOT_FOUND, "Can't find document: " + docId);
}
/**
 * Collects the documents matching the given Solr query request, grouped by the given collection key function.
 *
 * @param query         the plain Solr query
 * @param req           the request object
 * @param collectionKey the join key function used to group the collected documents
 * @return the collected and grouped documents
 * @throws IOException if bad things happen
 */
private HashMap<ChronixType, Map<String, List<SolrDocument>>> collectDocuments(String query, SolrQueryRequest req, CQLJoinFunction collectionKey) throws IOException {
    // query and collect all documents
    Set<String> fields = getFields(req.getParams().get(CommonParams.FL), req.getSchema().getFields());

    // we always need the data field
    fields.add(Schema.DATA);

    // add the fields involved in the join key
    if (!isEmptyArray(collectionKey.involvedFields())) {
        Collections.addAll(fields, collectionKey.involvedFields());
    }

    DocList result = docListProvider.doSimpleQuery(query, req, 0, Integer.MAX_VALUE);
    SolrDocumentList docs = docListProvider.docListToSolrDocumentList(result, req.getSearcher(), fields, null);
    return collect(docs, collectionKey);
}
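// A hedged sketch of the grouping step that collect(docs, collectionKey) plausibly
// performs: bucket the returned SolrDocuments by their join key. The joinKey
// Function below is a stand-in for CQLJoinFunction, not the project's actual API.
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

final class JoinGrouping {
    static Map<String, List<SolrDocument>> groupByJoinKey(SolrDocumentList docs,
                                                          Function<SolrDocument, String> joinKey) {
        // SolrDocumentList is a List<SolrDocument>, so it streams directly
        return docs.stream().collect(Collectors.groupingBy(joinKey));
    }
}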
public MoreLikeThisHelper(SolrParams params, SolrIndexSearcher searcher) {
    this.searcher = searcher;
    this.reader = searcher.getReader();
    this.uniqueKeyField = searcher.getSchema().getUniqueKeyField();
    this.needDocSet = params.getBool(FacetParams.FACET, false);

    SolrParams required = params.required();
    String[] fields = splitList.split(required.get(MoreLikeThisParams.SIMILARITY_FIELDS));
    if (fields.length < 1) {
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
            "MoreLikeThis requires at least one similarity field: " + MoreLikeThisParams.SIMILARITY_FIELDS);
    }

    this.mlt = new MoreLikeThis(reader); // TODO -- after LUCENE-896, we can use , searcher.getSimilarity() );
    mlt.setFieldNames(fields);
    mlt.setAnalyzer(searcher.getSchema().getAnalyzer());

    // configurable params
    mlt.setMinTermFreq(params.getInt(MoreLikeThisParams.MIN_TERM_FREQ, MoreLikeThis.DEFAULT_MIN_TERM_FREQ));
    mlt.setMinDocFreq(params.getInt(MoreLikeThisParams.MIN_DOC_FREQ, MoreLikeThis.DEFAULT_MIN_DOC_FREQ));
    mlt.setMinWordLen(params.getInt(MoreLikeThisParams.MIN_WORD_LEN, MoreLikeThis.DEFAULT_MIN_WORD_LENGTH));
    mlt.setMaxWordLen(params.getInt(MoreLikeThisParams.MAX_WORD_LEN, MoreLikeThis.DEFAULT_MAX_WORD_LENGTH));
    mlt.setMaxQueryTerms(params.getInt(MoreLikeThisParams.MAX_QUERY_TERMS, MoreLikeThis.DEFAULT_MAX_QUERY_TERMS));
    mlt.setMaxNumTokensParsed(params.getInt(MoreLikeThisParams.MAX_NUM_TOKENS_PARSED, MoreLikeThis.DEFAULT_MAX_NUM_TOKENS_PARSED));
    mlt.setBoost(params.getBool(MoreLikeThisParams.BOOST, false));

    boostFields = SolrPluginUtils.parseFieldBoosts(params.getParams(MoreLikeThisParams.QF));
}
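// A minimal sketch of driving Lucene's MoreLikeThis directly, outside Solr,
// assuming a recent Lucene where it lives in org.apache.lucene.queries.mlt.
// The index path, field names ("title"/"body"), and docId 0 are illustrative.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.queries.mlt.MoreLikeThis;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;
import java.nio.file.Paths;

public class MltSketch {
    public static void main(String[] args) throws Exception {
        try (IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(args[0])))) {
            MoreLikeThis mlt = new MoreLikeThis(reader);
            mlt.setAnalyzer(new StandardAnalyzer());
            mlt.setFieldNames(new String[] {"title", "body"});
            mlt.setMinTermFreq(2); // same knobs the helper above exposes as request params
            mlt.setMinDocFreq(5);
            Query like = mlt.like(0); // build a query from the terms of docId 0
            TopDocs similar = new IndexSearcher(reader).search(like, 10);
            System.out.println("similar docs: " + similar.scoreDocs.length);
        }
    }
}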
public void inform(SolrCore core) {
    String a = initArgs.get(FIELD_TYPE);
    if (a != null) {
        FieldType ft = core.getSchema().getFieldTypes().get(a);
        if (ft == null) {
            throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
                "Unknown FieldType: '" + a + "' used in QueryElevationComponent");
        }
        analyzer = ft.getQueryAnalyzer();
    }

    SchemaField sf = core.getSchema().getUniqueKeyField();
    if (sf == null) {
        throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
            "QueryElevationComponent requires the schema to have a uniqueKeyField");
    }
    idField = StringHelper.intern(sf.getName());

    try {
        searchHolder = core.getNewestSearcher(false);
        IndexReader reader = searchHolder.get().getReader();
        getElevationMap(reader, core);
    } finally {
        // ...
    }
}
String mt = atm.get(type);
// ...
String field = params.get(QUERY_FIELD);
SchemaField sp = req.getSchema().getFieldOrNull(field);
if (sp == null) {
    throw new SolrException(ErrorCode.SERVER_ERROR, "Undefined field: " + field);
}
Analyzer analyzer = sp.getType().getQueryAnalyzer();
TokenStream ts = analyzer.tokenStream(field, new StringReader(qstr));
// ... on analysis failure:
throw new ParseException(e.getLocalizedMessage());
// ...
return new SpanNearQuery(sql.toArray(new SpanQuery[sql.size()]),
    params.getInt(QAParams.SLOP, 10), true); // <co id="qqp.spanNear"/>
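// For reference, a self-contained sketch of building the same kind of ordered
// SpanNearQuery by hand, assuming an older Lucene where span queries live in
// org.apache.lucene.search.spans. Field and terms are illustrative.
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

final class SpanNearSketch {
    static SpanNearQuery orderedPhrase() {
        SpanQuery[] clauses = {
            new SpanTermQuery(new Term("body", "quick")),
            new SpanTermQuery(new Term("body", "fox"))
        };
        // slop 10 and inOrder=true mirror the parser's return statement above
        return new SpanNearQuery(clauses, 10, true);
    }
}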
public void process(ResponseBuilder rb) throws IOException {
    SolrParams params = rb.req.getParams();
    if (!params.getBool(COMPONENT_NAME, false)) {
        return;
    }
    // ...
    rb.rsp.add(TERM_VECTORS, termVectors);

    // which pieces of term-vector information the caller asked for
    boolean termFreq = params.getBool(TermVectorParams.TF, false);
    boolean positions = params.getBool(TermVectorParams.POSITIONS, false);
    boolean offsets = params.getBool(TermVectorParams.OFFSETS, false);
    boolean docFreq = params.getBool(TermVectorParams.DF, false);
    // ...
    iter = list.iterator();

    SolrIndexSearcher searcher = rb.req.getSearcher();
    IndexReader reader = searcher.getReader();
    IndexSchema schema = rb.req.getSchema();
    String uniqFieldName = schema.getUniqueKeyField().getName();
    // ...
}
@Override
public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
    SolrIndexSearcher searcher = req.getSearcher();
    SchemaField uniqueKeyField = searcher.getSchema().getUniqueKeyField();

    ModifiableSolrParams params = new ModifiableSolrParams(req.getParams());
    configureSolrParameters(req, params, uniqueKeyField.getName());
    try {
        // ...
        mltFqFilters = getFilters(req, UnsupervisedFeedbackParams.FQ);
        // ...
    } catch (SyntaxError e) {
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, e);
    }
    // ... when no query was supplied:
    throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
        "Dice unsupervised feedback handler requires a query (?q=) to find similar documents.");
}
public ValueSource getValueSource(FunctionQParser fp, String arg) {
    if (arg == null)
        return null;
    SchemaField f = fp.req.getSchema().getField(arg);
    // ms() only works on fields whose values are stored numerically as
    // milliseconds; the legacy string-based date types do not qualify
    if (f.getType().getClass() == DateField.class || f.getType().getClass() == LegacyDateField.class) {
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
            "Can't use ms() function on non-numeric legacy date field " + arg);
    }
    return f.getType().getValueSource(f, fp);
}
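// A hedged usage sketch: ms() is typically invoked from the client side in sorts
// or boosts. This SolrJ snippet assumes a date-typed field named "timestamp_dt"
// and a core at the given URL; both names are illustrative.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class MsSortSketch {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
            SolrQuery q = new SolrQuery("*:*");
            // ms(NOW,field) shrinks as the field gets newer, so ascending sort
            // returns the newest documents first
            q.addSort("ms(NOW,timestamp_dt)", SolrQuery.ORDER.asc);
            System.out.println(client.query(q).getResults().getNumFound());
        }
    }
}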
public Query parse() throws ParseException {
    String field = localParams.get(QueryParsing.F);
    String queryText = localParams.get(QueryParsing.V);
    FieldType ft = req.getSchema().getFieldType(field);

    // Non-text fields bypass analysis: convert to the internal form and match exactly
    if (!(ft instanceof TextField)) {
        String internal = ft.toInternal(queryText);
        return new TermQuery(new Term(field, internal));
    }

    Analyzer analyzer = req.getSchema().getQueryAnalyzer();
    // ...
    try {
        source.reset();
        // ...
    } catch (IOException e) {
        throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, e);
    }
    // ...
}
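// A self-contained sketch of the analysis loop elided above: run an analyzer
// over the query text and emit one term per token. The field name, input text,
// and choice of StandardAnalyzer are illustrative assumptions.
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.index.Term;

final class AnalyzeSketch {
    public static void main(String[] args) throws IOException {
        Analyzer analyzer = new StandardAnalyzer();
        try (TokenStream source = analyzer.tokenStream("title", "Hello Query Parsers")) {
            CharTermAttribute termAtt = source.addAttribute(CharTermAttribute.class);
            source.reset();                   // mandatory before incrementToken()
            while (source.incrementToken()) {
                System.out.println(new Term("title", termAtt.toString()));
            }
            source.end();                     // flush end-of-stream state
        }
    }
}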
@Override
public void process(ResponseBuilder rb) throws IOException {
    if (isEnabled(rb)) {
        long startTime = System.currentTimeMillis();
        SolrParams params = rb.req.getParams();
        int topN = getTopN(params);
        boolean binary = getBinary(params);
        boolean logTfs = getLogTfs(params);
        boolean includeExisting = getIncludeExisting(params);

        final SolrIndexSearcher searcher = rb.req.getSearcher();
        IndexReader ir = searcher.getIndexReader();
        Analyzer analyzer = searcher.getSchema().getIndexAnalyzer();
        DocListAndSet docs = rb.getResults();
        DocIterator iterator = docs.docList.iterator();
        String uniqueKeyField = searcher.getSchema().getUniqueKeyField().getName();

        NamedList<NamedList<Double>> topPredictions = new NamedList<NamedList<Double>>();
        while (iterator.hasNext()) {
            int docNum = iterator.nextDoc();
            Map<String, Map<String, Integer>> tf = getFieldTermFrequencyCounts(fields, ir, analyzer, docNum);
            NamedList<Double> predictions = predict(tf, topN, binary, logTfs, includeExisting);
            String uniqueFieldValue = getUniqueKeyFieldValue(ir, analyzer, uniqueKeyField, docNum);
            topPredictions.add(String.format("%s:%s", uniqueKeyField, uniqueFieldValue), predictions);
        }

        long duration = System.currentTimeMillis() - startTime;
        NamedList<Object> results = new NamedList<Object>();
        results.add("Time", duration);
        results.add("values", topPredictions);
        rb.rsp.add(getPrefix(), results);
    }
}
public NamedList getStatsFields() throws IOException {
    NamedList<NamedList<Number>> res = new SimpleOrderedMap<NamedList<Number>>();
    String[] statsFs = params.getParams(StatsParams.STATS_FIELD);
    boolean isShard = params.getBool(ShardParams.IS_SHARD, false);
    if (null != statsFs) {
        for (String f : statsFs) {
            String[] facets = params.getFieldParams(f, StatsParams.STATS_FACET);
            if (facets == null) {
                facets = new String[0]; // make sure it is something...
            }
            SchemaField sf = searcher.getSchema().getField(f);
            FieldType ft = sf.getType();
            NamedList stv;
            if (sf.multiValued() || ft.multiValuedFieldCache() || prefix != null) {
                // ...
            }
            // ...
        }
    }
    // ...
}
throws Exception {
    IndexReader reader = searcher.getReader();
    IndexSchema schema = searcher.getSchema();

    SchemaField sfield = schema.getFieldOrNull(fieldName);
    FieldType ftype = (sfield == null) ? null : sfield.getType();
    f.add("type", (ftype == null) ? null : ftype.getTypeName());
    f.add("schema", getFieldFlags(sfield));
    if (sfield != null && schema.isDynamicField(sfield.getName())
            && schema.getDynamicPattern(sfield.getName()) != null) {
        f.add("dynamicBase", schema.getDynamicPattern(sfield.getName()));
    }
    // ...
    if (ttinfo != null && sfield != null && sfield.indexed()) {
        // probe whether the field actually has any indexed documents
        Query q = new ConstantScoreRangeQuery(fieldName, null, null, false, false);
        TopDocs top = searcher.search(q, 1);
        if (top.totalHits > 0) {
            // ...
        }
    }
}
/**
 * Retrieves the datatype query analyzers associated with this field.
 */
private Map<String, Analyzer> getDatatypeConfig(final String field) {
    final Map<String, Analyzer> datatypeConfig = new HashMap<String, Analyzer>();
    final ExtendedJsonField fieldType = (ExtendedJsonField) req.getSchema().getFieldType(field);
    final Map<String, Datatype> datatypes = fieldType.getDatatypes();
    for (final Entry<String, Datatype> e : datatypes.entrySet()) {
        if (e.getValue().getQueryAnalyzer() == null) {
            throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
                "Configuration Error: No analyzer defined for type 'query' in datatype " + e.getKey());
        }
        datatypeConfig.put(e.getKey(), e.getValue().getQueryAnalyzer());
    }
    return datatypeConfig;
}
@Override
public void processAdd(AddUpdateCommand cmd) throws IOException {
    SolrCore core = cmd.getReq().getCore();
    IndexSchema schema = core.getLatestSchema();
    if (!schema.isMutable()) {
        throw new SolrException(BAD_REQUEST, String.format(
            "This IndexSchema, of core %s, is not mutable.", core.getName()));
    }

    // Create any facet fields that do not exist yet
    for (SirenFacetEntry entry : entries) {
        if (schema.getFieldOrNull(entry.toFieldName()) != null) {
            continue;
        }
        // ...
        options.put("multiValued", true);
        newFields.add(schema.newField(entry.toFieldName(), fieldTypeName, options));
    }

    IndexSchema newSchema = schema.addFields(newFields);
    cmd.getReq().getCore().setLatestSchema(newSchema);
    cmd.getReq().updateSchemaToLatest();
    logger.debug("Successfully added field(s) to the schema.");
}