Here's the thing: I have a term stored in the index that contains special characters, such as '-'. The simplest code looks like this:
Document doc = new Document();
doc.add(new Field("message", "1111-2222-3333", Field.Store.YES, Field.Index.NOT_ANALYZED));
writer.addDocument(doc);
And then I create a query using QueryParser, like this:
String queryStr = "1111-2222-3333";
QueryParser parser = new QueryParser(Version.LUCENE_36, "message", new StandardAnalyzer(Version.LUCENE_36));
Query q = parser.parse(queryStr);
Then I use a searcher to run the query and get no results. I have also tried this:
Query q = parser.parse(QueryParser.escape(queryStr));
And still no result.
Skipping QueryParser and using TermQuery directly does what I want, but that approach is not flexible enough for user-entered text.
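For completeness, here is a minimal, self-contained sketch of the TermQuery approach against the unanalyzed field from the question (the class name and the in-memory directory are illustrative):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class TermQueryDemo {
    public static void main(String[] args) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir,
                new IndexWriterConfig(Version.LUCENE_36, new StandardAnalyzer(Version.LUCENE_36)));
        Document doc = new Document();
        // NOT_ANALYZED indexes the value as one exact token, dashes included.
        doc.add(new Field("message", "1111-2222-3333", Field.Store.YES, Field.Index.NOT_ANALYZED));
        writer.addDocument(doc);
        writer.close();

        IndexSearcher searcher = new IndexSearcher(IndexReader.open(dir));
        // TermQuery performs no analysis, so the dashes survive and match the stored token.
        Query q = new TermQuery(new Term("message", "1111-2222-3333"));
        System.out.println(searcher.search(q, 10).totalHits);
    }
}
```

This finds the document because neither indexing nor querying passes through an analyzer.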
I think the StandardAnalyzer may have stripped the special characters from the query string. While debugging I found that the string is split, and the actual query looks like this: "message:1111 message:2222 message:3333". I don't know what exactly Lucene has done...
So if I want to run a query containing special characters, what should I do? Should I write my own analyzer, or subclass the default QueryParser? And how?
Update:
1. @The New Idiot @femtoRgon, I've tried QueryParser.escape(queryStr) as stated in the question, but it still doesn't work.
2. I've tried another way to solve the problem. I derived a QueryTokenizer from Tokenizer that splits only on spaces, wrapped it in a QueryAnalyzer (derived from Analyzer), and finally passed the QueryAnalyzer into QueryParser.
Now it works. It originally failed because the default StandardAnalyzer splits queryStr according to its default rules, which treat some of the special characters as delimiters, so the special characters were already gone by the time the query was built. My tokenizer splits only on spaces, so the special characters survive into the query, and this works.
3. @The New Idiot @femtoRgon, thank you for answering my question.
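For the record, the whitespace-only analyzer described in item 2 is essentially what Lucene already ships as WhitespaceAnalyzer; a minimal sketch of using it with QueryParser (field name as in the question):

```java
import org.apache.lucene.analysis.WhitespaceAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class WhitespaceQueryDemo {
    public static void main(String[] args) throws Exception {
        // WhitespaceAnalyzer splits only on whitespace, so '-' survives analysis.
        QueryParser parser = new QueryParser(Version.LUCENE_36, "message",
                new WhitespaceAnalyzer(Version.LUCENE_36));
        // Escape first so '-' cannot be read as the prohibit operator.
        Query q = parser.parse(QueryParser.escape("1111-2222-3333"));
        System.out.println(q); // a single term query on the whole string
    }
}
```

Note that the same analyzer (or at least one that tokenizes the same way) must have been used at index time, or the query token will not match what is in the index.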
CharTokenizer is used by some simpler Analyzers (but not by the StandardAnalyzer). Lucene comes with two subclasses: LetterTokenizer and WhitespaceTokenizer. You can create your own that keeps the characters you need and breaks on those you don't by overriding the isTokenChar method (boolean isTokenChar(int c) as of Lucene 3.1; older versions use boolean isTokenChar(char c)).
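The approach above can be sketched as follows for Lucene 3.6, where isTokenChar takes an int code point (class names are illustrative):

```java
import java.io.Reader;
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.CharTokenizer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class DashTokenizerDemo {
    // Keeps letters, digits, and '-'; breaks on everything else.
    static class DashKeepingTokenizer extends CharTokenizer {
        DashKeepingTokenizer(Reader input) {
            super(Version.LUCENE_36, input);
        }
        @Override
        protected boolean isTokenChar(int c) {
            return Character.isLetterOrDigit(c) || c == '-';
        }
    }

    public static void main(String[] args) throws Exception {
        Analyzer analyzer = new Analyzer() {
            @Override
            public TokenStream tokenStream(String fieldName, Reader reader) {
                return new DashKeepingTokenizer(reader);
            }
        };
        TokenStream ts = analyzer.tokenStream("message",
                new StringReader("1111-2222-3333 hello!"));
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        // Emits "1111-2222-3333" and "hello"; the '!' is dropped as a break character.
        while (ts.incrementToken()) {
            System.out.println(term.toString());
        }
    }
}
```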
A query written in Lucene can be broken down into three parts:
Field: The ID or name of a specific container of information in a database. If a field is referenced in a query string, a colon (:) must follow the field name.
Terms: Items you would like to search for in a database.
Lucene supports single and multiple character wildcard searches within single terms (not within phrase queries). To perform a single character wildcard search, use the "?" symbol. To perform a multiple character wildcard search, use the "*" symbol. You can also use wildcards in the middle of a term.
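A short sketch of the wildcard syntax, using the same parser setup as in the question:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

public class WildcardDemo {
    public static void main(String[] args) throws Exception {
        QueryParser parser = new QueryParser(Version.LUCENE_36, "message",
                new StandardAnalyzer(Version.LUCENE_36));
        Query single = parser.parse("te?t");  // '?' matches exactly one character
        Query multi  = parser.parse("test*"); // '*' matches zero or more characters
        Query middle = parser.parse("te*t");  // wildcards may appear mid-term
        System.out.println(single + " | " + multi + " | " + middle);
    }
}
```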
Lucene supports escaping special characters that are part of the query syntax. To escape a special character, precede the character with a backslash ( \ ).
I am not sure about this, but I guess you need to escape - with \, as per the Lucene docs.
The "-" or prohibit operator excludes documents that contain the term after the "-" symbol.
Again, Lucene supports escaping special characters that are part of the query syntax. The current list of special characters is:
+ - && || ! ( ) { } [ ] ^ " ~ * ? : \ /
To escape these characters, use the \ before the character.
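For example, QueryParser.escape inserts those backslashes automatically (this is the method tried in the question):

```java
import org.apache.lucene.queryParser.QueryParser;

public class EscapeDemo {
    public static void main(String[] args) {
        // Each query-syntax character gets a '\' prefix; here the dashes are escaped.
        System.out.println(QueryParser.escape("1111-2222-3333")); // 1111\-2222\-3333
    }
}
```

Escaping only prevents the parser from treating '-' as an operator; the analyzer can still split the term afterwards, which is why escaping alone did not help the asker.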
Also remember, some characters need to be escaped twice if they have special meaning in Java string literals.
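For instance, to send the Lucene-escaped term \+foo, the backslash itself must be escaped in Java source (a plain-Java illustration):

```java
public class DoubleEscapeDemo {
    public static void main(String[] args) {
        // In Java source, "\\" is a single backslash character,
        // so this literal holds the five characters \ + f o o.
        String luceneEscaped = "\\+foo";
        System.out.println(luceneEscaped);          // prints \+foo
        System.out.println(luceneEscaped.length()); // prints 5
    }
}
```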