I am trying to get an estimate of documents whose "Key" property value starts with a string that contains the special character "/".
let query =
  cts.andQuery([
    cts.jsonPropertyWordQuery("Key", "IBD/info/*", ["lang=en"], 1),
    cts.collectionQuery("documentCollection")
  ], [])
cts.estimate(query)
However, the word query internally tokenizes "IBD/info/" as (cts:word("IBD"), cts:punctuation("/"), cts:word("info"), ...).
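A quick way to see this is to run the value through the cts.tokenize built-in in Query Console (no field argument, so the database default tokenizer applies):

// Show how the default tokenizer splits the value; "/" comes back
// as punctuation, so the word query matches on the individual words
cts.tokenize("IBD/info/", "en")
// => cts:word("IBD"), cts:punctuation("/"), cts:word("info"), cts:punctuation("/")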
I created a field with the details below:
"field": [
{
"field-name": "key",
"field-path": [
{
"path": "/envelope/instance/Key",
"weight": 1
}
],
"stemmed-searches": "advanced",
"field-value-searches": true,
"field-value-positions": true,
"trailing-wildcard-searches": true,
"trailing-wildcard-word-positions": true,
"tokenizer-override": [
{
"character": "/",
"tokenizer-class": "word"
}
]
}
]
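To check that the override is actually applied, the field name can be passed as the third argument of cts.tokenize; with "/" mapped to a word character, the value should come back as a single cts:word token (a small sketch, assuming the field is named "key" as configured above):

// Tokenize against the "key" field so its tokenizer-override is used;
// "/" should now be treated as a word character
cts.tokenize("IBD/info/", "en", "key")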
and tried the query below, but I am still getting false-positive results:
cts:search(
  fn:doc(),
  cts:and-query((
    cts:field-value-query("key", "IBD/info/*"),
    cts:collection-query("documentCollection")
  )),
  "unfiltered"
)
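To confirm that the extra hits are index-level false positives rather than true matches, the unfiltered estimate can be compared against a filtered count, which re-verifies each candidate document (a diagnostic sketch using the same field and collection):

// Compare the index-resolved estimate with a filtered count;
// a lower filtered count means the index is returning false positives
const query = cts.andQuery([
  cts.fieldValueQuery("key", "IBD/info/*"),
  cts.collectionQuery("documentCollection")
]);
[cts.estimate(query), fn.count(cts.search(query, "filtered"))]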
How can I handle this situation?
Create a field with the details below:
"field": [
{
"field-name": "key",
"field-path": [
{
"path": "/envelope/instance/Key",
"weight": 1
}
],
"field-value-searches": true,
"trailing-wildcard-searches": true,
"three-character-searches": false,
"tokenizer-override": [
{
"character": "/",
"tokenizer-class": "word"
},
{
"character": "_",
"tokenizer-class": "word"
}
]
}
],
"range-field-index": [
{
"scalar-type": "string",
"field-name": "key",
"collation": "http://marklogic.com/collation/",
"range-value-positions": false,
"invalid-values": "reject"
}
]
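These settings are part of the database configuration and can be applied through the Admin UI or the Management REST API. Below is a rough sketch using xdmp.httpPut against the /manage/v2 database properties endpoint; the host, port, database name "Documents", and admin credentials are placeholders, and the payload replaces the database's existing "field" and "range-field-index" arrays, so merge it with your current configuration first.

// Sketch: push the field and range index settings to the database properties.
// Host, port, database name, and credentials are placeholders for your environment.
const properties = {
  "field": [{
    "field-name": "key",
    "field-path": [{ "path": "/envelope/instance/Key", "weight": 1 }],
    "field-value-searches": true,
    "trailing-wildcard-searches": true,
    "three-character-searches": false,
    "tokenizer-override": [
      { "character": "/", "tokenizer-class": "word" },
      { "character": "_", "tokenizer-class": "word" }
    ]
  }],
  "range-field-index": [{
    "scalar-type": "string",
    "field-name": "key",
    "collation": "http://marklogic.com/collation/",
    "range-value-positions": false,
    "invalid-values": "reject"
  }]
};
xdmp.httpPut(
  "http://localhost:8002/manage/v2/databases/Documents/properties",
  {
    authentication: { method: "digest", username: "admin", password: "admin" },
    headers: { "content-type": "application/json" }
  },
  xdmp.toJSON(properties)
);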
After reindexing has completed, query as below:
let query =
  cts.andQuery([
    cts.fieldValueQuery("key", "IBD/info/*"),
    cts.collectionQuery("documentCollection")
  ], [])
cts.search(query, "unfiltered")
The query will then fetch only documents whose "Key" value starts with "IBD/info/".
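Because the configuration also adds a string range index on the field, the field's value lexicon can serve as a further sanity check; cts.fieldValueMatch lists the stored "key" values that match the wildcard pattern (it also accepts an optional scoping query, e.g. the collection query, as a later argument):

// List distinct "key" field values matching the prefix pattern,
// resolved from the field's string range index (value lexicon)
cts.fieldValueMatch("key", "IBD/info/*")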