- field
- term
- term in field
Each file, and each row group within the file, has 3 bloom filters to handle these queries.
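A minimal sketch of that layout, assuming the bits-and-blooms filter package (the struct and its names are illustrative, not bloomsearch's actual types):

    package main

    import "github.com/bits-and-blooms/bloom/v3"

    // BlockIndex is a hypothetical stand-in for the per-file /
    // per-row-group metadata: one bloom filter per query shape.
    type BlockIndex struct {
        Fields      *bloom.BloomFilter // "does any row have this path?"
        Tokens      *bloom.BloomFilter // "does any row contain this token?"
        FieldTokens *bloom.BloomFilter // "does any row have path:token?"
    }

    func NewBlockIndex(expectedItems uint) *BlockIndex {
        return &BlockIndex{
            Fields:      bloom.NewWithEstimates(expectedItems, 0.01),
            Tokens:      bloom.NewWithEstimates(expectedItems, 0.01),
            FieldTokens: bloom.NewWithEstimates(expectedItems, 0.01),
        }
    }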
So something like:
{"user": {"name": "John", "tags": [{"type": "user"}, {"role": "admin"}]}}
Gets turned into queryable pairs of:
[{Path: "user.name", Values: ["John"]}, {Path: "user.tags.type", Values: ["user"]}, {Path: "user.tags.role", Values: ["admin"]}]
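The flattening itself is roughly a recursive walk that joins nested keys with dots and lets array elements share their parent path (a sketch, not the actual bloomsearch code):

    import "fmt"

    // flatten walks a decoded JSON value, accumulating leaf values
    // under dotted paths, so {"user":{"name":"John"}} yields
    // out["user.name"] = ["John"].
    func flatten(prefix string, v interface{}, out map[string][]string) {
        switch t := v.(type) {
        case map[string]interface{}:
            for k, child := range t {
                key := k
                if prefix != "" {
                    key = prefix + "." + k
                }
                flatten(key, child, out)
            }
        case []interface{}:
            for _, child := range t {
                flatten(prefix, child, out) // arrays don't extend the path
            }
        default:
            out[prefix] = append(out[prefix], fmt.Sprint(t))
        }
    }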
Then you can search for:
- any record that has "john" in it
- any record that has the "user.tags.type" key
- any record that has "user.tags.type"="user" and "user.tags.role"="admin"
Which bloom filters are used depends on how you build the query, but in each case they test whether a row matching the condition(s) might be in the file/row group.
You can prune on partitions first, then minmax indexes, then bloom filters. By that point, if all the other checks suggest the row you're after is in the block, the row group scan covers a very small amount of data.
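Sketched against the hypothetical BlockIndex above (partition and minmax pruning would happen before this, on file-level metadata):

    // shouldScan decides whether one row group can be skipped for a
    // "path = token" condition. Bloom filters can only rule blocks
    // out; a "maybe" can still be a false positive, which the final
    // row group scan resolves.
    func shouldScan(b *BlockIndex, path, token string) bool {
        if !b.Fields.TestString(path) {
            return false // no row in this block has the field at all
        }
        if !b.FieldTokens.TestString(path + ":" + token) {
            return false // field exists, but never with this value
        }
        return true // candidate: scan the (now small) row group
    }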
https://itnext.io/how-do-open-source-solutions-for-logs-work... covers this very well
The default tokenizer is a whitespace one: https://github.com/danthegoodman1/bloomsearch/blob/148a79967...
So {"name": "John Smith"} is tokenized to [{Path: "name", Values: ["john", "smith"]}], and the bloom filters will store:
- field: "name"
- token: "john"
- token: "smith"
- fieldtoken: "name:john"
- fieldtoken: "name:smith"
The same tokenizer must be used at query time too.
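Populating the hypothetical BlockIndex above with that tokenizer looks roughly like this (lowercase, then split on whitespace):

    import "strings"

    // indexPair adds one {Path, Values} pair, producing exactly the
    // field/token/fieldtoken entries listed above.
    func indexPair(b *BlockIndex, path, raw string) {
        b.Fields.AddString(path)
        for _, tok := range strings.Fields(strings.ToLower(raw)) {
            b.Tokens.AddString(tok)
            b.FieldTokens.AddString(path + ":" + tok)
        }
    }

So indexPair(idx, "name", "John Smith") stores "name", "john", "smith", "name:john", and "name:smith" across the three filters.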
Fuzzy searches and sub-word searches could be supported with custom tokenizers (e.g. trigrams, stemming; see the sketch below), but it's more generally targeting the "I know some exact subset of the record, and I need all records that contain it exactly" searches.
A way to "handle" partial substrings is to break your input data into tokens (like substrings split on spaces or dashes), then break your search string up in the same way.
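For example, a custom trigram tokenizer (the idea mentioned above) would make substring queries of three or more characters answerable by the same machinery, as long as the identical function runs at index time and query time:

    // trigrams emits every 3-byte window of the lowercased input.
    // Indexing "johnson" stores joh/ohn/hns/nso/son, so a query for
    // the substring "hns" tests positive against the token filter.
    func trigrams(s string) []string {
        s = strings.ToLower(s)
        var out []string
        for i := 0; i+3 <= len(s); i++ {
            out = append(out, s[i:i+3])
        }
        return out
    }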
Otherwise you can happily use it in indirect backend services (e.g. your own logging) without license concerns.