Instead of using integer IDs for the hexes, we created an encoded version of the ID with the property that removing a character gives you the cell's containing parent. This means we can do basic containment queries by using a low-resolution hex (a short string) as a prefix query. If a GPS track goes through that larger parent cell, the track will have hexes with the same prefix. You don't get perfect control over distances because hexes have varying diameters (or rather varying approximations of a diameter, since they're hexes, not circles), but in practice, at scale, for a product that doesn't require high precision, it's very effective.
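For illustration, here's a minimal sketch of one way such an encoding could be built, assuming the h3 v3 Python bindings; the actual scheme above may differ, and the coordinates are made up. The point is only that truncating the string yields the containing parent's ID, so containment becomes a prefix test.

```python
import h3  # assuming the v3 Python bindings

def hierarchical_id(lat, lng, res=9):
    """Encode a point so that truncating the string yields the parent cell."""
    cell = h3.geo_to_h3(lat, lng, res)
    # Ancestors from resolution 0 down to the cell itself.
    chain = [h3.h3_to_parent(cell, r) for r in range(res)] + [cell]
    encoded = chain[0]  # the resolution-0 ancestor is the prefix root
    for parent, child in zip(chain, chain[1:]):
        # The child's position among the parent's children (7, or 6 for
        # pentagons) becomes one digit of the encoded ID.
        siblings = sorted(h3.h3_to_children(parent, h3.h3_get_resolution(child)))
        encoded += str(siblings.index(child))
    return encoded

# Containment is then a prefix test (or a range scan in a sorted store):
track = [hierarchical_id(37.7749, -122.4194), hierarchical_id(37.8044, -122.2712)]
area = hierarchical_id(37.7749, -122.4194, res=5)   # coarse query cell
print([t for t in track if t.startswith(area)])
```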
I think by the end of this year we'll have about 6 TB of these hex sets in a four-node, eight-process ES cluster. Performance is pretty good. It also acts as our full-text search: half the time we want a geo search, we also want keyword search / filtering / etc. on the metadata of these trips.
Pretty fun system to build, and the concept works with a wide variety of data stores. Felt like a total hack job but it has stood the test of time.
Thanks Uber, H3 is a great library!
* I'd frame it as "kludge", reserving "hack" for the positive HN sense. :)
Before that existed (pre-1.0, actually), I did something similar with geohashes, which are similar to H3 but based on simple string-encoded quadtrees. I indexed all the street segments in OpenStreetMap that way (~800 million at the time) and implemented a simple reverse geocoder. Worked shockingly well.
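For anyone unfamiliar with geohashes, here's a self-contained sketch of the classic encoding (no library needed): each character adds five bits of interleaved longitude/latitude bisection, which is why truncating the string gives you the containing cell.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"   # geohash alphabet

def geohash(lat, lng, precision=9):
    """Classic geohash: interleave longitude/latitude bisections, 5 bits per char."""
    lat_lo, lat_hi = -90.0, 90.0
    lng_lo, lng_hi = -180.0, 180.0
    chars, bits, n_bits, even = [], 0, 0, True   # even bits refine longitude
    while len(chars) < precision:
        if even:
            mid = (lng_lo + lng_hi) / 2
            if lng >= mid:
                bits, lng_lo = (bits << 1) | 1, mid
            else:
                bits, lng_hi = bits << 1, mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits, lat_lo = (bits << 1) | 1, mid
            else:
                bits, lat_hi = bits << 1, mid
        even, n_bits = not even, n_bits + 1
        if n_bits == 5:
            chars.append(BASE32[bits])
            bits, n_bits = 0, 0
    return "".join(chars)

# The prefix property: the 5-character hash is the containing cell of the 9-character one.
print(geohash(57.64911, 10.40744))      # 'u4pruydqq'
print(geohash(57.64911, 10.40744, 5))   # 'u4pru'
```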
The geo_shape type uses a BKD tree in binary format. It's heavily optimized for this kind of intersects/overlaps query at scale. It basically does the same thing but uses a lot less disk space and memory, and it's similar to what you'd find in proper GIS databases. Elasticsearch/OpenSearch also support H3 and geohash grid aggregations on top of the geo_shape and geo_point types.
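As a rough sketch of that kind of query, here's a search body one might send; the index name "trips" and the field names "route" (geo_shape) and "start" (geo_point) are made up for the example.

```python
import requests  # sent as raw JSON so no particular client version is assumed

body = {
    "query": {
        "geo_shape": {
            "route": {                       # hypothetical geo_shape field
                "shape": {
                    "type": "envelope",      # [top-left, bottom-right] corners
                    "coordinates": [[-122.52, 37.83], [-122.35, 37.70]],
                },
                "relation": "intersects",
            }
        }
    },
    "aggs": {
        # Bucket the matching docs into geohash cells; recent Elasticsearch
        # versions also offer a geohex_grid (H3) aggregation.
        "cells": {"geohash_grid": {"field": "start", "precision": 6}}
    },
    "size": 0,
}

resp = requests.post("http://localhost:9200/trips/_search", json=body)
print(resp.json()["aggregations"]["cells"]["buckets"][:5])
```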
I'm guessing the author is using something like PostgreSQL, which of course has similar geospatial indexing support via PostGIS.
If joins are a critical, performance-sensitive operation, the most important property of a DGGS is congruency. H3 is not congruent; it was optimized for visualization, where congruency doesn't matter, rather than for analytical computation. For example, the article talks about deduplication, which isn't even necessary with a congruent DGGS. You can do joins with H3, but as a general rule it isn't recommended unless the data is small enough that you can afford to brute-force it to some extent.
H3 is great for doing point-geometry aggregates; it shines at that. Not so much geospatial joins, though. DGGSs optimized for analytic computation (and, by implication, joins) exist; they just aren't optimal for trivial visualization.
If the objective is to overfit for high-performance scalable analytics, including congruency, the most capable DGGS designs are constructed by embedding the 2-spheroid in a synthetic Euclidean 3-space. The metric for the synthetic 3-space is usually defined to be both binary and a whole multiple of meters. The main objection is that it isn't an "equal area" DGGS, so it isn't good for a pretty graphic, but you can trivially project into an equal-area representation as needed, so that doesn't matter much. The main knobs you might care about are the spatial resolution and how far the 3-space extends; e.g., it's common to include low Earth orbit in the addressable space.
I was working with a few countries on standardizing one such design but we never got it over the line. There is quite a bit of literature on this, but few people read it and most of it is focused on visualization rather than analytic applications.
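Not the specific design referred to above, but to make the "embed in a synthetic 3-space" idea concrete, here's a minimal sketch under assumed choices (WGS84-to-ECEF conversion, 1-meter cells, a Morton-interleaved key); real designs differ in the details.

```python
import math

A = 6378137.0                 # WGS84 semi-major axis (m)
E2 = 0.00669437999014         # WGS84 first eccentricity squared

def ecef(lat_deg, lon_deg, alt_m=0.0):
    """Geodetic -> Earth-centered, Earth-fixed Cartesian coordinates (meters)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + alt_m) * math.sin(lat)
    return x, y, z

def morton_key(lat_deg, lon_deg, alt_m=0.0, cell_m=1.0):
    """Quantize ECEF coordinates to cell_m meters and interleave into one key."""
    offset = 2 ** 24          # shift to non-negative integers (covers +/- ~16,000 km)
    coords = [int(c / cell_m) + offset for c in ecef(lat_deg, lon_deg, alt_m)]
    key = 0
    for bit in range(25):
        for axis, c in enumerate(coords):
            key |= ((c >> bit) & 1) << (bit * 3 + axis)
    return key

# Dropping the 3 low-order bits of the key merges 2x2x2 neighbouring cells:
# every coarse cell is exactly the union of its children, i.e. congruency.
print(hex(morton_key(37.7749, -122.4194)))
```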
That said, this feels like an issue with rendering geometry rather than with the index itself. I’m curious to hear more about why you think the lack of congruency affects H3’s performance for spatial joins. Under the hood, it’s still a parent–child hierarchy very similar to S2’s — H3 children are topological rather than geometric children (even though they still mostly overlap).
Congruency allows for much more efficient join schedules and maximizes selectivity. This minimizes data motion, which is particularly important as data becomes large. Congruent shards also tend to be more computationally efficient generally, which does add up.
The other important aspect not raised here is that congruent DGGSs have much more scalable performance when you use them to build online indexes during ingestion. This follows from their being much more concurrency-friendly.
To me, the big selling point of H3 is that once you’re "in the H3 system", many operations don’t need to worry about geometry at all. Everything is discrete. H3 cells are nodes in a tree with prefixes that can be exploited, and geometry or congruency never really enter the picture at this layer.
Where geometry and congruency do come in is when you translate continuous data (points, polygons, and so on) into H3. In that scenario, I can totally see congruency being a useful property for speed, and that H3 is probably slower than systems that are optimized for that conversion step.
However, in most applications I’ve seen, the continuous-to-H3 conversion happens upstream, or at least isn’t the bottleneck. The primary task is usually operating on already "hexagonified" data, such as joins or other set operations on discrete cell IDs.
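Concretely, this is the kind of discrete-ID operation I mean; a small sketch assuming the h3 v3 Python bindings, with made-up coordinates and resolutions:

```python
from collections import defaultdict
import h3  # assuming the v3 Python bindings

# Two datasets that already carry fine-resolution cell IDs (coordinates made up).
pickups = [h3.geo_to_h3(lat, lng, 9) for lat, lng in [(37.7749, -122.4194), (37.7793, -122.4161)]]
docks   = [h3.geo_to_h3(lat, lng, 9) for lat, lng in [(37.7751, -122.4190), (37.8044, -122.2712)]]

def group_by_parent(cells, parent_res=7):
    """Group fine cells under their resolution-7 ancestor: pure ID manipulation."""
    groups = defaultdict(list)
    for c in cells:
        groups[h3.h3_to_parent(c, parent_res)].append(c)
    return groups

# The "join" is just intersecting the two key sets; no geometry is touched.
left, right = group_by_parent(pickups), group_by_parent(docks)
for parent in left.keys() & right.keys():
    print(parent, left[parent], right[parent])
```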
Am I understanding the bottleneck correctly?
Not familiar with geo stuff / DGGS. Is H3 not congruent because hexagons, unlike squares or triangles, can't be subdivided exactly into smaller hexagons?
I mean: could a system using hexagons ever be congruent?
No, this city isn’t 4 miles from my city. There is a literal lake between us. It’s 10+ miles.
Please, invent something, do some precomputation, just avoid naive-ish searches.
At 500 stations:
- H3: 218µs, 4.7KB, 109 allocs
- Fallback: 166µs, 1KB, 37 allocs
- Fallback is 31% faster
At 1000 stations:
- H3: 352µs, 4.7KB, 109 allocs
- Fallback: 312µs, 1KB, 37 allocs
- Fallback is 13% faster
At 2000 stations:
- H3: 664µs, 4.7KB, 109 allocs
- Fallback: 613µs, 1KB, 37 allocs
- Fallback is 8% faster
At 4500 stations (real-world scale):
- H3: 1.40ms, 4.7KB, 109 allocs
- Fallback: 1.34ms, 1KB, 37 allocs
- Fallback is 4% faster
Conclusion: The gap narrows as station count increases. At 4500 stations they're nearly equivalent. H3 has fixed overhead (~4.7KB / 109 allocs for the k=2 ring), while the fallback scales linearly. The crossover point where H3 wins is likely around 10-20K entries.
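For anyone curious what the two code paths look like, here's a rough Python sketch using the h3 v3 bindings (the allocation counts above suggest the real benchmark is Go; the station data, resolution, and radius here are made up):

```python
import math
from collections import defaultdict
import h3  # assuming the v3 Python bindings

RES = 8  # index resolution; a k=2 ring at res 8 spans on the order of a couple of km

def build_index(stations):
    """Bucket stations by their H3 cell once, up front."""
    idx = defaultdict(list)
    for s in stations:
        idx[h3.geo_to_h3(s["lat"], s["lng"], RES)].append(s)
    return idx

def nearby_h3(idx, lat, lng, k=2):
    """H3 path: only look at stations in the k-ring around the query cell."""
    cells = h3.k_ring(h3.geo_to_h3(lat, lng, RES), k)
    return [s for c in cells for s in idx.get(c, [])]

def nearby_fallback(stations, lat, lng, radius_m=2000):
    """Fallback path: brute-force haversine scan over every station."""
    def dist(lat2, lng2):
        p = math.pi / 180
        a = (math.sin((lat2 - lat) * p / 2) ** 2 +
             math.cos(lat * p) * math.cos(lat2 * p) *
             math.sin((lng2 - lng) * p / 2) ** 2)
        return 2 * 6371000 * math.asin(math.sqrt(a))
    return [s for s in stations if dist(s["lat"], s["lng"]) <= radius_m]
```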
They may well be using some data store where spatial indexing is not possible or standard. GeoParquet is a common one now - a great format in many ways, but spatial indexing isn't there.
Postgres may be out of fashion, but an old-fashioned PostGIS server is still the simplest solution sometimes.
Updating an R-tree is O(log n), just like any other index.
I wonder how it compares with geohashing. I know geohashes aren't as efficient in terms of partitioning, and queries end up a bit awkward since you have to decode and handle neighbor cells yourself, but finding all elements of a cell is a "starts with" query, which lets you store the data effectively on most NoSQL databases that offer some sort of sorted text keys.
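A quick self-contained sketch of that "starts with" trick, with a sorted Python list standing in for a store that keeps keys in lexicographic order; the geohash keys are made up:

```python
import bisect

# Sorted keys as a NoSQL store with lexicographic key order would hold them;
# each key is "<geohash>:<record id>".
keys = sorted([
    "u4pruydqqvj:rec1",
    "u4pruydqqvm:rec2",
    "u4pruydr123:rec3",
    "u4prvzzzzzz:rec4",
])

def prefix_scan(sorted_keys, prefix):
    """Everything in a geohash cell (and its descendants) via one range lookup."""
    lo = bisect.bisect_left(sorted_keys, prefix)
    hi = bisect.bisect_left(sorted_keys, prefix + "\xff")  # just past the prefix range
    return sorted_keys[lo:hi]

print(prefix_scan(keys, "u4pruyd"))   # rec1, rec2, rec3 but not rec4
```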