The dataset is an 8,000-row spreadsheet.
My advice when working with such a small dataset is not to overthink it.
8,000 rows is small, and while the typical processing isn't fast, optimizing it has limited ROI. I use a custom Python library I wrote for this kind of work, which makes it a bit slow, but you constantly run across new types of inexplicable geometry issues, so the ability to rapidly write custom routines is paramount, and that is something Python excels at.
GIS data is computationally expensive to process, even beyond what its obvious properties would suggest.
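To give a flavour of the kind of throwaway routine this involves, here is a minimal sketch of repairing invalid geometries, assuming Shapely and GeoPandas rather than the custom library mentioned above; the input file and column are hypothetical:

```python
# Minimal sketch: flag and repair invalid geometries in a small layer.
# Assumes GeoPandas and Shapely >= 1.8; "parcels.gpkg" is a hypothetical file.
import geopandas as gpd
from shapely.validation import make_valid

gdf = gpd.read_file("parcels.gpkg")

# Find self-intersections, bowties and other invalid shapes
invalid = ~gdf.geometry.is_valid
print(f"{invalid.sum()} invalid geometries out of {len(gdf)}")

# Repair them in place, then drop any empty geometries left behind
gdf.loc[invalid, "geometry"] = gdf.loc[invalid, "geometry"].apply(make_valid)
gdf = gdf[~gdf.geometry.is_empty]
```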
The Parquet pattern I'm promoting makes working across many different datasets much easier. Not every dataset is huge, but having it in Parquet makes it much easier to analyse with a wide variety of tooling.
In the web world, you might only have a handful of datasets that your systems produce, so you can pick the format and schemas ahead of time. In the GIS world, you are forever sourcing new datasets from strangers. There are 80+ vector GIS formats supported in GDAL. Getting more people to publish to Parquet first removes a lot of ETL work for everyone else down the line.
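As an illustration of that "publish to Parquet first" step, here is a minimal sketch using GeoPandas (with pyarrow installed) to convert a source file to GeoParquet; the filenames are hypothetical:

```python
# Minimal sketch: read whatever vector format the source uses (GDAL handles
# format detection under the hood) and write it out once as GeoParquet.
# Filenames are hypothetical.
import geopandas as gpd

gdf = gpd.read_file("source_data.shp")
gdf.to_parquet("source_data.parquet")
```

GDAL 3.5+ can do the same conversion from the command line with ogr2ogr and its Parquet driver.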
It was nice seeing how these stats can be calculated in SQL, but this analysis would be beaten by a few pivot tables in Excel.
Excel can even draw a map to go along with it (although not as pretty).
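For comparison, the same pivot-table-style summary can be sketched in pandas over the Parquet file; the column names below are hypothetical since the actual schema isn't shown here:

```python
# Minimal sketch of a pivot-table aggregation; "region", "category" and
# "value" are hypothetical column names.
import pandas as pd

df = pd.read_parquet("source_data.parquet")
summary = df.pivot_table(index="region", columns="category",
                         values="value", aggfunc=["count", "mean"])
print(summary)
```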