[1] This figure is based on no research.
Where `:people` is a key in a huge (larger-than-memory) map. The database only touches the nodes you actually reference while traversing; it never loads the whole map into memory.
So the 'query language' is really your programming language. To the programmer the database looks like an in-memory data structure, while in fact it is reading data efficiently from disk. On top of that you get immutability, which means you can go back in history.
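As a rough sketch of what that looks like in practice (the `db` value and its shape here are made up, and an ordinary Clojure map stands in for the lazily loaded, disk-backed structure the database would hand you):

```clojure
;; Hypothetical shape of the database value; in the real thing this map
;; would be backed by disk and only the touched nodes would be read.
(def db
  {:people {42 {:name "Ada" :friends [43]}
            43 {:name "Grace"}}})

;; "Querying" is just ordinary data-structure traversal.
(get-in db [:people 42 :name])                 ;=> "Ada"
(map #(get-in db [:people % :name])
     (get-in db [:people 42 :friends]))        ;=> ("Grace")
```

There is no separate query language to learn: `get-in`, `map`, and friends are the query language.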
Point 2 doesn't address why the mismatch between in-memory representations and tabular data exists in the first place. A big part of it is the benefit you get from a schema designed around normal form. Object databases have their place, but so do fully normalized database tables.
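To make the normal-form point concrete, an illustrative sketch (all names invented): in a nested, object-style layout a fact gets copied into every place it is embedded, while a normalized layout stores it once and points to it by key, which is exactly the win a schema in normal form is after.

```clojure
;; Denormalized / object-style: the customer's city is repeated in every
;; order, so a change has to be applied everywhere it was embedded.
(def orders-nested
  [{:order 1 :customer {:id 7 :name "Ada" :city "London"}}
   {:order 2 :customer {:id 7 :name "Ada" :city "London"}}])

;; Normalized: each fact is stored exactly once and referenced by key,
;; the same discipline a relational schema in normal form enforces.
(def customers {7 {:name "Ada" :city "London"}})
(def orders    [{:order 1 :customer-id 7}
                {:order 2 :customer-id 7}])
```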
Point 3 doesn't strike me as useful. I rarely, if ever, find myself reverting rows to earlier points in their history. Tracking versions of rows is useful; "reverting" is not, because a revert is better recorded as a new version applied as a forward update.
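A quick sketch of what "a new version as a forward update" means (the row shape is made up): instead of rewriting history, the revert is simply appended as the next version.

```clojure
;; Append-only version history for one row (shape is illustrative).
(def versions
  [{:version 1 :email "ada@old.example"}
   {:version 2 :email "ada@new.example"}])

;; "Reverting" to version 1 is recorded as a new forward version,
;; so nothing in the history is rewritten or lost.
(defn revert-forward [versions n]
  (let [old (first (filter #(= n (:version %)) versions))]
    (conj versions (assoc old :version (inc (:version (last versions)))))))

(revert-forward versions 1)
;=> adds {:version 3 :email "ada@old.example"} to the end
```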
Overall, sure, a new "lightweight" object database built on data structures* may have a place somewhere. But as a replacement for SQLite? I think not.
*The Java API gives me the same recoil as the Java JSON APIs: pulling out data key by key feels like pulling teeth, tooth by tooth.