16 points by radarroark 20 days ago | 3 comments
  • DougN7 20 days ago
    I wouldn’t call this next-gen SQLite. How can it be when the “QL” of SQLite is “Query Language” and this doesn’t have one? This is an object serialization library.
    • beardyw 20 days ago
      Exactly. SQLite is in use in millions[1] of places, all using SQL. If you want something else don't try to steal the name.

      [1] this figure is based on no research

    • delegate 20 days ago
      Not really. This db allows traversing (deeply nested) data structures without loading them into memory. E.g., in Clojure you can do `(get-in db [:people "john" :address :city])`

      Where `:people` is a key in a huge (larger-than-memory) map. This database will only touch the referenced nodes when traversing, without loading the whole thing into memory.

      So the 'query language' is actually your programming language. To the programmer this database looks like an in-memory data structure, when in fact it's efficiently reading data from the disk. Plus immutability of course (meaning you can go back in history).
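
      The "query language is your programming language" idea comes down to what `get-in` does: walk a nested structure one key at a time. Here is a minimal in-memory sketch in Java, illustrating only the access pattern (the names and data are made up; xitdb resolves each key against on-disk nodes rather than in-memory maps):

      ```java
      import java.util.List;
      import java.util.Map;

      public class GetIn {
          // Walk a nested map one key at a time, like Clojure's get-in.
          // Returns null if any key along the path is missing.
          static Object getIn(Object root, List<Object> path) {
              Object node = root;
              for (Object key : path) {
                  if (!(node instanceof Map)) return null;
                  node = ((Map<?, ?>) node).get(key);
                  if (node == null) return null;
              }
              return node;
          }

          public static void main(String[] args) {
              Map<String, Object> db = Map.of(
                  "people", Map.of(
                      "john", Map.of(
                          "address", Map.of("city", "Springfield"))));
              System.out.println(getIn(db, List.of("people", "john", "address", "city"))); // prints Springfield
          }
      }
      ```

      The point of the on-disk version is that each step reads only the node it lands on, so the outer map can be larger than memory.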

  • explodes 20 days ago
    Point 1 may be a problem if you're embedded. Otherwise, enjoy the fact that those 150,000 lines of C code are some of the most tested lines of code on the planet.

    Point 2 doesn't tackle the reasons there is a mismatch between in-memory representations and tabular data. A schema built around normal form brings real wins. Object databases have their place, but so do fully normalized database tables.

    Point 3 doesn't strike me as useful. I don't find myself reverting rows to previous points in history that often, if I ever have. Tracking versions of rows is useful. I would argue that "reverting" is not, since the reverting would be better tracked by adding a new version as a forward update.

    Overall, sure, a new "lightweight" object database that uses data structures* may have a place somewhere. But to replace SQLite? I think not.

    *The Java API gives me the same recoil as Java JSON API. Pulling out data key by key feels like pulling teeth tooth by tooth.

    • radarroark 20 days ago
      > Point 1 may be a problem if you're embedded.

      Large dependencies are not only a problem in embedded programming. That sort of thinking is how we got to the explosion of dependencies and software complexity we're in today.

      > Object databases have their place, but so do fully normalized database tables.

      Agreed, but you can build a stricter data model on top of generic data structures. The idea is to keep them separate rather than hard-coding just one specific data model. See for example running DataScript on top of xitdb: https://gist.github.com/radarroark/663116fcd204f3f89a7e43f52...

      > Tracking versions of rows is useful. I would argue that "reverting" is not, since the reverting would be better tracked by adding a new version as a forward update.

      "Adding a new version" to revert is exactly what xitdb does. See this line, which appends a new "version" of the database whose value points to an older version:

          history.append(history.getSlot(historyIndex));
      
      It's fine if you don't find immutability useful directly, but it is also what enables reading the db while writes are happening, which is clearly useful even if you don't care about time travel.
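
      To make that concrete, here is a minimal analogue of that line (hypothetical types, not xitdb's actual API): an append-only list of version "slots", where reverting is itself a forward append of a pointer to an old slot.

      ```java
      import java.util.ArrayList;
      import java.util.List;

      public class History {
          // Append-only version history: each slot points at a database
          // root. Reverting never rewrites old slots; it appends a new
          // slot that points at an old root.
          private final List<String> slots = new ArrayList<>();

          void append(String root) { slots.add(root); }
          String getSlot(int index) { return slots.get(index); }
          int size() { return slots.size(); }

          public static void main(String[] args) {
              History history = new History();
              history.append("v0");
              history.append("v1");
              // "Revert" to version 0 by appending it as a new version,
              // mirroring history.append(history.getSlot(historyIndex)):
              history.append(history.getSlot(0));
              System.out.println(history.getSlot(history.size() - 1)); // prints v0
              System.out.println(history.size()); // prints 3: nothing was erased
          }
      }
      ```

      Because older slots are never mutated, a reader can keep walking an old root while a writer appends new ones.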
  • damnitbuilds 20 days ago
    This looks like Python's pickling: https://docs.python.org/3/library/pickle.html
    • radarroark 20 days ago
      Not quite the same as the object serialization you'll find in many standard libraries, because xitdb can read/write data incrementally, so you can work with larger-than-memory data sets.
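
      The difference can be sketched with plain file I/O: a pickle-style serializer deserializes the whole object graph up front, while an incremental store seeks to just the bytes a lookup needs. A rough Java illustration (ordinary files, not xitdb's actual on-disk format):

      ```java
      import java.io.IOException;
      import java.io.RandomAccessFile;
      import java.nio.file.Files;
      import java.nio.file.Path;

      public class IncrementalRead {
          public static void main(String[] args) throws IOException {
              Path file = Files.createTempFile("data", ".bin");
              Files.write(file, "hello world, standing in for a huge data set".getBytes());
              try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
                  byte[] buf = new byte[5];
                  raf.seek(6);        // jump straight to the region of interest
                  raf.readFully(buf); // read only those 5 bytes
                  System.out.println(new String(buf)); // prints "world"
              }
              Files.delete(file);
          }
      }
      ```

      The file here could be many times larger than memory; only the five bytes at offset 6 are ever read.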