196 points by xkgt 6 days ago | 8 comments
  • OutOfHere 5 days ago
    DeepSeek is the real "open<something>" that the world needed. Via these three projects, DeepSeek has addressed not only efficient AI but also distributed computing:

    1. smallpond: https://github.com/deepseek-ai/smallpond

    2. 3fs: https://github.com/deepseek-ai/3FS

    3. deepep: https://github.com/deepseek-ai/DeepEP

    • swyx 5 days ago
      how many companies will actually adopt 3FS now that it's open source?

      not a hater, just know that there's a lot of hurdles to adoption even if something is open source - for example, not being an industry standard. I don't know a ton about this space - what is the main alternative?

      • skeeter2020 5 days ago
        to me this seems to target a pretty small audience: very big data and specific problem domains, where you need killer devops chops, expensive & specialized infrastructure, and a desire to build out on bleeding-edge architecture. I'd suspect most with these characteristics will stick with what they've got, "medium Big Data" companies should probably go with hosted services, and the rest of us stick with a single-node DuckDB.
        • 0cf8612b2e1e 5 days ago
          Bingo. Very few organizations have petabytes of data that they are trying to process efficiently for machine learning. Such organizations already have personnel and technology in place offering some kind of solution. Maybe this is an improvement, but it is quite unlikely to be offering new capabilities to such teams.
          • datadrivenangel 5 days ago
            And the organizations that get large enough to be sad with DuckDB performance will have options like MotherDuck for cloud hosting.
      • huntaub 5 days ago
        For example, in AWS, you can get a similar FSx for Lustre file system for just 11% more cost, which could be worth it to avoid the management costs of running your own storage cluster.
    • dkdcwashere 5 days ago
      thank goodness, we’ve had nothing open to do efficient distributed computing with for years!
      • OutOfHere 5 days ago
        At least there hasn't been anything for distributed DuckDB before it, afaik. Anyone with a substantial DuckDB project might now go distributed without having to rewrite it in something else.
  • ogarten 5 days ago
    Looks like we are approaching the "distributed" phase of the distributed-centralized computing cycle :)

    Not saying this is bad, but it's just interesting to see after being in the industry for 8 years.

    • antupis 5 days ago
      Wasn't it already happening when platforms started supporting stuff like Iceberg? It is kinda nice to see. Things like Snowflake definitely have their place in the ecosystem, but too often, especially with huge workloads, Snowflake creates more issues than it solves.
      • greenavocado 5 days ago
        Were you there when we had to work with our data in Teradata and SAS and hundreds of multi-hundred-MB Excel spreadsheets containing analytical data? 30+ minute queries were the norm. Snowflake was a breath of fresh air.
        • data_marsupial 5 days ago
          I work with Teradata every day and can query years of event data in seconds.
      • ogarten 5 days ago
        Yes, not saying this is bad at all, just kind of funny. When you think about it, it makes sense though. Why wouldn't someone want the ability to distribute an efficient engine?
  • nemo44x 5 days ago
    Isn't the whole point of DuckDB that it's not distributed?
    • this_user 5 days ago
      1. Our technology isn't powerful enough; we need to scale by distributing it.

      2. The distributed technology is powerful but complex, and most users don't need most of what it offers. Let's build a simple solution.

      3. GOTO 1

    • calebm 4 days ago
      I had the same question - what does this add beyond normal DuckDB?
    • biophysboy 5 days ago
      I thought the same thing; perhaps it's distributed into fewer chunks.
  • benrutter 5 days ago
    I'm not massively knowledgeable about the ins and outs of DeepSeek, but I think I'm in the right place to ask. My understanding is DeepSeek:

    - Created comparable LLM performance for a fraction of the cost of OpenAI using more off-the-shelf hardware.

    - Seem to be open sourcing lots of distributed stuff.

    My question is, are those two things related? Did distributed computing allow the AI model somehow? If so how? Or is it not that simple?

    • zwaps 5 days ago
      These types of models need to be trained across thousands of GPUs, which requires distributed engineering on a much higher level than "normal" distributed systems.

      This is true for DeepSeek as well as for others. There are a few companies giving insights into or open-sourcing their approaches, such as Databricks/Mosaic and, well, DeepSeek. The latter also did some particularly clever stuff, but if you look into the details, so did Mosaic.

      OpenAI and Anthropic likely have distributed tools of even larger sophistication. They are just not open source.
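For intuition, the core data-parallel trick behind that kind of multi-GPU training can be shown in a few lines of plain Python: each worker computes a gradient on its own shard of the data, and the gradients are averaged (the "all-reduce" step) before every worker applies the same update. A toy single-process sketch, nothing like production training code:

```python
# Toy picture of data-parallel training: each "GPU" holds a shard of the
# data, computes a gradient on it, and the gradients are averaged before
# every worker applies the same update. Real systems do this with
# all-reduce collectives (e.g. NCCL) across thousands of GPUs.

def grad(w, shard):
    # gradient of mean squared error for the model y = w * x on this shard
    return sum(2 * x * (w * x - y) for x, y in shard) / len(shard)

shards = [[(1, 2), (2, 4)], [(3, 6), (4, 8)]]  # two "workers"; data follows y = 2x
w = 0.0
for _ in range(200):
    grads = [grad(w, s) for s in shards]   # in reality, computed in parallel
    avg = sum(grads) / len(grads)          # the "all-reduce": average gradients
    w -= 0.01 * avg                        # every worker takes the same step
print(round(w, 3))  # converges toward 2.0
```

The hard engineering is in doing the gradient exchange efficiently at scale (overlap with compute, sharding optimizer state, fault tolerance), which is what those open-sourced tools address.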

      • benrutter 4 days ago
        Thanks, that's a really great/helpful explanation!
  • maknee 5 days ago
    Does anyone have blogs with benchmarks showing the performance of running smallpond alone, let alone 3FS + smallpond?

    A lot of blogs praise these new systems, but don't really provide any numbers :/

  • cmollis 5 days ago
    Spark is getting a bit long in the tooth.. interesting to see DuckDB integrated with Ray for data-access partitioning across (currently) 3FS. Probably a matter of time before they (or someone) support S3. It should be noted that DuckDB (standalone) actually does a pretty good job scanning S3 parquet on its own.