177 points by jonbaer 5 days ago | 8 comments
  • edschofield 4 hours ago
    The design of Pandas is inferior in every way to Polars: API, memory use, speed, expressiveness. Pandas has been strictly worse since late 2023 and will never close the gap. Polars is multithreaded by default, written in a low-level language, has a powerful query engine, supports lazy, out-of-core execution, and isn’t constrained by any compatibility concerns with a warty, eager-only API and pre-Arrow data types that aren’t nullable.

    It’s probably not worth incurring the pain of a compatibility-breaking Pandas upgrade. Switch to Polars instead for new projects and you won’t look back.

    • sampo 2 hours ago
      Pandas started about 18 years ago as a project by someone working in finance who wanted to use Python instead of Excel, while being nicer than raw Python dicts and NumPy arrays.

      For better or worse, like Excel and like the simpler programming languages of old, Pandas lets you overwrite data in place.

      Prepare some data

          df_pandas = pd.DataFrame({'a': [1, 2, 3, 4, 5], 'b': [10, 20, 30, 40, 50]})
          df_polars = pl.from_pandas(df_pandas)
      
      And then

          df_pandas.loc[1:3, 'b'] += 1
      
          df_pandas
             a   b
          0  1  10
          1  2  21
          2  3  31
          3  4  41
          4  5  50
      
      Polars comes from a more modern data engineering philosophy, and data is immutable. In Polars, if you ever wanted to do such a thing, you'd write a pipeline to process and replace the whole column.

          df_polars = df_polars.with_columns(
              pl.when(pl.int_range(0, pl.len()).is_between(1, 3))
              .then(pl.col("b") + 1)
              .otherwise(pl.col("b"))
              .alias("b")
          )
      
      If you are just interactively playing around with your data, and want to do it in Python and not in Excel or R, Pandas might still hit the spot. Or use Polars, and if need be then temporarily convert the data to Pandas or even to a Numpy array, manipulate, and then convert back.

      P.S. Polars has an optimization to overwrite a single value:

          df_polars[4, 'b'] += 5
          df_polars
          ┌─────┬─────┐
          │ a   ┆ b   │
          │ --- ┆ --- │
          │ i64 ┆ i64 │
          ╞═════╪═════╡
          │ 1   ┆ 10  │
          │ 2   ┆ 21  │
          │ 3   ┆ 31  │
          │ 4   ┆ 41  │
          │ 5   ┆ 55  │
          └─────┴─────┘
      
      But as far as I know, it doesn't allow slice assignment or anything like that.
    • satvikpendem 37 minutes ago
      "If I have seen further, it is by standing on the shoulders of giants" - Isaac Newton

      Polars is great, but it is better precisely because it learned from all the mistakes of Pandas. Don't besmirch the latter just because it now has to deal with the backwards compatibility of those mistakes, because when it first started, it was revolutionary.

      • vegabook 23 minutes ago
        "revolutionary"? It just copied and pasted the decades-old R (previously "S") dataframe into Python, including all the paradigms (with worse ergonomics, since it's not baked into the language).
      • Xunjin 33 minutes ago
        Indeed, even Rust was created by learning from past mistakes in memory management and from known patterns like the famous RAII.
    • data-ottawa 10 minutes ago
      Pandas deserves a ton of respect in my opinion. I built my career on knowing it well and using it daily for a decade, so I’m biased.

      Pandas created the modern Python data stack when there were not really any alternatives (except R and closed source). The original split-apply-combine paradigm was well thought out, simple, and effective, and the built-in tools to read pretty much anything (including all of your awful CSV files and Excel tables) and deal with timestamps easily made it fit into tons of workflows. It pioneered a lot, and basically still serves as the foundation and common format for the industry.

      I always recommend every member of my teams read Modern Pandas by Tom Augspurger when they start, as it covers all the modern concepts you need to get data work done fast and with high quality. The concepts carry over to polars.

      And I have to thank the pandas team for being a very open and collaborative bunch. They’re humble and smart people, and every PR or issue I’ve interacted with them on has been great.

      Polars is undeniably great software; it's my standard tool today. But they did benefit from the failures and hard edges of pandas, pyspark, dask, the tidyverse, and xarray. It's an advantage pandas didn't have, and one pandas still pays for.

      I’m not trying to take away from polars at all. It’s damn fast — the benchmarks are hard to beat. I’ve been working on my own library and basically every optimization I can think of is already implemented in polars.

      I do have a concern with their VC funding/commercialization with cloud. The core library is MIT licensed, but knowing there will always be this feature wall when you want to scale is not ideal. I think it limits the future of the library a lot, and I think long term someone will fill that niche and the users will leave.

    • noo_u an hour ago
      Polars took a lot of ideas from Pandas and made them better - calling it "inferior in every way" is all sorts of disrespectful :P

      Unfortunately, there are a lot of third party libraries that work with Pandas that do not work with Polars, so the switch, even for new projects, should be done with that in mind.

      • skylurk an hour ago
        Luckily, polars has .to_pandas() so you can still pass pandas dataframes to the libraries that really are still stuck on that interface.

        I maintain one of those libraries and everything is polars internally.

        • adolph 3 minutes ago
          > pandas dataframes

          Didn't Pandas move to Arrow, matching Polars, in version 2?

        • noo_u an hour ago
          to_pandas has a dependency on pandas - it is not the biggest of deals, but worth keeping in mind.
    • rich_sasha 3 hours ago
      I almost fully agree. I would add that Pandas API is poorly thought through and full of footguns.

      Where I certainly disagree is the "frame as a dict of time series" setting, and general time series analysis.

      The feel is also different. Pandas is an interactive data analysis container, poorly suited for production use. Polars I feel is the other way round.

      • thelastbender12 2 hours ago
        I think that's a fair opinion, but I'd argue against it being poorly thought out - pandas HAS to stick with older API decisions (dating back to before data science was a mature field, and the field has pandas to thank for much of that) for backwards compatibility.
        • ohyoutravel 2 hours ago
          Well this is like saying Python must maintain backwards compatibility with Python 2 primitives for all time. It’s simply not true. It’s not easy to deprecate an old API, but it’s doable and there are playbooks for it. Pandas is good, I’ve used it extensively, but agree it’s not fit for production use. They could catch up to the state of the art, but that requires them being very opinionated and willing to make some unpopular decisions for the greater good.
          • cruffle_duffle 5 minutes ago
            Why though? Polars sounds like the rewrite! It's okay to cycle into a new library. Let pandas do its thing and let polars slowly take over as new projects adopt it. There is nothing wrong with this and it happens all the time.

            Like jQuery, which hasn't fundamentally changed since I was a wee lad doing web dev. They didn't make major changes despite their approach to web dev being replaced by newer concepts found in Angular, Backbone, Mustache, and eventually React. And that is a good thing.

            What I personally don’t want is something like angular that basically radically changed between 1.0 and 2.0. Might as well just call 2.0 something new.

            Note: I’ve never heard of polars until this comment thread. Can’t wait to try it out.

        • ptman 2 hours ago
          3.0 is the perfect place to break compat
      • sirfz 3 hours ago
        I think that's a sane take. Indeed, I think most data analysts find it much easier to use pandas over polars when playing with data (mainly the bracket syntax is faster and mostly sensible)
    • rdedev 13 minutes ago
      While polars is better if you work with predefined data formats, pandas is imo still better as a general purpose table container.

      I work with chemical datasets, and this always involves converting SMILES strings to RDKit Molecule objects. Polars cannot do this as simply as calling .map in pandas.

      Pandas is also much better for EDA. So calling it worse in every instance is not true. If you are doing pure data manipulation, then go ahead with polars.
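      For illustration, the pandas side of that workflow, with a stand-in class instead of the real rdkit.Chem.Mol (the `Mol` class here is hypothetical, just to keep the sketch self-contained):

```python
import pandas as pd

class Mol:
    """Stand-in for rdkit.Chem.Mol; illustration only."""
    def __init__(self, smiles: str):
        self.smiles = smiles

df = pd.DataFrame({"smiles": ["C", "CC", "CCO"]})

# .map calls the constructor once per element and stores the resulting
# Python objects directly in an object-dtype column.
df["mol"] = df["smiles"].map(Mol)

print(df["mol"].dtype)  # object
```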

    • v3ss0n 4 hours ago
      Sounds too much like an advertisement. Also, we need to watch out when diving into Polars. Polars is a VC-backed open-source project with a cloud offering, which may become an open-core project - we know how those go.
      • gkbrk 3 hours ago
        > we know how those go

        They get forked and stay open source? At least this is what happens to all the popular ones. You can't really un-open-source a project if users want to keep it open-source.

        • stingraycharles 3 hours ago
          Depends on your definition of popular; plenty of examples where the business interests don't align well with open source.
    • lairv 2 hours ago
      I would agree if not for the fact that polars is not compatible with Python multiprocessing when using the default "fork" start method; the following script hangs forever (the pandas equivalent runs):

          import polars as pl
          from concurrent.futures import ProcessPoolExecutor
      
          pl.DataFrame({"a": [1,2,3], "b": [4,5,6]}).write_parquet("test.parquet")
      
          def read_parquet():
              x = pl.read_parquet("test.parquet")
              print(x.shape)
      
          with ProcessPoolExecutor() as executor:
              futures = [executor.submit(read_parquet) for _ in range(100)]
              r = [f.result() for f in futures]
      
      
      Using a thread pool or the "spawn" start method works, but it makes polars a pain to use inside e.g. a PyTorch dataloader
      • skylurk an hour ago
        You are not wrong, but for this example you can do something like this to run in threads:

          import polars as pl
          
          pl.DataFrame({"a": [1, 2, 3]}).write_parquet("test.parquet")
          
          
          def print_shape(df: pl.DataFrame) -> pl.DataFrame:
              print(df.shape)
              return df
          
          
          lazy_frames = [
              pl.scan_parquet("test.parquet")
              .map_batches(print_shape)
              for _ in range(100)
          ]
          pl.collect_all(lazy_frames, comm_subplan_elim=False)
        
        (comm_subplan_elim is important)
      • ritchie46 43 minutes ago
        Python 3.14 "spawns" by default.

        However, this is not a Polars issue. Using "fork" can leave ANY MUTEX in the process invalid (a multi-threaded query engine has plenty of mutexes). It is highly unsafe and assumes that none of the libraries in your process hold a lock at that time. That's not an assumption PyTorch dataloaders get to make.
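        A stdlib-only way to opt out of "fork" for one specific pool, rather than globally, is to pass a spawn context (a sketch; ProcessPoolExecutor has accepted the mp_context argument since Python 3.7):

```python
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

# "spawn" starts each worker as a fresh interpreter instead of fork()ing,
# so no mutex held in the parent can be inherited in a locked state.
spawn_ctx = mp.get_context("spawn")

def make_pool(max_workers=None):
    return ProcessPoolExecutor(max_workers=max_workers, mp_context=spawn_ctx)

if __name__ == "__main__":
    with make_pool(2) as pool:
        print(sorted(pool.map(abs, [-3, -1, -2])))  # → [1, 2, 3]
```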

      • schmidtleonard an hour ago
        I can't believe parallel processing is still this big of a dumpster fire in python 20 years after multi-core became the rule rather than the exception.

        Do they really still not have a good mechanism to toss a flag on a for loop to capture embarrassing parallelism easily?

        • skylurk 43 minutes ago
          This is one of the reasons I use polars.
        • ritchie46 42 minutes ago
          Polars does that for you.
        • lairv an hour ago
          Well I think ProcessPoolExecutor/ThreadPoolExecutor from concurrent.futures were supposed to be that
    • bhadass an hour ago
      why not just go full bore to duckdb?
      • vegabook 10 minutes ago
        Because method chaining in Polars is much more composable and ergonomic than SQL once the pipeline gets complex, which makes it superior in an exploratory "data wrangling" environment. While DuckDB now has its own new expression-pipeline implementation, it's way worse than Polars'. DuckDB has other advantages, though, but Polars is a much cleaner Pandas replacement. Earlier versions of DuckDB were also crashy, whereas polars feels carved out of granite.
  • postalcoder 5 hours ago
    I've migrated off of pandas to polars for my workflows to reap the benefit of, in my experience, a 10-20x speedup on average. I can't imagine anything bringing me back short of a performance miracle. LLMs have made syntax almost a non-barrier.
    • lvl155 4 hours ago
      Went from pandas to polars to duckdb. As mentioned elsewhere, SQL is the most readable for me, and an LLM does most of the coding on my end (quant). So I need it at the most readable and rudimentary/step-wise level.

      OT, but I can’t imagine data science being a job category for too long. It’s got to be one of the first to go in AI age especially since the market is so saturated with mediocre talents.

      • iugtmkbdfil834 2 hours ago
        << It’s got to be one of the first to go in AI age especially since the market is so saturated with mediocre talents.

        This is interesting. I wanted to dig into it a little since I am not sure I am following the logic of that statement.

        Do you mean that AI would take over the field, because by default most people there are already not producing anything that a simple 'talk to data' LLM won't deliver?

        • mynameisash an hour ago
          Not GP, but as a data engineer who has worked with data scientists for 20 years, I think the assessment is unfortunately true.

          I used to work on teams where DS would put a ton of time into building quality models, gating production with defensible metrics. Now, my DS counterparts are writing prompts and calling it a day. I'm not at all convinced that the results are better, but I guess if you don't spend time (=money) on the work, it's hard to argue with the ROI?

    • mritchie712 5 hours ago
      also migrated, but to duckdb.

      It's funny to look back at the tricks that were needed to get gpt3 and 3.5 to write SQL (e.g. "you are a data analyst looking at a SQL database with table [tables]"). It's almost effortless now.

    • howling 4 hours ago
      Same. I don't even use an LLM normally, as I found polars' syntax to be very intuitive. I just searched my ChatGPT history, and the only times I used it were when I was dealing with list and struct columns, which don't exist in pandas.
      • postalcoder 4 hours ago
        iirc part of pandas’ popularity was that it modeled some of R’s ergonomics. What a time in history, when such things mattered! (To be clear, I’m not making fun of pandas. It was the bridge I crossed that moved me from living in Excel to living in code.)
        • iugtmkbdfil834 2 hours ago
          I learned about pandas with R in my class way back when. At the time, it seemed like magic. In a sense, it still does, but things evolve.
    • thibaut_barrere 3 hours ago
      Polars being so fast, and embeddable into other languages, has made it a no brainer for me to adopt it.

      I have integrated Explorer (https://github.com/elixir-explorer/explorer), which leverages it, into many Elixir apps, so I'm happy to have this.

    • gHA5 5 hours ago
      Do you not experience LLM generated code constantly trying to use Pandas' methods/syntax for Polars objects?
      • edschofield 4 hours ago
        Yes, ChatGPT 5.2 Pro absolutely still does this. Just ask it for a pivot table using Polars and it will probably spit out code with Pandas arguments that doesn’t work.
      • postalcoder 4 hours ago
        There were some growing pains in gpt-3.5 to gpt-4 era, but not nowadays (shoutout to the now-defunct Phind, which was a game changer back then).
        • crimsoneer 4 hours ago
          The fact they pivoted away from their very compelling core offering (AI Stack Overflow) to compete with Lovable etc. in the "AI-generated apps" giant fight continues to baffle me. Though I guess model updates ate their lunch.
          • postalcoder 4 hours ago
            My guess is that their pivot came after distress, and was not the cause of it. It'd be great to have @rushingcreek write a post-mortem. I think it'd benefit a lot of people, because I honestly don't have a Monday-morning playbook of what could have saved them.

            Like you said, perhaps the demise of phind was inevitable, with large models displacing them kind of like how Spotify displaced music piracy.

    • thegabriele 2 hours ago
      " 10-20x speedup on average. "

      Is this everyone's experience?

      • OGWhales 19 minutes ago
        It depends on the specifics, but I converted a couple of scripts recently that would take minutes to run with Pandas that only took seconds to run with Polars. I was pretty impressed.
      • mynameisash an hour ago
        That was probably about what I got when I migrated some heavy number crunching code from Pandas to Polars a few years ago. Maybe even better than that.
    • alex7o 5 hours ago
      Same. Also, polars works in TypeScript, which I used at some point to move my data from backend to frontend.
    • OutOfHere 4 hours ago
      The speedup you claim is going to be contingent on how you use Pandas, with which data types, and which version of Pandas.
  • QuadmasterXLII 6 minutes ago
    Ugh, I'm still recovering from numpy breaking changes with 2.0
  • jtrueb 2 hours ago
    That timestamp resolution discrepancy is going to cause so many problems
    • EForEndeavour 21 minutes ago
      Do you mean the new default datetime resolution of microseconds instead of the previous nanosecond resolution? Obviously this will require adjustments to any code that requires ns resolution, but I'd bet that's a tiny minority of all pandas code ever written. Do you have a particular use case in mind for the problems this will cause?
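      The adjustment in question is plain integer arithmetic: downcasting a nanosecond epoch value to microseconds floors away the last three digits, so any code relying on sub-microsecond precision round-trips lossily. A stdlib-only illustration (the timestamp value is made up):

```python
# A hypothetical nanosecond-precision epoch timestamp.
ns = 1_700_000_000_123_456_789

us = ns // 1_000        # microsecond resolution drops the last 3 digits
roundtrip = us * 1_000  # converting back cannot recover them

print(ns - roundtrip)  # → 789
```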
  • alexcasalboni 2 hours ago
    Haven't used pandas in a while, but Copy-on-Write sounds pretty cool! Is there any public benchmark I can check in 2026?
  • optimalsolver 5 hours ago
    How soon will the leading LLMs ingest the updated documentation? Because I'm certainly not going to.
    • uncletoxa 4 hours ago
      Use the context7 MCP server. It'll do the trick.
      • leadingthenet 4 minutes ago
        I've been sleeping on this, works like a charm!
    • g-mork an hour ago
      This is the most misunderstood aspect of how marketing has changed recently
    • OutOfHere 4 hours ago
      In my experience, it would take a year to ingest it natively, and two years to also ingest enough coding examples.
  • OutOfHere 4 hours ago
    s/impactfull/impactful
    • Havoc an hour ago
      Regex is great when one is communicating with machines
  • teekert an hour ago
    I have deep respect for Pandas; it and JupyterLab were my intro to programming. And it worked much better for me: I did some "intro to Python" courses, but it was all about strs and ints. And yes, you can add strs together! Wow, magic... Not for me. For me it all clicked when I first looped through a pile of Excel files (pd.read_excel()), extracted the info I needed, and wrote a new Excel file... Mind blown.

    From there, of course, you slowly start to learn about types etc., and slowly you start to appreciate libraries and IDEs. But I knew tables, and statistics, and graphs, and Pandas (with the visual style of notebooks) led me to programming via that familiar world. At first with some frustration about Pandas and needing to write to Excel, do stuff, and read again, but quickly moving into the opposite flow, where Excel itself became the limiting factor and I was annoyed when having to use it.

    I offered some "Programming for Biologists" courses, to teach people like me to do programming in this way, because it would be much less "dry" (pd.read_excel(...).plot.bar() and now you're programming). So far, wherever I offered the courses, they said they prefer to teach programming "from the base up". Ah well! I've been told I'm not a programmer; I don't care. I solve problems (and that is the only way I am motivated enough to learn; I can't sit down solving LeetCode problems for hours, building exactly nothing).

    (To be clear, I now do the Git, the Vim, the CI/CD, the LLM, the Bash, The Linux, the Nix, the Containers... Just like a real programmer, my journey was just different, and suited me well, I believe others can repeat my journey and find joy in programming, via a different route.)