345 points by ksec 4 days ago | 34 comments
  • lenerdenator4 days ago
    > 512GB of RAM

    Keep these things the hell away from the people who develop Chrome and desktop JS apps.

    • whizzter4 days ago
      In 2025 the question isn't "will it run crysis", it's "will it run a simple CRUD app".
      • esafak4 days ago
        Will it run Electron?
        • bloomingkales4 days ago
          Gonna pile on:

          At this point we may need TSMC to make a specialized chip to run Electron.

          • unilynx4 days ago
            • therein4 days ago
              This was discussed before and it's interesting, but apparently the name of that instruction is misleading. Someone chimed in and pointed out that having Javascript in its name is entirely unnecessary, as that exact same floating point representation is commonly used outside Javascript as well.

              If you disassemble some armv8 binaries that aren't dealing with Javascript, you do still see FJCVTZS.
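              For a concrete sense of what the instruction computes, here's a portable sketch of my own (not from the original discussion), assuming, as I understand it, that FJCVTZS matches ECMAScript's ToInt32: truncate toward zero, then wrap modulo 2^32 into a signed 32-bit value.

                  #include <cmath>
                  #include <cstdint>
                  #include <cstdio>

                  // Portable approximation of ECMAScript ToInt32, the conversion
                  // that ARMv8.3's FJCVTZS performs in a single instruction.
                  int32_t js_to_int32(double x) {
                      if (!std::isfinite(x)) return 0;        // NaN and +/-Inf map to 0
                      double t = std::trunc(x);               // round toward zero
                      double m = std::fmod(t, 4294967296.0);  // reduce modulo 2^32
                      if (m < 0) m += 4294967296.0;
                      return static_cast<int32_t>(static_cast<uint32_t>(m));
                  }

                  int main() {
                      std::printf("%d\n", js_to_int32(-3.9));          // -3
                      std::printf("%d\n", js_to_int32(4294967301.0));  // 5 (wraps modulo 2^32)
                      std::printf("%d\n", js_to_int32(2147483648.0));  // -2147483648
                  }

              Nothing in that is JavaScript-specific, which is presumably why it shows up in non-JS binaries too.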

          • zitterbewegung4 days ago
            It has been done for Java [1] and as we make smaller and smaller chips who knows.

            [1] https://en.wikipedia.org/wiki/Jazelle

          • vvillena4 days ago
            There are already specialized instructions in the Apple Silicon chips. IIRC there's something tailored for the Objective-C runtime, and something useful for Javascript runtimes.
            • favorited4 days ago
              Uncontended acquire-release atomic operations are basically free on Apple Silicon, which synergizes with the Objective-C (and Swift!) runtimes, where every retain/release is an atomic increment/decrement.

              https://web.archive.org/web/20201119143547/https://twitter.c...
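              For context, a minimal sketch of the pattern being described (textbook acquire/release refcounting, not Apple's actual runtime code): every retain/release is an atomic increment/decrement, and in the uncontended case the counter's cache line can stay local to the core.

                  #include <atomic>
                  #include <cstdio>

                  // Toy ObjC/Swift-style refcount: retain is an atomic increment,
                  // release is an atomic decrement; the final release acquires so
                  // deallocation observes all prior writes to the object.
                  struct RefCounted {
                      std::atomic<int> refs{1};

                      void retain()  { refs.fetch_add(1, std::memory_order_relaxed); }
                      void release() {
                          if (refs.fetch_sub(1, std::memory_order_release) == 1) {
                              std::atomic_thread_fence(std::memory_order_acquire);
                              std::puts("deallocated");
                              delete this;
                          }
                      }
                  };

                  int main() {
                      auto* obj = new RefCounted();  // refcount = 1
                      obj->retain();                 // 2
                      obj->release();                // 1
                      obj->release();                // 0 -> deallocated
                  }

              The Apple Silicon claim above is about how cheap those fetch_add/fetch_sub operations are when nothing else is touching the counter.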

              • throwaway20373 days ago

                    > Uncontended acquire-release atomic operations are basically free on Apple Silicon
                
                While I don't doubt you, the poster, specifically, how is this possible? To be clear, my brain is x86-wired, not ARM-wired, so I may have some things wrong. Most of the expense of atomic inc/dec is "happens before", which essentially says that before the current core reads that memory address, it is guaranteed to be updated to the latest shared value. How can this be avoided? Or is it not avoided, but just much, much faster than x86? If the shared value was updated in a different core, some not-significant CPU cycles are required to update the L1 cache on the current core with the latest shared value.
                • throwaway20373 days ago
                  EDIT:

                      > some not-significant CPU cycles
                  
                  should say:

                      > some not-insignificant CPU cycles
          • imgabe4 days ago
            JSOC - javascript on a chip
        • belter4 days ago
          You need an AWS Region for that...
      • gjsman-10004 days ago
        In the future, we’ll decide HTML, CSS, and JS are too much of an inconsistent burden, so every website will bundle its own renderer into a <canvas> tag running off a WASM blob. Accessibility will be figured out later - just like it was for the early JavaScript frontends.

        I am looking forward to the HTML Frameworks explosion. You thought there were too many JS options? Imagine when anyone can fork HTML.

        • lynx974 days ago
          <canvas> is already a middle finger in the direction of accessibility. You don't need wasm to put blind people at an extra disadvantage. SVG Accessibility anyone? No? What a surprise. Classical web accessibility has basically ended. We (blind people) are only using sites which are sufficiently old to be still usable.
          • rikroots4 days ago
            I'm genuinely trying to do something about <canvas> element accessibility. Whether it's enough ...? Probably not. But if I can do the work to show that <canvas> elements can be made more accessible, then there's no excuse for developers working on far more popular JS canvas libraries not to make an attempt to better my efforts.

            I do strongly agree that <canvas> elements should not be used to replace HTML/CSS! My personal web hierarchy is 1. HTML/CSS/images; 2. Add (accessibility-friendly) JS if some fancy interaction is useful; 3. More complex - try SVG/CSS; 4. use <canvas> only if nothing else meets the project requirements.

            https://github.com/KaliedaRik/Scrawl-canvas

          • moi23883 days ago
            You are blind? Could you perhaps point me to a good resource to make my websites or apps more accessible, perhaps even to test it in these regards?

            I’ve found some resources but when I look at them I also hear stories of blind people saying these guidelines only make things worse.

            • lynx973 days ago
              Well, I am not a web dev... At least, my know-how ends when SPAs begin. All I can point you to are the WCAG, but I am sure you already know about them...

              Regarding the vague criticism you mention, I'd need something more concrete to tell you if the rumors are truish...

              • moi23883 days ago
                Ah my bad. Yes I was aware of the WCAG, but I also read some criticisms regarding them. I guess it’s still a good starting point then, thanks!
          • nicoburns4 days ago
            There has been some exploration around developing a JavaScript API for accessibility. If implemented, that would allow <canvas> renderers to be accessible. I hope people will consider that blocking for shipping canvas renderers, but we'll see.
          • mattl4 days ago
            Deaf person here working full time to try and make some consumer websites not terrible at the least.
            • lynx973 days ago
              Great, thanks. Keep up the good work, we need everybody motivated!
        • jsheard4 days ago
          Why stop there? LLMs will free us from the shackles of having to ship actual code, instead we'll ship natural language descriptions and JIT them at runtime. It may use orders of magnitude more resources and only work half of the time but imagine the Developer Velocity™
          • hnthrow903487654 days ago
            The LLM created code will then be consumed by my AI agent which will rewrite the application to filter all of the bullshit and be fit for my minimalist preferences like a Reader Mode for CRUD apps.
          • gjsman-10004 days ago
            In fact, with AI becoming more powerful, the <canvas> tag might soon become even more viable; because nobody will need ARIA tags or similar to tell them what’s on screen. The AI screen reader will look at the website as a whole and talk to the user. With accessibility no longer required, and with any UI being just a dumb framebuffer, we’ll finally see perfect chaos.
            • lynx974 days ago
              And blind people will be the first test subjects for the "we see everything you read" project. Sweet. A small enough group that has no way out. Besides, after the initial giveaways, imagine the revenue if you can charge for every single pageview.
          • Zekio4 days ago
            can't wait for all the imaginary features
            • tempodox4 days ago
              You can hallucinate them right now already. Just ask WebGPT.
        • peatmoss4 days ago
          The state of web deployment in 2025 is the universe punishing me for calling java applets and other java web deployment tech "heavyweight" back in the day.
          • fumar4 days ago
            What web dev areas are important to you?
        • asdajksah21234 days ago
          > every website will bundle their own renderers into a <canvas> tag running off a WASM blob

          Isn't that Flutter?

        • catapart4 days ago
          Not that I intend to scale this in any way, but I'm working on an in-game UI rendered on the canvas, and I am thinking I might be able to hack something together based on this youtuber's library and excellent explainer video[0]. The thought had definitely occurred to me that if someone wanted to really roll up their sleeves and maintain a js port of the library, it would provide a translate-able UI from native C to native JS and back. At least, I can imagine a vite/webpack-like cli that reads the C implementation and spits out a js implementation.

          Of course, I could also imagine one that reads the C and provides the equivalent html/css/js. And others might scoff "why not just compile the whole C app into wasm", which would certainly be plenty performant in a lot of cases. So I guess I don't know why it isn't already being done, and that usually means I don't know enough about the problems to have any clue what it would actually take to make such things.

          In any case, I'm also looking forward to a quantum leap in web app UI! I'm not quite as optimistic that it's ever going to happen, I guess, but I can see a lot of benefit, if it did.

          [0]https://www.youtube.com/watch?v=by9lQvpvMIc

        • fumar4 days ago
          I'm thinking about this space now. Ideally, I want a new browser-like platform with stricter security properties than browsers but better out-of-the-box rendering capabilities.
        • swiftcoder4 days ago
          You jest, but isn't this Web Components? Or alternately, Flutter
          • gjsman-10004 days ago
            Web Components was too verbose and nobody could figure it out. Flutter is just the beginning of the newest scheme by RAM manufacturers to bloat our RAM usage. We’ve stagnated at 8GB on midrange computers for too long.
            • prisenco4 days ago
              Web Components aren't that bad, but they could definitely use a DX makeover.

              For simple components, I much prefer them to firing up the React ecosystem.

        • threatofrain4 days ago
          Soon it'll be all 3D content anyway... the old world of a graph of documents is going away. The web breathed a sigh of relief when Apple's Vision Pro bombed.
      • cellularmitosis4 days ago
        Speaking of CRUD, would Apple’s on-chip memory have significant advantages for running Postgres vs a threadripper with a mobo full of ram?

        It seems like vertical scaling has fallen out of fashion lately, and I’m curious if this might be the new high-water mark for “everything in one giant DB”.

        • vaxman3 days ago
          Better get to the bottom of the mystery surrounding Apple's patents on LPDDR ECC, or you will have to make a leap of faith that your database on their chips won't wind up cruddy in a Bad Way. All we have now are assumptions and educated guesses about what they may be doing. It's also going to be an issue with the AMD 395+ and the Nvidia+MediaTek GB10 (but I would assume NO ECC on those SoCs, based on their history).

          It may only be a few mm to the LPDDR5 arrays inside the SoC, but there are all sorts of environmental/thermal/power and RFi considerations, especially on tiny (3-5nm) nodes! Switch on the numerical control machine on the other side of the wall from your office and hope your data doesn't change.

        • hot_gril4 days ago
          There are already big servers designed for huge single databases, for example the 8-socket Xeon types. Tbh I don't understand exactly why RAM is such a concern, but these machines have TBs of it.
          • throwaway20373 days ago
            Woah, 8x Xeon CPUs on a single motherboard. That is a new record for me.

            I found one here from Supermicro: https://www.supermicro.com/en/products/motherboard/X13OEI-CP...

            Has anyone seen one of these in action? What was the primary use case? Monolithic database server?

            • whizzter3 days ago
              I think a bigger business case is virtual machine hosting. Say one of these is maxed out (8 Xeons with 56 cores each, i.e. 448 cores, and 32 TB of memory) and divided into 1,000 VMs: each VM gets roughly 40% of a core and 32 GB of memory. Considering many VM offerings come with less RAM than that (and adding a bit of overselling on top with regard to CPU), it could probably house over 2,000 VMs.
              • hot_gril3 days ago
                You can do that more cheaply with separate machines. The use case for this mega one really is a monolithic DB or server.
        • ffsm84 days ago
          I'm not sure how this would impact the server market in any way, considering that EPYC has supported 4 TB for over 5 years now.

          Is it the usual Apple distortion effect where fanboys just can't help themselves?

          It's definitely a sizeable amount of RAM though, and definitely enough to run the majority of websites running on the web. But so would a budget Linux server costing maybe 100-200 bucks per month.

          • Moto74514 days ago
            The question is about embedded DRAM, not trying to put a Mac in the data center. I am unaware of an apples to apples comparison here, but on the same Intel and AMD platform there can be a performance increase associated with embedded high speed LPDDR5 vs something on an SODIMM, which is why CAMM is being developed for that space.

            I would be interested as well in what an on chip memory bank would do for an EPYC or similar system since exotic high performance systems are fun even if all I’ll ever touch at this point is commodity stuff on AWS and GCP.

            • ffsm84 days ago
              He edited his comment. The previous version did reference the 512 GB being so big that it'd be a game changer for servers.
              • jmb994 days ago
                Yeah, 512GB was a game changer for servers... with DDR3...

                And that wasn’t even where it topped out, there were servers supporting 6TB of DDR3 in a single machine. DDR4 had at least 12TB in a single (quad-CPU) machine that I know of (not sure if there were any 96*256GB DDR4 configs). These days, if money’s no object, there exist boards supporting 24TB of DDR5. I think even some quad-CPU DDR2-era SKUs could do 1.5TB. 512GB is nothing.

                (Not directly in response to you, just adding context.)

              • cellularmitosis3 days ago
                While I did make a couple of cosmetic edits within a few minutes of posting (before there were any replies), even the original was referring to the speed of the memory ("on-chip"), not its size.

                You misunderstood my post, and I don't appreciate the tone of your reply.

                • ffsm83 days ago
                  I didn't appreciate you removing everything I responded to either, replacing it with something making my comment look entirely out of context.

                  While I believe you that you meant to write about the different performance profile of on-chip memory, that's not what was there at the time I wrote my reply. What you actually wrote was how 512 GB of RAM might revolutionize e.g. database servers. Which I addressed.

                  And if you hadn't written that, I wouldn't have written my comment either, because I'm not a database developer who could speculate on that kind of performance side-grade (less memory, but closer to the CPU).

                  • cellularmitosis3 days ago
                    This is ridiculous, I changed like 3 words. While I did originally mention 512GB, the context (“on-chip”) made it clear I was referring to the speed, not the size.
      • Maken4 days ago
        Will it run Discord?
    • layer84 days ago
      They should make a “webdev” edition with like 4 GB.
    • kees994 days ago
      Chrome has to run on chromebooks, quite a few of which are still-supported models with 4GB of non-upgradeable RAM.
      • superjan4 days ago
        So that means it can run with 4GB. Is there a way to block it from using more?
        • NikkiA3 days ago
          If you have unused ram, why would you want an app not to use it?
          • saagarjhaa day ago
            Yes, but not one app to use all of it.
        • amelius3 days ago
          You could try to use cgroups to accomplish that.
        • lippihom3 days ago
          Now wouldn't that be the dream.
        • thesmok2 days ago
          Run it in a VM.
      • ant6n4 days ago
        These chromebooks won’t run chrome, they’ll meander it.
        • lenerdenator4 days ago
          I wouldn't even call it meandering.

          Know that scene from one episode of Aqua Teen Hunger Force where George Lowe (RIP) is a police officer and has his feet amputated, so he drags himself while pursuing a suspect?

          Yeah. It does that.

          • ewoodrich4 days ago
            Hmm, that hasn't been my experience. My Mediatek 4gb Chromebook is surprisingly snappy (and gets incredible battery life, better than my Macbook that cost 10x as much). Starts to slow down a bit if I go over a dozen tabs while having a video playing but otherwise, it's solid.

            I can even use VS Code remote on it in a pinch, though that's pushing it...

    • reustle4 days ago
      That’s almost the full deepseek r1!
      • seunosewa4 days ago
        Almost is a painful word in this case. Imagine if it could actually run R1. They'd make so much money. Falling short by a few dozen GB is such a shame.
    • amy_petrik3 days ago
      My first thought was, "what does it look like fully specced out? 512 GB of RAM cannot be cheap." Fully specced out it's ~$15k. I bet that'd be a fine $15k AI machine, but if I wanted a CPU AI rig, a cobbling-together of multi-core motherboards could get higher performance at a lower cost, and/or some array of used Nvidia cards. The good news is that 3 or 4 years from now, hardware specs such as this will be much cheaper, which is exciting.
    • singularity20014 days ago
      512GB is only available on the M3 Ultra
    • asah4 days ago
      $10k and up
    • Mistletoe4 days ago
      Who do you think buys these? :)
      • doublerabbit4 days ago
        Render farms. Animation studios.

        We had some hefty rigs at the last studio I worked at.

        • nicce4 days ago
          Are these really cost effective for that use case?
          • whywhywhywhy4 days ago
            Not really. Smaller scenes you’d use nvidia GPU, larger scenes you’d probably save money doing normal servers.
        • yjftsjthsd-h4 days ago
          You run a render farm made of macs?
          • doublerabbit4 days ago
            Did. Until I accidentally locked all the artists out of their work when I went on a lunch break. Screwed up a firewall config.

            The old xeon stations were power houses.

    • draw_down4 days ago
      I come here for the tech news, but also the assmad potshots you guys always take at JS. Never change, HN.
      • dylan6044 days ago
        If legitimate complaints about your faveLanguage hurt your feelings, then how do you survive code reviews?
        • eyelidlessness4 days ago
          Not OP, and not my favorite language, but I don’t see how “Apple ships large amount of RAM in expensive workstation” is a legitimate complaint about any language. It isn’t even in the same universe of topics. Completely off-topic JS (and Rust!) drama permeating every single discussion isn’t something that happens in code review. It’s very much an expression of the HN community and its culture. And it’s really tiresome, especially when there are both better complaints and better topical venues for these languages and more.
          • dylan6044 days ago
            Everyone knows that an Electron app consumes a lot of RAM. Take Slack, for example: running Slack in a browser tab uses less RAM than running the Electron app for Slack.

            The joke being that Apple realized so many apps are built in Electron and made a decision to provide a shit ton of RAM just to handle Electron. It seems very on point to the discussion.

            • eyelidlessness4 days ago
              A joke whose punchline can be and frequently is retrofitted to any setup… isn’t a particularly funny joke.
              • dylan6044 days ago
                A joke is not deemed funny by everyone that hears it. Those that do enjoy it.

                At this point, it's more satirical than haha funny. Electron is so bloated that it requires way more RAM than say native apps. To poke fun of its inefficiencies isn't going to win Last Comic Standing, but it is valid criticism even if attempted to be told in a humorous manner. Just because it's stuck in your craw doesn't mean the rest of us are in the same place as you, yet you are unwilling to accept that your view isn't the only view.

                • eyelidlessness4 days ago
                  It’s not stuck in my craw. It permeates damn near every discussion no matter how remote the connection. It detracts from actual discussion of the actual topic in the process.

                  I actually almost totally agree with the perspective the “joke” comes from! I just don’t see it as a topic that warrants so frequently disrupting otherwise interesting discussion.

        • draw_down4 days ago
          [dead]
    • Waterluvian4 days ago
      Something I’ve been surprised to find over the years working at software companies is just how many C++-writing, Linux-using senior engineers there are who simply do not understand how allocation works and what htop is actually telling them.

      I really think a sizable chunk of people in the “omg my RAM!” camp are basing it on vibes, backed up by a misread of reported usage.

      This reminds me of a long time ago when I was trying to figure out why the heck my Intel Mac was allocating all my RAM and most of my swap to Preview or Chess.

      • klik994 days ago
        I’m surprised how many people bring up “Erm, that memory is actually not being used” as if there aren’t plenty of knock-on effects from how memory pressure actually works. For example, if I keep Chrome open long enough, my builds slowly use fewer and fewer threads because the build thinks there’s less memory available, so I have to periodically close Chrome, reopen it, and restore the last session.

        It’s true reported memory allocation does not equal actual memory used and that’s very clever of everyone who brings it up, but it does actually cause real annoyances.

        • NoMoreNicksLeft4 days ago
          >so I have to periodically close chrome and reopen and restore last session.

          I thought ublock was forced out of Chrome months ago... how are you people still using it? I switched back to Firefox a couple years ago already, even if it's occasionally painful.

          • klik994 days ago
            Yeah, I actually use Firefox almost exclusively; I only use Chrome when websites mysteriously don't work in Firefox, and 99% of the time it's lack of Firefox support.

            But in my example I was thinking of a particular 2-month stretch where this kept biting me and I was using Chrome at that point. In terms of memory usage, Firefox is no better though (at one point it was, but not any more).

            Now I'm afraid of saying "memory usage" lest someone pops out to comment "that's not how memory works" like whack-a-mole.

          • recursive4 days ago
            Don't visit sites with ads.
            • hyperbrainer3 days ago
              You don't visit youtube?
              • recursive3 days ago
                I do, but I don't get ads because I have a subscription to that effect.
          • ewoodrich4 days ago
            uBlock Origin Lite
      • masfuerte4 days ago
        Maybe we're all idiots who look at some irrelevant numbers and declare the sky is falling. Or maybe we notice everything running really slowly because the computer is constantly paging.
      • cmrdporcupine4 days ago
        Top-ish utilities should just be preconfigured to only show RSS unless you absolutely need to know what's virt. A lot of griping would diminish.

        There are many specialized allocation patterns -- especially for larger system things like DBs, virtual machine / runtimes etc. -- that will mmap large regions and then only actually use part of it. Many angry fingers get pointed often without justification.

        • klik994 days ago
          I think the griping occurs when things actually slow down, or when the system's perceived available memory cripples resources. Maybe people point the finger at the wrong place due to misreading virtual memory, but I doubt they are getting angry when their system is running smoothly.

          And this attitude of "oh, memory usage problems are a misreading of top" promotes poor memory-management hygiene. There's a strong argument that it's all fine in server applications / controlled environments, but for desktop environments this attitude causes all sorts of knock-on effects.

  • geerlingguy4 days ago
    The buzz is all around AI and unified memory... but after editing 4K content on an M4 mini (versus my M1 Max Mac Studio), I've realized the few-generations-newer media processing in the M4 is a huge boost over the M1.

    Coupled with the CPU just having more oomph, I ordered an M4 Max with 64 GB of RAM for my video/photo editing; I may be able to export H.265 content at 4K/high settings with greater-than-realtime performance...

    I'm a little sad that the AI narratives have taken over all discussion of mid-tier workstation-ish builds now.

    • cosmic_cheese4 days ago
      It feels a bit like we entered the “consumer grade workstation” era a while back when AMD started selling 16-core CPUs that will happily socket into run of the mill consumer motherboards and that continued with the higher end M-series SoCs.

      It really is cool to see. It’s nice that that kind of horsepower isn’t limited to the likes of proper “big iron” like it once was, and can even reasonably be packaged into a laptop that is decent at being mobile and not an ungainly portable-on-a-technicality behemoth.

    • TylerE4 days ago
      The one thing that has me a bit bummed with this is that the Ultra, which I had planned to upgrade to, is only an M3 not an M4. Bit disappointing after waiting this long.
      • tromp4 days ago
        Not all that disappointing considering that most of the performance improvement in M4 seems to come from increased power consumption. In some applications, M4 performs worse per watt than M3.
        • Uehreka4 days ago
          Yeah but if you’re buying an Ultra you’re probably more concerned with raw performance than perf-per-watt. These aren’t exactly used in laptops.
          • TylerE4 days ago
            Exactly. And especially for my incredibly mixed use case, which includes some light gaming in crossover, the gpu improvements in m4 are apparently non-trivial.

            I’m sure whichever I end up with will be a pretty big upgrade over my basest of base model 32GB M1.

            The thing I’m most curious about on the new machines, especially the Ultra, is the thermals. I only care about perf per watt if it becomes unfavorable enough that the fan spins up above idle during normal tasks. On my M1 the only way to get it to audibly spin up is to get the machine to near total load and hold it there for some time.

        • raydev4 days ago
          > Not all that disappointing considering that most of the performance improvement in M4 seems to come from increased power consumption

          Disappointing for those of us who don't care about power consumption in a desktop.

    • perfmode4 days ago
      Is the M4's media processing superior to the M3? Would the M3 Ultra not perform as well on video editing?
      • schainks4 days ago
        The M3 Ultra is two M3 Max dies fused together in one package. In aggregate they should outperform an M4 Max by quite a bit.
        • perfmode4 days ago
          I meant single-core performance.
          • Synaesthesia4 days ago
            These chips actually have hardware video encode/decode separate from the CPU.
    • 2OEH8eoCRo04 days ago
      I'm surprised you use Macs since you usually lean toward more open HW and FOSS.
  • sdf4j4 days ago
    The first paragraph that talks about the OS itself is depressing:

    >macOS Sequoia completes the new Mac Studio experience with a host of exciting features, including iPhone Mirroring, which allows users to wirelessly interact with their iPhone, its apps, and notifications directly from their Mac.

    So that's their highlight for a pro workstation user.

    • Nevermark4 days ago
      Just be glad they didn't focus on movies, music, and cute apps. Macs seem to be the only product line that continues to semi-dodge the myopic media/services/social kiosk lens Apple now views all their other product lines through.

      If that sounds too negative, compare their current vision for their products with Steve Jobs old vision of "a bicycle for the mind". iOS-type devices are very useful, but unleashing new potential, enabling generational software innovation, just isn't their thing.

      (The Vision Pro is "just" another kiosk product for now, but it is hard to tell. The Mac support suggests they MIGHT get it. They should bifurcate:

      1. A "Vision" can be the lower cost iOS type device, cool apps and movies product. Virtual Mac screen.

      2. A future "Vision Pro" that is a complete Mac replacement, the new high-end Apple device, with a filled-out spatial user interface for real work, etc. No development sandbox, Mx Ultra, top-end resolution and field of view, raise the price, raise the price again, please. It could even do the reverse kind of support: power external screens that continue working like first-class virtual screens for when you need to share into the real world.

      The Vision Pro should become a maximum powered post-Mac device. Not another Mac satellite. Its user interface possibilities go far beyond what Mac/physical screens will ever do. The new nuclear powered bicycle for the mind. But I greatly fear they want to box it in, "iPad" everything, even the Mac someday.)

      • nullpoint4203 days ago
        I agree, except I wonder how they'll do this securely. Imagine if a VS Code plugin could spy on everything in front of me. Opens up a whole new level of security concerns.
    • rubslopes4 days ago
      It’s like they’re marketing a pro workstation as a glorified iPhone accessory.
    • rafram4 days ago
      They use a similar line on the MacBook Air page. If you're buying an (up to) $13,000 Mac, hopefully you already understand macOS and its features, I guess.
  • silvestrov4 days ago
    The SSD prices are insane.

    $400 to go from 1TB to 2TB.

    $307/TB to go from 1TB to 16TB.

    That is 3 times the Amazon prices: https://diskprices.com/?locale=us&condition=new&capacity=4-&...

    • rsynnott4 days ago
      Given that it's a desktop, most people should just get it with the default size and get an external thunderbolt NVMe disk. Only if you need >Thunderbolt 5 speeds (ie 80 Gbit/sec) do you really need the internal drive, and most NVMe is slower than this in any case.
      • staplung4 days ago
        I did this recently with a new Mac Mini that I set up. macOS recently added the ability to locate home directories on any volume. There's a somewhat hidden feature too: if you drag the Applications directory onto an external drive, it will move selected apps there (the larger ones like Pages, etc.). Combine that with the option in the App Store to keep large downloads on a separate disk.

        So far it's been working quite well with the exception that VSCode does not seem to understand how to update itself if you keep it in the external Applications folder: every time it tries to update itself it just deletes itself instead. Moved it back into the /Applications folder and it's been fine.

        • zuhsetaqi4 days ago
          Instead of dragging them over you should create a link. This way it's the same as before for Applications like VS Code.
          • rafram4 days ago
            Last I tried this, Spotlight didn't play well with symlinked application bundles.
        • wpm4 days ago
          > MacOS recently added the ability to locate the home directories on any volume

          Mac OS has always been able to do this.

        • rsynnott4 days ago
          You can just use it as a boot drive, IIRC.
      • WWLink4 days ago
        Or if you don't want ugly ass external boxes cluttering up your desk.

          I don't get why they couldn't be arsed to stuff a few M.2 slots in there. They could keep the main NAND as their weird soldered-on BS with the firmware stuffed in a special partition if they want. Just give us more room!

        • rsynnott4 days ago
          https://en.wikipedia.org/wiki/Mac_Pro#Apple_silicon_(2023), kinda. The most ultra-niche of Apple's products.
          • hot_gril4 days ago
            I remember this but never looked into it enough to get what the point is. They still sell it with an M2 chip.

            "includes six internal PCIe 4.0 slots for expansion. It does not support discrete GPUs over PCIe." uhhh, so in case people want an AS chip with most stuff soldered on but also really need certain PCIe cards that aren't GPUs?

            • wpm3 days ago
              Audio interfaces, video capture interfaces, network interfaces, etc etc etc.
              • hot_gril3 days ago
                Would a TB-PCIe enclosure like the Sonnet ones not cut it? I get that it's still easier to have PCIe built in, but that's a big premium to pay.
                • wpm6 hours ago
                  Those cost hundreds of dollars and often only have 1 slot. The premium can’t be avoided.
        • outime4 days ago
          You seriously don't get why?
        • bruhWithDeth3 days ago
          [dead]
      • ravetcofx4 days ago
        I don't know about Thunderbolt, but the Apple Silicon Macs I help my clients with have something really wrong and screwed up with how macOS or the firmware deals with USB 3.1+ external drives: constant disconnects despite the hard-drive-sleep setting being turned off, etc. Searching on forums turns up others having similar issues.
        • Citizen83963 days ago
          What brand and model of drive? This sounds similar to a hardware defect in some SanDisk Extreme SSDs; IIRC it was caused by firmware and/or overheating.
      • moralestapia4 days ago
        This is also quite convenient when you buy a new laptop and just unplug/plug and that's it, you have everything.
    • ohgr4 days ago
      Yeah they really need to get that under control. It's a complete rip off at this point.

      I don't mind them charging, say, a $50 "Apple premium" for the fact that it's a proprietary board and needs firmware loaded onto the flash, but the multiplicative pricing is bullshit price gouging and nothing more.

      • LeafItAlone4 days ago
        Get what under control? People (me included) still pay it.

        And most (me included) would still end up buying the device anyways, maybe just with less storage than they want. And then need to upgrade earlier.

        From Apple’s perspective, they seem to have figured it out.

        And maybe the upgraded configurations somewhat subsidize the lower end configurations?

        • xtracto4 days ago
            Exactly!! The prices are a result of extensive market research. Apple prices these things at a price they know people will pay.

          It's the beauty of having a product with no real competition in the market.

            (BTW, I use Linux as my home and work OS. But I'm a super geek and a 20+ year full-stack dev... not their target market, as I can handle the quirks and thousand papercuts of Linux.)

        • ohgr3 days ago
          I don't. I've got a 256 gig M4 mini with a 2TB disk hanging off it.
          • LeafItAlone2 days ago
            >I've got a 256 gig M4 mini

            So you agree with me.

    • rootbear4 days ago
      Years ago, someone on Usenet explained that Apple upgrade prices are so high because they use components made from the powdered bones of Unicorns and I truly believe that is the truth.
    • Lammy3 days ago
      They've obviously done the math on what percentage of Mac buyers will subscribe to what tier of iCloud storage, times how long people tend to keep each computer, then priced the local storage options above that: https://support.apple.com/en-us/108047
    • protocolture4 days ago
    I remember being a PC enthusiast in high school, spending my lunch hours pricing up Macs and comparing them to market PC component prices, to laugh at the cost of add-ons. Seems like nothing has changed.
    • naikrovek4 days ago
    The Studio doesn't use standard NVMe drives, but it does put its storage on a removable card. The Mac Mini does as well. So you don't have to pay Apple for the storage you want. There are places that sell storage upgrades for the Mini and the M1 Studio, and they are, of course, cheaper than what Apple charges for the upgrade when you buy the machine. dosdude1 on YouTube has some videos of this exact upgrade, and a bit of googling will help you find vendors. I am assuming that this M3 and M4 Studio will be the same, but that's not a guarantee.
    • canucker20164 days ago
      One can upgrade the SSD storage for a M1/M2 Mac Studio through a third party for a lot less money than what Apple requires at purchase time.

      I'd expect an upgrade route for the new Mac Studio will appear.

      Here's one YouTube video showing an upgrade to 8TB of SSD storage. see https://www.youtube.com/watch?v=HDFCurB3-0Q

    • metadat4 days ago
      Are the SSDs soldered in place for the desktop machines? Criminal.
  • FloatArtifact4 days ago
    They didn't increase the memory bandwidth: you get the same memory bandwidth that was already available on the M2 Studio. Yes, yes, of course you can get 512 gigabytes of uRAM for 10 grand.

    The question is whether an LLM will run with usable performance at that scale. The point is that there are diminishing returns: even with enough uRAM and the increased processing speed of the new M3 chip, AI workloads are still limited by the same memory bandwidth.

    • espadrine4 days ago
      > whether an LLM will run with usable performance at that scale

      Yes.

      The reason: MoE. They are able to run at a good speed because they don't load all of the weights into the GPU cores.

      For instance, DeepSeek R1 uses 404 GB in Q4 quantization[0], containing 256 experts of which 8 are routed to[1] (very roughly 13 GB per forward pass). With a memory bandwidth of 800 GB/s[2], the Mac Studio will be able to output 800/13 = 62 tokens per second.

      [0]: https://ollama.com/library/deepseek-r1

      [1]: https://arxiv.org/pdf/2412.19437

      [2]: https://www.apple.com/newsroom/2025/03/apple-unveils-new-mac...
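
      A back-of-envelope sketch of that estimate (the size, routing, and ~800 GB/s figures are the rough assumptions above, not measurements, and it ignores shared weights, attention, and KV-cache traffic):

          #include <cstdio>

          // Crude upper bound for decode speed of a bandwidth-bound MoE model:
          // per token you only need to stream the weights of the routed experts.
          int main() {
              const double model_gb       = 404.0;  // DeepSeek R1 at Q4 [0]
              const double experts_total  = 256.0;  // routed experts in the model [1]
              const double experts_active = 8.0;    // experts selected per token [1]
              const double bandwidth_gbs  = 800.0;  // approx. M3 Ultra memory bandwidth [2]

              const double gb_per_token = model_gb * experts_active / experts_total;  // ~12.6 GB
              std::printf("~%.1f GB/token -> ~%.0f tokens/s upper bound\n",
                          gb_per_token, bandwidth_gbs / gb_per_token);                // ~63 tokens/s
          }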

      • _aavaa_4 days ago
        This doesn’t sound correct.

        You don’t know which expert you’ll need for each layer, so you either keep them all loaded in memory or stream them from disk

        • espadrine4 days ago
          In RAM, yes. But if you compute an activation, you need to load the weights from RAM to the GPU core.
          • _aavaa_4 days ago
            Got you, yeah I misread your comment the first time around.
        • kgwgk4 days ago
          Note that 404 < 512
      • fullstackchris4 days ago
        You seem like you know what you are talking about... mind if I ask what your thoughts on quantization are? It's unclear to me whether quantization affects quality... I feel like I've heard both yes and no arguments.
    • jazzyjackson4 days ago
      I returned an M2 Max Studio with 96GB RAM; unquantized Llama 3.1 70B was dog slow, not an interactive pace. I'm interested in offline LLMs but couldn't see how it was going to produce $3,000 of ROI.
      • FloatArtifact4 days ago
        It would be really cool if there was an "are we there yet" website for reasonable offline AI.

        It could track different hardware configurations and reasonably standardized benchmark performance per model. I know there are benchmarks buried in the llama.cpp GitHub repository.

        • robbomacrae4 days ago
          There seems to be a LOT of interest in such a site in the comments here. There seem to be multiple IP issues with sharing your code repo with an online service so I feel a lot of folks are waiting for the hardware to make this possible.

          We need a SWE-bench for open-source LLMs, and for each model to have 3DMark-like benchmarks on various hardware setups.

          I did find this which seems very helpful but is missing the latest models and hardware options. https://kamilstanuch.github.io/LLM-token-generation-simulato...

          • FloatArtifact3 days ago
            Looks like he bases the benchmarks off of https://github.com/ggml-org/llama.cpp/discussions/4167

            I get why he calls it a simulator, as it can simulate token output. That's important for evaluating a use case if you need a sense of what a given rate of token output actually feels like, beyond a simple tokens-per-second figure.

    • slama4 days ago
      The M3 Ultra is the only configuration that supports 512GB and it has memory bandwidth of 819GB/s.
    • wkat42424 days ago
      True, I also noticed that bigger models run slower at the same memory bandwidth (makes sense).
    • memhole4 days ago
      Yeah, I don’t think RAM is the bottleneck. Which is unfortunate. It feels like a missed opportunity for them. I think Apple partly became popular because it enabled creatives and developers.
      • throw-qqqqq4 days ago
        > I don’t think RAM is the bottleneck

        Not the size/amount, but the memory bandwidth usually is.

    • Ecko1234 days ago
      [dead]
  • blobbers4 days ago
    The previous ranking article said the M3 Ultra was the most powerful chip ever.

    The Mac ecosystem is starting to feel like the PC world. Just give me 3 options: cheap, good, and expensive. Having to decide how many dedicated graphics cores a teenager's laptop needs is impossible.

    • bee_rider4 days ago
      For chips, Ultra and Max are like their workstation chips or something, right? It seems expected that they should be a little more differentiated, they are specialist, aren’t they?
      • erickhill4 days ago
        The way I think about it is if I buy a Max chip I'm getting the performance of the generation that will be released a year later now in the current form factor, and then some.

        For example, I got the M1 Max when it was new. A year later the M2 came out. Spec-wise, the M1 Max was still a bit better than the M2 Pro in many regards. To me, getting a Max buys you some future proofing if you or your company can afford it (and you need that kind of performance). I use the Max with a lot of video work, and it's been fantastic.

    • fckgw4 days ago
      They have that. On laptops they have the M4, M4 Pro and M4 Max. Cheap, good and expensive.
    • eyelidlessness4 days ago
      My buying strategy has been the same since they started soldering RAM: buy the lowest-spec CPU/GPU they offer with the amount of RAM I will need (which, all but once, has been the maximum RAM they offer; unfortunately that usually also means buying the max CPU/GPU).
    • zitterbewegung4 days ago
      If you are in college or school, a MacBook Air would be best, and the size of the screen (13 vs. 15 inch) is going to have a bigger impact than the number of dedicated graphics cores. I would advise against getting a MacBook Pro.
    • hot_gril4 days ago
      For the teen's laptop, you can simply get the base model. Even a base M1 is more than fast enough.
  • hart_russell4 days ago
    $14,000 fully configured by the way
  • shrx4 days ago
    Why don't they provide performance comparisons between the two chips offered, M3 Ultra and M4 Max?
    • relium4 days ago
      It'll likely be very workload dependent. The M4 Max will probably do a little better in single threaded tasks like browser benchmarks and the M3 Ultra will do better in things like video transcoding and 3D rendering.
      • shrx4 days ago
        Yes but I'd still like to know what tradeoffs I am making when deciding to get one or the other option. Right now it's all hand-wavy.
  • WorldWideWebb4 days ago
    So they wouldn’t put the power button on the back of the latest Mini, but they did on the Studio? That’s frustrating (yes, minor nit).
    • zitterbewegung4 days ago
      This was always part of the original design of the Mac Studio so they have never changed the design. This is a spec bump.
    • wpm3 days ago
      I have an old style M1 Mac Mini on my desk and I could probably count on one hand the number of times I had to hit the power button, and Apple knows this, so they decided it wasn’t worth the machining cost to drill a hole in the back of the top shell and engineer a power button to the tolerances you’d expect.

      Imagine, my Apple TV doesn’t even have a power button! My MacBook yells at me if I accidentally press it when doing Touch ID!

      • pourred3 days ago
        I have to hit that power button multiple times a day, because the Mac mini just won't wake up from the USB keyboard/mouse...

        Worst of all, it always worked fine on my previous Hackintosh!

    • jonnrb4 days ago
      Power buttons are for power users. lol
      • bigtex4 days ago
        You are holding it wrong - Steve Jobs
  • cyberlimerence4 days ago
    What model can you run realistically with 512 GB of unified memory ? I'm curious who is even the market for such an offering.
    • wkat42424 days ago
      DeepSeek R1 for one, quantised but not too cripplingly.
      • numpad04 days ago
          The full R1 takes >512GB and the 1.58-bit quant takes >128GB. So enough for agent + app to realize a fully autonomous monolithic AGI humanoid head, potentially, but then it'll be compute limited...
        • wkat42424 days ago
            Yeah, I was thinking more about q6_0 or so. The q4_K_M is 404GB, so you can still push it a bit higher than that. Obviously the 1.58-bit doesn't make sense.

          I'm never going to pay 10k for that though. Hopefully cheaper hardware options are coming soon.

    • saganus4 days ago
      I assume they are getting ready for the next year or two of LLM development.

      Maybe there's not much of a market right now, but who knows if DeepSeek R3 or whatever will need something like this.

      It would be awesome to be able to have a high-performance, local-only coding assistant, for example (or any other LLM application, for that matter).

    • mlboss4 days ago
      The future is local AI agents running on your desktop 24x7. This and NVIDIA Digits will be the hardware to do that.
  • adamredwoods4 days ago
    >> Mac Studio with M3 Ultra starts with 96GB of unified memory

    I still see laptops selling with 8GB of memory, and IMO we should be well past this by now, with 32GB as the minimum. My work laptop still only has 16GB.

  • fluidcruft4 days ago
    At this point, having the power button not be on the bottom is a major selling point for me vs the annoying-as-hell mini.
    • electriclove4 days ago
      I’ve had the new Mini for a few months and can’t recall having to use the power button.

      How often are you using the power button on your Mini? What is your use case?

      • fluidcruft4 days ago
        It's a shared computer in a hospital used for research data management. Basically, every time I walk up to it to use it, it's turned off.

        Maybe Apple should remove power off from the UI menus if they're claiming it uses less energy to leave it on.

        (I'm dubious of that claim people are repeating here, but what the hell do I know I'm just a physicist. Reality distortion isn't my thing.)

        • marci4 days ago
          If you have a laptop, do you turn it off or just close the lid?

          The Mini is probably less power hungry than the MacBooks (fewer components). I have some Wi-Fi 5/ac routers that consume more power at idle (nothing connected to them) than Apple laptops.

        • 11235813214 days ago
          Get a label maker and print "LEAVE ON" on the monitor.
      • dewey4 days ago
        > How often are you using the power button on your Mini? What is your use case?

        Every single day, not by choice but because it's constantly waking up from sleep to do maintenance tasks, then overheating and shutting down again. Something about macOS and Bluetooth devices not playing nice.

      • mort964 days ago
        How do you turn it on?

        If you never turn off your computer, it makes sense that you never use the power button. But some people do turn their computers off, and for us, it's really useful to be able to turn them on again.

        • dylan6044 days ago
          I'm still on a wired USB full sized keyboard from at least a decade ago, but didn't the newer keyboards see the return of the power button? Did I dream that?
          • mort964 days ago
            I did some quick googling before answering, and from what I could find, people are generally saying that you can't power on the Mac Mini in other ways than by pushing its power button.

            Even if you can power it on using a wired keyboard though, I'm certain that you can imagine people who prefer wireless keyboards but also turn their computer off.

            • dylan6044 days ago
              I could have worded that more clearly. I wasn't disputing power button on the bottom is odd as much as thinking that Apple brought the keyboard power button back. Maybe it was the TouchID on the keyboard, but on their laptops that is also the power button, so possibly just an assumption on my part.
              • mort964 days ago
                Oh, no I think you were clear enough, it's probably me that wasn't clear. I tried to find evidence that people were able to power on their Mac Mini in ways other than via the power button on the machine, such as a button on a keyboard. I couldn't find that, everyone just said that the physical power button on the machine is the only way.
        • TylerE4 days ago
          Why? Sleep/suspend on macs is incredibly good, and power usage rounds to zero.
          • mort964 days ago
            Because I like to turn off my desktop at night. I like to come back to a fresh start. I sometimes reboot just to get a clean slate.
            • JumpCrisscross4 days ago
              > Because I like to turn off my desktop at night

              Put simply, more people like the aesthetic of no visible power button than like the aesthetic of daily rebooting their computer.

              If I were you, and I really couldn’t let go of that, I’d put the Mac in sleep and have it scripted to restart at e.g. 6AM each day. You get the best of both worlds. Feel like you have a “fresh” Mac every morning. Let it do its updates and whatnot behind the scenes.

              • alpaca1284 days ago
                The power button was already invisible when it was on the back.
        • 7e4 days ago
          An idle Mac doesn't use much power. Why are you turning it off?
          • randcraw4 days ago
            I like my computer to be secure when I'm not using it. Powered down is secure.
          • mort964 days ago
            I like to turn my desktop off at night. I don't need a better reason than that.
    • yborg4 days ago
      I think another tier needs to be added to the Maslow pyramid for this particular class of complaint. I have had to reboot the M4 Mini on my desk a number of times now and it takes less than 3 seconds to lift the corner an inch and depress the switch.
    • zie4 days ago
      My thinking is, why would you ever turn it off? They go to sleep and wake up great and barely even sip power when on, let alone when asleep.
      • fluidcruft4 days ago
        It's a lab computer. You can tell people not to shut it off, but it's still always turned off when I try to use it. Could be being shut down via ITs management tools/policies for all I know.
  • slt20214 days ago
    It mentions AI, but Macs don't support CUDA.

    What are people using for LLMs on Macs? Is it GGML?

    • bri3d4 days ago
      GGML has a Metal (Apple's GPU interface layer) backend, yes, using MPS (Metal Performance Shaders), which are pre-baked shaders provided by Apple in a way similar to cuDNN. This is probably the most popular method for large-scale inference with modern bleeding-edge models.

      There's also Apple CoreML, which is sort of like ONNX in that it provides a limited set of primitives but if you can compile your model into its format, it does good low-power edge inference using custom hardware (Neural Engine).

      Apple also provide PyTorch with MPS, as well as a bunch of research libraries for training / development (axlearn, which is built on JAX/XLA, for example).

      They also have a custom framework, Accelerate, which provides the usual linear algebra primitives using a custom matrix ISA (AMX), and on top of that, MLX, which is like fancy accelerated numpy with both Metal and AMX backends (and slower CPU backends for NEON and AVX).

      Overall, there's a lot you can do with AI on Apple Silicon. Apple are clearly investing heavily in the space and the tools are pretty good.

    • sampton4 days ago
      GGML and PyTorch support MPS. Aside from the bleeding edge, most ML workloads can run on MPS these days.
  • parsimo20104 days ago
    I found this part about PCIe expansions interesting: "For those who rely on PCIe expansion cards for their workflows, Thunderbolt 5 allows users to connect an external expansion chassis with higher bandwidth and lower latency."

    (Maybe this is a feature that Apple has supported for a while, but I am unaware) Does this mean they will be officially supporting all PCIe devices like GPUs? Or do they only mean certain PCIe components like SSD expansions and network interfaces?

    • shellac4 days ago
      This has been a thing for quite a while. Thunderbolt is, in part, PCIe over serial (now USB C). There have been GPUs in external boxes, so it is undoubtedly possible, but I don't think they have many users.

      Edit: BlackMagic was what I was thinking of https://support.apple.com/en-gb/102103. 'Requires intel processor'

  • xnx4 days ago
    Since Apple has always positioned itself as a tool for creatives, is it likely that the Mac Studio may be a good tool for AI video generation using open-weight models like Alibaba Wan?
  • snovymgodym4 days ago
    I wonder what kind of sales volumes Apple sees with the desktop Mac variants compared to MacBooks. I know the Pro and Mini probably see decent sales numbers as servers, but I wonder just how many people are still buying iMacs or these Mac Studios.

    For me, the main value proposition from Macs are in their laptop offerings.

  • bananapub4 days ago
    In case people are unaware, this is more exciting than other random computer updates, since the Mac Studio is probably the best system in the world for running LLMs: it can come with >> 100GB of RAM, all of it accessible to the graphics/neural accelerator at high speed.

    this new one comes with up to 512GB of unified RAM!

    • bryanlarsen4 days ago
      Or you could wait a couple months and get a Strix Halo based desktop from HP, Framework, Asus or GMK for > 100GB of unified memory.
      • noelwelsh4 days ago
        Doesn't Strix Halo top out at 192GB RAM? The new M3 Ultra seems a lot more powerful on paper.
        • Scramblejams4 days ago
          I believe it tops out at 128 GB, of which 96 GB can be used as GPU VRAM. Hoping AMD will open the memory floodgates on the next rev.
          • data-ottawa4 days ago
            110 GB with Linux, but 256 GB/s memory bandwidth.

            The M3 Ultra seems strictly better but is also significantly more expensive.

            • Scramblejams3 days ago
              110! Good to know, thanks for the info.
        • kccqzy4 days ago
          I'm not aware that Strix Halo has shipped on desktop. I thought it was only shipped on laptops and tablets? And Framework announced a desktop but did not ship it yet.
          • bryanlarsen4 days ago
            It hasn't shipped anywhere yet. You can find previews of Strix Halo on laptops and tablets, but you can't buy them yet.
    • jazzyjackson4 days ago
      Have you used a 100GB model on a Mac Studio? Tokens per second is single digit; I didn't find it usable at all, and found myself going back to cloud APIs, where $3,000 goes a much longer way.

      I'm looking forward to trying Nvidia's little set-top box if it actually ships; it should have higher memory bandwidth. But still, I'll probably set up a system where I email a query with attachments and just let DeepSeek email me back once it's finished with reasoning at 10 T/s.

      • clonky4 days ago
        It might blow your mind that you can run a quantized DeepSeek-R1 (671B) at over 15 t/s on an M2 Ultra 192GB and still get around 9,000 tokens of context.
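
        For context, the arithmetic (mine, not a benchmark) shows why that fits at all: squeezing 671B parameters into 192GB implies a very aggressive average quantization, and since R1 is a mixture-of-experts model, only a fraction of those weights are read per generated token, which is how the decode rate stays that high.

            # What average bit width does a 671B-parameter model need to fit in 192GB?
            params = 671e9
            memory_bytes = 192e9
            bits_per_weight = memory_bytes * 8 / params
            print(f"~{bits_per_weight:.2f} bits/weight")  # ~2.29 -> roughly a 2-bit dynamic quant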
    • angoragoats4 days ago
      I disagree with the “best system in the world for running LLMs” claim. The Mac platform offers high memory bandwidth, but especially with large models you start to quickly run into the fact that the CPU/GPU themselves are very slow compared to discrete GPUs. For models up to 70b parameters, I’d much rather use any PC with a couple of 3090/4090/5090s, and for anything bigger I’d rather 1) use cloud services and pay by the hour, or 2) run a larger Epyc/Xeon system with more GPUs if my use case absolutely requires local/offline support.
    • blobbers4 days ago
      Better than cloud API calls?
    • short_sells_poo4 days ago
      It has terrible compute performance...
  • cxie4 days ago
    The M3 Ultra with 512GB of unified memory is a monster for AI development, while the new M4 MacBook Air makes AI features accessible to the mainstream. Apple's approach of building their own silicon is paying massive dividends: they can optimize the entire stack, from hardware to software, for specific workloads like AI inference.
    • ssijak4 days ago
      Is this AI written? :)
    • mort964 days ago
      If only it were built for workloads with value instead. I'm sick and tired of being unable to buy hardware that doesn't claim to have been made specifically for bullshit.
      • naikrovek4 days ago
        AI isn't everything that it was hyped to be when Copilot first came out, but AI does indeed have valid uses. It's like any other tool in a toolkit.
        • hot_gril4 days ago
          What is your workload that this machine helps with?
        • mort964 days ago
          I disagree.
  • flenserboy4 days ago
    This seems like a much better deal than a blinged-out Mini. Interesting choices this generation.
  • jimnotgym4 days ago
    Isn't it time Apple moved forward with its design aesthetic? Apple devices always used to look cutting edge. Now they just look samey
    • bfrog4 days ago
      They’ve reached peak design perhaps. The milled alloy bodies do have negatives but not many.
      • jimnotgym4 days ago
        That's an interesting take. A design outside of fashion. I can't see that being true myself. I want to see something as bold as an iMac again.
  • speedylight4 days ago
    If only it didn’t cost as much as a used car to get the max RAM capacity, I’d get one. It’s so ridiculously small for how much power it has.
  • fadedsignal4 days ago
    Why do they always compare their new-gen CPUs with M1??
    • dylan6044 days ago
      Are you a lawyer? Why do you only ask questions to which you already know the answer?

      Snark aside, in case you're seriously asking: it's a PR thing. Generation to generation might not show enough difference in direct comparisons to make the crowd ooh and ahh. The M1 chip was the first Apple Silicon chip, so going back to their first one as the basis of comparison provides more oohs and ahhs. The charts look pretty this way too.

    • altairprime4 days ago
      For people who upgrade annually, comparisons don’t matter.

      For people who are making a forced purchase, comparisons don’t matter.

      For people who are content with what they’ve got, comparing against the oldest generation still in wide use offers a clear statement of improvement and helps long-term users calibrate when to make their next purchase.

    • Insanity4 days ago
      Guess they have usage statistics to know a lot of their users are still on M1.

      My partner and I are both still on M1 (our personal machines) and don’t really see the need to upgrade.

    • 4 days ago
      undefined
    • TylerE4 days ago
      Well, in this case it's totally relevant as the extant studios are all M1. They never got M2 or M3.
  • makz4 days ago
    What LLMs are people running on this kind of setups?
  • singularity20014 days ago
    256GB and 512GB are only available with the M3 Ultra, not the M4. WTH
  • OnionBlender4 days ago
    Are there any good sites or channels that do performance comparisons of new Mac hardware?
  • ein0p4 days ago
    So basically the entire stash of M4 Ultra got used up in Apple Private Cloud, I guess.
  • 4 days ago
    undefined
  • deadbabe4 days ago
    So DeepSeek R1 running locally at like 4 tokens per second? Okay
  • speckx3 days ago
    I wish this had 8K resolution at 120Hz.
    • niek_pas3 days ago
      Wow, I had no idea they even made 8k 120hz screens, that’s wild.
  • 486sx334 days ago
    So M3 Ultra > M4 Max?
  • erickhill4 days ago
    People with the 2023 lattice Mac Pro be like... can I upgrade mine?
  • vr464 days ago
    Apple's CPU nomenclature is getting more deliberately confusing than BMW's.
    • el_benhameen4 days ago
      Open AI’s naming conventions are still the best-in-worst-in-class, though.
      • vr464 days ago
        When you say "conventions", do you actually mean, "insanity"?
  • petesergeant4 days ago
    > featuring M4 Max and new M3 Ultra

    I hate that this naming shit has gotten so bad

    • caconym_4 days ago
      Has it? These names are basically (architecture version, amount of compute) tuples. Much better than what AMD, Intel, Nvidia, etc. are currently doing.
      • InitialLastName4 days ago
        Which is a bigger amount of compute, "max" or "ultra"?
        • caconym_4 days ago
          This question is trivially answered by visiting Apple's website, which is---to my point---not generally true for their competition. If you have some further point to make, I recommend stating it more clearly so we can avoid wasting time here.
          • alexjplant4 days ago
            Why is the "Ultra" more powerful than the "Max"? I would expect "Max" to mean "maximum" in this context but it seems to mean "directly subordinate to that which is the maximum". This is pretty obviously a point of confusion. Just because other CPU manufacturers do goofy stuff with naming doesn't mean that Apple is exempt from criticism for doing something so obviously bereft of common sense.
            • caconym_4 days ago
              Well then, it's a good thing I didn't say anywhere that they should be exempt from criticism. Incidentally, it's not something I believe. But I do think they are still doing a much better job at naming than other CPU manufacturers, despite the obnoxious Pro/Max/Ultra stuff.

              I suppose I commented here because I think people are letting their subjective distaste for those terms sway their opinion of a superior naming scheme.

            • brianmurphy4 days ago
              I would expect the M4, being a newer generation of the chip, to be faster than an M3, but apparently that's not what Apple did.
              • wpm4 days ago
                Why would you expect that? Also, the M4 is faster than the M3, but that doesn't mean Apple couldn't or didn't want to fuse two of them together for an M4 Ultra.

                Is an Intel 10700K faster than a 12400F? The generations are different but the chips have vastly different capabilities and features.

                M4 is the generation. The modifier modifies the generation. M4 Pro is an M4 with some extra pizzazz. M4 Max is an M4 with lots of extra pizzazz.

                • hot_gril4 days ago
                  The M3 vs M4 thing makes sense, but pro vs ultra vs max is ???
            • ulbu4 days ago
              ultra means beyond. so it can make sense if you choose to. but the choice does suck, imo.
          • hot_gril4 days ago
            Same with Intel and others, but it's still annoying.
          • kllrnohj4 days ago
            > This question is trivially answered by visiting Apple's website, which is---to my point---not generally true for their competition.

            wtf are you talking about? Intel, Nvidia, and AMD all absolutely have complete specs for their products readily available on their respective websites. Much, much more complete ones than Apple does as well.

            • caconym_4 days ago
              I guarantee you that a quick scan of any of those companies' websites will not equip one with a useful general understanding of the naming scheme they use for their products, in the sense that one can see a product name and immediately know where it falls in their lineup and what workloads it's meant to handle. That is what "the fuck" I am talking about.
              • hot_gril4 days ago
                • caconym_4 days ago
                  This page really just proves my point. I'm glad their conventions at least can be explained, but it's all quite complicated!
                  • hot_gril4 days ago
                    Intel desktop options are quite simple. My only complaint is on laptops, where the i3, i5, i7 thing conflicts with U vs H, and almost seems intentionally misleading. Like, why does i7-U even exist?

                    But the nice thing is you search the model name, and Intel gives you all the specs upfront.

                  • wtallis3 days ago
                    I like how the table of suffixes hasn't been updated to add "V" but the section on Core Ultra uses 288V as an example. The document's too big to stay in sync with itself.
        • perfmode4 days ago
          Ultra. Ultra consists of two Max chips operating as one.
        • seanmcdirmid4 days ago
          An Ultra is two Maxes. A Max is two Pros.
      • drdaeman4 days ago
        The bit I found confusing is that I don't immediately understand which is more performant in various scenarios, the M4 Max or the M3 Ultra. The former has a newer architecture but less compute; the latter is the previous architecture but with more compute.
        • hot_gril4 days ago
          Different tiers within the same generation are much easier to compare than across gens. Apple could benchmark them all and name them accordingly, but even that's misleading because it can be workload-dependent.
        • caconym_4 days ago
          Fair, but it's not like other CPU/GPU manufacturer naming schemes give you that either. At least Apple's scheme clearly tells you e.g. "previous arch but more compute".
    • dialup_sounds4 days ago
      If you think that's bad, have you looked at Intel and AMD's naming scheme lately?
      • Insanity4 days ago
        Or.. USB naming schemes..
      • kps4 days ago
        Consider, if you will, ARM7 and ARMv7.
    • blobbers4 days ago
      Agree. This ship is adrift.

      Remember when it was just a MacBook, an Air, or a Pro, and it had a year?

      • whynotminot4 days ago
        It’s really not that complicated guys.

        It’s definitely a little odd to have M3 Ultra > M4 Max, but I feel like anyone complaining about this must have never bought any other manufacturers’ wares in their lives. Obtuse complication is kind of the norm in this industry.

      • rsynnott4 days ago
        Oh, but things were far worse back then, in terms of knowing what you were getting. For instance, let's say you bought a 13" MacBook Pro in 2016. Do you have a dual core or quad core processor? Depends on whether you have a touchbar!

        (For reasons best known to themselves, Apple made two completely different 13" MBPs that year, both new, with the loathed butterfly keyboard, weighing a different amount, with different processors, and the same name.)

      • pram4 days ago
        That's literally not even the same thing you're comparing. There were more processor and graphics options back then. Want an i5, i7, or i9? And what about an RX 580, 5300, 5500, 5700, or 5700XT?