In that thread, the topic of macOS performance came up. Basically, Anukari works great for most people on Apple silicon, including base-model M1 hardware. I've done all my testing on a base M1 and it works wonderfully. The hardware is incredible.
But to make it work, I had to implement an unholy abomination of a workaround to get macOS to increase the GPU clock rate for the audio processing to be fast enough. The normal heuristics that macOS uses for the GPU performance state don't understand the weird Anukari workload.
Anyway, I finally had time to write down the full situation, in terrible detail, so that I could ask for help getting in touch with the right person at Apple, probably someone who works on the Metal API.
Help! :)
Well, I read it all and found it not too long, extremely clear and well-written, and informative! Congrats on the writing.
I've never owned a Mac and my pc is old and without a serious GPU, so it's unlikely that I'll get to use Anukari soon, but I regret it very much, as it looks sooo incredibly cool.
Hope this gets resolved fast!
wonder if com.apple.developer.sustained-execution also goes the other way around...
Just my two cents: have you considered using a server/daemon process that runs separately and therefore more controllably outside a DAW (and therefore a client-server approach for your plugin instances)? It could allow you to have a little bit more OS-based control.
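A very rough sketch of the shape of that idea, assuming a Unix domain socket as the transport (all names are invented; a real implementation would likely use shared memory plus a lock-free ring buffer to stay real-time safe, and keep the connection open instead of reconnecting per block):

```cpp
// Hypothetical plugin-side client for an out-of-process GPU audio daemon.
// The socket transport here is purely illustrative.
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstring>
#include <vector>

static const char* kDaemonSocketPath = "/tmp/gpu_audio_daemon.sock";  // made up

bool processBlockViaDaemon(std::vector<float>& block) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) return false;

    sockaddr_un addr{};
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, kDaemonSocketPath, sizeof(addr.sun_path) - 1);
    if (connect(fd, reinterpret_cast<const sockaddr*>(&addr), sizeof(addr)) != 0) {
        close(fd);
        return false;  // daemon not running; the plugin would fall back to bypass
    }

    // Ship the block to the daemon and read the processed samples back in place
    // (partial reads/writes ignored for brevity).
    const ssize_t bytes = static_cast<ssize_t>(block.size() * sizeof(float));
    const bool ok = write(fd, block.data(), bytes) == bytes &&
                    read(fd, block.data(), bytes) == bytes;
    close(fd);
    return ok;
}
```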
> have you considered using a server/daemon process that runs separately and therefore more controllably outside a DAW
I'm slowly coming to the same conclusion, for audio plugins on GPUs.
> It would be great if someone can connect me with the right person inside Apple, or direct them to my feedback request FB17475838 as well as this devlog entry.
Sounds like the OP is trying to get #2 to happen, which is probably his best bet.
https://anukari.com/blog/devlog/productive-conversation-appl...
Great that you have a workaround now, but the fact that you can't even share what the workaround is, ironically speaks to the last line in https://news.ycombinator.com/item?id=43904921 of how Apple communicates
>there’s this trick of setting it to this but then change to that and it’ll work. Undocumented but now you know
When you do implement the workaround, maybe you could do it in an overtly-named function spottable via disassembly, so that others facing similar latency-sensitive GPU constraints have some lead as to the magic incantation to use?
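For example (purely hypothetical name; the actual workaround isn't public), something along these lines keeps the symbol visible in nm and disassembly output:

```cpp
// Hypothetical marker: keep the (undisclosed) workaround inside a non-inlined,
// exported function with a searchable name so it survives into the binary.
extern "C" __attribute__((noinline, used, visibility("default")))
void anukari_metal_perf_state_workaround() {
    // ... the actual Metal calls would live here ...
}
```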
Congratulations and good luck with your project!
The team we talked to at Apple never ever cared about our problems, but very often invited us to their office to discuss the latest feature they were going to announce at WWDC to strong arm us into supporting it. That was always the start and stop of their engagement with us. We had to burn technical support tickets to ever get any insight into why their buggy software wasn’t working.
Apple's dev relations are not serious people.
Seems like there might be a private API for this. Maybe it's easier to go the reverse engineering route? Unless it'll end up requiring some special entitlement that you can't bypass without disabling SIP.
> The Metal profiler has an incredibly useful feature: it allows you to choose the Metal “Performance State” while profiling the application. This is not configurable outside of the profiler.
How would the Metal profiler be able to do that if not for a private API? (Could some debugging tool find out what's going on by watching the profiler?)
Sorry about that!
GPU audio is extremely niche these days, but with the company mentioned in TFA releasing their SDK recently it may become more popular. Although I don't buy it, because if you're doing things on the GPU you're saying you don't care about latency, so bump your I/O buffer sizes.
This does not follow. Evidently it is possible to have low-latency audio processing on the GPU today (per the SDK).
And default deny at the OS level for Zoom, Teams and web browsers :)
It's better to trust, the amount of people that won't abuse it far outweigh the ones that do.
1. Go through WWDC videos and find the engineer who seems the most knowledgable about the issue you're facing.
2. Email them directly with this format: mthomson@apple.com for Michael Thomson.
> Lol on the second day it's out, you have already absolutely demolished all of the demos I've made with it and I've used it every day for two years
That looks to be a smoother chalkboard than I’ve ever encountered. If I had been using such chalkboards, I suspect I’d agree, but based purely on my experiences to this point, my opinion has been that chalkboards are significantly better for most art due to finer control and easier and more flexible editing, but whiteboards are better for most teaching purposes (in small or large groups), mostly due to higher contrast. But there’s a lot of variance within both, and placement angles and reflection characteristics matter a lot, as do the specific chalk, markers and ink you use.
2. Users can arbitrarily connect objects to one another, so each object has to read connections and do processing for N other entities
3. Using the full CPU requires synchronization across cores at each physics step, which is slow
4. Processing per object is relatively large: lots of transcendentals (approximations are OK), but also just a lot of features; every parameter can be modulated, needs to be NaN-proof, and so on
5. Users want to run multiple copies of Anukari in parallel for multiple tracks, effects, etc
Another way to look at it is: 4 GHz / (16 voice * 1024 obj * 4 connections * 48,000 sample) = 1.3 cycles per thing
The GPU eats this workload alive, it's absolutely perfect for it. All 16 voice * 1024 obj can be done fully in parallel, with trivial synchronization at each step and user-managed L1 cache.
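A quick sanity check of the arithmetic above (the numbers are the ones quoted in the comment, not measurements):

```cpp
#include <cstdio>

int main() {
    const double cpu_hz      = 4e9;      // a nominal 4 GHz core
    const double voices      = 16;
    const double objects     = 1024;
    const double connections = 4;
    const double sample_rate = 48000;    // samples per second

    // Total "things" a single core would have to touch every second.
    const double items_per_sec = voices * objects * connections * sample_rate;
    std::printf("cycles per item: %.2f\n", cpu_hz / items_per_sec);  // ~1.3
    return 0;
}
```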
Perhaps there's something in this video that might help you? They made a lot of changes to scheduling and resource allocation in the M3 generation:
Have you tried buffering for 5 ms? Was the result bad? 1 ms?
Is that a limitation of the audio plug-in APIs?
Possibly what you describe is a bit more like double-buffering, which I also explored. The problem here is latency: any form of N-buffering introduces additional latency. This is one reason why some gamers don't like triple-buffering for graphics, because it introduces further latency between their mouse inputs and the visual change.
But furthermore, when the GPU clock rate is too low, double-buffering or pipelining don't help anyway, because fundamentally Anukari has to keep up with real time, and every block it processes is dependent on the previous one. With a fully-lowered GPU clock, the issue does actually become one of throughput and not just latency.
That’s why I asked about the plug-in APIs. They may have to be async, with functions returning not when they’re fully done processing a ‘packet’, but as soon as they can accept more data, which may be earlier.
But in general no, you can't begin processing a buffer before finishing the previous buffer because the processing is stateful and you would introduce a data race. And you can't synchronize the state with something simple like a lock, because locking the audio playback is forbidden in real time.
You can buffer ahead of time, but this introduces latency. You can't do things ahead of time without introducing delay, because of causality: you can't start processing packet #2 while packet #1 is in flight, because packet #2 hasn't happened yet.
To make it a bit more clear why you can't do this without more latency:
Under the hood there is an audio device that reads/writes from a buffer at a fixed interval of time; call that N (a number of samples; divide by the sample rate to get seconds). When that interval is up, the driver swaps the buffer for a new one of the same size. The OS now has exactly (N / sample_rate) seconds to fill the buffer before it's swapped back with the device driver.
The kernel maps or copies the buffer into virtual memory, wakes the user space process, calls a function to fill the buffer, and returns back to kernel space to commit it back to the driver. The buffer you read/write from your process is packet #1. Packet #2 doesn't arrive until the interval ticks again and buffers are exchanged.
Now say that processing packet #1 takes longer than N samples, or needs at least M samples of data to do its work, with M > N. What you do is copy your N samples of packet #1 into a temporary buffer, wait until M samples have been acquired to do your work, but concurrently read out of your internal buffer delayed by M - N samples. You've successfully done more work, but delayed the stream by the difference.
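A minimal sketch of that accumulate-and-delay scheme (illustrative only; the std::vector shuffling here is not real-time safe, and the "DSP" is a placeholder):

```cpp
#include <cstddef>
#include <vector>

// The callback receives N samples per tick, the algorithm needs chunks of
// M >= N samples, so the output stream ends up delayed relative to the input.
class ChunkedProcessor {
public:
    explicit ChunkedProcessor(std::size_t chunkSize) : M(chunkSize) {}

    void process(const float* in, float* out, std::size_t N) {
        pending.insert(pending.end(), in, in + N);

        // Run the expensive work whenever a full chunk of M samples is available.
        while (pending.size() >= M) {
            for (std::size_t i = 0; i < M; ++i)
                ready.push_back(pending[i] * 0.5f);            // placeholder DSP
            pending.erase(pending.begin(), pending.begin() + M);
        }

        // Emit what has been produced; pad with silence until the first chunk
        // is done. That initial silence *is* the added latency.
        for (std::size_t i = 0; i < N; ++i) {
            if (!ready.empty()) {
                out[i] = ready.front();
                ready.erase(ready.begin());
            } else {
                out[i] = 0.0f;
            }
        }
    }

private:
    std::size_t M;
    std::vector<float> pending;  // input samples not yet processed
    std::vector<float> ready;    // processed samples not yet emitted
};
```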
Or perhaps you're missing that there's an "in" event as part of this, like a MIDI instrument? It's an in->effect->out sequence. So minimizing latency means that the "effect" part must be as small as possible, which means it's desirable for it to happen faster than "in" can feed it data.
That's quite the hack and I feel for the developers. As they state in the post, audio on the GPU is really new and I sadly wouldn't be holding my breath for Apple to cater to it.
Ableton engineers already evaluated this in the past: https://github.com/Ableton/AudioPerfLab
While I feel for the complaints about Apple's lack of "feedback assisting", the core issue itself is very tricky. Many years ago, before becoming an audio developer, I worked in a pro audio PC shop...
And guess what... interrupts, abusive drivers (GPUs included), Intel's SpeedStep, sleep states, parking cores... all were tricky.
Fast forward: we've got asymmetric CPUs and arm64 CPUs, and Intel or AMD machines (especially laptops) might still need BIOS tweaks to avoid dropouts/stutters.
But if there's a broken driver by CPU or GPU... good luck reporting that one :)
Proprietary technologies, poor or no documentation, silent deprecations and removals of APIs, slow trickle feed of yearly WWDC releases that enable just a bit more functionality, introducing newer more entrenched ways to do stuff but still never allowing the basics that every other developer platform has made possible on day 1.
A broken UI system that is confusing and quickly becomes undebuggable once you do anything complex. It replaces Auto Layout, but over a decade of apps have to transition over. Combine framework? Is it dead? Is it alive? Networking APIs that require the use of a 3rd party library because the native APIs don't even handle the basics easily. Core Data, a complete mess of a local storage system, still not thread safe. Xcode: the only IDE forced on you by Apple, while possibly being the worst-rated app on the store. Every update is a nearly 1-hour process of unxipping (yes, .xip) that needs verification, and if you skip it, you could potentially have bad actors inject code into your application from within a bad copy of Xcode unbeknownst to you. And it crashes all the time. Swift? Ha. Unused everywhere but Apple platforms. Swift on server is dead. IBM pulled out over 5 years ago and no one wants to use Swift anywhere but Apple, because it's required.
The list goes on. Yet Apple developers love to be abused by corporate. Ever talk to DTS or sit in their 1-on-1 WWDC sessions? It's some of the most condescending, out-of-touch experience you'll have. "You have to use our API this way, and there's this trick of setting it to this but then change to that and it'll work. Undocumented, but now you know!"
Just leave the platform and make it work cross platform. That’s the only way Apple will ever learn that people don’t want to put up with their nonsense.
Now a lot of people may reply to this that Windows isn't that bad with ASIO (a third-party driver framework) or modern APIs like WASAPI (which is still lacking), or that PipeWire is changing things on Linux so you don't need JACK anymore (but god forbid you want to write PipeWire-native software in a language besides C, since the only documented API is macros). Despite these changes you have to go where the revenue is, which is macOS.
People used to say this about video pros too, until Apple royally screwed the pooch by failing to refresh its stale Mac Pro hardware lineup for many years, followed by a lackluster Final Cut release. An entire industry suddenly realized Windows was viable after all, they just hadn't bothered to look.
In this case, the users of these tools seem perfectly ok with them and aren't going to just explore something as disruptive as an entirely different OS just for kicks.
> In this case, the users of these tools seem perfectly ok with them
That wasn't my takeaway from the article. The plugin is outright broken on the latest hardware, even with the workaround.
> [...]something as disruptive as an entirely different OS just for kicks
I don't think switching OSes is any more disruptive than switching software packages. Cubase or Ableton on Windows is not much different from the respective DAWs on macOS. Modern desktop OS UI paradigms map 1:1, so switching isn't a big deal.
The FCP upgrade didn’t just break the main app, but the plugin ecosystem was wiped out too. (From what I read, I’m not a movie pro). And that was disruption forced upon the users.
So in that scenario, they didn’t have much of a choice.
But in this scenario, the audio apps work well and it’s just the developers complaining.
And even though I’m a developer, I would say as long as the users are happy then I can see why there is less concern about dev happiness
One of the worst things about Apple is how much time and effort they spend trying to lock you into their platform if you want to support it. There's no excuse for it. Even once they have you on their system, they're doing everything in their power to lock you in to their workflows and development environments. It's actually insane how shamelessly hostile OSX is.
The Apple developer experience is an abject horror because they believe everyone who is capable of developing high value applications for Apple devices works at Apple, or will work at Apple. 3P devs are a nuisance they tolerate rather than a core value-add for their services and devices. I assume it's less bad within Apple, but I really have no idea.
Apple explicitly disallows cross compilation in their Terms of Service. Even if you managed to get clang compiling for Mac on another Unix, even if you figure out how to get your app bundles signed outside of OSX, they'll revoke your developer license and invalidate your certs because you're in violation of their ToS. You're right they don't care about third party devs, but the amount of hoops you have to jump through for devops on Mac is almost certainly designed as a gluetrap.
I think Apple is actually one of the few companies that you should anthropomorphize, because they have shown a long history of making decisions based on long-term strategy rather than short-term profits. They also react emotionally sometimes. The best example coming to mind is Steve Jobs on accessibility: "I don't care about the bloody ROI." I of course cheered that attitude, and still do for a11y, but that is a very human-like thing to do. Also let's not forget his hatred toward Android and vengeful attempt to kill it. Hence I don't think Apple is a lawnmower. They're more like an elephant with its objectives, and they know they're going to squash a lot of lesser life in the process, but "you can't have an omelette without breaking a few eggs."
Windows has had 3rd party developers built into its DNA since the beginning though. Even today, Windows goes to great lengths to maintain backwards compatibility. I think this comes from the fact that MS has always been a software first company built around market domination.
That is not being developer hostile. Apple does many other things that don't help developers but forcing their hardware is just an entry cost.
They have amazing hardware that is far superior to the competition and that they can build at very competitive prices while still making good money.
Building a PC in 2025 absolutely sucks. The prices are getting insane. Plus Windows 11 is super hated. It is the perfect time for Apple to win over people.
They just need to stop kneecapping their great hardware with the shitty software side. Just open it up a little bit. Add Vulkan support. Actually make your GPU usable. Actually help Steam do their magic like they did with Linux; no one is going to buy games on the bloody Apple store anyway. Show some respect to the developers.
Shareholders giving up massive growth for short term profits. So frustrating.
On Mac, I can use bash/zsh mostly how I would on linux. The main compatibility issues come from BSD tools vs GNU, which are very simple to replace if you want. On Windows, they use PowerShell, which is totally proprietary.
On Mac, web & infra development can use completely open source tooling which can be shared with Linux.
You can still use VS Code to edit Swift (or C#), but the more "proprietary dev environments" (Xcode or Visual Studio) are probably more powerful with system level integrations.
Heck, you can use PyQT on mac if you don't like Swift or Xcode.
Not to mention all of the great Android tablets that I can’t get or the much faster Android devices…
Now you might say this is problematic, Apple doesn't want third-party developers locking their platform behind some conditionally compiled set of abstractions that ruin everything they've worked for. Putting aside how ridiculous that is given system APIs are often wrapped for normal abstraction reasons anyways, that's totally fine. But then, it's also not my problem because I'm not Apple. I don't mind supporting their platform, I'll even turn a blind eye to the audacity of charging a developer fee while offering abysmal documentation and support. But I'm not going to crawl and beg for the privilege.
> Do you want them to use cross platform frameworks that are not optimized for their system?
Just like everybody else, because it hardly matters. Outside of Apple-land, Intel, AMD and Nvidia all get along just fine with rewriting SPIR-V to their microarchitectures. CPUs get along just fine rewriting abstract instruction sets like AMD64 and the various ARMs to their microarchitectures. Code is by-default compiled for instruction compatibility. APIs like CUDA and ROCm explicitly exist for vendor lock-in reasons. There's absolutely no reason why the throughput of these APIs can't be generically applied to compute shaders. None at all. The hardware vendors just want to capture the market.
Apple isn't exactly working with exotic hardware. The M1 is yet another ARM chip, not some crazy graph-reduction machine. These standards are fine and used across a wide swath of hardware to no real detriment. I would suggest you may overestimate how much they actually care about this idea of "specially optimized APIs." Consider that Apple pushes Swift as the primary language you Should be Using to ship software on OSX, and yet garbage collection is still handled in software. That's not what vertical integration for engineering purposes looks like.
Again, it all hardly matters. I wouldn't mind just wrapping these APIs, they're not particularly special or exotic any more than their hardware is. But the fact of the matter is that as a non-mac user, they go through a lot of effort to ensure putting software on their platform is as unattractive as possible.
My computer, the processor that runs in it, the operating system, much of the software, and the rest of my computing life (phone, watch, set-top device, tablet) all work together. I can copy text from one and paste it into the other. My watch unlocks my computer. My iPad can be used as a second monitor without any third party software.
I bet you HN tested it on iOS, if not the Mac, rather than just hoping the site looks fine.
WASAPI requires exclusive mode to be usable for pro applications, or else your latency will suffer and it may be doing some resampling behind the scenes.
Latency is a valid concern, but is it really bad? PCs are fast now.
I don't use windows for audio anymore so I can't comment on this in win11, but it used to be that WASAPI suffered unless you set your PC in "performance" mode in your power settings whereas ASIO was unaffected.
And yes, latency matters! For live performance you're looking for < 2.5 ms of one-way latency to get a roundtrip of under 5 ms. After that point it starts being perceptible to players. This is not a performance floor so much as a scheduling one, and ime Windows audio scheduling under DSound/WASAPI was always shaky.
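Back-of-the-envelope on that 2.5 ms figure, looking at buffer size alone (real round-trip latency also includes converter and driver overhead, so this is optimistic):

```cpp
#include <cstdio>

int main() {
    const double budget_ms = 2.5;   // one-way target quoted above
    const double rates[]   = {44100.0, 48000.0, 96000.0};

    for (double sr : rates) {
        const double max_samples = budget_ms / 1000.0 * sr;
        std::printf("%6.0f Hz: buffer must stay under ~%.0f samples\n", sr, max_samples);
    }
    // e.g. a 64-sample buffer fits at all three rates; 128 is already borderline at 48 kHz.
    return 0;
}
```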
That said, Reaper and many others have done great things with DAWs and other audio processing in C++. Maybe getting a "native" look is too difficult, but I figured I'd throw it out there.
I've read that Zig can wrap C macros. So maybe there is some hope.
You go to a different market.
> there simply isn't an alternative for pro audio developers.
Tell me you don't work on live audio without telling me you don't work on live audio. Windows has always been usable if you have a suitable ASIO (same as you used to use on Mac). Most shows will use some permutation of Windows boxen to handle lighting, visuals rendering, compositing and audio processing. The ratio of Macs to Windows machines is at least 1:10 in my experience.
Heck, nowadays even Linux is viable if you're brave enough. Pipewire has all the same features Coreaudio was lauded for back in the day, in theory you can use it to replace just about anything that isn't conjoined at the waist with AU plugins. Things are very different from how they were in 2012.
This is pretty rude, I'm among the (probably small) subset of HN users who has developed real professional audio software.
All I can talk about is my experience, which is that in the plugin market a plurality of your revenue will be from macOS users. My last job in this market had zero Windows/Linux users.
Now I have done a good bit of live work on Windows machines with ASIO, but I also do a bit of work there myself from time to time in venues with musicians - and I don't really know any musicians that are carrying around Windows laptops. 100% of them are using Mainstage and Ableton on Macbooks.
The gold standard being RME hardware and drivers. Not a single issue ever on windows.
There is no revenue in macOS; there is only revenue in machines that run a free OS that they consistently lock their loyal customers out of.
In fact, I'm now working on a USB hardware replacement for what used to be a macOS app, simply because Apple isn't allowing enough control anymore. Their DX has degraded to the point where delivering the features as an app has become impossible.
Also, USB gadgets are exempt from the 30% app store tax. You can even sell them with recurring subscriptions through your own payment methods. Both for the business owner and for the developer, sidestepping Apple is better than jumping through their ridiculous hoops.
And yea, over the years you could tell Apple stopped giving a shit except to turn everything into an app store where they can earn 30% and it's lessened the experience.
Core Data threading? Well, it has got its pitfalls, but those are known, and anyway, nothing is forcing you to use it.
Xcode is so slim these days: it's a ~3 GB download, it doesn't take an hour to unxip, and it can be downloaded from the developer website.
Swift? It might be needed for a bunch of new frameworks, but Objective-C isn't going anywhere anytime soon either.
Core Data threading? Does Linux even attempt something like Core Data? How well is that going?
Swift? I remember when Linux diehards invented Vala. The Swift of Linux, but with none of the adoption.
As for UI code, Linux is finally starting to get a little more stable there. GTK 2 to 3 was a disaster; Qt wasn't fun between major upgrades; if you weren't using a framework, you needed to have fun learning the quirks of Xorg; nobody who builds for Linux gets to lecture Mac about UI stability.
Or, for that matter, app stability in general. Will a specific build of Blender outside of a Flatpak still work on the Linux desktop after 2 release cycles? No? Then don't lecture me about good practices. Don't lecture me about how my website or app was sloppily engineered because it has dependencies.
Are the target users for this likely to use Linux (rather than Windows) if they ditched Apple?
> Swift? I remember when Linux diehards invented Vala. The Swift of Linux, but with none of the adoption
Plenty of languages are used on Linux. Why pick one that did not gain traction?
> If you weren't using a framework, you needed to have fun learning the quirks of Xorg;
Who does that?
> GTK 2 to 3 was a disaster; Qt wasn't fun between major upgrades
But they are cross platform.
> Will a specific build of Blender outside of a Flatpak still work on the Linux desktop after 2 release cycles?
Does that matter? Maybe a bit of extra work for packagers - and people can use Flatpak or Snap.
People use Apple’s dev tools because they are the only/best way to deliver apps on Apple’s OSes.
If we changed the situation, so that Apple Dev Tools could be used to create applications for non Apple OSes, or non Apple Dev tools were first class citizens for creating Apple apps, I bet the vast majority of people would use the non Apple dev tools to create both Apple and non Apple apps.
What’s keeping Apple Dev Tools in the game is their privileged position in the Apple OS ecosystem.
It is absolutely deserved here - Apple built a 100 foot tower, and it's grown hairy over the last few decades. Linux built 7 30 foot towers without stairs in the same timeframe; but yelling about the overgrowth on the 100 foot tower is still somehow defensible.
If they can't build their own towers correctly, they have no right to act like the main tower was built worse than their own.
(Edit, posting too fast: For the complaint that Apple has money, Linux does too. 90%+ of work on Linux comes from corporate sponsorship, and has since 2004 when it was first counted. They are fully capable of doing better.)
But more relevant is the fact that their donations are focused on running Linux as servers and there Linux is miles ahead of anything Apple provides, to the point that Apple has abandoned its server OS.
> 90%+ of work on Linux comes from corporate sponsorship
And approximately 0% of these corporate contributors care about the “Linux desktop” experience. Unlike Apple their goal is not to build a consumer-targeted OS.
Linux on the desktop is very, very niche, and even among the people who do use it, a lot of them will spend almost all their time in just a few windows (e.g. terminal, browser, emacs), not a rich array of desktop applications.
Whatever rough edges you may encounter will keep being sanded down at a speed I haven't witnessed since Linux was the hot new thing in the '90s. The Linux desktop felt stale and abandoned throughout the 2010s, but nowadays it's pretty marvelous how fast it's becoming a real alternative to Windows and Mac. I truly believe that if it had proper developer adoption and first-class hardware support from OEM vendors it would already be a true alternative.
(I'm a pretty happy desktop Linux user, mostly because I don't think commercial OS vendors' incentives are properly aligned in the B2C space.)
And yet OP did.
Swift on the server is for Apple ecosystem developers, to share code, just like all those reasons to apparently use JavaScript on the server instead of something saner.
I wonder if it is a generation gap, as many apparently learn coding via videos; however, that is not enough to go deep.
By the way, Microsoft suffers from the same disease: they reduced their team size, and unless one has been coding since the 16-bit days, there are many things no one will find.
Some of it is gone forever, as they kept replacing their documentation, blogs and video platforms.
Other material is still there, but you have to have actually used it in practice to find it, as the Win32 or .NET Framework documentation nowadays only covers the most recent version.
Or even Microsoft Systems Journal articles, as another example.
Google on Android is also a mixed bag, depending on what one is looking for.
I think they threw in the towel when they realized the mess they'd built. In contrast you have things like RHEL, FreeBSD, etc., where there's a drive to keep things small and neat just to be able to document them.
JS on the server is actually really fast and well supported. Not really sure what you're driving at here.
https://blog.kevinchisholm.com/javascript/javascript-wat/
Thankfully we can thank someone with a lifetime of experience designing Turbo Pascal, Delphi, J++, and C#, along with his team, for having made the experience bearable.
I don't think that's apt. What you find to be "abuse" others might find to be the kind of obstacles/issues that every platform/ecosystem has.
It probably helps if you never put Apple on a pedestal in the first place, so there's no special disappointment when they inevitably turn out to be imperfect. E.g., just because Apple publishes a new API/framework, that doesn't mean you need to jump on board and use it.
Anyway, developers are adults who can make their own judgements about whether it's worth it to work in Apple's ecosystem or not. It sounds like you've made your decision. Now let everyone else make theirs.
Units sold in the smartphone world follow the same dynamic as the video game console market: you win by simply offering bigger and better software, not just hardware.
If you, as a developer, have a worse time contributing to that ecosystem, then it is just a matter of time before the users themselves have a worse time with their device.
I take the comment above as a signal that something is clearly not working towards Apple's goals. Of course, you make your own judgements about whether to support a platform or not, but this indicates that decision is a lot easier than it should be, to the detriment of Apple's ecosystem.
All in all I wouldn't discount it.
I mean in late stage capitalism their single biggest priority is to become a rent seeking monopoly by regulatory capture. If they can accomplish that, then user experience is a distant concern.
Luckily it looks like apple is having some problems with that recently.
Right, that's why judges are making criminal recommendations to the US prosecutors. No abuse at all.....
Oh poor Apple. If only they had the resources and engineers to fix that. /s
Apple's also been deleting more and more of its old documentation. Much of it can only be found on aging DVDs now, or in web/FTP archives if you're lucky. Even more annoying is how some of the deleted docs are _still_ referenced by modern docs and code samples.
Apple has done nothing and continues to do nothing to engender any confidence in their platform as a development target.
You're missing the forest for the trees. Apple is very difficult to work with indeed, but they have a shit-ton of paying users. Still to this day, iOS is a better revenue maker than Android. Same for macOS compared to Windows. You want to make a living? Release on macOS. People there pay for software.
This hasn't ever been my experience. Maybe if you're in a really specific market niche where most of the userbase is on Mac. Only 5% of users on Windows paying for the software still absolutely dwarfs 100% of Mac users paying for it. We have more sales on Linux than we do Mac.
That's interesting, what's your product? There are a few pieces of software on Macs that I would love to pay for on Linux but the option isn't there.
iOS is 27% of the mobile market; but total revenue through the App Store in 2024 was $103 billion. For Google Play, it was $46 billion. Double the sales, from a market 1/3rd the size. Whether we like it or not, the whole open platform of Windows being a breeding ground for viruses and piracy, and the ongoing cultural expectations that set, caused a direct effect on people's willingness to buy Windows software from unknown publishers without a third party (Steam, Microsoft Store) vetting them.
I expect it's highly situational. Don't expect to sell many games on Mac. However, I do find it interesting that services like SetApp exist on Mac, but nobody has tried anything with that level of quality on Windows. SetApp also hasn't shown any interest in expanding to Windows.
I’d imagine that people have failed to attract users who pay on Linux or windows and developers know that people use their software via piracy.
Once upon a time I thought either GNOME or KDE would win, and we could all enjoy the one Linux distribution. I was proven wrong.
Then again, I have been back on Windows as main OS since Windows 7.
The engineering standards, and churn within the Linux desktop, are hilariously bad.
Nobody who uses it has a right to complain about how node_modules has a thousand dependencies and makes your JavaScript app brittle. Their superior Linux desktop won't even be capable of running the same software build outside of a Flatpak without crashes in three years.
As for lack of documentation, good luck pulling together all the pieces you need to write a fully native Linux application without using Qt, GTK, or a cross-platform solution. Maybe you have your own UI stack that needs porting. A simple request, fairly accomplishable on Mac. The lack of documentation on Linux outside of that privileged route will make Apple's documentation look like a gold standard. Heck, even if you stay on the privileged route, you're still probably in for a bad time.
Aren't those the native stacks? Unless you're going for systems programming. The nice thing about GTK and Qt is that you have access to the source code when you're trying to find the behavior of a component (if the docs are lacking). No such luck with AppKit.
My Asus Linux netbook, bought with Linux support, never had the same OpenGL support level as on the Windows drivers.
And in what concerns hardware video decoding, it only worked during Flash's glory days; I never managed to get it working with VAAPI.
Linux users don't want one to win. As soon as one gained any traction, the users would switch just for the sake of it. It's also crazy how neither ever actually improves, because they are so focused on copying whatever Windows and Mac are doing instead of continuously improving. The Linux desktop experience isn't any better now than it was 20 years ago.
You can't say that with a straight face. 20 or so years ago you would barely have hardware support for anything you wanted to use, or you'd have to go through a battery of guides just to get 50% of your computer working. Nowadays you just boot a live environment and likely 99% of your computer works out of the box, even though your OEM gave ZERO shits about Linux support.
Wi-Fi was somewhere between impossible and pray-it-works, with a bunch of disparate CLI commands to properly join a network. Nowadays I see Linux being casually used on random machines without a single problem regarding Wi-Fi, and the GUIs for managing it are as cromulent as what you get on other OSes.
X kept being patched to make it do modern things it was never meant to do, thus creating a huge technical debt that is finally being paid off with proper Wayland implementations.
Linux audio went from a complete turd to best in class with the "merging" (more of a complete rewrite, but with full backward compatibility baked in) of PulseAudio and JACK into PipeWire.
It's now easy to acquire random Linux desktop apps, and they keep working between upgrades! What a concept! Developers are finally having a decent time developing apps for desktop Linux. Maybe it's no Win32, but hey, you can run those too with Wine and Proton through Steam, Lutris, Bottles and so on :)
I could keep going... Honestly, just give it a try if you haven't in a while.
Sure, on weird hardware, but if you had something that was decently supported, like a ThinkPad, everything mostly just worked, same as now. A lot of your "Linux improved..." stuff doesn't matter to end users for the most part. It's nice that they're growing the tent, but it doesn't change the fact that the actual desktop experience hasn't improved much, despite it having been "year of the Linux desktop" for the last 20-ish years.
So audio is best in class? How many industry DAWs support Linux and are used at any random audio studio? Not that many.
The netbook I had until 2024 never handled our router without issues; rebooting the WLAN daemon was a common activity during "heavy" downloads, like e.g. a new Rust version.
What works without issues at my place are Android/Linux, WebOS/Linux, and Sony/Linux (Blu-ray).
Proton is Valve's failure to nurture developers to target GNU/Linux, even though Android/NDK has the same technology stack for game development, and Sony's OrbitOS is close enough with its FreeBSD roots, even with its proprietary 3D API.
For example, there are over a dozen ways to define a string, and you constantly have to convert between them depending on the API you are using.
https://www.reddit.com/r/cpp_questions/comments/10pvfia/look...
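The linked thread isn't quoted here, but as a generic illustration of the churn (a few of the standard-library types alone, before any platform API types like CFStringRef or BSTR enter the picture):

```cpp
#include <string>
#include <string_view>

// The same text bouncing between a handful of the string types a C++
// codebase typically juggles.
void example() {
    const char*      literal = "hello";                  // C string
    std::string      owned   = literal;                  // owning, narrow
    std::string_view view    = owned;                    // non-owning view
    std::wstring     wide(owned.begin(), owned.end());   // naive widen (ASCII only)
    std::string      back(view);                         // view -> owning copy again
    (void)wide; (void)back;
}
```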
It’s honestly nuts that so many developers continue to try to make software using a bloated JavaScript framework and thousands of Node dependencies.
That might also be true, but that misses the point: programming is not engineering; nothing is done to an engineer's preferred standard, and probably never will be.
It’s like being a CNC Technician and complaining about how 90% of stuff on store shelves is plastic. A metal gallon of milk would be so much more durable! Less milk would be spilled from puncturing! Production costs, and how they go downstream, are being ignored.
(Edit for the downvotes, dispute me if you care enough, but literally nobody other than computer programmers ogles your clean code. Just like how nobody other than CNC mechanics are going to ogle the milk carton made on a lathe.)
Wasteful? Wasteful is whichever solution takes the most money while giving the least in return. From the perspective of any rational business, not using Electron is an opportunity cost. Any Mac user knows the truth well, the web has been a more reliable runtime than native since Mojave.
And we've got Sketch, Things 3, Bear, OmniGraffle and the whole Omni Group suite, CleanShot, Alfred, ... I'm not trying to defend Apple's ecosystem, but if open source can deliver LibreOffice, Calibre, VLC, ... on all platforms, there's little defense for others to burden users with Electron.
This is nonsense. I've been a professional Mac and iOS developer for well over a decade, and even in the days of NSURLConnection, I've never needed a 3rd party networking library. Uploading, downloading, streaming, proxying, caching, cookies, auth challenges, certificate validation, mTLS, HTTP/3, etc. – it's all available out of the box.
>The Metal API could simply provide an option on MTLCommandQueue to indicate that it is real-time sensitive, and the clock for the GPU chiplet handling that queue could be adjusted accordingly.
Realtime scheduling on a GPU and what the GPU is clocked at are separate concepts. From the article it sounds like the issue is with the clock speeds and not with how the work is being scheduled. It sounds like you need something else to provide a hint for requesting a higher GPU clock.
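To make the clock-vs-scheduling distinction concrete (the clock values below are made up, just to show the shape of the problem): the deadline per audio block is fixed by the audio format, so a lower clock only shrinks the cycle budget available inside that deadline, no matter how promptly the work is scheduled.

```cpp
#include <cstdio>

int main() {
    const double sample_rate = 48000.0;
    const double block_size  = 128.0;                    // samples per audio block
    const double deadline_s  = block_size / sample_rate; // ~2.67 ms, independent of GPU state

    const double clocks_hz[] = {400e6, 1300e6};          // assumed idle vs. boosted GPU clock
    for (double hz : clocks_hz)
        std::printf("%4.0f MHz -> %.1f million GPU cycles per block\n",
                    hz / 1e6, hz * deadline_s / 1e6);
    return 0;
}
```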