In particular, I don't love it when an article attacks a best practice as a cheap gotcha:
"and this time it was super easy! After some basic reversing of the Tapo Android app, I found out that TP-Link have their entire firmware repository in an open S3 bucket. No authentication required. So, you can list and download every version of every firmware they’ve ever released for any device they ever produced"
That is a good thing - don't encourage security through obscurity! The impact of an article like this is as likely to get management to prescribe a ham-handed mandate to lock down firmware as it is to get them to properly upgrade their security practices.
It's not a bad blog post because of this, but you do need to read it carefully. I've noticed that most of the articles on the HN front page are written with AI assistance.
Nobody tell them about Linux!
The blogger will blow a gasket when they discover that the likes of GitHub provide access to both installers and software. A hacker's candy store!
Edit: just want to add, the “how I got the firmware” part of this is also the least interesting part of this particular story.
Here, I'll emphasize the words that elicit the tone:
> After some basic reversing of the Tapo Android app, I found out that TP-Link have their entire firmware repository in an open S3 bucket. No authentication required. So, you can list and download every version of every firmware they’ve ever released for any device they ever produced: [command elided] The entire output is here, for the curious. This provides access to the firmware image of every TP-Link device - routers, cameras, smart plugs, you name it. A reverse engineer’s candy store.
Highlighting (repeatedly) the ease and breadth of access is a basic writing technique to illustrate the weakness of a security system.
Replace [firmware] with [random popular GitHub repo] and nobody would blink. Replace [firmware] with [customer email address] and it would be a legal case. Differentiating here is important.
Furthermore, the repeated use of "every" when discussing the breadth of access seems like it would easily fall into the "absolutes are absolutely wrong" way of thinking. At least without some careful auditing, it reads as another narrative flourish to marvel at this treasure trove (candy store) of firmware images left without adequate protection. But most here seem to agree that such protection would be without merit, so why does it warrant this emphasis? I'm left with the impression that the author considered it significant.
Sure, an open bucket is bad if it holds stuff you weren't planning on sharing with the whole world anyway.
But how is an open, read-only S3 bucket worse than a read-only HTTPS site hosting exactly the same data?
The only thing I can see is that it is much easier to make it writable by accident (for an HTTPS web site or API, you need quite a bit of implementation effort).
Only to gullible, clueless types.
Full-blown production SPAs are served straight from public-access S3 buckets. The only hard requirement is that the S3 bucket enforces read-only access over HTTPS. That's it.
Let's flip it the other way around and make it a thought experiment: what requirement do you think you're fulfilling by enforcing any sort of access restriction?
When you feel compelled to shit on a design trait, the very least you should do is spend a couple of minutes thinking about what problem it solves and what the constraints are.
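FWIW, the "read-only over HTTPS" part is a few lines of bucket policy rather than any real implementation effort. Here's a minimal sketch in Python with boto3, assuming a hypothetical bucket name and that the account's Block Public Access settings allow a public policy:

    import json

    import boto3  # assumes AWS credentials with s3:PutBucketPolicy permission

    BUCKET = "example-firmware-downloads"  # hypothetical bucket name

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Anyone may list and read objects -- the intended "open" part.
                "Sid": "PublicReadOnly",
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{BUCKET}",
                    f"arn:aws:s3:::{BUCKET}/*",
                ],
            },
            {
                # Refuse any request that is not made over TLS.
                "Sid": "HttpsOnly",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{BUCKET}",
                    f"arn:aws:s3:::{BUCKET}/*",
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            },
        ],
    }

    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

Writes stay forbidden simply because nothing grants them; the Deny statement is only there to force TLS.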
When in fact TP-Link is doing the right thing by keeping older versions available. So this risks some higher-up there thinking 'fuck it, we can't win, might as well close it all off'.
It's a firmware distribution system. It's read-only access to a public storage account designed to provide open access to software deployment packages that the company wishes to broadcast to all products. Of course there is no auth requirement at all. The system is designed to allow everyone in the world to install updates. What compels anyone to believe the system would be designed to prevent public access?
I don't see why. Support for firmware upgrades literally involves querying available packages and downloading the latest ones (i.e., applying upgrades). Either you use something like the S3 interface, or you waste your time implementing a clone of what S3 already supports.
Sometimes simple is good, especially when critics can't even provide any concrete criticism.
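To give a rough sense of how little client-side machinery "something like the S3 interface" needs, here's a sketch of an update check against a public bucket using anonymous boto3 requests. The bucket and prefix names are made up for illustration and are not TP-Link's actual layout:

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    BUCKET = "example-firmware-downloads"  # hypothetical bucket
    PREFIX = "c200/"                       # hypothetical per-model prefix

    # Anonymous client: a public, read-only bucket needs no credentials.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    # List the published firmware images for this model (first page is
    # plenty for a sketch; a real client would paginate).
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    objects = resp.get("Contents", [])
    if not objects:
        raise SystemExit("no firmware published under this prefix")

    # Pick the most recently published image and download it.
    latest = max(objects, key=lambda o: o["LastModified"])
    print(f"latest firmware: {latest['Key']} ({latest['LastModified']})")
    s3.download_file(BUCKET, latest["Key"], "firmware-latest.bin")

A real device would then verify a signature on the image before flashing it; that check, not hiding the bucket, is where the security belongs.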
What worries me more is security through herd mentality, where everyone copies the same patterns, tooling, and assumptions. When one breaks, they all break. Some obscurity, used deliberately, can raise the bar against casual incompetence and lazy attacks, which, frankly, account for far more incidents than sophisticated adversaries. We should absolutely design systems that are easy to operate safely, but there is a difference between “simple to use” and “safe to run critical infrastructure.” Not every button should be green, and not every role should be interchangeable. If an approach only works when no one understands it, that is bad security. But if it fails because operators cannot grasp basic layered defenses, that is a staffing and governance problem, not a philosophy one.
Isn’t the complaint that the location of the repo is not publicized?
Nobody would complain if it were linked directly from the company’s web page, I assume?
This page[1] lists the C200 as last having a firmware update in October, but also lists the latest version as 1.4.4 while the article lists 1.4.2. It seems like they have pushed other updates in this time, but not these security fixes.
[1] https://community.tp-link.com/us/smart-home/kb/detail/412852
https://www.hydrogen18.com/blog/hacking-zyxel-ip-cameras-pt-...
https://www.hydrogen18.com/blog/hacking-zyxel-ip-cameras-pt-...
Definitely a problem for regular users.
For anyone concerned about their TP-Link cameras, consider:
1. Disable UPnP on your router
2. Use VLANs to isolate IoT devices
3. Block all outbound traffic except specific required endpoints (a quick verification sketch follows below)
4. Consider replacing stock firmware with open alternatives when available
5. Regularly check for firmware updates (though as this article shows, updates can be slow)
The hardcoded keys issue is particularly troubling because it means these vulnerabilities persist across the entire product line. Thanks for the detailed writeup - this kind of research is invaluable for the security community.
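On point 3 of the list above, the easiest way to be sure the egress rules do what you think is to test them from a host on the IoT VLAN. A minimal sketch; the "allowed" endpoint is a hypothetical placeholder for whatever your devices legitimately need to reach:

    import socket

    # Hypothetical allowed endpoint, plus hosts the egress rules should block.
    CHECKS = [
        ("allowed", "firmware.example.com", 443),
        ("blocked", "8.8.8.8", 53),
        ("blocked", "example.org", 443),
    ]

    def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for expectation, host, port in CHECKS:
        up = reachable(host, port)
        verdict = "OK" if up == (expectation == "allowed") else "UNEXPECTED"
        state = "reachable" if up else "unreachable"
        print(f"[{verdict}] {host}:{port} is {state} (expected {expectation})")

Run it from a machine on the IoT VLAN, not from your main network.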
It's a little bit of a pain to set up the cameras because of the mobile app: I have to connect to the AP on my phone, and since it doesn't have internet access, my phone nags me. This specific model also doesn't have an external antenna; if it did, I think it might be the ideal setup.
When he opened his front door the conversation went something like this:
Him: "Ah hello, thanks for coming round to do this. It should be fun, come in and we can get started."
Me: "OK, but I'm already done."
Him: "What?"
Me: "I'm done. I've already got root on the machine and I left a little text file in root's home directory as proof."
Him: "What? But ... what? Wifi?"
Me: "Nope. Let me in and I'll explain how."
The short story is he had a PoE IP-based intercom system on his front gate. I remembered this from when he was going on about his plans for his home network setup, how amazing PoE was, and how he was going to have several cameras etc. I also remembered seeing the purple network cable sticking out of the gate pillar while the renovation work was being done and the intercom hadn't yet been installed.

I'd arrived 45 minutes early, unscrewed the faceplate of the intercom system and, with a bit of wiggling, got access to a lovely Cat-5 ethernet jack. Plugging that into my laptop, I was able to see his entire home network; the port for the intercom was obviously not on its own VLAN. Finding and rooting the target machine was a different matter, but those details are not relevant to this story.
I suppose I got lucky. He could have put the IoT devices on separate VLANs. He could have had some alerting setup so that he'd be notified that the intercom system had suddenly gone offline. He could have limited access to the important internal machines to a known subset of IPs/ports/networks.
He learned about all of the above mitigations that day.
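On the alerting point: even something crude would have flagged the intercom going dark. A minimal sketch, with the intercom's address and the alert mechanism both placeholders (the ping flags are the Linux iputils ones):

    import subprocess
    import time

    INTERCOM_IP = "192.168.1.50"      # placeholder address for the intercom
    FAILURES_BEFORE_ALERT = 3
    CHECK_INTERVAL_SECONDS = 30

    def is_up(ip: str) -> bool:
        """Send one ICMP echo request; True if the host answered."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", ip],   # Linux iputils flags
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    failures = 0
    while True:
        if is_up(INTERCOM_IP):
            failures = 0
        else:
            failures += 1
            if failures == FAILURES_BEFORE_ALERT:
                # Placeholder alert: swap in email, push notification, etc.
                print(f"ALERT: {INTERCOM_IP} unreachable for {failures} checks in a row")
        time.sleep(CHECK_INTERVAL_SECONDS)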
I've always wondered just how many people have exposed their own internal network in a similar way while trying to improve their external security (well, a deterrent, not really security) but configuring it poorly.
[1]. https://www.defcon.org/images/defcon-19/dc-19-presentations/...
802.1X is commonly deployed with MACsec. Will it also be trivial to bypass?
But it’s worth trying
Neither of these seems like a good idea for someone like me, who is relatively hardware-naïve and has small children running around, making it hard to concentrate for more than 30 minutes at a time.
The question is genuine. I want to do this but don't actually know by which method.
I’m more than happy to ditch the scrappy RTSP setup that I have to support these cheap cameras!
I generally try not to be a huge Rust cheerleader but seriously. Yikes.
I assume any Wi-Fi camera under $150 has basically the same problems. I guess the only way to run a security camera where you don't have Ethernet is to use a non-proprietary Wi-Fi <-> 1000BASE-T adapter. Probably only something homebuilt based on a single board computer and running basically stock Linux/BSD meets that requirement.
The camera sells for $17.99 on their website right now.
Subtract out the cost of the hardware, the box, warehousing, transit to the warehouse, assembly, testing, returns, lost shipments, warranty replacements, support staff, and everything else, then imagine how much is left over for profit. Let's be very optimistic and say $5 per unit.
That $5 per unit profit would mean an additional $100,000 invested in software development would be like taking 20,000 units of this camera and lighting them on fire. Or they could not do that and improve their bottom line numbers by $100,000.
TP-Link has a huge lineup of products and is constantly introducing new things. Multiply that $100,000 across the probably 100+ products on their websites and it becomes tens of millions of dollars per year.
The only way these ultra-cheap products are getting shipped at these prices is by doing the absolute bare minimum of software development. They take a reference design from the chip vendor, have 1 or 2 low wage engineers change things in the reference codebase until it appears to work, then they ship it.
The parent rightly suggested that there is an obvious intention to leave these devices exploitable:
> This is so bad that it must be intentional, right? Even though these are dirt cheap, they couldn't come up with $100,000 to check for run-of-the-mill vulnerabilities?
You explained that there could be an economic reason for the appalling absence of security:
> The only way these ultra-cheap products are getting shipped at these prices is by doing the absolute bare minimum of software development.
But the parent's point is more convincing, based on the observable evidence and the very clear patterns of state-sponsored exploitation.
The vendors could set default passwords to be robust. The vendors could configure defaults to block upstream access. But maybe the vendors in this particular supply chain are more like the purveyors of shovels in a Gold Rush.
A less-charitable metaphor is possible where state-sponsored motives are unambiguously known.
For the tech-savvy, there is thingino as a firmware alternative: it works local-only, with no cloud, and supports MQTT, etc.
How does this happen? Doesn’t pretty much every ISP give a router with their modem? How do people manage this?
With IPv6, they will likely autoconfigure a public IP address, which may not be behind a stateful firewall.
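A quick way to check whether a machine is actually sitting on a globally routable IPv6 address (and therefore deserves a close look at the router's IPv6 firewall settings) is a standard-library sketch like the following; the Google DNS address is used only for source-address selection, and no packets are sent:

    import ipaddress
    import socket

    def global_ipv6_address():
        """Return this host's preferred global IPv6 source address, or None."""
        try:
            # A UDP "connect" sends nothing; it only picks a route and a
            # source address for the given destination.
            with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s:
                s.connect(("2001:4860:4860::8888", 53))
                addr = ipaddress.ip_address(s.getsockname()[0].split("%")[0])
        except OSError:
            return None  # no IPv6 connectivity at all
        return addr if addr.is_global else None

    addr = global_ipv6_address()
    if addr:
        print(f"global IPv6 source address in use: {addr}")
        print("make sure the router actually applies a stateful firewall to it")
    else:
        print("no global IPv6 source address detected")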
(Phones are one notable exception. I need contactless payments to work.)
Is it wrong to judge people for their choice of AI providers?
Every single AI company, in my opinion, is committing fairly grave misdeeds with its ruthless scraping of the internet and lack of oversight.
Not to mention the shady backdoor deals going on with big tech and the current administration.
Grok is also pretty bad, with its whole gas-turbines-in-one-state, datacenter-in-another situation, and some possible environmental issues.
It's more of a pick-your-poison situation at this point.
But doesn't it need to have such free usage in order to overcome its image problems? Referring to itself as a Nazi [1][2], for example.
[1] https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-...
[2] https://www.politico.com/news/magazine/2025/07/10/musk-grok-...
Can they? I thought they could only do it if they're in the same LAN.
If you don't want untrustable black boxes hanging around, then your options become pretty limited.
You can DIY something with an SBC like a Raspberry Pi or whatever. You can hang USB cameras off of your computers like it's 2002 again. You can try to find something that OpenIPC or thingino or whatever supports. (You'll never finish with this project as the years wear on, the hardware fails, product availability ebbs and flows, and the scope changes. Maybe that sounds like a fun way to burn time for someone, but it doesn't sound like fun to me.)
Or, you can accept that the world is corrupted -- and by extension, the cameras are also all corrupted.
The safe solution is then actually pretty simple: Use wired-only cameras that work with Frigate (or whatever your local NVR of choice may be), keep them on their own private VLAN that lacks Internet access, and don't worry about it.
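To make the wired, local-only option concrete: once the camera sits on its own VLAN, everything the NVR needs is a local RTSP pull. A sketch that grabs a single frame with ffmpeg; the address, credentials, and stream path are placeholders (check your camera's docs), and Frigate essentially runs this kind of ffmpeg pull for you continuously:

    import subprocess

    # Placeholder camera URL on the isolated camera VLAN.
    RTSP_URL = "rtsp://user:password@10.20.0.10:554/stream1"

    # Grab one frame over TCP and write it out as a JPEG.
    # Requires ffmpeg to be installed on the NVR host.
    subprocess.run(
        [
            "ffmpeg",
            "-rtsp_transport", "tcp",   # more reliable than UDP on busy links
            "-i", RTSP_URL,
            "-frames:v", "1",           # stop after a single video frame
            "-y",                       # overwrite the output file if present
            "snapshot.jpg",
        ],
        check=True,
    )
    print("wrote snapshot.jpg")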
The less-safe solution is also pretty simple: Do what everyone else is doing, and just forget the problem exists at all. Switch your brain off, buy whatever, and use it. (And if there's an area that you don't want other people to see, then: Don't put a camera there.)
(We probably are not as interesting as we may think we are, anyway.)