Someone at my org used their main company email address for a root user on an account we just closed, and a second company email for our current account. We are past the time period where AWS allows for reverting the account deletion.
This now means that he isn’t allowed to use SSO via our external IdP because the email address he would use is forever attached to the deleted AWS account root user!
AWS support was rather terrible in providing help.
And on the flip side, I can easily see why not allowing email addresses to be reused is a reasonable security stance: email addresses are effectively immutable identifiers, so limiting each one to a single identity seems logical.
It sounds quite frustrating for this user, of course, but the situation strikes me as a bit silly.
Have you ever worked in a company of any size or complexity before?
1. Multiple accounts at the same company, spun up by different teams (either different departments, regions, operating divisions, or whatever) and eventually they want to consolidate
2. Acquisitions: Company A buys Company B, an admin at Company A takes over AWS account for Company B, then they eventually work on consolidating it down to one account
I'm not arguing that it was impossible to know the long term outcome here, but it doesn't mean it isn't frustrating. If you've spent any length of time working in AWS, you know that documentation can be difficult to find and parse.
I can certainly understand why the policy exists. What I think should be possible is in these situations to provide proof of ownership of the old email address so it can be released and reused somehow.
AWS has been around for quite a while now. It’s also not impossible to believe that there are companies out there that might have moved from aws to gcp or something, and maybe it’s time to move back.
If they aren't actually deleting the account in the background, then they still have a record of that e-mail address and should allow re-activation of the account tied to it through the sign-up process.
The author probably misunderstood what "account name" is in Azure Storage's context, as it's pretty much the equivalent of S3's bucket name, and is definitely still a large concern.
A single pool of unique names for storage accounts across all customers has been a very large source of frustration, especially with the really short name limit of only 24 characters.
I hope Microsoft follows suit and introduces a unique namespace per customer as well.
I've never really understood S3's determination not to have a v2 API. Yes, v1 would need to stick around for a long time, but there are ways to encourage migration, such as putting all future value-add on v2, and maybe eventually applying marginal increases to v1 API costs to cover the dev work involved in maintaining the legacy API. Instead they've just let themselves, and their customers, deal with avoidable pain.
And with no meaningful separator characters available! No dashes, underscores, or dots. Numbers and lowercase letters only. At least S3 and GCS allow dashes, so you can put a little organization prefix on them or something and not look like complete gibberish.
Storage accounts are one of the worst offenders here. I would really like to know what kind of internal shenanigans are going on there that prevent dashes to be used within storage account names.
I’ve lost track of servers in Azure because a name suddenly changed to all uppercase, and their search is case-sensitive even though whatever back-end stores the name isn’t.
This approach goes a long way toward democratizing the name space, since nobody can "own" the tag prefix. (10000 people can all share it). This can also be used to prevent squatting and reuse attacks - just burn the full account name if the corresponding user account is ever shut down. And it prevents early users from being able to snap up all the good names.
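A toy sketch of the scheme described above, with the account ID as the shared prefix and full names burned when an account shuts down (the account IDs, names, and the `--` separator are all made up for illustration):

```python
import re

BURNED: set[str] = set()  # full names retired when their owning account shuts down

def full_bucket_name(account_id: str, name: str) -> str:
    """Compose a globally unique name from an account-scoped one.

    Many accounts can each own a bucket called 'logs'; only the
    (account_id, name) pair must be unique.
    """
    full = f"{account_id}--{name}"
    if full in BURNED:
        raise ValueError(f"{full} was burned and can never be reused")
    if not re.fullmatch(r"[a-z0-9-]{3,63}", full):
        raise ValueError(f"invalid bucket name: {full}")
    return full

def burn_account(account_id: str, names: list[str]) -> None:
    # Retire every full name the deleted account ever owned,
    # preventing squatting and reuse attacks on those names.
    for name in names:
        BURNED.add(f"{account_id}--{name}")
```

Because the prefix is just the account ID, nobody can "own" it, and burning only the composed full names keeps the account-local names free for everyone else.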
Their stated reason[1] for doing so being:
> This lets you have the same username as someone else as long as you have different discriminators or different case letters. However, this also means you have to remember a set of 4-digit numbers and account for case sensitivity to connect with your friends.
[1]: https://support.discord.com/hc/en-us/articles/12620128861463...
> Starting March 4, 2024, Discord will begin assigning new usernames to users who have not chosen one themselves. If your username still has a discriminator (username#0000), Discord will begin assigning you a new, unique username as soon as March 4, 2024. We will try to assign you a unique username that is similar to your current username.
Just some days ago I received warning from Discord that they'll delete my account since I haven't logged in for two years.
> Your Discord account has been inactive for over 2 years, and is scheduled to be deleted on $DATE. But don’t worry! Dust off the cobwebs and prevent your account from being deleted just by logging in.
Imagine trying to connect with your friends... by telephone.
For buckets I thought easy to use names was a key feature in most cases. Otherwise why not assign randomly generated single use names? But now that they're adding a namespace that incorporates the account name - an unwieldy numeric ID - I don't understand.
In the case of buckets isn't it better to use your own domain anyway?
For particularly high risk activities if circumstances permit you can sidestep the entire issue by adding a layer of verification using a preshared public key. As an arbitrary example, on android installing an app with the same name but different signing key won't work. It essentially implements a TOFU model to verify the developer.
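A minimal sketch of that TOFU model (all names hypothetical; a real implementation would verify signatures cryptographically rather than just comparing key fingerprints):

```python
import hashlib

class TofuStore:
    """Trust-on-first-use: remember the first public key seen per name.

    Mirrors the Android-style rule described above: an artifact with the
    same name but a different signing key is rejected.
    """
    def __init__(self) -> None:
        self.pins: dict[str, str] = {}  # name -> sha256 fingerprint of public key

    def verify(self, name: str, public_key: bytes) -> bool:
        fingerprint = hashlib.sha256(public_key).hexdigest()
        # First use pins the key; later uses must match the pin.
        pinned = self.pins.setdefault(name, fingerprint)
        return pinned == fingerprint
```

With a preshared key the same check works without even needing the first-use step: you pin the fingerprint out of band before any name resolution happens.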
It won't surprise you that the scheme never caught on and has been decommissioned (you can now register any available domain as an individual as well). The difference is probably that few people use a personal TLD, but many use a name on some social media.
I'm excited for IaC tools like Terraform to incorporate this as their default behavior soon! The default behavior of Terraform and co is already to add a random hash suffix to the end of the bucket name to prevent such errors. This becoming standard practice has itself saved me days of not having to convince others to use such strategies prior to automation.
[1] https://aws.amazon.com/blogs/aws/introducing-account-regiona...
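A rough Python equivalent of that random-suffix pattern (the prefix and suffix length are arbitrary; the point is the shape, not any particular tool's default):

```python
import secrets

def unique_bucket_name(prefix: str, suffix_len: int = 8) -> str:
    """Append a random hex suffix to a logical bucket name.

    This makes the full name unpredictable, so a squatter cannot
    pre-register the name your automation is about to create.
    """
    suffix = secrets.token_hex(suffix_len // 2)  # 2 hex chars per byte
    return f"{prefix}-{suffix}"
```

The trade-off is that the name is no longer human-memorable, so the real name has to live in state (or outputs) rather than in people's heads.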
GCP, however, has done this to itself multiple times because they rely so heavily on project-id, most recently just this February: https://www.sentinelone.com/vulnerability-database/cve-2026-...
When a name becomes free and somebody else uses it, it points to another thing. What that means for consumers of the name depends on the context; most likely it means they should stop using it. If you yourself reassign the name, you can decide that the new thing will be considered identical to the old thing.
“While account IDs, like any identifying information, should be used and shared carefully, they are not considered secret, sensitive, or confidential information.” https://docs.aws.amazon.com/accounts/latest/reference/manage...
But probably best to not advertise it too much.
This is where IaC shines.
Edit: crossout incorrect info
In either case, the subdomain you use in DNS requests are not private. Attackers can collect those from passive DNS logs or in other ways.
"Leak" is maybe a bit of an exaggeration, although if someone MitM'd you they'd definitely be able to see it. But "leak" makes it seem like it's broadcast somehow, which obviously it isn't.
You'd need to check the privacy policy of your DNS provider to know if they share the data with anyone else. I've commonly seen the source IP address considered PII, but not the content of the query. Cloudflare's DNS, for example, shares queries with APNIC for research purposes. https://developers.cloudflare.com/1.1.1.1/privacy/public-dns... Other providers share much more broadly.
How does one execute this "passive DNS" without quite literally being on the receiving end, or at least sitting in between the sending and receiving ends? You're quite literally describing what I'm saying, which makes it less of a "leak" and more like "others might collect your data, even your ISP", which I'd say would be more accurate than "your DNS leaks".
> Passive DNS is a historical database of how domains have resolved to IP addresses over time, collected from recursive DNS servers around the world. It has been an industry-standard tool for more than a decade.
> Spamhaus’ Passive DNS cluster handles more than 200 million DNS records per hour and stores hundreds of billions of records per month, providing you with access to a vast lake of threat intelligence data.
https://www.spamhaus.com/resource-center/what-is-passive-dns...
If anyone wants them to be user-facing resources, then treat them as such, ensure they're secure, and don't store sensitive info on them. Otherwise, put a service in front of them and have the user go through it.
The S3 protocol was meant to make the lives of programmers easier, not end users.
My pet conspiracy theory: this article was written by bucket squatters who want to claim old bucket names after AI agents read this and blindly follow.
Namespaces are annoying but at least let you reorganize or fix mistakes. If you want to prevent squatting, rate limiting creation and deletion or using a quarantine window is more practical. No recovery path just rewards trolls and messes with anyone whose processes aren't perfect.
Not to mention the ergonomics would suck - suddenly your terraform destroy/apply loop breaks if there’s a bucket involved
a) AWS would need to maintain a database of all historical bucket names to know what to disallow. This is hard per region and even harder globally. It's easier to know what is currently in use than what has ever been used historically.
b) Even if they maintained a database of all historically used bucket names, the latency of querying it might be large enough to be annoying during the bucket creation process. Knowing AWS, they'd charge you per 1000 requests for "checking if bucket name exists" :p
c) AWS builds many of its own services on S3 (as indicated in the article), and I can imagine many of their internal services just rely on existing behaviour, i.e. being able to re-create the same bucket name.
As for c), I assume it's not just AWS relying on this behaviour. https://xkcd.com/1172/
I think that's an important defense that AWS should implement for existing buckets, to complement account scoped bucket.
This is not me criticising you. I totally understand the urge to say it. We're all thinking the thing you're thinking of. It takes effort not to give into it ;)
The reason I personally would refrain from making such comments is that they have the potential to end up as highest ranked comment. That would be a shame. Topic of S3 bucketsquatting is rather important and very interesting.
If you mean to use a "secret" prefix (i.e. a pepper), then that would generate effectively globally unique (and unpredictable) names each time, but you can't change the pepper, and it's only a matter of time before it leaks.
The public/private distinction seems moot here, too: the salt is a throwaway since you just need the bucket name.
Even if you do need to keep track of the salt, it should be safe for the attacker to know, at least with respect to this attack, because you already own the bucket which the attacker would otherwise hoard.
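For illustration, a peppered name derivation might look like this (HMAC rather than plain concatenation, which avoids length-extension quirks; every name here is hypothetical):

```python
import hashlib
import hmac

def derived_bucket_name(pepper: bytes, logical_name: str) -> str:
    """Derive a stable but unpredictable bucket name from a secret pepper.

    The same (pepper, logical_name) pair always yields the same bucket
    name, so your tooling stays deterministic, but an attacker who
    doesn't know the pepper can't predict (and pre-claim) the names
    you'll use.
    """
    digest = hmac.new(pepper, logical_name.encode(), hashlib.sha256).hexdigest()
    return f"{logical_name}-{digest[:12]}"
```

As noted above, once the bucket exists the derived suffix has done its job: knowing it afterwards doesn't help an attacker, because the name is already taken by you.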