One of the most memorable things they shared is that they'd throw USB sticks in the parking lot of the company they were pentesting, and somebody would always put one into a workstation to see what was on it and get p0wned.
Phishing isn't really that different.
Great reminder to set up Passkeys: https://help.x.com/en/managing-your-account/how-to-use-passk...
Never mind that that 10% is still 1500 people xD
It’s gone so far that they’re now sending them from our internal domains, so when the banner to warn me it was an external email wasn’t there, I also got got.
So, of course, we got to a point as a company where no one opened any email or clicked any link ever. This caused HR pain every year during open-enrollment season, for other annual trainings, etc.
At one point they started putting “THIS IS NOT A PHISH” in big red letters at the top of the email body to get folks to open emails and handle paperwork.
So then our trainers stole the “NOT A PHISH” header and got almost the entire company with that one email.
I got got when they sent out a phishing test email disguised as a survey of user satisfaction with the IT department. Honestly I couldn't even be mad about it - it looked like all those other sketchy corporate surveys complete with a link to a domain similar to Qualtrics (I think it was one or two letters off).
(Speaking as one of the technical users here. Of course, it wouldn't happen to ME! :P )
Congrats on the loot, though! Your former company can't be all bad. ;)
This pisses me off when the company I work for has a website for the new application of the week. I couldn't even begin to tell you how many websites we have; they don't have a list of them anywhere.
Are your phishing emails good? If so, would you mind name-dropping the company so I can make a pitch to switch to them?
These are so obviously useless. When the majority of your email has a warning banner, it stops being any sort of warning. It's like being at "code orange" for 20 years after 9/11; no-one maintained "heightened security awareness" for decades, it just became something else to filter.
All they've done is teach me to spot the phishing tests, because our email is configured to let the test bypass the banner.
One of my favorite quotes is from an unnamed architect of the plan in a 2012 article about Stuxnet/the cyber attacks on Iran's nuclear program:
"It turns out there is always an idiot around who doesn't think much about the thumb drive in their hand."
Relevant: https://www.schneier.com/blog/archives/2016/10/security_desi...
I know that at least on Linux mounting filesystems can lead to nasty things, so there's FUSE, but ... I have no idea what distros and desktop environments do by default. And then there's all the preview/thumbnail generators and metadata parsers, ...
The U stands for Universal, and it's awfully convenient, but it contributes to the security nightmare.
A CD we can just passively read the bytes off, but if we want our keyboards to just work when we plug them in, then it's going to be harder to secure a supposedly dumb storage device.
When I see these sophisticated phishing messages I like to click through and check out how well-made the phishing site itself is; sometimes I fill their form with bogus info to waste their time. So I opened the link in a sandboxed window, looked around, and entered nothing into any forms.
It turns out the email was from a pen testing firm my employer had hired, and it had a code baked into the url linked to me. So they reported that I had been successfully phished, even though I never input any data, let alone anything sensitive.
If that's the bar pen testing firms use to say that they've succeeded in phishing, then it's not very useful.
For what it's worth, all vendors I've worked with in that space report on both. I'm pretty sure even o365's built-in (and rather crude) tool reports both on "clicked link" and "submitted credentials". I'd estimate it's more likely your employer was able to tell the difference, but didn't bother differentiating between the two when assigning follow-up training, because just clicking is bad enough.
These devices want a physical interaction (this is called "user present") for most operations, typically signified by having a push button or contact sensor, so the attacker needs to have a proof of identity ready to sign, send that over, and then persuade the user to push the button or whatever. It's not that difficult, but it's one more step, and if that doesn't work you've wasted your shot.
Definitely use FIDO2, but understand that it's not foolproof. Malware, OAuth phishing, XSS, DNS hijacking, etc. will still pwn you.
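To make the "mutual authentication" point concrete, here is a toy sketch of the relying-party checks a WebAuthn server performs on an assertion. The field names follow the WebAuthn spec, but this is deliberately simplified (the actual signature verification is omitted, and real deployments should use a library like python-fido2); the sample origins are illustrative. The key idea: the browser, not the page, writes the `origin` into the client data, and the authenticator scopes keys to the RP ID, so a lookalike domain can't relay a usable assertion.

```python
import hashlib
import json

def verify_assertion(client_data_json: bytes, authenticator_data: bytes,
                     expected_origin: str, rp_id: str) -> bool:
    """Simplified WebAuthn relying-party checks (signature check omitted)."""
    client_data = json.loads(client_data_json)
    if client_data.get("type") != "webauthn.get":
        return False
    if client_data.get("origin") != expected_origin:
        return False  # phishing page: the browser baked in the wrong origin
    # The first 32 bytes of authenticatorData are SHA-256 of the RP ID,
    # so keys scoped to one site can't be exercised for another.
    if authenticator_data[:32] != hashlib.sha256(rp_id.encode()).digest():
        return False
    return True

# A legitimate login passes; a relay through a lookalike domain fails
# on the origin check even though the user "did everything right".
good = json.dumps({"type": "webauthn.get", "origin": "https://x.com",
                   "challenge": "abc"}).encode()
bad = json.dumps({"type": "webauthn.get", "origin": "https://members-x.com",
                  "challenge": "abc"}).encode()
auth_data = hashlib.sha256(b"x.com").digest() + b"\x01" + b"\x00" * 4

print(verify_assertion(good, auth_data, "https://x.com", "x.com"))  # True
print(verify_assertion(bad, auth_data, "https://x.com", "x.com"))   # False
```

This is why the parent's caveats matter: the protocol defeats credential relay, but malware or OAuth consent phishing sidesteps it entirely.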
It feels to me more like OSes ought to be more secure. But USB devices are extremely convenient.
New USB-HID keyboard? Ask it to input a sequence shown on screen to gain trust.
Though USB could be better too; having unique gadget serial numbers would help a lot. Matching by vendor:product at least means the duplicate-gadget attack would need to be targeted.
But, haven’t there been bugs where operating systems will auto run some executable as soon as the USB is plugged in? So, just to be paranoid, I’d classify just plugging the thing in as “running random executables.” At least as a non-security guy.
I wonder if anyone has tried going to a local staples or bestbuy something, and slipping the person at the register a bribe… “if anyone from so-and-so corp buys a flash drive here, put this one in their bag instead.”
Anyway, best to just put glue in the USB ports I guess.
Good systems these days won't accept such a "keyboard" until it's approved by the user.
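For anyone curious what that approval model looks like in practice, USBGuard on Linux implements exactly this allow-list approach. A hypothetical ruleset might look like the following; the device ID and serial are made up, and the interface-matching patterns are adapted from the usbguard-rules documentation:

```
# /etc/usbguard/rules.conf -- illustrative policy, IDs are invented
# Allow the one known keyboard, pinned by vendor:product and serial
allow id 046d:c31c serial "000001" with-interface 03:01:01
# Allow devices that are *only* mass storage
allow with-interface equals { 08:*:* }
# Reject the BadUSB pattern: storage that also presents a keyboard
reject with-interface all-of { 08:*:* 03:00:00 }
```

Anything not matched is handled by the daemon's implicit policy (typically block), so a thumb drive that suddenly enumerates as a keyboard never gets a chance to type.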
Working thus far on NetBSD, FreeBSD, and Linux. OpenBSD to come when I can actually get it to successfully install on the hardware that I have.
* https://jdebp.uk/Softwares/nosh/guide/user-virtual-terminal-...
In principle there's no reason that X11 servers or Wayland systems cannot similarly provide fine-grained control over auto-configuration instead of a just-automatically-merge-all-input-devices approach.
The first possible approach is to start off with a non-empty ruleset that simply uses the "old model" (q.v.) and then switch to "opt-in" before commissioning the machine.
The second possible approach is to configure the rules from empty having logged in via the network (or a serial terminal).
The third possible approach is actually the same answer that you are envisaging for the laptop. On the laptop you "know" where the USB builtin keyboard will appear, and you start off having a rule that exactly matches it. If there's a "known" keyboard that comes "in the box" with some other type of machine, you preconfigure for that whatever it is. You can loosen it to matching everything on one specific bus, or the specific vendor/product of the supplied keyboard wherever it may be plugged in, or some such, according to what is "known" about the system; and then tighten the ruleset before commissioning the machine, as before.
The fourth possible approach is to take the boot DASD out, add it to another machine, and change the rules with that machine.
The fifth possible approach is for there to be a step that is part of installation that enumerates what is present at installation time and sets up appropriate rules for it.
The enrichment facility had an air-gapped network, and just like our air-gapped networks, they had security requirements that mandated continuous anti-virus definition updates. The AV updates were brought in on a USB thumb drive that had been infected, because it WASN'T air-gapped when the updates were loaded. Obviously their AV tools didn't detect Stuxnet, because it was a state-sponsored, targeted attack, and not in the AV definition database.
So they were a victim of their own security policies, which were very effectively exploited.
I can't find any sources saying that.
There are a _lot_ of drivers for devices on a default windows install. There are a _lot more_ if you allow for Windows Update to install drivers for devices (which it does by default). I would not trust all of them to be secure against a malicious device.
I know this is not how Stuxnet worked (it instead used a vulnerability in how LNK files were shown in explorer.exe as the exploit), but that just goes to show how much surface there is to attack using this kind of USB stick.
And yeah, people still routinely plug random USBs in their computers. The average person is simultaneously curious and oblivious to this kind of threat (and I don't blame them - this kind of threat is hard to explain to a lay person).
They'll pick up the SD/TF card and put it into a card reader that they already have, and end up running something just by opening things out of curiosity to see what's on the card.
One could pull this same trick back in the days of floppy discs. Indeed, it was a standard caution three decades ago to reformat found or (someone else's) used floppy discs. Hell, at the time the truly cautious even reformatted bought-new pre-formatted floppy discs.
This isn't a USB-specific risk. It didn't come into being because of USB, and it doesn't go away when the storage medium becomes SD/TF cards.
I'm not, because I am talking about a USB-specific risk that has been described repeatedly throughout the thread. In fact, my initial response was to a comment describing that risk:
> A USB can pretend to be just about any type of device to get the appropriate driver installed and loaded. They can then send malformed packets to that driver to trigger some vulnerability and take over the system.
The discussion is not simply about people running malware voluntarily because they have mystery data available to them. It is about the fact that the hardware itself can behave maliciously, causing malware to run without any interaction from the user beyond being plugged in.
The most commonly described mechanism is that the USB device represents itself to the computer as a keyboard rather than as mass storage; then sends data as if the user had typed keyboard shortcuts to open a command prompt, terminal commands etc. Because of common controller hardware on USB keys, it's even possible for a compromised computer to infect other keys plugged into it, causing them to behave in the same way. This is called https://en.wikipedia.org/wiki/BadUSB and the exploit technique has been publicly known for over a decade.
A MicroSD card cannot represent anything other than storage, by design.
1. SD is not storage-only; see SDIO cards. While I don't think Windows auto-installs drivers for SDIO devices on connection, it still feels risky.
2. It’s worth noting Stuxnet would have worked equally well on a bog-standard SD card, relying only on a malformed file ^^.
I wouldn’t plug a random microsd in a computer I cared about.
(They will run programs, though. They always do.)
Live. On stage. In minutes. People fall for it so reliably that you can do that.
When we ran it we got fake vouchers for "cost coffee" with a redeem link, new negative reviews of the company on "trustplot" with a reply link, and abnormal activity on your "whatapp" with a map of Russia, and a report link. They were exceptionally successful even despite the silly names.
They have no import/export so you are stuck in the iOS/Android ecosystem or have to do the passkey setup for all pages all over again
Use passkeys for everything, like Thomas says.
I’d like to write a follow-up that covers authentication apps/devices, but I need to do some research, and find free versions.
> They just rely on you being busy, or out, or tired, and just not checking closely enough
For example, Twitter relatively recently changed from authenticating on twitter.com to redirecting you to x.com to authenticate (interestingly, Firefox somehow still knows to auto fill my password, but not my username on the first page).
Now on a few occasions I’ve had to copy passwords in order to access things in a different browser, and I think I did encounter one site some years ago where autofill didn’t work, but I really do find autofill almost completely reliable.
In Texas I've had more than one site where you create the login on one site but use that same login on multiple different domains that are NOT directly connected to a single authentication site (id.me in the example).
You'll identify on id.me
People have just gotten used to this sort of thing unfortunately
For password safe users, auth being handled entirely on a different origin is completely fine, so long as the credentials are bound to (only used on, including initial registration) that origin. The hazard is only when login occurs via multiple domains—which in this case would mean if you had <input> elements on both tax.gov and id.me taking the same username and password, which I don’t believe you do. Your password safe won’t care if you started at https://tax.gov, the origin you created the credentials on was https://id.me, and so that’s the origin it will autofill for.
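The rule the parent describes can be sketched as a toy autofill decision. Real password managers are more nuanced (registrable-domain matching, user-approved equivalent domains), and `should_autofill` is a hypothetical helper, but the principle holds: the match is against the origin where the credential was registered, not where the login journey started.

```python
from urllib.parse import urlsplit

def should_autofill(saved_origin: str, current_url: str) -> bool:
    """Offer a credential only on the origin it was saved under."""
    saved = urlsplit(saved_origin)
    current = urlsplit(current_url)
    # Compare the full origin tuple: scheme, host, and port.
    return (saved.scheme, saved.hostname, saved.port) == \
           (current.scheme, current.hostname, current.port)

# A credential registered on id.me autofills there, even if you
# arrived via tax.gov -- and never on a lookalike domain.
print(should_autofill("https://id.me", "https://id.me/session/new"))  # True
print(should_autofill("https://id.me", "https://tax.gov/login"))      # False
print(should_autofill("https://id.me", "https://id-me.example"))      # False
```

So a multi-domain login flow is fine for the password safe as long as the credential-entry page itself always lives on one origin.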
Example: Citi bank has citibankonline.com, citi.com, citidirect.com, citientertainment.com, etc. Would you be suspicious of a link to citibankdirect.com? Would you check the certificate for each link going there and trace it down, or just assume Citi is up to their shenanigans again and paste the password manually? It's a jungle out there.
What do you get from checking a certificate? Oh yeah, must really be citibank because they have a shitton of SANs? I'd guess most banks do have a cert with an organization name, but organization names can be misleading, and some banks might use LetsEncrypt?
For example, BitWarden has spent the past month refusing to auto fill fields for me. Bugs are really not uncommon at all, I'd think my password manager is broken before I thought I'm getting phished (which is exactly how they get you).
Luckily the only things I don't use passkeys or hardware keys for are things I don't care about, so I can't even remember what was phished. It goes to show, though, that that's what saved me, not the password manager, not my strong password, nothing.
Any site that wants to phish you will either just not show the passkey flow and hope you forget or show it and make it look like it failed and pop up a helpful message about being able to register a new Passkey once you're logged in. And Passkeys are so finicky in browsers that I'd buy it.
According to WebAuthn, this is not true. Such passkeys are considered "synced passkeys" which are distinct from "device bound" passkeys, which are supposed to be stored in an HSM. WebAuthn allows for an RP to "require" (scare quotes) that the passkey be device bound. Furthermore, the RP can "require" that a specific key store be used. Microsoft enterprise for example requires use of Microsoft Authenticator.
You might ask, how is this enforced? For example, can't KeepassXC simply report that it is a hardware device, or that it is Microsoft Authenticator?
The answer is, there are no mechanisms to enforce this. Yes, KeepassXC can do this. So while you are actually correct that it's possible, the protocol itself pretends that it isn't, which is just one of the many issues with passkeys.
A few years ago, I managed to get our InfoSec head phished (as a test). No one is safe :)
We also ended up dropping our email security provider because they consistently missed these. We evaluated/trialed almost a dozen different providers and finally found one that did detect every X phishing email! (Check Point fyi, not affiliated)
It was actually embarrassing for most of those security companies because the signs of phishing are very obvious if you look.
It's much much harder to block emails that aren't actually phishing but have components that would flag them anyway.
It's pretty incredible the level of UI engineering that went into it.
Some screenshots I took: https://x.com/grinich/status/1963744947053703309
Sounds easy enough in theory. How do you do that in practice?
That’s it. The single working Defense against credential theft.
I don't know much about crypto, so I'm not sure what makes them call the scam 'not very plausible' and say it 'probably generated $0 for the attackers'. Is that something that can be verified by checking the wallet used on that fake landing page?
"Understanding the Efficacy of Phishing Training in Practice" https://arianamirian.com/docs/ieee-25.pdf
It's like those 2FA SMS that say "don't tell this token to anyone!" while you literally share it with the website that you login to. I am always so frustrated when I receive those
Bullseye. At least they take it with good humor.
Code-based 2FA, on the other hand, is completely useless against phishing. If I'm logging in, I'm logging in, and you're getting my 2FA code (regardless of whether it's coming from an SMS or an app).
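That's because an RFC 4226/6238 code is just a short-lived value derived from a shared secret; nothing in it identifies the site you type it into. A minimal HOTP implementation (the building block TOTP is defined on top of) makes that clear; this is a sketch of the algorithm itself, verified against the RFC 4226 test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> 755224
print(hotp(b"12345678901234567890", 0))  # 755224

# TOTP is just hotp(secret, floor(unix_time / 30)). The six digits can
# be phished and replayed within the time window, because nothing binds
# them to the origin they're submitted to.
```

A phishing page simply forwards whatever you type to the real site in real time, which is exactly why these codes stop credential-stuffing but not phishing.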
"I went to the link which is on mailchimp-sso.com and entered my credentials which - crucially - did not auto-complete from 1Password. I then entered the OTP and the page hung. Moments later, the penny dropped, and I logged onto the official website, which Mailchimp confirmed via a notification email which showed my London IP address:"
> the 1Password browser plugin would have noticed that “members-x.com” wasn’t an “x.com” host.
But shared accounts are tricky here, like the post says it's not part of their IdP / SSO and can't be, so it has to be something different. Yes, they can and should use Passkeys and/or 1password browser integration, but if you only have a few shared accounts, that difference makes for a different workflow regardless.
"Properly working password managers" do not provide a strong defense against real world phishing attacks. The weak link of a phishing attack is human fallibility.
The key here is the hacker must create the most incisive, scary email that will short circuit your higher brain functions and get you to log in.
I should have realized the fact that bitwarden did not autofill and take that as a sign.
... and specifically by using the link in the email, yes?
But it's also why sites that don't work well with a password manager are actively setting their users up to be phished.
Same with every site that uses sketchy domains, or worse redirects you to xyz.auth0.com to sign in.
It's always possible to have issues, of course, and to make mistakes. But there's a risk profile to this kind of stuff that doesn't align well with how certain people work. Yet those same people will jump on these to fix it up!
Blaming some attribute about user as why they fell for a phishing attempt is categorically misguided.
I have been almost got, a couple of times. I'm not sure, but I may have realized that I got got, about 0.5 seconds after clicking[0], and was able to lock down, before they were able to grab it.
The avenue for catching this is that the password manager’s autofill won’t work on the phishing site, and the user could notice that and catch that it’s a malicious domain
Whether that’s via a hotkey or not seems totally irrelevant.
By removing the expectation that my password manager is going to autofill something, I'm now making the conscious decision to always try to fill it myself.
This makes me think more about what I'm doing, and prevents me from making nearly as many mistakes. I don't let my guard down to let the tools do all the work for me. I have to think: ok, I'll autofill things now, realize that it isn't working, and then look more closely at why it wasn't working as I expected.
I won't just blindly copy/paste my credentials into the site because whoops, I think it might have worked previously.
Obviously SSO-y stuff is _better_, but autofill seems important for helping to prevent this kind of scam. Doesn't prevent everything of course!
Since this attack happened despite Kurt using 1Password, I'm really not all that receptive to the idea that 1Password is a good answer to this problem.
We can always make mistakes of course. And yeah, sometimes we just haven't done something.
Honestly it sounds like you are missing the point while simultaneously using a bad password manager.
* "We've received reports about the latest content" - weird copy
* "which doesn't meet X Terms of Service" - bad grammar lol
* "Important:Simply ..." - no spacing lol
* "Simply removing the content from your page doesn't help your case" - weird tone
* "We've opened a support portal for you " - weird copy
There are so many red flags here if you're a native English speaker.
There are some UX red flags as well, but I admit those are much less noticeable.
* Weird and inconsistent font size/weight
* Massive border radius on the twitter card image (lol)
* Gap sizes are weird/small
* Weird CTA
The whole theory of phishing, and especially targeted phishing, is to present a scenario that tricks the user into ignoring the red flags. Usually, this is an urgent call to action that something negative will happen, coupled with a tie-in to something that seems legit. In this case, it was referencing a real post that the company had made.
A parallel example is when parents get phone calls saying "hey it's your kid, I took a surprise trip to a tiny island nation and I've been kidnapped, I need you to wire $1000 immediately or they're going to kill me". That interaction is full of red flags, but the psychological hit is massive and people pay out all the time.
It's x.com/leighleighsf; we've tried every channel short of filing a small-claims lawsuit in Texas to get her account back.
> ...
> If you were inclined to take us up on an “airdrop” to “claim a share” of the “token” powering Fly.io, the site is still up. You can connect your wallet it [sic] it! You’ll lose all your money. But if we’d actually done an ICO, you’d have lost all your money anyways.
> Somebody involved in pulling this attack off had to come up with “own a piece of the sky!”, and I think that’s punishment enough for them.
I was amused by all of this, but I still feel like they should care more about how impactful this was for anyone who got crypto-scammed at the link. I mean, yes, those are people who would believe the story and also click a link like that. But what if fly.io were found to share liability?
Sure Twitter is rubbish, but it's still a huge platform, still tied to your brand, you're still using it, so it can still hurt you. Either take it seriously or stop using it.
We shouldn't have, and we do take it seriously now.
Wouldn't that also require convincing your customers to follow that account?
https://apnews.com/article/myanmar-usaid-thailand-trump-rubi...
You gotta take the Legos away from the CEO! Being CEO means you stop doing the other stuff! Sorry!
And yes they have their silly disclaimer on their blog, but this is Yet Another "oh lol we made a whoopsie" tone that they've taken in the past several times for "real" issues. My favorite being "we did a thing, you should have read the forums where we posted about it, but clearly some of you didn't". You have my e-mail address!
Please.... please... get real comms. I'm tired of the "oh lol we're just doing shit" vibes from the only place I can _barely_ recommend as an alternative to Heroku. I don't need the cuteness. And 60% of that is because one of your main competitors has a totally unsearchable name.
Still using fly, just annoyed.
The "CEO" thing is just a running joke. Kurt's an engineer. Any of us could have been taken by this. I joke about this because I assume everybody gets the subtext, which is that anything you don't have behind phishing-resistant authentication is going to get phished. You apparently took it on the surface level, and believe I'm actually dunking on Kurt. No.
I was thinking about, IIRC, back in 2023[0], where you all were suffering a lot of issues. And I _believe_ I saw some chatter about Fly building out a team of support/devops-y/SRE engineers around that time. And I had just assumed up until there that, as a company about operations, that you would already have a team that is about reliability.
I am not a major user of you (You're only selling me like 40 bucks a month of compute/storage/etc), but I had relatively often been hitting weird stuff. Some of it was me, some of it was your side. But... well... I was using Heroku for this stuff before and it seemed to run swimmingly for very long. So I was definitely a bit like "oh OK so you just didn't care about reliability until then?" I mean this lightly, but I started basically anti-recommending you after the combo of the issues and the statements your team was making (both on this kind of operations and also communications after the fact).
I think you all generally do this better now though, so maybe I'm just bringing up old grudges.
> You apparently took it on the surface level, and believe I'm actually dunking on Kurt.
No, I took it in the same tone I take a lot of your company's writing.
> The "CEO" thing is just a running joke. Kurt's an engineer.
I think if you are the CEO of a company above a certain (very low!) headcount you put down the Legos. There are enough "running a company" things to do. Maybe your dynamics are different, since your team is indeed quite small according to the teams page.
Every startup engineer has had to deal with "The CEO is the one with admin rights on this account and he's not doing the thing, because somehow we haven't pried the credentials from him so the people doing the work can do it". And then the dual of this: "The CEO fixes the thing at 2AM but does it the wrong way and now the thing is weird". A way you avoid this is by yanking all credentials from the CEO.
I'm being glib here, because obviously y'all have your success, the Twitter thing "doesn't matter", etc. I just want to be able to recommend you fully, and the issues I hit + the amateur hour comms in response (EDIT: in the past) gets on my nerves and prevents me from doing it!
Anyways, I want you all to succeed.
[0]: https://community.fly.io/t/reliability-its-not-great/11253
Now that Kurt doesn't have commit access, who do I ask to get internal Fly Slack bot fizz off of my behind.
I was in a devrel channel for a short while and ever since it has asked me to write updates in a channel I don't have access to. Frequently.
Feels like this kind of detection is hard to balance, and calling legit websites possible phishing might be problematic...
We would like to think that we're the smart ones and above such low level types of exploits, but the reality is that they can catch us at any moment on a good or bad day.
Good write up
They literally admit they pay a Zoomer to make memes for Twitter. I think you are falling for the PR.
MFA is not in general phish-resistant. But Passkeys, U2F, and FIDO2 generally are, because they mutually authenticate; they're not just "one time passwords" you type into a field, but rather a cryptographic protocol running between you and the site.
For everyone reading though, you should try fly. Unaffiliated except for being a happy customer. 50 lines of toml is so so much better than 1k+ lines of cloudformation.
We will get to this though.
> Fly.io supports Google and GitHub as Identity Providers[1]
How about you just support SAML like a real enterprise vendor, so IdP-specific support isn't your problem anymore? I get it, SAML is hard, but it's really the One True Path when it comes to this stuff.
I'm not exaggerating; you can use the search bar and find longer comments from me on SAML and XMLDSIG. You might just as well ask when we're going to implement DNSSEC.
My favourite slop-generator summarizes this as "While SAML is significantly more complex to implement than OIDC, its design for robust enterprise federation and its maturity have resulted in vendors converging on a more uniform interpretation of its detailed specification, reducing the relative frequency of non-standard implementation quirks when dealing with core B2B SSO scenarios." That being said, if your org is more B2C, maybe it makes sense you haven't prioritized this yet. You'll get there one day :)
tru tru
Every system is only as secure as its weakest link. If the company's CEO is idiotic enough to pull credentials from 1Password and manually copy-paste them into a random website whose domain does not match the service that issued them, what is to say they won't do the same for an MFA token?
With this setup, you can't fuck up.
That’s what makes it phishing-resistant.
And calling it $FLY like a crypto thing is part of the joke.