Yes, at the end of the day you're going to need to move stuff from non-air-gapped devices to air-gapped devices and vice-versa. You can assume the non-air-gapped devices are completely compromised. But why is the air-gapped device not configured to always show file extensions?
This is literally working because Windows is configured by default to hide extensions for known file types, and the attack relies on hiding a folder and replacing it with an executable that has a folder icon and the same name plus `.exe`.
If you're designing an airgapped system, this is literally the first thing you should be worried about after ensuring that the system is actually airgapped.
At least windows explorer should have been configured to show extensions (and some training delivered to ensure that the people using these systems have the diligence to notice unusual file extensions with familiar looking icons).
It would be even better if the file explorer were replaced with something less easy to fool: something that does not load custom icons, never hides directories, and maybe even prevents access if the flash drive shows indications of shenanigans (unusually named files, executables, hidden folders) suggesting something weird is going on.
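A hardened transfer tool along these lines could mechanically flag the exact trick used in this attack. Here is a minimal sketch; the function name and the `(name, is_dir, is_hidden)` entry format are made up for illustration, and a real tool would walk the mounted drive instead of taking a list:

```python
# Sketch of a pre-transfer check a hardened file browser could run.
# It flags any .exe whose base name matches a sibling directory --
# exactly the "hidden folder + folder-icon executable" trick above.

def flag_folder_masquerade(entries):
    """entries: list of (name, is_dir, is_hidden) tuples for one directory.

    Returns the names of .exe files that mimic a sibling folder.
    """
    dirs = {name.lower() for name, is_dir, _ in entries if is_dir}
    suspicious = []
    for name, is_dir, _hidden in entries:
        if not is_dir and name.lower().endswith(".exe"):
            base = name[:-4].lower()
            if base in dirs:
                suspicious.append(name)
    return suspicious
```

A drive containing a hidden `Reports` folder next to a `Reports.exe` would be flagged before the user ever sees the fake "folder".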
It's a good job that, unlike with Stuxnet, nobody plugged in a flash drive from the literal car park, but this is pretty poor on the part of the people designing/implementing the airgapped environment.
And next time if some other airgapped vuln is reported, that will be literally the first thing people should be worried about! God, people are so stupid, if only they would just do literally the first things they should be worried about.
As the sibling comments to this one pointed out, changing the default Explorer settings is the first thing most people do.
In fact, showing file extensions has been standard security advice for years.
In conclusion, yes I believe if something is in a common list of things you should do to make your windows system more secure (for like, people who are not security experts) and you don't do it then probably "God, people are so stupid" is a reasonable response.
I wouldn't blame most people for not changing this setting.
Except air-gapped systems should be set up by security experts, so stupidity all 'round.
I would also consider disabling USB ports in air-gapped systems. You can still buy PS/2 keyboards and mice. Server and maybe some workstation motherboards still have PS/2 ports (and there are PS/2 PCI cards). For sneakernet file transfer you can allow use of an SD card. That way, if you see a USB cable or other device in an air-gapped environment, it should be an immediate red flag.
I don't think they think the extension is "ugly". I think they expect their users to think that way, on average, and don't want to deal with being told such (or getting support requests because ignorant users tried to remove that part of the filename and now can't open Word or whatever).
I wouldn't call it the result of backwards compatibility, either - although Windows' level of backwards compatibility is insane and does impose a continuous tax. But in the current case, there would need to be a new system for inferring executability before we could talk about removing the existing one. AFAIK Windows uses file headers to determine the format of an executable file (i.e. how to load it), but not to decide whether a given file should be deemed executable at all. And the attrib bits, also AFAIK, don't include anything for execution either.
>I would also consider disabling USB ports in air-gapped systems.
I assume they aren't worried about "BadUSB" type attacks because they're in control of the physical media used for transfer.
Funny side story: Windows pops a confirmation message if you change or remove the extension of a file name as part of a rename operation.
There's no way to disable this message outside of writing an autohotkey script to check for the prompt and auto-accept it. (I did this once, no I don't have the AHK script, but I don't recall it being hard to write.)
On a similar funny side note, there's no way to tell Windows to always open files with no extension in a specified application (e.g. gvim). But you can edit the registry to do it.
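For reference, that registry tweak is typically a small .reg import. The `Unknown` progid is the commonly cited hook for files Windows cannot classify by extension, but it's worth verifying on your Windows version, and the gvim path below is just a placeholder:

```reg
Windows Registry Editor Version 5.00

; Hypothetical example -- open extensionless/unclassified files in gvim.
; "Unknown" is the progid Windows commonly uses for files it cannot
; classify; verify this key and the install path on your system.
[HKEY_CLASSES_ROOT\Unknown\shell\open\command]
@="\"C:\\Program Files\\Vim\\vim91\\gvim.exe\" \"%1\""
```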
> On a similar funny side note,
Those things are only funny if you don't have to use this "Operating System" hours a day. Because then it becomes a PITA.
Well, no. This bullshit was introduced later by Microsoft to make Windows more "user friendly", along the same lines as Google and Mozilla truncating URLs in browser address bars.
In my view, the best use of an airgapped machine would be for storage of extremely dense and sensitive information such as cryptographic keys. Signing or encryption should be accomplished through an inspectable data channel requiring manual interaction such as QR codes. When every bit in and out of a machine serves a purpose, it's much less likely to leak.
Example: show a qr code to a camera on the airgapped machine and get a qr code on the screen containing the signature of the data presented to it. There is very little room for nefarious code execution or data transmission.
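A toy version of that flow, with base64 strings standing in for the QR codes and stdlib HMAC standing in for a real signature scheme (all names here are illustrative; a real design would use an asymmetric scheme such as Ed25519 so the verifier only ever holds a public key, whereas HMAC forces the verifier to share the secret):

```python
import base64
import hashlib
import hmac

# Illustrative only: this key would live solely on the air-gapped box.
AIRGAP_KEY = b"key-that-never-leaves-the-airgapped-box"

def present_request(data: bytes) -> str:
    """Online side: pack data into the 'QR code' shown to the camera
    (base64 text stands in for the actual QR encoding)."""
    return base64.b64encode(data).decode("ascii")

def sign_on_airgapped_machine(qr_payload: str) -> str:
    """Air-gapped side: decode the request, return a signature 'QR code'."""
    data = base64.b64decode(qr_payload)
    return hmac.new(AIRGAP_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, sig: str) -> bool:
    expected = hmac.new(AIRGAP_KEY, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

The point of the design survives the simplification: the only things crossing the gap are a request payload in and a fixed-size signature out, both human-inspectable.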
The user would then use a terminal emulator to connect to something like a BBS[1] where they could browse files, and download or upload files to the connected USB storage device using XMODEM[2] like in the good old days.
edit: It could of course also filter the files, for example not list any executable files, and prevent transfer of executable files based on scanning their contents.
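Content-based filtering of executables can be surprisingly cheap, since executable formats carry magic bytes regardless of what the extension claims. A minimal sketch (a real filter would also check Mach-O magics, script shebangs, macro-bearing documents, etc.):

```python
# Minimal content-based executable check a transfer filter could apply.
# Windows PE files start with "MZ"; Linux ELF files with 0x7f "ELF".
PE_MAGIC = b"MZ"
ELF_MAGIC = b"\x7fELF"

def looks_like_executable(header: bytes) -> bool:
    """header: the first few bytes of a file, e.g. open(p, 'rb').read(4)."""
    return header.startswith(PE_MAGIC) or header.startswith(ELF_MAGIC)
```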
The MITM device would be implemented using a microcontroller with signed firmware, and careful design to prevent a connected USB device from doing shenanigans like voltage glitching. This would include using isolated DC-DC converters and isolated data lines, like this[3].
The MITM would only interact with the mass-storage device class. If the connected device presents itself as anything more, say a keyboard, those extra interfaces would simply be ignored.
The user must be prevented from bypassing the MITM device, though this could be done through physical means.
[1]: https://en.wikipedia.org/wiki/Bulletin_board_system
[2]: https://en.wikipedia.org/wiki/XMODEM
[3]: https://ez.analog.com/ez-blogs/b/engineerzone-spotlight/post...
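The storage-only policy described above reduces to a tiny allow-list over USB interface class codes (per the USB-IF class code list, 0x08 is Mass Storage and 0x03 is HID, i.e. keyboards and mice). A sketch of the decision logic, with the function name invented for illustration:

```python
# USB-IF standardized interface class codes (subset).
MASS_STORAGE = 0x08
HID = 0x03  # keyboards, mice

def allowed_interfaces(interface_classes):
    """Given the class codes a plugged-in device advertises, return only
    those a storage-only MITM should service. A composite device that
    also claims to be a keyboard has its HID interface dropped."""
    return [c for c in interface_classes if c == MASS_STORAGE]
```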
Have a few ports on it:
- "This device can only be a mouse" -> USB
- "This device can only be a keyboard" -> USB
Then it filters everything coming in to ensure that it matches the desired type of activity.
For USB drives, I'm tempted to say it should read the USB drive once, and copy all information to internal storage in order to prevent data being sent to the usb via timed or coordinated reads. This would allow a truly read only thumbdrive.
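The "read it exactly once" idea above amounts to snapshotting the drive into internal storage and serving every later read from the snapshot, so the physical device never observes a second access pattern it could use as a covert channel. A sketch (function name and staging prefix are made up):

```python
import os
import shutil
import tempfile

def snapshot_drive(mount_point: str) -> str:
    """Copy the mounted drive's contents once into internal storage and
    return the snapshot path; all subsequent reads go to the snapshot,
    never back to the device."""
    staging = tempfile.mkdtemp(prefix="usb_snapshot_")
    dest = os.path.join(staging, "contents")
    shutil.copytree(mount_point, dest)
    return dest
```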
On the other hand no matter the transport you’re probably going to get owned by well known vulnerabilities in any software processing data from the internet-connected side, if you’re using the air gap as an excuse to avoid patching or otherwise caring about secure coding practices.
As for patching, you would ensure a secure root of trust and only allow read-only media to deliver said updates, as another sibling points out.
Air gapping is still valuable, but doing it well ranges from hard to impossible. For example, Stuxnet was delivered by an insider. So good physical security and monitoring are also needed to protect against insider threats.
Anything that does come out of the infected area intact has to be cleaned or inspected carefully to ensure it is free of the "sensitive data" infection.
Any device without a screen is somewhat useless here, especially for crypto, because you want to see what you are signing before you sign it.
So you'd snapshot its QR code, hand-carry the slate to air-gapped Computer B, press a button to wake it up, brandish the "copied" QR code in front of B's camera, etc. Maybe even take one and (with careful labeling) put it into a safe, depending on how long you plan to store it.
You could do something similar with a camera and thermal-paper printer, but then the physical artifact needs to be reliably destroyed by manual effort, as opposed to auto-erasure.
https://foundation.xyz/passport/
https://store.blockstream.com/products/blockstream-jade-hard...
I’m okay with it taking a minute or two to install software on a high security system, e.g., the root cryptography for our military radios.
…maybe I should get into the business of “paper drives”.
It doesn't matter whether the smartphone is internet-connected or not, as the slate's contents wouldn't be of any use without hacking one of the machines, and if you could do that, you wouldn't need to hack the slate in the first place.
In contrast, a worker can sign-out a hardened device, and when they return it on the way out you can be reasonably sure they couldn't have easily made copies. Plus the scanner won't capture arbitrary pictures in the first place, and it can be set to auto-wipe after X minutes of inactivity.
If you give people 10 unexposed sheets and require them to return a total of 10 used/unused on the way out, that's susceptible to them smuggling in an unexposed sheet, and you're back to underwear searches again.
This would be a bit slow. Say we use a barcode: if we assume a single barcode can hold 1500 characters (text twice as long as your comment), a blog entry may need 4-5 barcodes. Not undoable.
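The arithmetic behind that estimate is easy to check. For scale, a version-40 QR code at the lowest error-correction level holds up to 2953 bytes of 8-bit data (a commonly published figure), so the 1500-character-per-code assumption above is conservative:

```python
import math

# Commonly published capacity of a version-40 QR code, error level L,
# in 8-bit binary mode.
QR_V40_L_BYTES = 2953

def codes_needed(text: str, capacity: int = 1500) -> int:
    """How many codes to carry `text`, assuming `capacity` bytes each."""
    return math.ceil(len(text.encode("utf-8")) / capacity)
```

A 7000-character blog entry comes out to 5 codes at the assumed capacity, matching the 4-5 estimate above.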
Such a machine would not have a camera, WiFi, BT, or any input or output mechanism of any kind.
But perhaps we are just saying the same thing, and I just prefer my way of saying it over admitting to yours...
Is it just that the amount of data it holds is more constrained?
Source? Unless you're using something like USB 4 (i.e. Thunderbolt), USB devices don't have DMA access.
"BadUSB is a computer security attack using USB devices that are programmed with malicious software.[2] For example, USB flash drives can contain a programmable Intel 8051 microcontroller, which can be reprogrammed, turning a USB flash drive into a malicious device.[3] This attack works by programming the fake USB flash drive to emulate a keyboard. Once it is plugged into a computer, it is automatically recognized and allowed to interact with the computer. It can then initiate a series of keystrokes which open a command window and issue commands to download malware." -- https://en.wikipedia.org/wiki/BadUSB
That said, cameras are more of a commodity.
QR and typing: see TOTP tokens!
0 - https://spectrum.ieee.org/the-crazy-story-of-how-soviet-russ...
so you consider that someone may be reading and possibly modifying data on any computer/phone you own, okay
> It's one of the reasons I've been very down on cryptocurrencies in general
but you are willing to have a form of money that is only accessible via said computer/phone, which someone can read and use as if they were you?
how does it work? how's this not a contradiction?
I have a low opinion about the usefulness of cryptocurrencies because true security is so difficult. It's basically impossible, even if you don't make any mistakes.
I really enjoy this kind of stuff, and loved reading about the z-cash ceremony. I'm not going to those lengths to protect my secrets, so I feel it's better if I don't hold a lot of wealth in such a fragile way.
I used to like it, but now I don't. It's still neat, but it's too prone to costly mistakes.
not everyone is a stock marketeer - I personally keep reading bullish as derived from a bully - something clearly negative
[1] https://www.nerdwallet.com/article/investing/bullish-vs-bear...
This is analogous to a power grid stripped of all fuses and circuit breakers to make it easier to design toasters.
We've studied this problem since 1972[1]. Solutions were found (but the Internet Archive is down, so I can't be sure [2] points to the right files now).
[1] https://csrc.nist.rip/publications/history/ande72.pdf
[2] https://web.archive.org/web/20120919111301/http://www.albany...
People have been doing the same for Apple when it tried to bring explicit app permissions to MacOS. https://tidbits.com/2024/08/12/macos-15-sequoias-excessive-p...
That's lazy design, and it doesn't work.
You can make an argument that UAC was part of a similar strategy, but not paying for an EV certificate only results in a one-time annoyance for your users, not a continuous one. UAC is equivalent to Gatekeeper. This permissions nonsense is worse than UAC.
Do you want to give your wallet to the cashier? (Yes/No)
Computers don't have to be that stupid about it. It was someone inside Microsoft being passive-aggressive, instead of actually doing their job and presenting useful options at runtime, that resulted in the horror that was UAC. We can all agree UAC and permission flags suck.
And yet, most ecommerce shops happily remember the credit card on file (Amazon, Steam, etc.), and you can literally one-click buy.
So, even if you’d need admin confirmation to run each new binary, it wouldn’t help - because no new binaries are executed, just python with a new set of scripts.
And, correct me if I’m wrong, but preventing the OS from running new scripts would be virtually impossible.
I don't remember the whole details, but I believe it installed an autorun.inf file on all USB drives so that inserting the drive on another PC would install it automatically.
It is strange to me that a security-conscious organisation such as a ministry of foreign affairs would build an air-gapped system this way. Possibly it's a compliance checklist item from their parent organisation, but with no oversight?
The US has "forward deployed" State Department personnel that handle information security of embassies and consulates in a standardised way; probably this SE Asian country (and the EU organisation) should follow suit.
See: Hezbollah & pagers.
(Also, their walkie-talkies exploded.)
* force prompting executing anything off external media
* disallow anything other than input devices for USB
* disallow unsigned binaries from running
* work to require usb peripherals to carry a unique cryptographic signature so that you can easily lock the set of allowed devices once the machine is set up
Heck, a lot of this stuff could be valuable to help secure corporate IT machines too.
Fast forward a few years, and the computer was still running great. The desktop and downloads folders were full of messengers, "flash players" and other malware - but all the binaries were throwing cryptic errors. Since no one in IT was around or cared, nobody figured out how to edit the allow list. The computer was deemed half-broken. But when neighboring PCs were completely infested, this one could still open, edit, and print office docs flawlessly.
It felt like a magic fix for shared Windows PC security.
ACRORD32.EXE was actually cmd.exe
WINWORD.EXE was actually Mozilla
...and so on
Edit: one of those exes was regedit, and every time I sat down I'd delete all the keys named Policies as a routine exercise. After that, restart explorer with one of the tricks. I don't remember the specific one but it wasn't officially documented iirc.
https://superuser.com/questions/335917/how-can-you-do-a-clea...
Physical security is another big factor; there is a long checklist for a SCIF that at some level takes into account TEMPEST-type threats, mitigating many attacks on air-gapped systems.
And none of these things are the default on commercial software because users want it to be frictionless. They want software to install right away when you plug in a usb drive, etc.
There is basically no security focused pc hardware, aside from maybe raptor systems which isn’t really the same ilk.
Off the top of my head, a lot of these devices and hardware were scam traps set up by law enforcement.
I’d go on a limb and say this isn’t a problem anyone actually wants to solve.
All the technical "hacking" happened on the non-air-gapped side, so that that computer would put malware onto the USB with a deceptive presentation (an .exe with an icon to look like a folder and with the same name as an expected folder, while the actual folder was hidden). The rest is social engineering (and Windows not showing the .exe extension).
[1]: `pip3.exe` is an example; it's hard-coded to run the Pip module from `site-packages`, but `pip.py` could be replaced. More to the point, these wrappers are automatically created by Setuptools, to run an arbitrary script specified in the build process, from a "stub" included with Setuptools. (All of this is to work around the lack of support for shebangs, of course.) I don't know if Setuptools can or does sign these wrappers. Probably not.
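For context on how such wrappers get declared in the first place: a hedged, hypothetical `pyproject.toml` fragment (project and module names are made up) showing the standard entry-point mechanism. Declaring a console script this way causes the build backend (e.g. Setuptools) to generate a platform-appropriate launcher; on Windows that is a small `.exe` stub that starts Python and calls the named function:

```toml
# Hypothetical pyproject.toml fragment -- names are illustrative.
[project]
name = "example-tool"
version = "0.1"

[project.scripts]
example-tool = "example_tool.cli:main"
```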
I think that will greatly reduce the ability to get work done. It's my understanding that the workflow for using these specific airgapped computers involved moving USB thumb drives between computers.
Now you're making me wonder if keyboard firmware could be an attack vehicle.
Does anyone have more details on how this is done?
> It is probable that this unknown component finds the last modified directory on the USB drive, hides it, and renames itself with the name of this directory, which is done by JackalWorm. We also believe that the component uses a folder icon, to entice the user to run it when the USB drive is inserted in an air-gapped system, which again is done by JackalWorm.
[1] https://www.welivesecurity.com/en/eset-research/mind-air-gap...
THAT would probably ensure the user does not suspect anything nefarious has happened, even after the fact.
Now, how Windows Defender and other heuristics-based scanners would not treat the malicious EXE with a folder icon as a threat and quarantine it immediately -- I don't know.
The "malicious" exe, as I understood it, just boots up Python to run a script, where the actual malice lies. Windows Defender has to treat an executable that does only this as benign - because Python's packaging tools provide such executables (so that Windows users can get applications - including (upgrades to) Pip itself - from PyPI that "just work" in a world without shebangs and +x bits). For that matter, standard tools like Setuptools could well have been used as part of crafting the malware suite.
Presumably they could notice that an .exe has the normal folder icon. But presumably that icon could also be slightly modified in ways that would defeat heuristic recognition but still appear like a folder icon to a not-especially-attentive human.
>Does the malware EXE that now looks like a Folder icon with same name as the last modified actual folder (which is now hidden) ... also redirect the user to the actual folder and its contents in file Explorer after successfully delivering its malicious payload?
I didn't see anything about that in the description of the attack. But I assume that the Python script could accomplish this by just making an appropriate `subprocess.run` call to `explorer.exe`.
Once you can manipulate the code in the firmware, it's probably pretty easy to find a kernel-level exploit.
Here is a reference with a virus. https://superuser.com/questions/854918/manipulating-firmware...
Yea but that has to be a custom or specific kind of programmable USB device. Or one that somehow unintentionally allows you to reflash its firmware to something else.
And also if anyone ever plugs your malicious USB device into a Mac, they will get a pop-up from macOS that asks you to identify the keyboard. Although maybe if it fakes a specific USB keyboard that macOS knows out of the box, you could avoid that?
>It is probable that this unknown component finds the last modified directory on the USB drive, hides it, and renames itself with the name of this directory, which is done by JackalWorm. We also believe that the component uses a folder icon, to entice the user to run it when the USB drive is inserted in an air-gapped system, which again is done by JackalWorm.
The problems here are to do with how Windows uses and presents file extensions.
OK, you may be overthinking this one
External buses and RF comms present massive attack surfaces that must be locked down with religious fervor including auditing, disabling, and management.
https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
I always thought that the big switch was probably still a massive vulnerability - is it air-gapped or not? When the switch is flicked it only takes milliseconds for an exploit.
Anyway, not sure what happened to those guys in the end.
(This should also include sneakernet!)
Heck every TV show has someone downloading the nuclear plans off Dr. Evil's laptop by...plugging in a USB device when he's distracted by spilling his coffee.
In my experience it doesn’t stop admins connecting an “offline Root CA” to the WiFi network to install their entire suite of server management software — none of which are functional without an active network connection.
Yes, my plan was to physically remove the wifi adapter daughter card. They exposed the CA to gigabytes of third-party software before I turned up to do the setup. Yes, I warned them not to even take the computer out of the box.
Offline anything just breaks people’s brains.
“How do we keep the anti-virus pattern file up to date?”
“You don’t.”
Protection was BitLocker drive encryption with a manually entered (long!) passphrase to decrypt. Backups were to encrypted USB media never plugged into anything else other than a redundant clone of the CA used for DR testing. Everything went into safes.
This design works Well Enough for all but the most demanding purposes, but the whole rigmarole was undone by a well-meaning but naive admin “just doing his job”.
Fibre for networking, PS/2 (with or without) adapters for keyboards and mice, and VGA for monitors.
as an example of what it's still like in some of those spaces, here's a product sheet for a cross-domain chat solution - the screenshot on the second page appears to be CDE. https://owlcyberdefense.com/wp-content/uploads/2020/12/20-OW...
Namely that (good) library authors will do everything possible to avoid breaking the public API, which can be seen as a “promise” from them in what can be relied upon, while internal/private members offer no such promises and the library author can feel free to change/remove them as desired with no prior notice.
This was more like a controlled environment, but everyone knows that USB/WIFI is a steaming shitpile, with its own firmware and other shit.
Quick FAQ:
> Haven't we known about USB vulnerabilities forever (agent.btz, BadUSB etc.)?
The fact that USB devices were used to transfer the files is irrelevant to the attack.
The attack doesn't depend on running the malware directly off the USB device, on any kind of auto-run vulnerability, etc. It would have worked out the same way if files had been transferred, for example, by burning them to DVD. The attack only depends on the machines on the non-air-gapped side being compromised such that the attackers can control what is put onto the USB. But the USB drives themselves are only being used as dumb storage here.
The attack instead primarily depends on social engineering that is helped along by the design of the Windows GUI. On the air-gapped machine, the user sees a "folder" which is actually an executable file. By default, Windows hides the .exe file extension (which it uses to determine executability of the file) in the GUI; and the icon can be customized. Double-clicking thus launches the malware installer when it was only supposed to open a folder. The folder has a name that the user expected to see (modulo the hidden extension).
It appears that the original setup involves hiding[1] (but still copying) the folder that was supposed to be transferred, and then renaming the malware to match. (Presumably, the malware could then arrange for Windows to open the hidden folder "normally", as part of its operation.) Windows can be configured to show "hidden" files (like `ls -a`), but it isn't the default.
Notice that this is social engineering applied only to the process of attempting to view the files - nobody was persuaded to use any storage devices "from outside".
> Isn't that, like, not actually air gapped?
The definition of an air gap generally allows for files to be passed across the air gap. Which is all the attack really depends on. See also "sneakernet". The point is that you can easily monitor and control all the transfers. But this attack is possible in spite of that control, because of the social engineering.
> How is it possible to exfiltrate data this way?
The actual mechanism isn't clearly described in media coverage so far, from what I can tell. But presumably, once malware is set up on the air-gapped machine, it copies the files back onto the USB, hiding them. When the device is transferred back to the non-air-gapped side, malware already present there monitors for the USB being plugged in, retrieves the files and uploads them (via the "GoldenMailer" or "GoldenDrive" components) elsewhere.
[0] https://www.welivesecurity.com/en/eset-research/mind-air-gap..., via https://news.ycombinator.com/item?id=41779952.
[1]: Windows file systems generally don't have an "executable bit" for files, but do have a "hidden bit", rather than relying on a leading-dot filename convention. So it's the opposite of what Linux does.
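The "hidden bit" mentioned here is a per-file attribute on Windows filesystems, exposed in Python via `os.stat(path).st_file_attributes`. A tiny helper that just tests the bit (kept as a pure function so it runs anywhere; the actual attribute only exists on Windows):

```python
import stat

def is_hidden(file_attributes: int) -> bool:
    """True if the Windows hidden bit is set in a file-attributes word.

    On Windows one would call: is_hidden(os.stat(path).st_file_attributes)
    """
    return bool(file_attributes & stat.FILE_ATTRIBUTE_HIDDEN)
```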
> Next, the infected computer infects any external drives that get inserted. When the infected drive is plugged into an air-gapped system, it collects and stores data of interest. Last, when the drive is inserted into the Internet-connected device, the data is transferred to an attacker-controlled server.
This implies the ability to turn a common USB drive into a vector for malware.