"At best, an air gap is a high-latency connection" -Ed Skoudis - DerbyCon 3.0
This is the network that operation Olympic Games used to get Stuxnet into the Natanz facility. Contractor laptops are a major part of that network.
The same is possible in Windows 10 and 11, but users would revolt if a sysadmin enforced it (the same users who insist on using Windows instead of a more secure system).
Can I add a little more colour here (I have worked in, and designed for, very secure environments): users will revolt if removing the USB ports makes their lives more difficult. This can work if there is an effective feedback loop that makes sure the users can still do their jobs efficiently in the absence of USB ports, and corrects for them when they can't. Users won't go around something unless it gets in their way!
Partly it's to prevent leaking of company secrets and unauthorized home use of corporate devices; it also keeps the location of data easier to track, and reduces the possibility of malware.
My workplace has a policy of no USB storage devices (though you can request an exception). By default, other USB devices work, and storage devices are mounted as read-only.
I don't think the goal is so much system security as preventing data breaches/data exfiltration.
I could easily bypass the policy since I have the permissions to do so, but I won't. Working in the trading/hedge fund space, it's not unheard of to see employees sued for stealing trade secrets (quant models, for example). One only needs to search "citadel sues former employees" for examples.
edit: former Citadel employee; have not worked there in over a decade.
The controls can be very granular, if you decide to manage that.
Additionally, with such measures in place, most users won't even try to plug in planted, potentially malicious USB devices if they don't expect them to work.
People don’t like Windows, let alone corporate deployments of Windows.
My point is that, practically speaking, most companies don't have the discipline to actually keep an air gap up, long-term. You inevitably need to get data in and out of the air-gapped systems.
The "air gapped" networks I've seen end up not actually being air gaps. Real air gaps are inconvenient, so eventually somebody installs a dual-homed host or plugs the entire segment into a "dedicated interface" on a firewall. Even without that, contractors plug in random laptops, and new machines, initially connected to the Internet to load drivers / software, get plugged in to replace old machines. The "air gap" ends up being a ship of Theseus.
I had a Customer who had DOS machines connected to old FANUC controllers. They loaded G-code off floppy diskettes. Eventually those broke and they started loading G-code over RS-232. The PCs didn't have Ethernet cards-- their serial ports were connected to Lantronix device servers. It wasn't ever really an air gap. It was a series of different degrees of "connectivity" to the outside world.
Norton, trust no other!
If one's role is only to update AV on the air-gapped machine, then their data transfers should flow in one direction only: onto the air-gapped machine.
and also, if it's air gapped, why even have an antivirus? ... for airborne viruses?
Seriously though, I learned a lot there. If I wanted friends to have access to such a system, this is the plausibly deniable access route I'd set up for them.
Unfortunately I wasn't prepared to broach the subject in a way that didn't have me say "you'd be safer without the AV". So I got nowhere.
Or do you eschew using a fork, because in 12 weeks it will fall on the floor?
Certainly; the problem is the fork that falls on the floor in secret. The ones we can see can be handled.
This problem even happens with brand names, with hardware. You buy a fridge, and a decade later go to buy another. Meanwhile, megacorp has been bought by a conglomerate, and brand name is purposefully crap.
Speaking of which... it's remarkable that Microsoft Windows probably has code from 50,000 people in it. Yet there haven't been any (public) cases of people sneaking malicious code in. How come?
Here is a random vendor with nice pictures: https://owlcyberdefense.com/learn-about-data-diodes/
…but…what? Why are we doing the blinking-light song and dance at all then?
If the data diode points outward (like a power plant exporting its status to the web), then the photosensor can be completely taken over. Sure, the web page might end up completely bogus, but there will be no disruption in the power plant's systems. The hardware design guarantees it. That is the strongest case for data diodes.
If the data diode points inward (like a power plant getting new data from the outside), then sure, the photosensor software is a concern, but since it's relatively simple, this would not be my biggest worry. I'd worry about the app that runs on the target PC and receives files; if the file is an archive, about un-archiver exploits; and finally about the files themselves. If there's a doc, are you sure it's not exploiting Word? If there's an update, are you sure it's not trojaned? Are you sure users aren't clicking on the executable thinking it's a directory?
I don't think this property can be guaranteed for the alternatives you proposed.
So a data diode wouldn't stop a "stuxnet" scenario where the malware is trying to sabotage the air-gapped system. But it would prevent secret information from being leaked out.
(Btw. I'm just explaining what a data diode is and what guarantees it provides. I don't actually think it would be useful in practice, because it feels too cumbersome to use, and therefore the users/IT would poke holes in the security it would otherwise provide.)
Love to read your findings!
Why not USB or internet:
The transmitter is totally safe from a compromised receiver. If you insert a USB stick to upload a file, it could maliciously pretend to be a keyboard. If you connect to the Internet to upload a file, your network stack can be exploited (and if you have a firewall, then the firewall must be exploited first; not impossible). Only a data diode lets you push data to the insecure zone without worrying about getting infected in the process.
If the receiver has to be secure, things are not as clear-cut, but there are still advantages from the great reduction in complexity. None of the existing protocols work, so vendors usually implement something minimally simple to allow file transfer and maybe mailbox-like messages. This system will always have some risks present: even if you securely sent a PDF to the air-gapped site, it might still exploit the PDF viewer. But at least the malware won't be able to report status to a C&C or exfiltrate the data.
(1) The protected computer has a built-in PC speaker (for example, the computer I am typing this message on does not have one)
(2) There is an insecure PC with sound card and a microphone (or at least headphones which can be used as microphone)
(3) Secure and insecure PCs are close to each other, as opposed to being in different rooms
(4) It's quiet enough, and no one will notice the sounds (because PC speakers are crappy and can't do infra/ultra sound)
Likelihood of this succeeding depends on a lot of factors, the biggest of them being "how good is the security team". Presumably if they are buying data diodes, they at least have some knowledge?
Other exfil ideas I've read about were to emit sounds using the HDD, emit sounds by changing fan speed, blink coded messages on lights ("sleep mode" or caps/num lock), show special patterns on monitors to transmit RF, add hidden dots to printed pages, and abuse wireless keyboards or mice. There are many ideas, and most of them are pretty impractical outside of very limited circumstances.
What you want is to minimize your data to less than 1 KB so that it can be transmitted manually.
If a network stack on a modern computer is too dangerous, then use a modem (silly example: apt install minimodem) and an aux cable from one computer's speaker to the other's mic jack, or a serial connection (not very familiar with those; can't say how complex the driver is), or something similarly basic that you can audit a memory-safe implementation of.
Edit: or do you mean the other way around, namely running a network stack on top of this (e.g.) serial connection? That's also not what I meant, but I wasn't explicit about it, so the confusion makes sense. What I had in mind is doing whatever comms you want to do with the air-gapped system (like logging/storing the diplomatic transmissions, or whatever this system was for) via this super simple connection, such that the air-gapped system never has to do complex parsing or run state machines as it would with something like USB or a standard kernel's network stack.
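For illustration, the kind of minimal, auditable framing I have in mind might look like this. The format is entirely hypothetical (a fixed length prefix plus a checksum, no nesting, no state machine), just a sketch of how small the parsing surface can be:

```python
import struct
import zlib

MAX_FRAME = 4096  # reject anything larger outright

def frame(payload: bytes) -> bytes:
    """Wrap payload as: 4-byte length | 4-byte CRC32 | payload."""
    if len(payload) > MAX_FRAME:
        raise ValueError("payload too large")
    return struct.pack("!II", len(payload), zlib.crc32(payload)) + payload

def unframe(data: bytes) -> bytes:
    """Parse one frame; every check is explicit and auditable."""
    if len(data) < 8:
        raise ValueError("short frame")
    length, crc = struct.unpack("!II", data[:8])
    if length > MAX_FRAME or len(data) != 8 + length:
        raise ValueError("bad length")
    payload = data[8:]
    if zlib.crc32(payload) != crc:
        raise ValueError("bad checksum")
    return payload
```

The whole parser fits on one screen, which is the point: you can actually audit it, unlike a USB or TCP/IP stack.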
But why, when DVD-Rs handle most use cases at a cost of under $0.25 each, are reliable and ubiquitous, the hardware is likely already there (unless you are using Apple; caveat emptor), and they close the threat vector posed by read/write USB devices?
Sometimes the simplest solution is the best solution.
Plus, compared to a USB form factor, one imagines it’s harder to sneak in circuitry that could retransmit data by unexpected means.
Also, if you think the seller is lying to you, can't the drive be opened up and inspected to check for that kind of capability?
I’d argue that read-only CD/DVD has a smaller attack surface than USB, so of the two, it’s preferable. I’d further argue that a CD/DVD (ie, the actual object moved between systems) is easier to inspect than USB devices, to validate the behavior.
I don't know if people class something connected using a data diode as airgapped or not.
you need to use a file transfer tool intended for unidirectional transfer (e.g. multicast), otherwise you will see failures from lost packets.
If you don't require high speed just use RS232.
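To make the lost-packet point concrete: nothing can ever flow back through a diode, so no ACKs or retransmit requests are possible, and the sender has to add redundancy blindly. A minimal Python sketch of my own (not any vendor's protocol; the chunk format, port, and repeat count are assumptions, and real products use proper forward error correction rather than naive repetition):

```python
import socket
import struct

def send_oneway(data: bytes, addr, chunk_size=1024, repeats=3):
    """Send data one-way over UDP: number each chunk and blindly
    repeat it, since no ACK can ever come back through a diode."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for _ in range(repeats):  # blind redundancy, no feedback channel
        for seq, chunk in enumerate(chunks):
            sock.sendto(struct.pack("!II", seq, len(chunks)) + chunk, addr)
    sock.close()

def recv_oneway(port, timeout=2.0):
    """Reassemble chunks by sequence number, ignoring duplicates."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    sock.settimeout(timeout)
    got, total = {}, None
    try:
        while total is None or len(got) < total:
            pkt, _ = sock.recvfrom(65535)
            seq, total = struct.unpack("!II", pkt[:8])
            got.setdefault(seq, pkt[8:])  # duplicates are harmless
    except socket.timeout:
        pass
    sock.close()
    return b"".join(got[i] for i in sorted(got))
```

Repetition is the crudest possible redundancy; it merely illustrates why off-the-shelf TCP tooling cannot work across a diode.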
It is the responsibility of the host to protect the card. The position [i.e., setting] of the write protect switch is unknown to the internal circuitry of the card
https://en.wikipedia.org/wiki/SD_card#Write-protect_notch
A diode / photosensor can't.
Yup. I was going to post that the TFA and the people at these embassies apparently use a very different definition of "air-gapped system" than most people do.
Pushing the nonsense a bit further, you could imagine they'd recreate Ethernet, but air-gapped, using some hardware only allowing one packet through at a time, but both ways:
"Look ma, at this point in time it's not talking to that other machine, so it's air-gapped. Now it got one packet, but it's only a packet in, so it's air-gapped! Now it's sending only a packet out, so it's air-gapped!".
Yeah. But no.
And Wikipedia? Which says:
> To move data between the outside world and the air-gapped system, it is necessary to write data to a physical medium such as a thumbdrive, and physically move it between computers.
Source: https://en.m.wikipedia.org/wiki/Air_gap_(networking)#Use_in_...
Moving a USB key between two Windows machines sounds about as bad an idea as it gets for air-gapped data exchange.
https://en.wikipedia.org/wiki/2008_malware_infection_of_the_...
At the time the Morris worm had inspired some folks to see if they could spread binaries by infecting every disk inserted. That's all it did... spread. I think the virus lived off an interrupt generated by disk insertions.
Fortunately it was harmless (except for a few extra crashes) and I had my original OS disks that could be booted from to clean up the disks.
This is quite a stretch. So we have nothing so far.
I guess the problem is that most air-gap guides and practices out there focus on sucking the "air" out of computers from the get-go: internet, networking, Bluetooth, etc. ("remove the network card before starting!"). But even air-gapped systems need some sort of input/output, so a keyboard, mouse/trackpad, and displays will be connected to them: all pretty much vectors for an attack. Base software will be installed (making supply-chain attacks possible), and often USB drives and even local networking will be present.
As a general rule, I'd say anything that executes code on a processor can be breached to execute malicious code somehow. Signing executables helps, but it's just another hoop to jump through. In fact I thought the threat in the OP was about a USB firmware issue, but alas, it was just an executable disguised with a folder icon that some user probably clicked on.
To make things worse, critical-hardware vendors' (trains, power plants...) fondness for Windows is notorious. Just try to find *nix-compatible infrastructure hardware controllers at, say, a supplier like ABB, which (among many other things) makes hydroelectric power-plant turbines and controllers: https://library.abb.com/r?dkg=dkg_software - spoiler, everything is Windows-centric, and there are plenty of unsigned .EXEs for download on their website. The same is true in many other critical industries. It's scary how common this is; these things could be compromised and the flood gates, literally, opened wide.
You just need a PC and then have a CD delivered through a trusted source – embassies should already have a way of ensuring physical integrity of their mail.
The technical knowledge needed for code signing, especially now with trusted hardware modules, is orders of magnitude more complicated than that.
Offices that don't follow security practices get uncovered because they never called for help; it's another chance for drifters on autopilot to walk away from a job that just got too hectic; you stop paying licenses for a bunch of tools you didn't realize you were paying for and don't need; and you find replacements for all the tools that are not actively maintained or don't have cooperative maintainers.
It's a healthy shake-up, and our society at large should be less scared of making decisions like these.
What is your priority?
(1) Ensuring Actual Security
(2) Following the Official Security Theater Script
In most government orgs, idealists who care about #1 don't last very long.
Employees (unknowingly(?)) using infected USB drives caused security problems. Well imagine that.
As several others pointed out, the USB ports on the secure server should all be fully disabled.
In addition, I would suggest leaving one rewired, seemingly available USB port that causes a giant alarm to blare if someone inserts anything into it.
Further, all information being fed into the secure machines should be based on simple text-based files with no binary components, to be read by a bastion host with a drive and driver that will only read those specific files it is able to parse successfully, writing them out to the destination target: I would suggest an optical WORM device that can then be used to feed the air-gapped system.
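A sketch of what such a text-only filter on the bastion host might look like (hypothetical; the size limit and character whitelist are my assumptions, not anything from the thread):

```python
def sanitize_for_transfer(raw: bytes, max_size=1_000_000) -> str:
    """Accept a file for transfer only if it is pure printable ASCII
    text: no binary components, no control characters beyond newline
    and tab. Anything that fails to parse is rejected outright."""
    if len(raw) > max_size:
        raise ValueError("file too large")
    text = raw.decode("ascii")  # raises UnicodeDecodeError on binary
    for ch in text:
        if not (ch.isprintable() or ch in "\n\t"):
            raise ValueError("control character found")
    return text
```

The point is that the filter fails closed: anything it cannot fully parse never reaches the WORM media.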
I'd be really curious to hear of stories like this where the attacked OS is something a little less predictable/common.
I dunno, if a company has for more than two decades (2002: https://www.cnet.com/tech/tech-industry/gates-security-is-to...) said that security is the top priority, and they keep re-iterating that every now and then (2024: https://blogs.microsoft.com/blog/2024/05/03/prioritizing-sec...), yet they still don't actually seem to act like it, I'm pretty sure they still see it as an optional component/marketing story.
That's what I'm saying though: from my point of view, they've started to act like it in the last ~20 years. If you've got evidence to the contrary, feel free to share it.
From my POV, they're about as good as the average for-profit, which is not very security-in-depth at all, but it's not just a marketing sham anymore either, the way it used to be. From BitLocker to Defender to their security patching and presumably secure coding practices, it's not the same company it was when they launched XP. A lot of the market seems to have grown up and, at least among our customers, we're finding fewer trivial issues.
At any rate, this subthread started by saying this standard Windows setup shouldn't be used in the first place. I'm all for not using closed software, but then the question rather becomes: who do you think is deserving of your trust in this scenario?
Whether that is a threat worth dealing with for the concerned embassies is another question of course.
It is probable that this unknown component finds the last modified directory on the USB drive, hides it, and renames itself with the name of this directory, which is done by JackalWorm. We also believe that the component uses a folder icon, to entice the user to run it when the USB drive is inserted in an air-gapped system, which again is done by JackalWorm.
It's just another variant of the classic .jpg.exe scam. Stop hiding files and file extensions and this hole can be easily closed.
Ahem, "air-gapped'.
Any decent Unix system has udev- or hotplug-based mechanisms to disable every USB storage device while leaving other USB devices alone. Any decent secure system wouldn't let the user exec any software besides what's in their $PATH. Any decent system wouldn't allow the user to mount external storage at all, much less execute any software on it.
For air-gapped systems, NNCP under a secure Unix (OpenBSD with home mounted noexec, sysctl security tweaks enforcing rules, and such) is a godsend.
Securelevel https://man.openbsd.org/securelevel.7
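As a sketch of the Linux side of this (file paths are conventional but assumptions for your distro; OpenBSD uses different mechanisms), the usb-storage module can simply be blocked from ever loading, and user-writable areas mounted noexec:

```shell
# Prevent the usb-storage kernel module from ever loading (Linux):
echo 'install usb-storage /bin/false' > /etc/modprobe.d/no-usb-storage.conf

# Mount user-writable areas noexec so dropped files can't run directly.
# An /etc/fstab line might look like (device name is an assumption):
# /dev/sdb1  /home  ext4  rw,nodev,nosuid,noexec  0 2
```

The modprobe `install` directive makes any attempt to load the module run /bin/false instead, so even root plugging in a stick gets no block device without first undoing the config.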
Because you could also say then that anyone can add a USB drive by plugging it directly into the motherboard...
(As for acoustic etc. side-channel attacks: these would require a level of physical access at which point the air gap is moot. E.g. if you can get a physical listening device into the room to listen to fan noise etc. and deduce something about the computation currently being performed, and then eventually turn that into espionage... you could far more easily just directly use the listening device for espionage in the form of listening to the humans operating the computers.)
For example, early on it says: " collect interesting information, process the information, exfiltrate files, and distribute files, configurations and commands to other systems."
and later on: " they were used, among other things, to collect and process interesting information, to distribute files, configurations, and commands to other systems, and to exfiltrate files."
It also mentions several times that the attack on a South Asian country's embassy was the first time this software was seen.
Repeating info like this was kind of a sign of partly applied AI edits with RAG a while ago; it might still be true today.
You don't really need one to read text from a screen. Most of that would be old documents that for the most part should be public. What remains besides reading is most likely 95% stuff they shouldn't be doing.
The most secure part is the stuff we wish they were doing.
If you have an operator send a telegram for you, that person is capable of doing a lot more with your text than you want. On the other end is another telegram operator, further increasing the risk. You might want to send a letter instead. It's slower but more secure.
If you want to read text from a monitor a computer is super convenient but like the operator it can do other things you don't want. You don't need a computer to put text on a screen. Alternatives might be slow and expensive but in this case you don't have to send things to the other side of the world. That would be the thing you specifically don't want.
They compressed the ROM and "beeped" it out, wrapping the iPod in an acoustic box, recording it, and then decoding the recording to recover the ROM.
Back in the ?ps/2? days, I had a joke equalizer plugin for Winamp that used the 3 LEDs on your keyboard. Another output device!
Or https://en.wikipedia.org/wiki/The_Giver / https://en.wikipedia.org/wiki/The_Giver_(film)
Or https://en.wikipedia.org/wiki/The_Congress_(2013_film)
Or Nineteen Eighty-Four and so much more... (yawn)...
Or
<< You can already hack people by just telling them things.
True, but language fluctuates and the zeitgeist changes, and while the underlying techniques remain largely the same, what nation state would not dream of being able to simply have people obey when it tells them to behave in a particular way? Yes, you can regiment people through propaganda, but what if you could do it more easily this way?
I am one of them, so are you, and I just made you think of something against--or at least without--your will.
This applies to software as well
> Yes, you can regiment people through propaganda, but what if you could do it more easily this way?
Widespread use of BCIs would help with this for sure, but don’t be under the impression that individual and population level manipulation techniques haven’t progressed well past simple propaganda.
I absolutely buy it based merely on the glimpse of the document from various whistleblowers over the years. At this point, I can only imagine how well oiled a machine it must be.
In short, I am not sure you are right about it. If anything, and I personally see it as a worst-case scenario, use of that contraption will be effectively mandatory the way having a cell phone is now (edit: if you work for any bigger corp and want to log in from home).
That is: the point I am making is more nuanced than whether something is popular (like cell phones or other tech).
[1]https://www.wired.com/2017/01/trump-android-phone-security-t...
Using chips with a secure architecture, safe languages and safe protocols is going to result in secure implants.
Not to say there might not be some new vulnerability, but I disagree with this idea people love to repeat that security is impossible.
Facts and reality, I guess?
> we hear about breaches of super important databases all the time and that doesn't seem to convince any company to give a single shit more than just enough to avoid negligence.
I'm not sure why you think this is counter to my point (perhaps we should wonder what you yourself are smoking?), which to reiterate was that:
1. Most current security issues are due to the various insecure foundations we build our technology on, and
2. By the time Neuralink type implants are common, that won't be the case anymore.
I agree that we do have the technology to make it secure if we want to. We've made flight software secure in the '80s or so.
What we don't have, is the incentives. We've built everything on insecure foundations to get to the market cheaper and faster. These incentives don't change for Neuralink. In fact, they create kind of gold rush conditions that make things worse.
What could change things dramatically overnight is the government stepping in and enforcing safety regulations, even at the cost of red tape and slow bureaucratic processes. And it's starting, slowly. But e.g. the EU is promoting SBOMs, so their underlying mental model is still one where you tape random software together quickly.
At some point in the future no one will be using x86 or any variation, and we will all be using a secure architecture. Same as with insecure languages, far enough in the future, every language in common use will be safe.
I believe by the time brain implants are common, we will be far enough in the future that we will be using secure foundations for those brain implants.
> What could change things dramatically overnight is the government stepping in and enforcing safety regulations,
For a damn brain implant I don't see why they wouldn't.
Oh, and Musk isn't allowed a Neuralink tripwire to blow up your brain via his invention because he saw pronouns listed somewhere and got triggered.
> The only way Neuralink is secure is if we get rid of the system that incentivizes #1, aka capitalism, and not replace it with something equally bad or worse.
Oh man, you've ingested that anti-capitalism koolaid like so many young college kids are so quick to do. It's always such a shame.
This isn't really anything to do with capitalism, it's a question of regulation e.g. what the FDA does, and also a question of time because when enough time passes, most computing will be secure by default due to having rid the insecure foundations.
And more than that, it's an issue with democracy more than capitalism. Fix the way people vote if you want to fix the world, or prevent the types of people who want to believe the earth is flat from having a vote at all.
There is no technical solution to people uploading high res photos with location metadata to the social network du jour. Or the CEO who wants access to all his email on his shiny new gadget. Or the three letter agency who think ubiquitous surveillance is a great way to do their job. Or the politician who can be easily convinced that backdoors that can only be used by "the good guys" exist. Or the team who does all their internal chat, including production secrets, in a 3rd party chat app, only to have it popped and their prod credentials leaked on some TOR site. Or the sweatshop IT outsourcing firm that browbeats underpaid devs into meeting pointless Jira ticket closure targets. Or the "move fast and break things" startup culture that's desperately cutting corners to be first-to-market.
None of the people involved in bringing "enhanced human" tech to market will be immune to any of those pressures. (I mean, FFS, in the short term we're really talking about a product that _Elon_ is applying his massive billionaire brain to, right? I wonder what the media friendly equivalent term to "Rapid Unscheduled Disassembly" for when Neuralink starts blowing up people's brains is going to be?)
It absolutely will. I didn't say completely solved, I said largely solved.
> There is no technical solution to people uploading high res photos with location metadata to the social network du jour.
Bad example honestly, since most social media sites strip out exif data by default these days. Not sure there are any that don't.
> Or the CEO who wants access to all his email on his shiny new gadget. Or the three letter agency who think ubiquitous surveillance is a great way to do their job. Or the politician who can be easily convinced the backdoors that can only be used by "the good guys" exist. Or the team who does all their internal chat including production secrets in a 3rd party chat app, only to have them popped and their prod credentials leaked on some TOR site. Or the sweatshop IT outsourcing firm that browbeats underpaid devs into meeting pointless Jira ticket closure targets. Or the "move fast and break things" startup culture that's desperately cutting corners to be first-to-market.
Yes yes, humans can be selfish and take risks and be bribed and negligent and blah blah blah.
The context of the comment was Neuralink implants getting hacked the way an out-of-date smart TV might. When it comes to the actual tech, security will be a solved problem, because most of the problems we see today are due to everything being built on top of insecure foundations on top of insecure foundations.
Trigger-happy emotional non-intelligence.
Journalists need to check their biases and ensure that everything they write is balanced. When mentioning that they might be Russian speakers, a good balancing sentence would be to point out countries which use the Russian language. Just throwing in "Russian speaker" after explicitly stating they're not sure which nation state did this is extremely unprofessional.
Sure, mention all the facts. Don't try to interpret them as "clues". If you have to, make sure you're not building a narrative without being absolutely sure.
It's not good journalism to go from `transport_http` to indicating that this is an attack by the Russian Federation. That's not how you do good journalism. How many people will retain the fact that the author does NOT know which, if any, nation state did this?
So the interesting TLDR, to me, is this:
> [The malware on the infected computer] finds the last modified directory on the USB drive, hides it, and renames itself with the name of this directory, [...]. We also believe that the component uses a folder icon, to entice the user to [click on] it when the USB drive is inserted in an air-gapped system
So the attack vector is "using a transfer medium where data can be replaced with code and the usual procedure [in this case: opening the usual folder] will cause the code to run"
See also: https://www.qubes-os.org/faq/#how-does-qubes-os-compare-to-u...
Windows natively provides the ability for executables to embed icons (known as resources) for the file manager to render them as. This, combined with the default of hiding file extensions for known types (e.g. .exe), is a recipe for a user eventually executing the malware instead of opening the file or directory they wanted.
This malware exploits that very fact by naming itself after the most-recently modified directory on the drive and embedding an icon that ensures that the file manager will render it as a directory.
If you ensured by policy that file extensions were never hidden, that resources were not rendered (every exe got the default icon [1]), and that every user received regular training to properly distinguish files from each other (and files from directories), this risk could be somewhat managed. Good luck; I don't even know if you can disable resource rendering.
https://www.nist.gov/itl/ssd/software-quality-group/computer...
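As a sketch of the detection side, one could scan removable media before use for the disguise described above: an executable named after a sibling (possibly hidden) directory. The function and heuristic here are my own, not from the article:

```python
from pathlib import Path

def find_masquerading_exes(mount_point: str) -> list:
    """Flag .exe files in the drive root whose base name matches a
    sibling directory: the JackalWorm-style disguise, where malware
    hides a directory and renames itself after it."""
    root = Path(mount_point)
    dir_names = {p.name.lower() for p in root.iterdir() if p.is_dir()}
    suspicious = []
    for p in root.iterdir():
        if p.is_file() and p.suffix.lower() == ".exe" and p.stem.lower() in dir_names:
            suspicious.append(p)
    return suspicious
```

This catches the specific trick only; it is a tripwire for one known disguise, not a substitute for blocking execution from removable media outright.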
This also tends to be a supply chain and insider threat.
Write blocking prevents transfer back to the USB drive, which is the exfiltration mechanism.
Read-only media or destroying the media after use is a reasonable mechanism to protect against data exfiltration.
I'm not sure how you protect against infiltration though. A computer system that cannot get data in is pretty useless methinks.
Such a thing just MUST BE a helper for creating malware; what else could it be? It's definitely for circumventing human users.
Good job, Microsoft! Autoexec.bat is proud of you! /s