As an example, if you're at a FedRAMP High certified service provider, the DoD wants to know that the devices your engineers are using to maintain the service they pay for aren't running a rootkit and that you can prove that said employee using that device isn't mishandling sensitive information.
EDR is essentially a rootkit, built on the idea that malware hashes are useless and that security needs complete insight into systems after a compromise. You can't root out an attacker with persistence without software that's as invasive as the malware itself.
And a managed SOC is shifting accountability to an extent, because they're often _far_ cheaper than the staff it takes to run a 24/7 SOC. That's assuming you even have the talent to build a SOC instead of paying for a failed SOC build. Also, don't forget that you need backup staff for sick leave and vacation, and you'll have to be constantly hiring due to SOC burnout.
If all of this sounds like expensive band-aids instead of dealing with the underlying infection, it is. It's complex solutions to deal with complex attackers going after incredibly complex systems. But I haven't really heard of security solutions that reduce complexity and solve the deep underlying problems.
Not even theoretical solutions.
Other than "unplug it all".
You nailed it. Can't really blame CISOs for pursuing this model though.
EDIT: For additional context, I'd add that security/risk tradeoffs happen all the time. In practice, trusting Huntress isn't much different from trusting NPM with an engineer who has root access to their machine, or any kind of centralized IT provisioning/patching setup.
Funny, my automatic assumption when using any US-based service or US-provided software is that, at a minimum, the NSA is reading over my shoulder, and that I have no idea who else is able to do that, but that number is likely > 0. If there's anything I took away from the Snowden releases, it's that even the most paranoid of us weren't nearly paranoid enough.
I also find it kind of funny that the "blunder" mentioned in the title, according to the article, is ... installing Huntress's agent. Do they look at every customer's Google searches to see if they're suspicious too?
However, it's obvious that protection-ware like this is essentially spyware with alerts. My company uses a similar service, and it includes a remote desktop tool, which I immediately blocked from auto-startup. But the scanner, whatever it is, still sends things off to some central service. All in the name of security.
Unless maybe you just want to develop a more personal relationship with your internal cybersecurity team, who knows.
The startup script you blocked could have just been a decoy, and blocking it could have set off a red flag.
A lot of these EDRs operate in kernel space.
The problem to me is that this is the kind of thing you'd expect to see being done by a state intelligence organization with explicitly defined authorities to carry out surveillance of foreign attackers codified in law somewhere. For a private company to carry out a massive surveillance campaign against a target based on their own determination of the target's identity and to then publish all of that is much more legally questionable to me. It's already often ethically and legally murky enough when the state does it; for a private company to do it seems like they're operating well beyond their legal authority. I'd imagine (or hope I guess) that they have a lawyer who they consulted before this campaign as well as before this publication.
Either way, not a great advertisement for your EDR service to show everyone that you're shoulder surfing your customers' employees and potentially posting all that to the internet if you decide they're doing something wrong.
The machine was already known to the company as belonging to a threat actor from previous activity.
This gains trust with their customers and breaks trust with ... threat actors?
No, their job is to provide EDR protection for their customers.
As far as unique identifiers go, advertisers use a unique fingerprint of your browser to target you individually. Cookies, JavaScript, screen size, etc., are all used.
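To make that concrete, here's a minimal, hypothetical sketch (TypeScript, assuming a browser context) of how a handful of ordinary signals can be combined into a stable per-user identifier. The signal list and hashing are illustrative, not any particular vendor's implementation:

```typescript
// Hypothetical sketch: combine a few widely available browser signals
// into a single fingerprint string. Real trackers use many more signals
// (canvas, fonts, audio, WebGL), but the idea is the same.
async function browserFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,                                  // browser + OS build
    navigator.language,                                   // locale
    `${screen.width}x${screen.height}`,                   // screen size
    String(screen.colorDepth),
    Intl.DateTimeFormat().resolvedOptions().timeZone,     // timezone
    String(navigator.hardwareConcurrency ?? ""),          // CPU core count
  ].join("|");

  // Hash the concatenated signals so the identifier is compact and stable.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

Even a short list like this is often enough to make a visitor close to unique.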
I'm also slightly curious as to whether you might be associated with an EDR vendor. I notice that you only have three comments ever, and they all seem to be defending how EDR software and Huntress work without engaging with this specific instance.
Cybersecurity companies aren't passive data collectors like, say, Dropbox. They actively hunt for attacks in the data. To be clear, this goes way beyond MDR or EDR. The email security companies are hunting in your email, the network security companies are hunting in your network logs, so on. When they find things, they pick up the phone, and sometimes save you from wiring a million dollars to a bad guy or whatever.
The customer likes this very much, even if individual employees don't.
As a corporate IT tool, I can see how Huntress ought to allow my IT department or my manager or my corporate counsel access to my browser history and everything I do, but I'm still foggy on why Huntress grants themselves that level of access automatically.
Sure, a peek into what the bad guys do is neat, and the actual person here doesn't deserve privacy for his crimes, but I'd love a much clearer explanation of why they were able to do this to him and how, if I were an IT manager choosing to deploy this software, someone who works at Huntress wouldn't be able to just pull up one of my employees' browser history or do any other investigating of their computers.
It's a relatively common model, with MDR and MSSP providers doing similar things. I don't see it as much with EDR providers though.
If folks understood this better, there would be less reason for software like Huntress' EDR to exist.
In general, if you're using a company-owned device (the target for this product and many others like it), you should always assume everything is logged.
In the EU, employees have an expectation of privacy even on their corporate laptop. It is common for e.g. union workers to use corporate email to communicate, and the employer is not allowed to breach privacy here. Even chatter between workers is reasonably private by default.
I suspect, if the attacker is inside the EU, this article is technically a blatant breach of the GDPR. Not that the attacker will sue you for it, but customers might find this discomforting.
The key difference here is that pen testing, as well as IT testing, is very explicitly scoped out in a legal contract, and part of that is that users have to be told about, and consent to, monitoring for relevant business purposes.
What happened in this blogpost is still outside of that scope, obviously. I doubt that Huntress could claim that their customer here was clearly told they might be monitoring their activity in the same way that a "Consent to Monitoring" popup at every login on corporate machines does.
So if <bad actor> in this writeup read your pitch and decided to install your agent to secure their attack machine, it sounds like they "trusted you with this access". You used that access to surveil them, decide that you didn't approve of their illegal activity, and publish it to the internet.
Why should any company "trust you with this access"? If one of your customers is doing what looks to one of your analysts to be cooking their books, do you surveil all of that activity and then make a blog post about them? "Hey everyone here, it's Huntress showing how <company> made the blunder of giving us access to their systems, so we did a little surprise finance audit of them!"
Strongly disagree. If they installed this to do some analysis, they would have done it in a VM if they “knew exactly what they were doing”.
Either you snared a script kiddie, or your software download and install process that followed that Google Ads click was highly questionable.
It was put there by your security team.
On the other hand, I'm pretty sure that the person who installed Huntress did not intend to upload any info at all, let alone to have that information made public.
One of the tools they make is an Endpoint Detection and Response (EDR) product.
The kind of thing that goes on every laptop, server, and workstation in certain controlled environments (banks, government, etc.).
I suspect this is deliberate.
I work on a REM team in a SOC for a big finance company that all you US people know. An employee can hardly fart in front of their corporate machine without us knowing about it. How do you all think managed cyber security works?
In fact, I have worked at several organizations in which this type of activity would be a terminable offense.
I can rob people one at a time or I can go rob the bank. I can break into your clients one at a time or I can break into your "security" company.
Where is the product that keeps that data and your infrastructure safe? Why aren't you selling that? Oh wait, there is no such thing, because it doesn't exist.
You are a compromise by a state-level actor waiting to happen. In fact, if you were compromised by a state-level actor, it would be in your company's best interest to cover it up rather than disclose it (as that would be the end of your organization).
It's the fox guarding the hen house.
At some point we're going to find out that a government (China, Russia, India...) used you, or one of your peers, to do exactly this. This is taking-my-shoes-off-at-the-airport levels of stupid and ineffective.
I spend a fair bit of time talking to C-levels. The bulk of them use your services not because they think they are effective but because they know that they can point the finger at you when the shit hits the fan.
Presumably legal, but morally gray.
Some random person downloaded Huntress to try it out. Not a company. Not through IT. Just clicked "start trial" like you might with any software. Were they trying to figure out how to get around it? We have no idea!
Huntress employees then decided - based on a hostname that matched something in their private database - to watch everything this person did for three months. Their browser history, their work patterns, what tools they used, when they took breaks.
Then they published it.
The "but EDR needs these permissions!" comments are completely missing the point. Yeah, we know EDR is basically spyware. The issue is that Huntress engineers personally have access to trial user data and apparently just... browse it when they feel like it? Based on hostname matches???
Think about what they're saying: they run every trial signup against their threat intel database. If you match their criteria - which could be as weak as a hostname collision - their engineers start watching you. No warrant. No customer requesting it. No notification. Just "this looks interesting, let's see what they're up to."
Their ToS probably says something vague about "security monitoring" but I doubt it says "we reserve the right to extensively surveil individual trial users for months and publish the results if we think you're suspicious." And even if it did, that doesn't make it right or legal.
They got lucky this time - caught an actual attacker. But what about next time? What about the security researcher whose hostname happens to match? The pentester evaluating their product? Hell, what about corporate users whose hostname accidentally matches something in their database?
The fact that they thought publishing this was a good idea tells you a lot. This isn't some one-off investigation. This is, apparently, how they operate.
What about the time before this where it wasn't an attacker, so they didn't write an article about it, and so we never found out about it?
Having technical capability doesn't create ethical permission.
The distinction between "can" and "should" is fundamental to data governance - a concept that exists precisely because unrestricted access to customer data, even for security purposes, creates massive ethical and legal problems.
Huntress didn't monitor a contracted customer's systems for that customer's benefit. They surveilled a trial user for three months based on a hostname match, then published the results. That's not "how their software works" - that's a choice about how to use the access their software provides.
If you genuinely can't see the difference between contracted security monitoring and opportunistic surveillance of trial users, you shouldn't be commenting on security practices at all, let alone so confidently.
> We knew this was an adversary, rather than a legitimate user, based on several telling clues. The standout red flag was that the unique machine name used by the individual was the same as one that we had tracked in several incidents prior to them installing the agent.
So in any other context, they probably wouldn't do any digging into the machine or user history, but they did this time because they already had high confidence of malicious use from this endpoint.
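To illustrate what that triage might look like (a hypothetical sketch, not Huntress's actual pipeline or schema), the step described in the quote could be as simple as checking the hostname a newly enrolled agent reports against a set of machine names tracked from prior incidents:

```typescript
// Hypothetical sketch: flag a new agent enrollment when its reported
// hostname matches an indicator tracked from earlier incidents.
// The types and field names here are illustrative only.
interface Enrollment {
  agentId: string;
  hostname: string;
  enrolledAt: Date;
}

const trackedHostnames = new Set<string>([
  // machine names seen in prior incident telemetry (placeholder values)
  "DESKTOP-EXAMPLE1",
  "WIN-EXAMPLE2",
]);

function triageEnrollment(e: Enrollment): "escalate" | "routine" {
  // Normalize case so "Desktop-Example1" still matches.
  return trackedHostnames.has(e.hostname.toUpperCase()) ? "escalate" : "routine";
}
```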
So I can only assume that a lot of the residential machines with proxies on them offered by companies like these actually had those proxies installed by malware. The companies themselves may not even be aware of this.
(I'm not saying that LunaProxy in particular is like this. I actually have never heard of LunaProxy before now, so the above may not even apply to it. Regardless, it's still worth applying caution.)
I am curious where the red line is.
Any criminal activity or just behavior that the analysts find interesting?
When doing red team engagements, we do the same: install the same security solutions as the customer and work around them. Could that be what happened here?
That the analysts spotted him and were able to connect it to existing cases is just good craftsmanship.
I no longer feel that it’s relevant to discuss a red line here. Huntress just did their job.
But some of these, like Bloodhound, are not really telling you much you didn't know. They are tools to make exploiting access, whether authorized or otherwise, easier and more automated. Hell, even in the case of Cobalt Strike, they do their best to limit who can obtain it and chase down rogue copies that get used for real attack purposes.
I'm not really saying anything should (or can) be done about this. Just ruminating about it, because after many years in the industry, seeing a mostly open source stack used for every aspect of cybercrime sometimes surprises me at just how good a job we've done of equipping malicious actors. For all the high-minded talk of making everyone more secure, a lot of things just seem to be done for a mixture of bragging-rights ego and sharing things with each other to make our offensive sec jobs a bit easier.
Anyone who knows anything about macOS knows that it is not possible to disable System Integrity Protection without rebooting into recovery (an environment that it is not possible to actually get events from). So their "detection" is just some random guy typing "csrutil disable" in their terminal and it doing absolutely nothing. I would not be surprised if there is some similar dumb explanation here that they missed, which would make for a substantially less interesting story.
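To illustrate the failure mode being suggested (a hypothetical sketch, not Huntress's actual detection logic), a purely string-based rule would fire on `csrutil disable` even though the command changes nothing outside Recovery:

```typescript
// Hypothetical sketch of a naive command-line detection rule. It fires on
// the string "csrutil disable" regardless of effect -- and outside Recovery
// (the only environment the agent can actually see events from), the
// command does nothing.
interface ProcessEvent {
  command: string; // e.g. "csrutil disable"
}

function flagsSipTampering(event: ProcessEvent): boolean {
  return /csrutil\s+disable/i.test(event.command);
}
```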
Don't tell me your story is "good," let me read it and I'll be the judge of that.
A person like that obviously has extremely poor operational security and is therefore of low competence.
Competent actors likely use virtualization or, in cases where the software is adversarial and may detect virtualization, physical machines (e.g. cheap mini PCs) on isolated, managed networks (e.g. connections routed through a commercial VPN or a residential proxy) not under the control of the machine.
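On the "may detect virtualization" point, a hypothetical check like the sketch below (assuming a Linux guest and Node.js; not any specific product's behavior) looks for hypervisor vendor strings in the DMI data, which is one of several signals anti-analysis code commonly inspects:

```typescript
// Hypothetical sketch: look for hypervisor vendor strings in the DMI
// product name that Linux exposes under sysfs. Real anti-VM checks also
// look at CPUID flags, virtual devices, MAC address prefixes, etc.
import { readFileSync } from "node:fs";

const vmVendors = ["VMware", "VirtualBox", "KVM", "QEMU", "Virtual Machine"];

function looksVirtualized(): boolean {
  try {
    const product = readFileSync("/sys/class/dmi/id/product_name", "utf8");
    return vmVendors.some((v) => product.includes(v));
  } catch {
    // File missing (non-Linux or restricted) -- treat as unknown.
    return false;
  }
}
```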
Also, styxmmarket doesn't appear to be a dark web marketplace/forum in any way. It doesn't even have an onion address? It has a .com domain, something that should be easy for the authorities to seize. It's probably a honeypot of some kind.