> This isn't a single point of failure - it's a systemic crisis.
> One in seven breaches isn't a sophisticated external attack - it's someone inside the organisation accessing data they shouldn't.
> These organisations aren't browsing - they're buying
As written, the guidelines talk about AI-generated comments, not AI-generated submitted articles.

In any case, just flag the submission and move on.
> That number isn't a projection. It isn't an estimate. It's the sum total of confirmed individuals affected across 735 breach reports filed with the HHS Office for Civil Rights - and it's growing every week.
Facepalm.
The real takeaway should be that at every level -- government, corporate, healthcare entities, personal -- we need to rethink how we're acting in the face of these disasters.
Government should recognize that its current regulations are insufficient and look for ways to refine them.
Corporations and health-care entities should be asking themselves, "Do I really need to store this data? If so, how do I store it securely, make my systems less vulnerable to attack, make my personnel more informed about phishing, store it for the minimum amount of time, etc."
And we as individuals should be asking ourselves whether so many health-care entities need to store so much data about us.
The breach was social engineering of a customer support rep.
Having worked with them, they’re absolutely necessary for healthcare (in its current form; don’t get me started) to function. The alternative is integrating with hundreds of payers (won’t happen) or doing it by fax/mail (disaster).
- better security training for employees
- don't store 193 M sensitive records in such a way that one social-engineering attack gives you access to all of them
- don't store 193 M sensitive records without appropriate encryption, and make it hard to steal both the records and the decryption mechanism.
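The last two bullets describe envelope encryption: each record gets its own key, and record keys are stored only wrapped under a master key held elsewhere, so stealing the record store alone reveals nothing and no single key unlocks everything. A minimal sketch of the pattern, assuming nothing about any particular vendor's stack (a real system would use AES-GCM and a hardware-backed KMS; SHA-256 in counter mode is used here as a toy stream cipher only to keep the demo dependency-free):

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256 counter-mode keystream (demo only, not production crypto)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# In production this key lives in a KMS/HSM, never on the record server.
MASTER_KEY = secrets.token_bytes(32)

def encrypt_record(plaintext: bytes) -> dict:
    record_key = secrets.token_bytes(32)  # unique key per record
    return {
        "ciphertext": keystream_xor(record_key, plaintext),
        # The record key is stored only in wrapped form; leaking one
        # wrapped key (or one ciphertext) compromises at most one record.
        "wrapped_key": keystream_xor(MASTER_KEY, record_key),
    }

def decrypt_record(rec: dict) -> bytes:
    record_key = keystream_xor(MASTER_KEY, rec["wrapped_key"])
    return keystream_xor(record_key, rec["ciphertext"])
```

The point of the design is the blast-radius limit: an attacker who social-engineers their way into the record store gets ciphertext and wrapped keys, but decryption still requires the separately guarded master key.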
> It is still unclear what prompted the hack. The prosecution claims that TAD Group tried to blackmail several companies to hire its services, inducing them with hacked information from their websites. However, no company has publicly complained yet. [0]
0 - https://kinsights.capital.bg/politics_and_society/2019/09/17...
The sheer hostility by many people on here to data protection law (hello GDPR) suggests you are going to have a hard time getting such laws passed in the USA.
2. If you get breached, you have a problem. If everyone gets breached it starts to look more like cost-of-business (and that might be cheaper than a cyber firm that doesn't actually fix the problem [but looks good on audits])
3. I wonder if the breached data is entering AI training corpora. Will I be able to ask OpenAI, "Does Joe Bloggs, 75 Penn Ave NY, have any underlying health conditions I should know about?"
the industry standard seems to be:
- release "oopsie" statement
- engage "cybersecurity firm" to investigate
- give out free credit monitoring for a year (fucking worthless)
and so far it seems to be working just fine
But forcible dilution (partial or total seizure) of the corporation? A mandatory insurance coverage? Absolutely.
We already have statutory HIPAA violation penalties, and I am extremely in favor of assessing them in a breach. The question is whether they are sufficient.
That should incentivize them to actually invest some money in security. Right now the penalties are tiny numbers that are easier to just pay off and forget about.
That said, class action lawsuits also are part of the cost of business. Nothing is ever going to change unless the boards of directors (not CEOs) can be held liable for the behavior of the companies that they direct.
One of these is CHAMPUS, which indicates that the claim is for a service member or their family - and you can tell which.
As a basic case, accumulate these (as in the CHC breach affecting ~30% of Americans) and you have a nice map of where US military personnel are. Since bases house particular units and types of forces, a nation state can estimate strength and investment in the US military.
In a specific case, the response to claims includes patient responsibility (deductible, co-insurance, co-pay). Add that up for a financial picture, and you've got a nice lead list of service members who have money problems.
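The aggregation described above is trivial once the claims are in hand. A hedged sketch with synthetic rows (field names like `payer` and `patient_resp` are hypothetical, not taken from any real claims format): filter on the military payer code, count by ZIP for the density map, and flag high out-of-pocket totals for the lead list.

```python
from collections import Counter

# Synthetic claim rows; field names and values are illustrative only.
claims = [
    {"payer": "CHAMPUS", "zip": "31905", "patient_resp": 120.0},
    {"payer": "CHAMPUS", "zip": "31905", "patient_resp": 45.0},
    {"payer": "CHAMPUS", "zip": "98433", "patient_resp": 300.0},
    {"payer": "BCBS",    "zip": "31905", "patient_resp": 80.0},
]

# Step 1: payer code identifies military households.
military = [c for c in claims if c["payer"] == "CHAMPUS"]

# Step 2: ZIP counts approximate a map of where personnel live.
by_zip = Counter(c["zip"] for c in military)

# Step 3: high patient responsibility suggests financial pressure.
high_oop = [c for c in military if c["patient_resp"] > 200]

print(by_zip.most_common(1))  # → [('31905', 2)]
```

Nothing here is sophisticated; that is the point. A breach of raw claims data hands an adversary this analysis for free.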
Streamer doxing.
Literally just being trans.
HIV fear mongering.
Illegal fuckery with your insurance rates.
Employment discrimination.
Stalking.
Racial discrimination.
Can you imagine trying to fully trust a mental health professional today? A patient can't see a therapist's notes, but they sure as hell can be breached.
There is zero LEGITIMATE use for your breached health data.
Others may want your health data to blackmail you. Maybe you got an STD from a mistress.
Maybe you have a heart condition and the business you are interested in working for self-insures. They don't want you on their books!
One would like to think the creators of AI have been prudent enough to ensure AI output obeys data protection law; however, the laissez-faire approach the USA takes to data protection (and the hostility of many Americans on here to the GDPR) suggests otherwise.
As opposed to what exactly? A "communist" take on the loss of confidentiality? How might that go?
"There's no problem comrade, what are you talking about?"
This sounds like a failure of government regulation here, not a failure of a broad economic model.
Democrats and Republicans alike always think they're smart by investing in whatever the latest wave of technology is. Here we are.