Astonishing. They clearly feel their users have no choice but to accept this onerous and ridiculous requirement. As if users wouldn't realize that Microsoft had to go way out of its way to write the code that enforces this outcome. All for a feature which provides me dubious benefit. I know who the people in my photographs are. Why is Microsoft so eager to also be able to know this?
Privacy legislation is clearly lacking. This type of action should bring the hammer down swiftly and soundly upon these gross and inappropriate corporate decision makers. Microsoft has needed that hammer blow for quite some time now. This should make that obvious. I guess I'll hold my breath while I see how Congress responds.
A database of pretty much all Western citizen's faces? That's a massive sales opportunity for all oppressive and wanna-be oppressive governments. Also, ads.
Odd choice and poor optics (just limit the number of times you can enable and add a warning screen) but I wouldn't assume this was intentionally evil bad faith.
I’ve seen reports in the past that people found that syncing to the cloud was turned back on automatically after installing Windows updates.
I would not be surprised if Microsoft accidentally flips the setting back on for people who opted out of AI photo scanning.
And so if you can only turn it back off three times a year, it only takes Microsoft messing up and opting you back in three times against your will, and then you are stuck opted in to AI scanning for the rest of the year.
Like you said, they should be limiting the number of times it can be turned back on, not the number of times it can be turned off.
Not trying to say that you could have prevented this; I would not be surprised if Windows 10 enterprise decided to "helpfully" turn on auto updates and updated itself with its fun new "features" on next computer restart.
And even so, let's say they didn't use Windows — I'd still expect the same rigor for any operating system update.
Then you are hopelessly naive.
This was exactly my thought as well.
Right now it doesn't say whether these are supposed to be three different "seasons" of the year during which you can opt out, or three different "windows of opportunity".
Or maybe it means your allocation is limited to three non-surveillance requests per year. Which should be enough for average users. People aren't so big on privacy any more anyway.
Now would these be on a calendar year basis, or maybe one year after first implementation?
And what about rolling over from one year to another?
Or is it use it or lose it?
Enquiring minds want to know ;)
My assumption is that when this feature is on and you turn it off, they end up deleting the tags (since you've revoked permission for them to tag them). If it gets turned back on again, I assume that means they need to rescan them. So in effect, it sounded to me like a limit on how many times you can toggle this feature to prevent wasted processing.
Their disclaimer already suggests they don't train on your photos.
So you can opt out of them taking all of your most private moments and putting them into a data set that will be leaked, but you can only opt out 3 times. What are the odds a "bug" (feature) turns it on 4 times? Anything less than 100% is an underestimate.
And what does a disclaimer mean, legally speaking? They won't face any consequences when they use it for training purposes. They'll simply deny that they do it. When it's revealed that they did it, they'll say sorry, that wasn't intentional. When it's revealed to be intentional, they'll say it's good for you so be quiet.
The good news is that the power of this effect is lost when significant attention is placed on it as it is in this case.
This is how my parents get Binged a few times per year
So to me it looks like MS is trying to keep users from hammering MS's infrastructure with repeated, expensive full scans of their library. I would have worded it differently and said "you can only turn ON this setting 4 times a year". But maybe they do want to leave the door open to "accidentally" pushing a wrong setting to users.
Nobody really believes the fiction about processing being heavy and that's why they limit opt outs.
Aren't these 2 different topics? MS and big-tech in general make things opt-out so they can touch the data before users get the chance to disable this. I expect they would impose a limit to how many times you go through the scanning process. I've run into this with various other services where there were limits on how many times I can toggle such settings.
But I'm also having a hard time giving MS the benefit of the doubt, given their history. They could have said, like GP suggested, that you can't turn it "on", not "off".
> As stated many times elsewhere here .... Nobody really believes the fiction
Not really fair though, wisdom of the crowd is not evidence. I tend to agree on the general MS sentiment. But you stating it with confidence without any extra facts isn't contributing to the conversation.
Analyzing and tagging photos is not free. Many people don't mind their photos actually being tagged, but they are a little more sensitive about facial recognition being used.
That's probably why they separate these out, so you can get normal tagging if you want without facial recognition grouping.
https://support.microsoft.com/en-us/office/group-photos-by-p...
If you have a large list of scenarios where Microsoft didn't respect privacy settings or toggles, I would be interested in seeing them.
I know there have been cases where software automated changes to Windows settings that were intended to only be changed by the user. Default browsers were one issue, because malicious software could replace your default browser even with lower permissions.
Are you talking about things like that, or something else?
Nobody. Absolutely nobody. Believes it's to save poor little Microsoft from having their very limited resources wasted by cackling super villain power users who'll force Microsoft to scan their massive 1.5 GB meme image collections several times.
If it was about privacy as you claim in another comment, it would be opt in. Microsoft clearly doesn't care about user privacy, as they've repeatedly demonstrated. And making it opt out, and only three times, proves it. Repeating the same thing parent comments said is a weird strategy. Nobody is believing it.
Then why are they doing it? Maybe because the CIA/NSA and advertisers pay good money.
Most moms and old folks aren't going to fuss over or understand privacy and technical considerations; they just want to search for things like "greenhouse" and find that old photo of the greenhouse they set up in the backyard 13 years ago.
It's one thing if all of your photos are local and you run a model to process your entire collection locally, then you upload your own pre-tagged photos. Many people now only have their photos on their phones and the processing doesn't generally happen on the phone for battery reasons. You CAN use smaller object detection/tagging models on phones, but a cloud model will be much smarter at it.
They understand some of this is a touchy subject, which is why they have these privacy options and have limitations on how they'll process or use the data.
In a really sad way.
Maybe in your social bubble. I don't know anyone with a OneDrive subscription.
Someone show me any case where big tech has successfully removed such data from already-trained models, or, if they're unable to do that with the black boxes they create, removed the whole black box because a few people complained about their data being in it. No one can, because this has not happened. Just like ML models are used as laundering devices, they are also used as responsibility shields for big tech, who rake in the big money.
This is M$'s real intention here. Let's not fool ourselves.
If that was the case, the message should be about a limit on re-enabling the feature n times, not about turning it off.
Also, if they are concerned about processing costs, the default for this should be off, NOT on. The default for any feature like this that uses customers' personal data should be OFF for any company that respects its customers' privacy.
> You are trying to reach really far out to find a plausible
This behavior tallies up with other things MS have been trying to do recently to gather as much personal data as possible from users to feed their AI efforts.
Their spokesperson also avoided answering why they are doing this.
On the other hand, your comment seems to be reaching really far to portray this as normal behavior.
worst possible reading of any given feature must be assumed to the detriment of the user and benefit of the company
Honestly, these days, I do not expect much of Microsoft. In fact, I recently thought to myself, there is no way they can still disappoint. But what do they do? They find a way damn it.
So if you enable the feature, it sends your photos to MS to scan... If you turn it off, they delete that data, meaning if you turn it on again, they have to process the photos again. Every time you enable it, you are using server resources.
However, this should mean that they don't let you re-enable it after you turn it off 3 times, not that you can't turn it off if you have enabled it 3 times.
all facial grouping data will be permanently removed within 30 days
I feel like you're way too emotionally invested in whatever this is to assess it without bias. I don't care what the emotions are around it, that's a marketing issue. I only care about the technical details in this case and there isn't anything about it in particular that concerns me.
It's probably opt-out, because most users don't want to wait 24 hours for their photos to get analyzed when they just want to search for that dog photo from 15 years ago using their phone, because their dog just died and they want to share old photos with the family.
This doesn't apply to your encrypted vault files. Throw your files in there if you don't want to toggle off any given processing option they might add 3 years from now.
Then proceeds to appeal to emotion with dog photo statement.
It's super common for people to take a cynical interpretation of something and just run with it, because negativity bias goes zoom.
Be less deterministic than that, prove you have free will and think for yourself.
Clearly, you personally can't think of a reason yourself based on that 'probably' alone.
<< I feel like you're way too emotionally invested
I think. You feel. I am not invested at all. I have... limited encounters with Windows these days. But it would be silly to simply dismiss it. Why? For the children, man. Think of the poor children who were not raised free from this silliness.
<< I only care about the technical details in this case and there isn't anything about it in particular that concerns me.
I can respect that. What are those technical details? MS was a little light on the details.
"Microsoft collects, uses, and stores facial scans and biometric information from your photos through the OneDrive app for facial grouping technologies. This helps you quickly and easily organize photos of friends and family. Only you can see your face groupings. If you share a photo or album with another individual, face groupings will not be shared.
Microsoft does not use any of your facial scans and biometric information to train or improve the AI model overall. Any data you provide is only used to help triage and improve the results of your account, no one else's.
While the feature is on, Microsoft uses this data to group faces in your photos. You can turn this feature off at any time through Settings. When you turn off this feature in your OneDrive settings, all facial grouping data will be permanently removed within 30 days. Microsoft will further protect you by deleting your data after a period of inactivity. See the Microsoft account activity policy for more information."
You can also see here some of the ways they're trying to expose these features to users, who can use Co-Pilot etc. https://techcommunity.microsoft.com/blog/onedriveblog/copilo...
I turn all Co-Pilot things off and I've got all those AI/tagging settings off in OneDrive, but I'm not worried about the settings being disingenuous currently.
There's always a worry that some day, a company will change and then you're screwed, because they have all your data and they aren't who you thought they were anymore. That's always a risk. Just right now, I'm less worried about Microsoft in that way than I am with other companies.
In a way, being anti-government is GOOD, because overly relying on government is dangerous. The same applies to all these mega-platforms. At the same time, I know a lot of people who have lost a lot of data, because they never had it backed up anywhere, and people who have the data, but can't find anything, because there's so much of it and none of it is organized. These are just actual real-world problems, and Microsoft legitimately sees that the technology is there now to solve these problems.
That's what I see.
Did this line ever win an argument for you, or do you just use it to annoy whoever you're talking to?
After all, sometimes an emotional reaction comes from a logical basis, but the emotion can avalanche and then the logical underpinnings get swept away so they don't get re-evaluated the way they should.
They should limit the number of times you turn it on, not off. Some PM probably overthought it and insisted you need to tell people about the limit before turning it off and ended up with this awkward language.
Then you can guess Microsoft hopes to make even more money than it costs them running this feature.
Language used is deceptive and comes with "not now" or "later" options and never a permanent "no". Any disagreement is followed by a form of "we'll ask you again later" message.
Companies are deliberately removing users' control over software through dark patterns to achieve their own goals.
An advanced user may not want their data scanned, for whatever reason, and with this setting they cannot control the software, because the vendor decided it's just 3 times and after that the setting goes permanently "on".
And considering all the AI push within Windows and Microsoft products, it is rather impossible to assume that MS will not be interested in training their algorithms on their customers'/users' data.
---
And I really don't know how else you can interpret this whole talk with an unnamed "Microsoft publicist" when:
> Microsoft's publicist chose not to answer this question
and
> We have nothing more to share at this time
but as hostile behavior. Of course they won't admit they want your data, but they want it and will have it.
Telling that these companies have some real creeps high up.
That would be a limit on how many times you can enable the setting, not preventing you from turning it off.
I don't know what they're seeing from their side, but I'm sure they have some customers that have truly massive photo collections. It wouldn't surprise me if they have multiple customers with over 40TB of photos in OneDrive.
We know all major GenAI companies trained extensively on illegally acquired material, and they were hiding this fact. Even the engineers felt this wasn't right, but there were no whistleblowers. I don't believe for a second it would be different with Microsoft. Maybe they'd introduce the plan internally as a kind of CSAM scanning, but, as opposed to Apple, they wouldn't inform users. The history of their attitude towards users is very consistent.
There is that initial phase of potential fair use within reason, but the illegal acquisition is still a crime. Eventually after they've distilled things enough, it can become more firmly fair use.
So they just take the legal risk and do it, because after enough training the legal challenges should be within an acceptable range.
That makes sense for publicly released images, books and data. There exists some plausible deniability in sweeping up influences that have already been released into the world. Private data can contain unique things which the world has not seen yet, which becomes a bigger problem.
Meta/Facebook? I would not and will never trust them. Microsoft? I still trust them a lot more than many other companies. The fact many people are even bothered by this, is because they actually use OneDrive. Why not Dropbox or Google Drive? I certainly trust OneDrive more than I trust Dropbox or Google Drive. That trust is not infinite, but it's there.
If Microsoft abuses that trust in a truly critical way that resonates beyond the technically literate, that would not just hurt their end-user personal business, but it would hurt their B2B as well.
But really I know nothing about the process. I was going to make an analogy about how it would be the same as Adobe deleting all your drawings after you let your Photoshop subscription lapse. But I realized that this is exactly the computing future these sorts of companies want, and my analogy is far from the proof by absurdity I wanted it to be. Sigh, now I am depressed.
I bet you "have nothing to hide".
We work with computers. Everything that gets in the way of working wastes time and nerves.
Did you read it all? They also suggest that they care about your privacy. /s
This was pre-AI hype, perhaps 15 years ago. It seems Microsoft feels it is normalised now. More and more, you are their product. It strikes me as great insecurity.
Presumably it can be used for filtering as well - find me all pictures of me with my dad, etc.
The biggest worry would always be that the tools would be stultifying and shit but executives would use them to drive layoffs on an epic scale anyway.
And hey now here we are: the tools are stultifying and shit, the projects have largely failed, and the only way to fix the losses is: layoffs.
Manager: hey let's go all in on this fancy new toy! We'll all be billionaires!
Employee: oh yeah I will work nights and weekends with no pay for this! I wanna be a billionaire!
Manager: actually it failed, we ran out of money, you no longer have a job... But at least we didn't build skynet, right?
If you don't trust Microsoft but need to use Onedrive, there are encrypted volume tools (e.g. Cryptomator) specifically designed for use with Onedrive.
I look forward to getting a check from Microsoft for violating my privacy.
I live in a state with better-than-average online privacy laws, and scanning my face without my permission is a violation. I expect the class action lawyers are salivating at Microsoft's hubris.
I got $400 out of Facebook because it tagged me in the background of someone else's photo. Your turn, MS.
Depending on the algorithm and parameters, you can easily get a scary amount of false positives, especially using algorithms that shrink images during hashing, which is a lot of them.
I think the premise for either system is flawed and both are too error prone for critical applications.
I'd imagine outside of egregious abuse and truly unique images, you could squint at a legal image and say it looks very much like another illegal image, and get a false positive.
From what I'm reading about PhotoDNA, it's your standard phashing system from 15 years ago, which is terrifying.
But yes, you can add heuristics, but you will still get false positives.
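To make the shrink-and-hash concern concrete, here is a minimal sketch of an average-hash ("aHash") style perceptual hash, the general class of algorithm that shrink-based phashing systems belong to. It is not PhotoDNA or anything Microsoft or Apple actually ships; Pillow, the 8x8 size, and the file names are assumptions for the example. Reducing an image to an 8x8 grayscale grid discards nearly all detail, which is exactly why two unrelated images with a similar coarse brightness layout can land within a small Hamming distance of each other and get flagged as a "match".

    # Sketch of an average-hash perceptual hash (illustrative only, assumes Pillow).
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        """Shrink to size x size grayscale, then set one bit per pixel above the mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for i, p in enumerate(pixels):
            if p > mean:
                bits |= 1 << i
        return bits

    def hamming(a: int, b: int) -> int:
        """Number of differing bits; small distances are treated as 'matches'."""
        return bin(a ^ b).count("1")

    # Hypothetical usage: two unrelated photos with similar composition can end up
    # only a few bits apart, which is the false-positive risk described above.
    # h1, h2 = average_hash("vacation.jpg"), average_hash("unrelated.jpg")
    # print(hamming(h1, h2))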
Also, once the system is created it’s easy to envision governments putting whatever images they want to know people have into the phone or changing the specificity of the filter so it starts sending many more images to the cloud. Especially since the filter ran on locally stored images and not things that were already in the cloud.
Their nudity filter on iMessages was fine though (I don’t think it ever sends anything to the internet? Just contacts your parents if you’re a minor with Family Sharing enabled?)
A key point is that the system was designed to make sure the database was strongly cryptographically private against review. -- that's actually where 95% of the technical complexity in the proposal came from: to make absolutely sure the public could never discover exactly what government organizations were or weren't scanning for.
Folks did read. They guessed that known hashes would be stored on devices and images would be scanned against that. Was this a wrong guess?
> the conversation was dominated by uninformed outrage about things that weren’t happening.
The thing that wasn't happening yet was mission creep beyond the original targets. Because expanding beyond originally stated parameters is a thing that happens with far-reaching monitoring systems. Because it happens with the type of regularity that is typically limited to physics.
There were secondary concerns about how false positives would be handled, and concerns about what the procedures were for any positive. Given governments' propensity to ruin lives now and ignore that harm (or craft a justification) later, the concerns seem valid.
That's what I recall the concerned voices were on about. To me, they didn't seem outraged.
Yes. Completely wrong. Not even close.
Why don’t you just go and read about it instead of guessing? Seriously, the point of my comment was that discussion with people who are just guessing is worthless.
> Yes. Completely wrong. Not even close.
Per Apple:
> Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes

Recapping here. In your estimation, "known hashes would be stored on devices and images would be scanned against that" is not even close to "the system performs on-device matching using a database of known CSAM image hashes". And folks who read the latter and thought the former were, in your view, "completely wrong". Well, okay then.
https://web.archive.org/web/20250905063000/https://www.apple...
I’m not making people guess. I explained directly what I wanted people to know very, very plainly.
You are replying now as if the discussion we are having is whether it’s a good system or not. That is not the discussion we are having.
This is the point I was making:
> instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage about things that weren’t happening.
The discussion is about the ignorance, not about the system itself. If you knew how it worked and disagreed with it, then I would completely support that. I’m not 100% convinced myself! But you don’t know how it works, you just assumed – and you got it very wrong. So did a lot of other people. And collectively, that drowned out any discussion of how it actually worked, because you were all mad about something imaginary.
You are perfectly capable of reading how it worked. You do not need me to waste a lot of time re-writing Apple’s materials on a complex system in this small text box on Hacker News so you can then post a one sentence shallow dismissal. There is no value in doing that at all, it just places an asymmetric burden on me to continue the conversation.
That said, I think this is mostly immaterial to the problem? As the comment you’re responding to says, the main problem they have with the system is mission creep, that governments will expand the system to cover more types of photos, etc. since the software is already present to scan through people’s photos on device. Which could happen regardless of how fancy the matching algorithm was.
Just as an example, part of my response here was to develop and publish a second-preimage attack on their hash function -- simply to make the point concrete that various bad scenarios would be facilitated by the existence of one.
I would not care if it worked 100% accurately. My outrage is informed by people like you who think it is OK in any form whatever.
Phew, not AI then… ?
It’s not hard to guess the problem: Steady state operation will only incur scanning costs for newly uploaded photos, but toggling the feature off and then on would trigger a rescan of every photo in the library. That’s a potentially very expensive operation.
If you’ve ever studied user behavior you’ve discovered situations where users toggle things on and off in attempts to fix some issue. Normally this doesn’t matter much, but when a toggle could potentially cost large amounts of compute you have to be more careful.
For the privacy sensitive user who only wants to opt out this shouldn’t matter. Turn the switch off, leave it off, and it’s not a problem. This is meant to address the users who try to turn it off and then back on every time they think it will fix something. It only takes one bad SEO spam advice article about “How to fix _____ problem with your photos” that suggests toggling the option to fix some problem to trigger a wave of people doing it for no reason.
Assuming that it doesn't mysteriously (due to some error or update, no doubt) move back to the on position by itself.
When I truly need Windows, I have an ARM VM in Parallels. Right now it gets used once a year at tax time.
But tomorrow they’ll add a new feature, with a different toggle, that does the same thing but will be distinct enough. That toggle will default on, and you’ll find it in a year and a half after it’s been active.
Control over your data is an illusion. The US economy is built upon corporations mining your data. That’s why ML engineers got to buy houses in the 2010s, and it’s why ML/AI engineers get to buy houses in the 2020s.
- "Scan photos I upload" yes/no. No batch processing needed, only affects photos from now on.
- "Delete all scans (15,101)" if you are privacy conscious
- "Scan all missing photos (1,226)" can only be done 3x per year
"But users are dummies who cannot understand anything!" Not with that attitude they can't.
This would create a situation where some of the photos have tags and some don’t. Users would forget why the behavior is different across their library.
Their solution? Google it and start trying random suggestions. Toggle it all on and off. Delete everything and start over with rescanning. This gets back to the exact problem they’re trying to avoid.
> - "Scan all missing photos (1,226)" can only be done 3x per year
There is virtually no real world use case where someone would want to stop scanning new photos but also scan all photos but only when they remember to press this specific button. The number of users who would get confused and find themselves in unexpected states of half-scanned libraries would outweigh the number of intentional uses of this feature by 1000:1 or more.
> Google it and start trying random suggestions.
If the options were indeed as I suggested, why would the top Google result not say "click the very clearly labelled 'scan missing photos' button"?
Google search results are useless when tech companies don't empower users with clear control over their data. Users are reduced to superstitious peasants not because that's their nature, but because they are not given the capability to act otherwise.
> It’s not hard to guess the problem: toggling the feature off and then on would trigger a rescan of every photo in the library.
That would be a wild way to implement this feature. I mean, it's Microsoft, so I wouldn't be surprised if it was done in the dumbest way possible, but god damn, this would be such a dumb way to implement this feature.
IANAL, but I think the key remaining in the user’s possession doesn’t matter as far as the company with a deletion requirement is concerned.
Maybe you'd have to force the user to export the key to an external file (and forget the path) or encrypt it with some mechanism that the app isn't in control of.
If this were happening on device (lol), then you should do both the scanning and deleting operations at times of typically low activity, just like how you schedule updates (though Microsoft seems to have forgotten how to do this). Otherwise, doing the operations at toggle time just slams the user's computer, which is a great way to get them to turn it off! We'd especially want the process to have high niceness and to be able to pause itself so it doesn't hinder the user. Make sure they're connected to power, or at least above some battery threshold if on a laptop.
If you can scan on device and upload, again, you should do this at times of low activity. But you're also not going to be deleting data right away, because that data is going to be held across several servers. That migration takes time. There's a reason your Google Takeout can take a few hours, and why companies like Facebook say your data might still be recoverable for 90 days.
Doing so immediately also creates lots of problems. Let's say you enable it, let it go for a while, then just toggle back and forth like a madman. Does your toggling send the halt signal to the scanning operation? What does toggling it back on do? Do you really think this is going to happen smoothly without things stepping on each other? You're setting yourself up for a situation where the program is both scanning and deleting at the same time. If this is implemented no better than most things I've seen from Microsoft, then this will certainly happen and you'll end up in an infinite loop. All because you assume there is no such thing as, or no possibility of, an orphaned process. You just have to pray that these junior programmers with senior titles actually know how to do parallelization...
In addition to the delay, you should be marking the images in a database to create a queue. Store the hash of the file as the ID and mark it appropriately. We are queuing our operations and we want to have failsafes. You're scanning the entire fucking computer, so you don't want to do things haphazardly! Go ahead, take a "move fast and break things" approach, and watch your customers get a blue screen of death and wake up to having their hard drives borked.
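As a rough illustration of that queue idea (every name here is hypothetical, not anything Microsoft actually ships): toggling only records intent, keyed by file hash, and a single background worker drains the queue during idle time, so scan and delete jobs can't race each other and rapid toggling simply overwrites the pending action per file.

    # Sketch of a hash-keyed job queue for scan/delete work (illustrative only).
    import hashlib
    import sqlite3

    db = sqlite3.connect("photo_jobs.db")
    db.execute("""CREATE TABLE IF NOT EXISTS jobs (
        file_hash TEXT PRIMARY KEY,   -- content hash identifies the photo
        path      TEXT NOT NULL,
        action    TEXT NOT NULL,      -- 'scan' or 'delete_tags'
        state     TEXT NOT NULL DEFAULT 'pending'
    )""")

    def file_hash(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def enqueue(path: str, action: str) -> None:
        """Called from the toggle handler; the newest intent wins per file."""
        db.execute(
            "INSERT INTO jobs (file_hash, path, action, state) VALUES (?, ?, ?, 'pending') "
            "ON CONFLICT(file_hash) DO UPDATE SET action = excluded.action, state = 'pending'",
            (file_hash(path), path, action),
        )
        db.commit()

    def drain_queue_when_idle() -> None:
        """Run by a single background worker during low-activity periods."""
        rows = db.execute(
            "SELECT file_hash, path, action FROM jobs WHERE state = 'pending'"
        ).fetchall()
        for h, path, action in rows:
            # do_scan(path) or do_delete_tags(path) would run here (hypothetical helpers)
            db.execute("UPDATE jobs SET state = 'done' WHERE file_hash = ?", (h,))
        db.commit()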
> unless you have some clever way I’ve not thought of?
Seriously, just sit down and think about the problem before you start programming. The whiteboard or pen and paper are some of your most important weapons as a programmer. Your first solution will be shit and that's okay. Your second and even third solutions might be shit too. But there's a reason you need depth. We haven't even gotten into any real depth here either. Our "solution" here has no depth, it's just the surface level, and I'm certain the first go will be shit. But you'll figure more stuff out and find more problems and fix them. I'm also certain others will present other ideas that can be used too. Yay, collaboration! It's all good unless you just pretend you're done and problems don't exist anymore. (Look ma! All the tests pass! We're bug free!) For Christ's sake, what are you getting a quarter-million+ salary for?

If disabling the feature kept the data, that would be a real problem.
I don’t know why you think it’s dumb that they purge the data when you turn a feature off. That’s what you want.
> I don’t know why you think it’s dumb that they purge the data when you turn a feature off. That’s what you want.
I think you should have read the other comments before responding. There are multiple solutions here. And note that my answer is suggesting a delay so we don't hammer the user's computer. Toggling should schedule the event, not initiate it. You've oversimplified the problem, treating it as if operations can be performed instantaneously and are all performed locally.

They are exactly where I left them 20 years ago.
It's very sad that I can't stop using them again for doing this.
Just as linking to original documents, court filings etc. should be a norm in news reporting, it should also be a norm to summarize PR responses (helpful, dismissive, evasive or whatever) and link to a summary of the PR text, rather than treating it as valid body copy.
I'm going way off topic, and off on a tangent here.
Anecdote, famous public broadcaster TV talk show in Germany (Markus Lanz): The invited politician failed to answer, so the host did what you asked. Three times. Then he just stopped and went to the next topic like nothing happened.
For anyone thinking this is reasonable, what else could he have done, after all?
This method is utterly useless for the public watching the dialog, but has benefits for both the show and the politician. The public won't learn a thing. The host can pretend to be super tough on evasive guests. The politician is let off the hook very easily: he just has to deflect the question(s) with canned standard responses three times, easy enough, no consequences.
Next day, the very critical people on reddit wrote highly upvoted comments celebrating how "tough" the host was on the politician.
But the whole scenario is always the same, every single time, almost like it's scripted: The guest only has to deflect the "tough" question a few times and then nothing else happens, they just move on. It's also eerie to see the change in the host and their questions, from acting tough three times to changing back to acting amiably and forgetting about the unanswered question.
At this point this is all just part of the "act tough but don't upset the guest" show.
You may ask, but what can they do?
Well, how about throwing the guy out? What's the use of them as an interview partner if the interview is used as a mere PR piece? They should just have replacement guests on standby. That won't be a high-level person, but it does not need to be. Yes, they will have trouble getting politicians in if they have to fear actually having to answer. So what? Is the show being a one-sided PR piece any better? They could just interview normal non-Berlin-politics-bubble people instead. There are soooo many who have interesting things to say, much more interesting than some politician's prepared statements.
Unless there are actual consequences, like ending the interview right there and letting the viewers or readers know that answers were refused, acting tough does not matter if it can just be waited out.
How about 12 times? See BBC News’s Jeremy Paxman interview with Michael Howard - https://youtu.be/IqU77I40mS0?si=NpW7cSqi2eXsQt8s
Should have just said 'link to a screenshot of the PR text', apologies for the confusion
Modern reporting is tricky because there are hungry sharks circling all sides.
- Victim says: hi, this thing is messed up and people need to know about it.
- Company says: "bla bla bla" legal speak, we don't recognise an issue, "bla bla bla".
End of article, instead of saying "this comment doesn't seem to reflect the situation" or otherwise pointing out that anybody with a brain can see the two statements are not equal in evidence or truth.
The privacy violations they are racking up are very reminiscent of prior behavior we've seen from Facebook and Google.
They'd probably do it happily even without a warrant.
I'd bet Microsoft is doing this more because of threats from USG than because of advertising revenue.
I'm old enough to remember when companies were tripping over themselves after 9/11 trying to give the government anything they could to help them keep an eye on Americans. They eventually learned to monetize this, and now we have the surveillance economy.
https://www.pcmag.com/news/the-10-most-disturbing-snowden-re...
...and build them a nice portal to submit their requests and get the results back in real time.
Not in a million years. See you in court. As often, just because a press statement says something, it's not necessarily true and maybe only used to defuse public perception.
The sad thing is that they've made it this way, as opposed to Windows being inherently deficient; it used to be a great blend of GUI convenience with ready access to advanced functionality for those who wanted it, whereas MacOS used to hide technical things from a user a bit too much and Linux desktop environments felt primitive. Nowadays MS seems to think of its users as if they were employees or livestock rather than customers.
Meanwhile Apple is applying a different set of toxic patterns: lack of interoperability with other OSes, their apps try to store data mainly on iCloud, the iPhone has no headphone jack, etc.
They are a hard nosed company focused with precision on dominance for themselves.
- EU governments keep auditing us, so we gotta stay on our toes, do things by the book, and be auditable - it's bad press when we get caught doing garbage like that. And bad press is bad for business
In my org, doing anything with customer's data that isn't directly bringing them value is theoretically not possible. You can't deliver anything that isn't approved by privacy.
Don't forget that this is a very big company. It's composed of people who actually care and want to do the right thing, people who don't really care and just want to ship and would rather not be impeded by compliance processes, and people who are actually trying to bypass these processes because they'd sell your soul for a couple bucks if they could. For the little people like me, the official stance is that we should care about privacy very much.
Our respect for privacy was one of the main reasons I'm still there. There was a good period of time where the actual sentiment was "we're the good guys", especially compared to Google and Facebook. A solid portion of that was that our revenue was driven by subscriptions rather than ads. I guess the appeal of taking customers' money and exploiting their data is too big. The kind of shit that will get me to leave.
The edges and frontiers are what bug me. AI mania is a pox.
How long has MS been putting ads in the start menu?
Even Bill Gates dumped the Windows Phone and switched to Android (he prefers the Samsung Galaxy Fold4)
https://www.gearbrain.com/bill-gates-windows-phone-android-2...
Any company that has to state that they take privacy very seriously, doesn't.
The rest of your response makes that very clear: you are focused on doing things by the book, i.e. the bare minimum required by law instead of actually giving one shit about privacy and security yourself.
Erm, dude ....
IANAL, and I am sure most people do not need to be lawyers to figure out that not allowing people to permanently opt-out of photo scanning is almost certainly going to be in contravention of every EU law in the book.
I hope the EU take Microsoft to the cleaners over this one.
I understand that the company doesn't get the benefit of the doubt in such situations, especially when publicists "choose not to answer" why this feature is done like that. Great job there...
I'm also hoping we get a correction, be it the EU or just PR backlash. As I said, this is the kind of shit that makes me not want to have my name associated with the company.
Did they ever open source anything that really made you think "wow"? The best I could see was them "embracing" Linux, but embrace, extend, extinguish was always a core part of their strategy.
I don't know what this Microsoft thing is that you speak of. I only know a company called Copilot Prime.
Everything I see from that company is farcical. Massive security lapses, lazy AI features with huge privacy flaws, steamrolling OS updates that add no value whatsoever, and heavily relying on their old playbook of just buying anything that looks like it could disrupt them.
P.S. The Skype acquisition was $8.5B in 2011 (that's $12.24B in today's money).
Oh what time does to things!
Looks like nothing has changed.
Or do they end up so enmeshed with the corporate machine that they start to really believe it all makes sense?
At least that's the only way I can imagine them keeping their sanity.
Again you are reinforcing my point. I've directly met people who have said things like this real-life exec quote.
"I love bopping them, just like turtles when they pop their head up for air, bop" *gestured fist hammer motion
In regard to treating people like disposable slaves in order to get what they want.
Source: https://www.dutchnews.nl/2025/10/court-tells-meta-to-give-du...
Any organization this large is going to have approximately the same level of dysfunction overall. But, there are almost always parts of these organizations where specific leaders have managed to carve out a fiefdom and provide some degree of actual value to the customer. In the case of Microsoft, examples of these would be things like .NET, C#, Visual Studio [Code], MSSQL, Xbox.
Windows, Azure & AI are where most of the rot exists at Microsoft. Office is a wash - I am not a huge fan of what has happened to my Outlook install over the years, but Teams has dramatically stabilized since the covid days. Throwing away the rest of the apple because of a few blemishes is a really wasteful strategy.
Microsoft hate was something else in the '90s and 2000s. Yet people stayed with it as if they had no choice while OS/2, AmigaOS, NextStep, BeOS and all those UNIXes died.
Nowadays most things happen in browsers anyways, WINE/Proton have come a long way, and alternatives to almost anything windows-only have reached a critical quality threshold.
Microsoft knows the vast majority of professionals are forced to use their products and services or else they can't put food on the table. That's why Microsoft can operate with near impunity.
Do they get scanned as well without the person's permission?
Users: save files "on their PC" (they think)
Microsoft: Rolls out AI photo-scanning feature to unknowing users, intending to learn something from their photos.
Users: WTF? And there are rules on turning it on and off?
Microsoft: We have nothing more to share at this time.
Favorite quote from the article:
> [Microsoft's publicist chose not to answer this question.]
https://www.microsoft.com/en-us/servicesagreement#15_binding...
Or find services that may not be as easy to use, may cost something and may not have all the features you want, but which won't make unreasonable demands for your data.
In light of the way the US government is carrying on, I'd rather not give Microsoft any of my images.
What is this supposed to mean? That you'd be happier with the dystopia if they were going after people you like less?
Of course, that's also the reason why Lens was deprecated despite being a good, useful app, forcing one to deal with the bloat of Copilot 365.
I can never help myself from hearing this inside, and am just incredibly thankful that we have Linux and FOSS in general. That really gives me hope for humanity at this point.
I type this in Firefox, on NixOS, with all my pics open in another tab, in Immich. Thank you, thank you, thank you.
...I guess Microsoft believes that they're making up for it in AI and B2B/Cloud service sales? Or that customers are just so locked-in that there's genuinely no alternative? I don't believe that the latter is true, and it's hard to come back from a badly tarnished brand. Won't be long before the average consumer hates Microsoft as much as they hate HP (printers).
Who's making the t-shirts? Don't forget the Microsoft logo. They're proud of this!
In my head it's sounding like that Christmas jingle. It's the most wonderful time of the year!
Presumably, it's somewhat expensive to run face recognition on all of your photos. When you turn it off, they have to throw away the index (they'd better be doing this for privacy reasons), and then rebuild it from scratch when you turn the feature on again.
In fact, if you follow the linked page, you'll find a screenshot showing it was originally worded differently, "You can only change this setting 3 times a year" dating all the way back to 2023. So at some point someone made a conscious decision to change the wording to restrict the number of times you can turn it _off_
For example, people who don't use their encrypted vault on OneDrive, so they upload photos that should otherwise be encrypted to their normal OneDrive which gets scanned and tagged. It could be a photo of their driver's license, social security card, or something illicit.
So these users toggle the tagging feature on and off during this time.
Maybe the idea is to push these people's use case to the vault where it probably belongs?
And unlike most things, both prompts require you to explicitly click some sort of "no", not just click away to dismiss. The backup one is particularly obnoxious because you have to flip a shitty little slider as the only button is "continue". Fuck. Off.
If they had taste, someone opinionated over there would knock heads before shipping another version of windows that requires restarts or mutates user settings.
The issue is that this is a feature that 100% should, in any sane world, be opt-in - not opt-out.
Microsoft privacy settings are a case of - “It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard.”
Even my 10(?) year old iPhone X can do facial recognition and memory extraction on device while charging.
My Sony A7-III can detect faces in real time, and discriminate it from 5 registered faces to do focus prioritization the moment I half-press the shutter.
That thing will take mere minutes on Azure when batched and fed through GPUs.
If my hunch is right, the option will have a "disable AI use for x months" slider and will turn itself on without letting you know. So you can't opt out of it completely, ever.
This is probably the case. But Redmond being Redmond, they put their foot in their mouth by saying "you can only turn off this setting 3 times a year" (emphasis mine).
But that's not necessarily true for everyone. And it doesn't need to be this way, either.
For starters I think it'd help if we understood why they do this. I'm sure there's a cost to the compute MS spends on AI'ing all your photos; turning it off under privacy rules means throwing away that compute, and turning it back on creates an additional cost for MS that they've already paid once for nothing. Limiting that makes sense.
What doesn't make sense is that I'd expect virtually nobody to turn it on and off over and over again, beyond 3 times, to the point that cost increases by more than a rounding error... like what type of user would do that, and why would that type of user not be exceedingly rare?
And even in that case, it'd make more sense to do it the other way around: you can turn on the feature 3 times per year, and off anytime. i.e. if you abuse it, you lose out on the feature, not your privacy.
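As a tiny sketch of what that alternative policy could look like (names and numbers made up, purely illustrative): count enables against a yearly budget and leave disabling unrestricted, so abuse costs you the feature rather than your privacy.

    # Sketch of a rate limit on *enabling* rather than disabling (illustrative only).
    from datetime import datetime

    MAX_ENABLES_PER_YEAR = 3

    def try_enable(enable_timestamps: list[datetime], now: datetime) -> bool:
        """Allow enabling only if the yearly re-enable budget isn't spent."""
        this_year = [t for t in enable_timestamps if t.year == now.year]
        if len(this_year) >= MAX_ENABLES_PER_YEAR:
            return False          # feature stays off; no forced scanning
        enable_timestamps.append(now)
        return True

    def disable() -> bool:
        """Turning it off is always allowed, any number of times."""
        return True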
So I think it is an issue that could and should be quickly solved.
That means that all Microsoft has to do to get your consent to scan photos is turn the setting on every quarter.
Very likely true, but we shouldn't have to presume. If that's their motivation, they should state it clearly up front and make the feature off by default. They can put a (?) callout on the UI for design decisions that have external constraints.
If the user leaves it off for a year, then delete the encrypted index from the server...
My wife has a phone with a button on the side that opens the microphone to ask questions to Google. I guess 90% of the audio they get is "How the /&%/&#"% do I close this )(&(/&(%)?????!?!??"
We all know why.
If you can't (work, etc.) try to avoid uploading sensitive documents in onedrive.
I always wondered who uses OneDrive for cloud storage. Hell, I think even Google Drive is better.
Microsoft has really pivoted to AI for all things. I wonder how many customers they will get vs how many they will lose due to this very invasive way of doing things.
Just stop using Microsoft shit. It's a lot easier than untangling yourself from Google.
But Microsoft is pretty easy to avoid after their decade of floundering.
For private repos there is Forgejo, Gitea and Gitlab.
For open-source: Codeberg
Yes, it'll make projects harder to discover, because you can't assume that "everything is on github" anymore. But it is a small price to pay for dignity.
i.e. You’ll do what we tell you eventually.
I wonder if this is also a thing for their EU users. I can think of a few laws this violates.
> [Microsoft's publicist chose not to answer this question.]
With a little more effort you can deploy Nextcloud, Home Assistant and a few other great FOSS projects and completely free yourself from Big Tech. The hardest part will probably be email on a residential connection, but it can be done with the help of a relay service for outgoing mail.
Games I guess.
Both Mac and Linux desktop/laptop machines are better and less loaded with shit. If you don’t need or want a full featured PC you have Android and iOS which are also better. Android you have to be careful of but if you pick well it can be customizable and less loaded with shit.
Steam is available for both Linux and macOS. Are there just not as many game titles? I just saw Cyberpunk show up in the Apple Store for Mac so there seems to be an effort to port more games off Windows.
I have a Windows VM but use it less and less. Only need now is to test and build some software for Windows.
Also: I realized what I do kind of like about Apple and how best to describe their ecosystem. It’s the devil you know. They are fairly consistent in their policies and they are better on privacy than others. Some of their policies suck, but they suck in known consistent ways.
If I left Apple, Linux (probably on Framework) is the only alternative.
A vast majority of games work fine under Linux now; in fact most releases these days work even on day one. The only games that don't really work are ones which use invasive kernel-level anti-cheat systems.
Microsoft: it's just as shit as Microsoft 365 and SharePoint.
That's your problem right there.
> Microsoft only lets you opt out of AI photo scanning
Their _UI_ says they let you opt out. I wouldn't bet on that actually being the case. At the very least - a copy of your photos goes to the US government, and they do whatever they want with it.
Look, scanning with AI is available!
Wow, scanning with AI is now free for everyone!
What? Scanning with AI is now opt-out?
Why would opting-out be made time-limited?
WTF, what's so special about 3x a year? Is it because it's the magic number?
Ah, the setting's gone again, I guess I can relax. I guess the market wanted this great feature, or else they wouldn't have gradually forced it on us. Anyway, you're a weird techie for noticing it. What do you have to hide?