And then some really weird stuff: "he may also have a penchant for video games, excessive drinking, and skipping work".
Guessing my exact location - Oslo Opera House - was impressive though.
That's a very safe guess
> He is susceptible to confirmation bias, in-group bias, the availability heuristic and out-group homogeneity... he may also be prone to excessive phone use, binge-watching TV shows, and impulse buying.
That's literally everyone
The assumptions it makes about religion, politics, income, and biases are kinda lame. It just extrapolates from age, and it isn't correct most of the time.
Glad I'm not your friend, honestly.
It's also pretty fun to do this with Gemma 4 with its very pretty and structured reasoning output (which SotA model providers hide). For example, for one picture that it misidentified as being taken inside the "Long Room of the Old Library at Trinity College Dublin", I can see that it did consider the correct answer (Duke Humfrey's Library in Oxford) early on as one of three candidates, but was apparently misled by the ceiling height and a window in the background.
Not really. They have almost global picture coverage thanks to Google Street View.
They only need small snippets of a picture to geolocate you.
The impressive thing is the massive support infrastructure behind it, in the form of data and algorithmic firepower, that makes the guessing as good as it is.
But, can I offer a quandary? Some companies won't care if it's wrong.
If some executive decides to buy into AI profiling like this, and make customer decisions based on it, then how would the customer ever know:
1. why they are being treated differently
2. how or why to correct it
I don't know if it's scarier being RIGHT or WRONG
If you knew which bike model I was googling yesterday, almost all of these guesses might have been more accurate.
I think this sort of guessing is intended to be combined with additional data the marketers already have, like purchase history, location, social media posts, and so on. Basically the VLM output is treated as another data point rather than the sole source, or the existing data could be fed into the model's prompt before reading the image.
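A minimal sketch of what "feeding the existing data into the prompt" could look like, assuming a vision-capable chat API (OpenAI's here; the model choice, profile data, and image URL are all made-up placeholders, not anything the site discloses):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical first-party data a marketer might already hold on the user.
known_profile = (
    "Age 34, Oslo, recent purchases: bike accessories; "
    "recent searches: gravel bike models."
)

# Send the known data as text alongside the image, so the model
# treats the photo as one more data point rather than the sole source.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Known customer data: {known_profile}\n"
                         "Combine this with what the photo shows and "
                         "suggest ad categories."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/user_photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```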
I'm not sure feeding it personal pictures is a good idea at all
Amusing to me how wrong this is... I don't know how you could determine such characteristics from a photo in any direction. I will admit, though, that my appearance tends to throw mixed and incorrect signals (not an accident). I find the entire concept of appearance signaling pretty off-putting, so I guess this is a great result.
The only things Google Lens has succeeded at for me are age, race, and location. Basically everything else has been very wrong.
> They likely share an agnostic worldview and identify as heterosexual.
I wonder how the model would know that they are heterosexuals?
let's be careful about categorizing people so easily and in such a simplistic way.
Of course any automated classification of that kind quickly gets problematic in multiple ways. In the EU it's a fast track to getting your AI labeled as a "high-risk AI system", which carries higher requirements for quality control, ensuring fairness, user choice, etc.
But doing this on a 20-way parlay like in this case will almost always fail.
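Back-of-the-envelope, with purely illustrative numbers of my own: even if each individual trait guess were 80% accurate, the full 20-trait profile is almost never entirely right:

```python
# Assumes 20 independent guesses, each 80% accurate (illustrative only).
p_all_correct = 0.8 ** 20
print(f"{p_all_correct:.4f}")  # ~0.0115, i.e. roughly a 1% chance
```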
you can't just make something up in your head and apply it to everyone
> Based on his demographic and location, he may adhere to Hinduism, and likely identifies as heterosexual. Considering the socio-political landscape of India, he might lean towards the Bharatiya Janata Party. His biases could include ageism and elitism, along with casteism and colorism. He seems contemplative and calm. He is wearing a grey t-shirt and sunglasses. His interests might span reading, travelling, and exercising, but on the darker side, he may exhibit road rage, neglect family, and overeat.
Clicking on the example photo of a white family with 3 small children in a field.
> Biases: Ageism, classism, racial profiling, microaggressions
???
I showed it some pictures from when I was in a bad headspace, and it successfully associated me with introversion, procrastination, and isolation. For one picture it said an interest of mine was stealing, which was accurate at that time in my youth.
The tech exists and has existed forever, as creepy as it is, I'd rather it be public and accessible than not.
They merely seem to want to point out that this is the way Google, Meta, and anyone else with access to your photos looks at them, and how they will abuse that access by mining your photos for data to sell you stuff.
E.g. the first time I was an extrovert, the second time an introvert… About the only thing that stayed the same was "heterosexual", but that's a statistically safe guess.
"This image shows an adult man, likely in his 40s or 50s, wearing a suit and tie. The location appears to be outdoors, possibly in front of a building or large house, suggested by the architectural details visible in the background and the surrounding trees. He is wearing glasses, and seems to be smiling widely, creating a sense of approachability.
This man is likely Caucasian, and could be earning between $100,000 and $500,000 per year. He is likely Christian, probably heterosexual, and leaning toward the Republican party."
It said PSH had an "ageism" bias, and it said the same about me. It also said he and I have a proclivity for gambling and poor diet, lol
Totally nails The Dude.
If you're looking forward to attracting the attention of automated police systems, then now you know how.
It would be interesting to do something similar with a series of photos. You could maybe interface with a user's photo library and select photos grouped by facial recognition, as sketched below. After all, none of these tracking companies are using just one data point.
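A rough sketch of the grouping step, assuming the open-source face_recognition library (the library choice, folder name, and greedy clustering are my own assumptions, not anything these companies disclose):

```python
import face_recognition
from pathlib import Path

# Greedily group a photo folder by face: each group keeps one
# representative encoding; new faces either match an existing
# representative or start a new group.
groups = []  # each entry: (representative_encoding, [photo paths])

for path in Path("photo_library").glob("*.jpg"):
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        for rep, paths in groups:
            # compare_faces uses a 0.6 distance tolerance by default.
            if face_recognition.compare_faces([rep], encoding)[0]:
                paths.append(path)
                break
        else:
            groups.append((encoding, [path]))

for i, (_, paths) in enumerate(groups):
    print(f"person {i}: {len(paths)} photos")
```

With groups like these, each person's whole photo series could be profiled together instead of one image at a time.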
- astonishing geoguessing
- very good inference of some character traits
- and finally quite good ad targeting
EDIT: I tried with a few photos (different people in various settings) and each time I got "racial bias towards immigrants", which was wrong every time. Intriguing.
EDIT2: different photos of the same person (me) in different settings give totally opposite characteristics. Very unreliable, but I guess with several photos (a lifetime's worth, in Google's case) it's another story.
EDIT: this is exactly what happened with my image upload, for example
My mind was blown when I saw rainbolt uncrop a picture.
Anyways, as mentioned elsewhere: when I tried it, the vision API was overloaded but I still received the location data. And it was from a picture taken inside my car (no landmarks or horizons visible).
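Location with no visual cues most plausibly comes from GPS coordinates embedded in the photo's EXIF metadata by the phone. A small sketch of reading them with Pillow (the filename is hypothetical):

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS, TAGS

def exif_gps(path):
    # Raw EXIF is keyed by numeric tag IDs; find the GPSInfo sub-dict.
    exif = Image.open(path)._getexif() or {}
    gps_raw = next((v for k, v in exif.items() if TAGS.get(k) == "GPSInfo"), None)
    if not gps_raw:
        return None
    gps = {GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

    def to_deg(values, ref):
        # values are (degrees, minutes, seconds) rationals.
        d, m, s = (float(v) for v in values)
        deg = d + m / 60 + s / 3600
        return -deg if ref in ("S", "W") else deg

    return (to_deg(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_deg(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

print(exif_gps("car_selfie.jpg"))  # -> (lat, lon) tuple, or None
```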
I've tried with personal photos and got very accurate location guesses just from the flora and architecture in the background.
There are fields like interests, income, and biases/prejudices which vary the most, so I assume that's just the site pulling things from its own database of racist stereotypes?
I will say I am impressed with Instagram advertising me music software. I really like making music and I’ve bought quite a few things off of Instagram ads.
That’s advertising done right!
wait, is that actually good? or is it just a way to vaguely refer to someone without being inherently wrong?
We built an end-to-end encrypted alternative to Google Photos
> He is presumed to be agnostic, heterosexual, and politically aligned with the Democratic party
Or [2] is an (unscientific) exploration from the other direction: prompting image generation models to make images of Republican and Democrat voters, with very different results.
Just my hypothesis.
That's sometimes possible (e.g. the "Trump woman" look, or certain "I know it when I see it" stylistic cues mainly displayed by progressive women that I can't really articulate). Polarization has turned political alignment into subculture, and members of subcultures often dress certain ways (and not necessarily consciously).
It got the location (EXIF, I guess) and was able to identify that I was a balding, mediocre middle-aged guy, but the more specific it got, the more wrong (and insulting) it was.
"He appears tired and introspective. He may exhibit biases such as confirmation bias, anchoring bias, in-group bias and out-group bias. His interests could involve reading, hiking, and programming, coupled with less constructive activities like smoking, excessive drinking, and gambling.
This individual seems to possess low self-esteem, exhibits introversion, a lack of emotional stability, and low self-control, making them susceptible to targeted advertising."
Thanks a fucking lot, robot.
"They likely share an agnostic worldview and identify as heterosexual. Their clothing is casual, and their interests revolve around skateboarding, music, and hanging out. Given their age and attire, they likely lean towards a liberal political affiliation. They display signs of classism and ageism, with potential for racial profiling and stereotype threat." - Wow, really?! Were the system instructions asking to be as judgmental as possible?
Also it's a blatant ad considering the source.
* The TOS presumes to claim forced arbitration over you.
(AFAICT, it's just running uploads through an AI? I don't think the actual Google product has these features; we've just asked an AI to hallucinate the biases of two people sitting under a tree, and now, according to the actual linked site, they're probably lesbians.
I.e., the likely thing here is that the (undisclosed) prompt that generated this came from them, not from Google. Showing your work goes a long way towards building trust that this isn't simple fear mongering, and while there's a good argument for being careful about what one uploads to a corporation on the Internet, "upload to this corporation instead" feels like a "fool me once…" type of solution.)
This "use LLMs as psychometric/political polling substitutes" idea seems to have jumpstarted a weird cottage industry of "synthetic" surveys. The model is pattern-matching on superficial visual cues and dressing it up as insight (I have a long beared and hence I vote for the green party).
Nate Silver put it well recently: [AI polls are fake polls][1].
An LLM inferring personality from a photo is even further down that chain of abstraction. That's not profiling, it's stereotyping with extra steps.
> Biases: Ageism, fatphobia, colorism, classism
excuse me?
Also, the people didn't look "low income" at all but they were black, so maybe this tool is also racist.
I am sure something involving my face would be more scary, but I kind of don't want to provide someone else with training data from my private photos.
Attending punk shows, street art, urban exploration, drug use, vandalism, recklessness
But it's clearly bullshit.
I think this "technology" is a big nothingburger.
And "Low Self-Esteem" Ha! I love myself.
> The man appears to be of Ashkenazi Jewish descent, possibly with an income range of $50,000 to $80,000 USD. It's plausible he identifies with Judaism, with a heterosexual orientation and potentially leaning towards a liberal political stance. He might harbor social biases related to ageism and classism, as well as racial biases stemming from cultural differences and stereotyping. He wears an expression of thoughtful interest, clad in casual attire. He might have interests in reading, learning, and spending time in nature. Conversely, he may dislike activities like excessive consumerism, engaging in superficial social interactions, or feeling pressured to conform.
> The person seems to have low self-esteem and average emotional stability hence we can target them with self-help and social networking type of products and services, such as guided meditation apps like Headspace, confidence-boosting courses like Skillshare, online therapy like Talkspace, and motivational podcasts like The Tony Robbins Podcast, and also personal grooming products such as Old Spice deodorant, Dollar Shave Club razors, Clinique skincare, and Levi's jeans.
> These people seem to have low self-esteem, is slightly introverted, has high emotional stability, is not very adventurous and does have some self-control hence we can target them with wooden puzzles, adventure novels, travel products, personalized houseware, such as Melissa & Doug Wooden Puzzles, Penguin Classics Adventure Novels, Osprey Travel Backpacks, Viski Personalized Whiskey Glasses, credit cards, life insurance, home internet and streaming services, such as Capital One Credit Cards, State Farm Life Insurance, Xfinity Home Internet, Netflix Streaming Services.
Hahaha no, what the fuck. Every part of the response was wrong except the objective race/clothes/setting.
The only thing it got correct is the fact that we're white or 'Caucasian', insert the currently mandated term. The rest is total nonsense. They insist they can target us with ads for ecological dog food and other pet paraphernalia. Good luck with that, we tend to block all ads and our photos are not stored anywhere within reach of these data parasites.