There are real use cases for this technology! But the idea that the generation of superficially plausible text is "the next Industrial Revolution" comes out of the same mindset that has turned a neat technology into a banal hellscape for consumers and employees. We desperately need some leadership in companies or institutions that can place this technology in its proper context, and leverage it without getting manic about it.
I proposed a while back that we should have the HN admins strip all integer counts for a week, server-side, to see whether the site's quality improved or worsened during that time. The mods suggested I ask HN, so I did. HN loathed the idea, for every possible reason except this one: removing all those integers would be like quitting gambling cold turkey after years of pulling the vote lever every day. I'm not much less vulnerable to this than everyone else, but I still want to see it happen someday. I remain reasonably confident that our social media site's quality would skyrocket after a couple of days of our posts and comments being disinfected of make-integer-go-up jackpots.
There's the classic "I wish facebook had a dislike button" or the equivalent for twitter.
But in the thread-based forum context, removing the downvote has interesting effects. For one, it stops people who downvote-brigade to lower visibility. It also stops the "I don't like that guy" engagement and encourages a more positive "I appreciated this comment" mode.
It's not one-size-fits-all, but I've seen positive effects on more marginalized forums.
So much of social media nowadays is just low quality clips of TV shows/movies with an AI-generated song over them. Or the same Minecraft parkour map as an AI voice recites an r/AmITheAsshole post. Or AI-generated funny videos. The quality of the content doesn't matter at all.
Anyone I've talked to about how it was all just AI just responds with something akin to "I don't care if it's AI, it's funny! Let people enjoy things!"
So, now people are in groups and chats full of bots posting exactly what they want to hear.
Instead of Meta, it's states, companies, or individuals hoping to make money from their followers.
Like, don't we want the people running these companies to be honest with the public rather than engaging in misdirection?
Ironically, this makes even less sense.
If (ostensibly) the goal of developing LLMs was to let us all create more while working less, but he also assures us there will be just as much work in the future, then what was the point of this tech in the first place?
What about any of these folks’ biographies hints that they’re capable of being honest?
> This is a good instinct: one of the virtues of democracy is the way that it gives people a feeling of control over their own lives. People who believe that they can rein in AI companies through votes and laws and regulations will be much less likely to turn to violence.
I like how this is entirely put in terms of "feelings" and "beliefs" with the ultimate goal being to keep people from resorting to violence. It doesn't seem to play any role how much control people actually have.
> We founded Anthropic because we believe the impact of AI might be comparable to that of the industrial and scientific revolutions, but we aren’t confident it will go well. [1]
We’ve been past merely plausible text since GPT-2, and it’s undeniable that the technology is making waves right now and having an impact.
Just as you couldn’t judge the impact of the Industrial Revolution by its first steam engines, you can’t dismiss the impact this technology is having right now.
There was recently an article shared around here that an LLM diagnosed ER patients more accurately than doctors.
Looking beyond LLMs, there's image analysis to detect cancer and other diseases.
Like in coding, AI can and should be a useful tool for the human who decides and is ultimately responsible.
The AI as it is today isn't really doing any of those things. At most, it's a sort of reliable replacement for Google Search. Worse, it's being presented as a threat to all those things people care about.
AI-made music is frankly pretty good, do you actually listen to it?
I don't mean music that has AI-generated stems as part of an arrangement, where a human actually created it and used AI for bits and pieces. I see absolutely no point in listening to purely AI-generated music. The fundamental essence of music is emotion; listening to something generated without emotion is pointless. It might sound good, but it's hollow and devoid of meaning.
I've tried to listen to it, it doesn't even make me "sad", it makes me feel... Nothing. I'm a hobby musician and I incorporated some AI-generated parts in some tracks where I mangled/processed them but my idea was exactly to express how hollow AI-generated music is without the human aspect.
I think this is more of a musician's perspective, which I respect, but a lot of people simply wouldn't care who (or what) created it.
What you are describing is more akin to a form of hollow entertainment through the medium of music. A lot of pop music can also fall into that category (no, not all; there is also a lot of artistry in many pop artists/songs).
If AI-generated music triggers emotions in you, then keep consuming it, but know that it's a hollow form of the art: there's no one on the other side communicating with you. It's basically like having a conversation with a chatbot; it might sound human, but you know there's no one on the other side listening to you. AI music is the same in reverse: there's no one on the other side telling you a story, or a feeling they went through. It's just a mimesis of it.
We can take examples from famous composers: much of Haydn's work, some pieces from Handel, Bach, Mozart, etc. Some of their works were commissioned for particular functions, whether for courts, dances, aristocratic displays, churches, or other events. Even on the battlefield, music has been used to direct troops, relay orders, and serve other forms of communication. My point is that there is not always a story to be told. Music can also be used to disrupt one's sense of time: while on hold on the phone, in elevators, etc. I would not say the music in those instances is really telling me a story either.
Much like the visual arts. Emotion can be expressed in a piece, but pieces can also be functional in nature. There is a difference between figures in an instruction manual, portrait paintings, and a van Gogh piece.
Not to mention that this debate has played out countless times throughout history as well. It's always the same No True Scotsman fallacy. For example, some critics of electronic music made a similar argument long before AI.
"It's not real music if there are no instruments."
"It's not real music if <racial/cultural demographic> creates or plays it."
"It's not real music if the music does not adhere to contrapuntal rules."
I think what angers people most is that as technology progresses, the gap between effort and accomplishment decreases. Thus there is some sort of clinging to a sunk cost fallacy for some. As if something being easy to create devalues all the effort one has put into something. Maybe it does? I do not personally think so. If anything, it allows greater access for people to participate in the arts -- something the arts have also had a historically rocky relationship trying to gatekeep.
The invention of the camera did not make painting irrelevant. It even opened a new door to the world of visual arts. I do not think AI music will make musicians irrelevant either, and perhaps new doors might open too.
It's pretty silly that so many people take as an axiom that the human brain basically has a monopoly on certain patterns of electrical signals, and have semi-religious beliefs that this will always be the case.
It's that experts in a field generally agree that what comes out is insidiously hollow garbage.
This isn't a "semi-religious" belief. It's linear token soup and diffusion bakes running headfirst into actual expertise, second and third order effects, refined skill and taste, and so on.
If you actually want to see civilization advance, you cannot rely on machines that merely mash up existing intellectual output while pretending to have expertise.
We already had that in the form of art school avant-gardism. AI is just style transfer of that, with corporate sycophancy and valley hyperbole as a veneer.
It doesn't matter how technically innovative a model is, or how much expertise it has: as long as an AI is not a consciousness that can express itself, its output will be hollow. There's no way around that.
If some form of AI becomes conscious and can express itself through whatever art form it conjures for that, why would it even use music? Music is human; it's tuned to how our brains work and perceive sounds. I'd be much more interested to discover what art forms another form of consciousness that we can communicate with could come up with on its own.
The brain perceiving sounds a certain way is, in the end, just data that can be mapped as well. An AI can make us laugh precisely because it understands speech really well (and will be a thousand times better someday), so what's the actual difference with music?
Let me give you another example: there's a meme about older folks getting bamboozled by AI images (especially doomsday stuff), which proves that such images do trigger genuine emotions in them. What's the difference whether that image actually exists or not (or, say, whether a human photographed it)?
You are confusing the surface of it with the substance. What's the point of something without substance? Without meaning? It's just fake. Whenever you point out to someone that an image that brought them joy is fake, generated by AI, it immediately changes the feeling they had. It doesn't bring the same awe anymore; awe is reserved for what is real. It might bring awe in the sense of "woah, a computer can do that," but that's a different feeling from being in awe of the story the image tells.
How can it be full of emotion if it's created by something without emotion? It's just a mimicry of emotion. I really cannot understand how you don't feel that, knowing it's not created by another being. Being real is the whole point: an emotion triggered by something not real, not experienced, transformed, and communicated by someone else is inevitably hollow.
Like: how can AI know what it is to feel in love? Or to feel the loss of a loved one? Or to feel despair about something? Or to feel depressed? Or to feel extreme joy? Why would you listen to a song telling you a story meant to evoke an emotion about something that simply never happened? There is no experience being transmitted. It's purely a hollow, amalgamated mimicry of the experiences that were ingested, and the output has absolutely no emotion, just a synthetic mimesis of it.
You are enjoying the mimicry, it's entertaining, but I really would like for you to ask yourself deeper questions about this rather than be impressed by the surface of it.
> The brain perceiving sounds a certain way in the end is just data, that can be mapped as well
You completely missed the point.
The college-age students I interact with hate AI content from other people, but they love using AI for their own work.
They'll pump out AI-generated memes and AI-altered images all day long. Then they'll use ChatGPT to do their homework and write their resume, then look for an AI tool that will spam-apply to jobs for them. Then when they get the job, they plan to use ChatGPT to level the playing field with more experienced, older peers.
That's not even getting into the AI entrepreneurs who think they're going to use AI to start a company or find a winning strategy to trade memecoins or bet on PolyMarket so they don't have to get a job at all.
I think the next generation is all-in on AI for their own use. They see it as their advantage over the boomers occupying all the good jobs. They think ChatGPT is their cheat code for getting into these companies and taking those jobs.
To conceptualize AI as merely “superficially plausible text” would be like writing off a Watt steam engine in 1776. The current AI bubble might be early, but it won’t be wrong. The fervor with which corporations are exploring the space stems not from misplaced optimism but an existential threat. Right now every industry is vulnerable to disruption on a massive scale.
And we’re still in the early stages. Frontier models like Claude or GPT-5.5 are still just tuning 2017’s “Attention is All You Need” with MoE, RLHF, and more compute. We are roughly where online services were in the early 90s, when Prodigy and CompuServe were battling it out for market share before the open web swept them aside.
We are still waiting for the modern equivalents of Yahoo, Google, Amazon, and Facebook, never mind the lessers. As Tim Berners-Lee said of the web: “we have not seen it yet. The future is still so much bigger than the past.”
I'm going to say up front that I'm not as familiar with this period of history as I should be, but -- would it be totally unfair to say the same of the "Industrial Revolution"?
I'm not gonna say they're equivalent by any means, but my understanding is the "Industrial Revolution" was hellish for many people. Maybe the mistake is the framing that "the revolution" or "the next big thing" is always a good thing?
They are good things. If you were an adult, male aristocrat, yes, your untouched meadows and streams got tainted. If you were a woman you stopped dying in childbirth. If you think of infants as people, they stopped massively dying.
The Industrial Revolution was good. But it also required erecting the modern administrative state to manage. People had to soberly measure the problems, weigh the benefits and risks, and then invent new institutions and ways of thinking to accommodate the new world.
That happened in the Second Industrial Revolution. The First Industrial Revolution was much less comfortable for both workers (who were given much worse working conditions) and the aristocracy (whose landholdings were much less valuable) - it was the middle class who benefited.
> The Industrial Revolution was good.
The outcomes of the Industrial Revolutions were good. The experience of living through those revolutions was mixed.
Infant deaths decreased for a while (and NOT because of the industrial revolution):
> These patterns are better explained by changes in breastfeeding practices and the prevalence or virulence of particular pathogens than by changes in sanitary conditions or poverty[1]
then rose:
>Mortality at ages 1-4 years demonstrated a more complex pattern, falling between 1750 and 1830 before rising abruptly in the mid-nineteenth century.
[1] Davenport, Romola J. (2021). "Mortality, migration and epidemiological change in English cities, 1600–1870." International Journal of Paleopathology, 34, 37–49. PMC7611108.
Maybe AI enables great inventions in a decade, but for now the only appeal is that multinational corporations get to fire workers and everything's filled with slop. Of course they're not happy.
I think society will completely reshape itself over the next decades, likely with UBI and other forms of social help, and the ones who don't want to partake in the whole "AI orchestration" will just not have any opportunity, imo. Sad, but this is the way I see it. I truly believe it because I and ALL the people I know have pseudo-replaced our work with solely orchestrating AI, including very complex jobs. Lately, because some of my friends asked me, I've also built "agents" that replaced their work entirely (customer management, remote), and their employers don't even know about it, which proves those jobs shouldn't even exist as they are ALREADY replaceable. All Zoom meetings are immediately recorded, agents run a basic adversarial loop across all the common models, then proceed with the tasks and so on. That lasts about 30 minutes and the whole week of work is done. All chats are sent directly to a triage agent as well, then the whole RAG thing, and so on.
My work went from managing/developing 1 repo to 70 repos at once, answering questions like a bot 10 hours a day, evening to morning, with 8 monitors in front of my face. And I'm realistic: I know at some point I can literally replace myself with an AI to answer for me. It's just a matter of time.
We need to rethink everything and the whole AI hate from the youth will not change anything about it.
I have multiple friends running pretty large businesses with 30 or more staff, and right now they are literally at a point where they argue about why they shouldn't fire most of them. It's fuckin sad, but it's the reality.
You are conflating the concept of UBI with social welfare. They are different things, and it's a bit annoying to see the erosion of the UBI concept into social welfare. I've noticed an uptick of this in the past year or so; no idea where it's originating from...
UBI could also mean that people could live in places further from major cities, and eventually housing will be built automatically as well, so costs could drop sharply.
What's the point of progress if we keep repeating the same mistakes of leaving miserable people behind? Is that progress or just a repetition of the cycle with new shiny things?
It took two world wars till we had an aberrational period where the middle class actually had lives which were good.
UBI can’t happen because governments globally don’t have the money to pay for it. It’s good to hope, but the details aren’t in favor.
We'll have no UBI and little purpose.
That's the only statement that's true. Admitting to AI use is unfashionable in the Western world at this time.
But how much would you like to bet that 90% of those students who were booing also used AI to do their homework for them quite often? So your takeaway would be "the AI stole their education"? No: they were dishonest, and the AI helped them cheat themselves out of learning.
Technology doesn't make anything banal or a hellscape, or fire people. Technology is a lever.
If humans use AI to produce worse output because they are too lazy to bother reviewing and iterating on it, that is a human problem. If humans are going to use AI to help them exploit other humans more efficiently, that is also caused by the human rather than the technology.
Also, the ChatGPT moment for humanoid robots is coming this year or next. It will become very obvious that AI use in these robots is not just superficially plausible text.
This is like saying a smoker can't criticize the tobacco industry. It's entirely possible to recognize that AI in school is a huge problem while (hypothetically, in this case) still using it. Indeed, if enough of your peers are using it and you do not, you are effectively being punished for being virtuous. It's a lot like being the one cyclist in the Tour de France who isn't doping.
Similarly, if your peers aren't able to keep a conversation going in a seminar because they had AI do their reading and assignments for them, then you, as a student, are having your education stolen from you in a very real way. Education is something that happens in community. When enough of your community is using AI, your education will suffer.
I will die on this hill: AI _properly_ integrated into education will be a huge improvement for students because it will enable each student to have personalized instruction and tutoring.
This is a fine thing to wish for. But literally every AI company today wants their customers to use AI as much as possible.
I, too, would like to live in a world where AI is only _properly_ integrated into education. But that is impossible without limiting its improper integration. And no AI company wants any limits on AI.
I doubt it. AI seems fundamentally useful. If the guys at the top can’t get their shit together with messaging and strategy, and it increasingly looks like they can’t, they’ll be replaced before an entire generation is potentially rendered permanently uncompetitive. (And to be clear, there is no rush to adopt.)
> We desperately need some leadership in companies or institutions that can place this technology in its proper context
We need the public debate to stop being set by Altman, Musk et al. We need our generation’s Dickens, Tolstoys, Sinclairs and Whitmans.
What are the ways potential futures with AI, on the spectrum from the familiar sci-fi AGI to more-subtle forms, could work? What are the novel ways it might not? How does capitalism need to evolve? Electoral democracy? Labour organization? If I think to the last few years of television and movies, Westworld is the only one to have contributed anything original to the discourse since Isaac Asimov’s era of science fiction.
They're out there, but the artists are roundly anti-AI; if you want their input, you have to listen to what they're saying, rather than pretending that dissenting voices are uninformed.
We don't talk about human intelligence in terms of "use cases." I think we need to be realistic about what AI will be in our lives: most people already can't do without it, and this will no doubt expand further.
Like how agricultural employment went from ~70% to ~2%, but the people who would have dug potatoes make cars, houses, aircraft, and the like.
How expensive do you think it would be to convince 30 million people of something that wasn't true?
Who might benefit from a generation of Americans being pessimistic about their future?
More specifically, who might benefit from a generation of Americans being anti-AI?
How would the cost compare to the potential benefits?
To be fair, this isn’t the commencement speaker’s job.
I would 100% expect a commencement speaker to be hyping me up for what comes next.
That’s what this speaker was trying to do. The problem is it was stupid and dishonest. It could have been done properly. But none of that will rise to the level of a roadmap. If you’re looking for a roadmap at commencement, you were failed at multiple steps before.
That being said we already have relative superabundance and we're more miserable than ever, so it's not clear that more of it will cheer us up.
Distribution of abundance at the current time is close to evil; America is reducing entitlements and support, not expanding them. Rampant waste. No reason to think any of this will change.
It's not great that we can buy iPhones (and AI is going to make all electronics scarce, so much for abundance there).
Housing, food, and gas are goods...
Or did you mean something completely different?
That sounds great, but how are LLMs supposed to achieve this? You can't just say "AI will make a utopia". You have to present a vision for how it will get us there.
I'm tired of hearing about how AI will solve all the worlds problems. I want to see actual progress towards achieving these goals. And for the most part that hasn't manifested. Most people would consider AI to have had a net negative impact on their lives.
Saw an article recently that said CS majors were up there with performing arts majors and art history majors in terms of unemployment rate.
You can't have it both ways: either LLMs are an amazing, revolutionary technology that can replace many human jobs in unprecedented ways, or it's going to be a mild transition that really only helps people.
The assembly line was explicitly about replacing skilled with relatively unskilled labor.
I think what they are saying is "that something can replace a job does not inherently imply the next step is poverty". From that perspective, you can absolutely have it both (and many other combinations of) ways.
What actually happened in each case was that employment went up for a good long while, as the efficiency boost to the sectors touched made investment far more viable. Eventually successive rounds of automation did reduce employment in each of weaving and mining, but it wasn’t an overnight catastrophe as initially advertised or feared.
Programmers (and other workers but this a tech centric forum) need to start to accept that programming was a necessary evil of the before times. We didn't have the theories. We didn't have the manufacturing techniques.
Before hardware was powerful enough to run models on a laptop we needed all that hand crafted custom state management to avoid immediate resource exhaustion. Or to hide the deficiencies of the chips of the day.
For all the appeals to tech workers to lean into a high-tech life, programming as humans did in the before times seems pretty outdated. Bring back rotary phones too, I guess.
If we don't have jobs we are free to:
Take up arms against an exploitative political and owner class minority.
Make sure grandma and the kids are ok. Everyone has enough to eat?
Free the sweatshop kids we exploit without giving them a choice of "the mines" or college, from obligations to our own meat suits
???? What else?
Whole lot of job culture too was just busy work to satisfy the beliefs of they who are generationally churning out of life. Bye grandpa; thanks for zero assurances but tons of obligations; you won't be missed!
Elon and such are not an immutable constant of the universe. Few more years and he'll be Mitch McConnelling out on TV. Especially with all the drug abuse.
Everyone under 50 needs to prepare for the future not LARP the past.
Think ST:TNG; automation makes enough stuff. Why worry about money?
So focus on political action then; log off this VC funded freebie intended to ameliorate your feelings about the rich owners and operators of this site, and do like they do; tell government to make things right by you or we replace government.
You think PG is sitting on the sidelines letting Congress figure out themselves? He's putting his thumb on the scale through his actions through social networking with politicians.
Gotta leave the basement and do the work
Americans are heavily propagandized and naive af. So exhausted by educated morons.
How are we not going to be begging whoever controls chip fabs and electrical plants for compute tokens? HOW!? EXPLAIN IT.
I am meeting with my state legislators this week to, among other things, discuss how big tech should be on the same hook as the food industry who have to label their products in the open.
How all the auto standards are openly legislated, AI standards should be as well. It's just electrical physics not magic.
Just as the government has to publish its laws, big tech should have to release all code, guiding theoretical principles, and training and development environments, and attest that that is what they loaded on those servers.
Use their tools against them; they have the government in their corner giving them handouts. Go get yours.
You all came up in a society that afforded zero assurances this whole time. Rather than idle about jerking off the American ego perhaps you should have listened to everyone saying this was coming a decade ago. Two decades ago. 4 decades ago.
I have zero respect for my fellow Americans. Willfully ignorant and willingly exploited serfs. Forget I said anything; you all didn't do the political action work to put me on the hook for your healthcare so thoughts and prayers, HNers.
Ah so your answer is AI will cause most people to live in abject poverty. Good talk.
https://news.ycombinator.com/item?id=48099117
But go ahead and commit to the dogma money must continue to exist even though the financial system is merely a socialized ethno object not immutable physics.
I forget my lived experience is atypical among engineers. Before getting an EE degree (which I have not leveraged in over a decade tbf) I worked to live in ruralandia fixing old tractors electrical issues for poor farmers (which turned me on to EE), building barns, homes, rebuilding cars, growing crops, slaughtering livestock for food. I also play 3 musical instruments fluently.
I have a shit ton of experience and muscle memory for doing without money. Get good scrub?
Do not put all your eggs in a single skills basket.
End of day you aren't in a soup kitchen feeding the hungry and exploiting child labor to avoid sewing a shirt. I guess you get what you give, right?
:shrug:
Guess you all should not have ignored politics this entire time. Live and learn.
Thoughts and prayers, fellow Americans.
Please don’t do this.
What is this? The NBA? You want people to stick to social norms, call it both ways.
Oh, I downvoted both of you. But I only flagged you because of the name calling, which is against the guidelines [1]. When I flag I like to give the person on the other side a note, in case they genuinely didn’t know.
Not terribly concerned about HN rules. Just bored and will abandon this account now. Cheers
ICE has an $80 billion budget.
Demand Congress pay off mortgages rather than hand Leon Skum tens of billions.
There you go. Stability.
[1] https://www.fhfa.gov/data/dashboard/nmdb-outstanding-residen...
Shows you don't need to have red skin and horns to delight in the suffering of starving people.
College graduates being that myopic and failing at such basic logic. One can only wonder about the quality of the education they got and how it will help them in the modern technological world. Though, being that hypocritical, maybe they would do very well.
>University of Central Florida’s College of Arts and Humanities and Nicholson School of Communication and Media
yep, clearly not Stanford.
Yes you can. They use AI and also despise it because it will turn the world into one big caste system. Ones with access to compute, and ones without.
Labor saving technology does not create enough alternative jobs to employ all those that it displaced, otherwise it wouldn't be labor saving.
Instead, the surplus created by these technologies allows that society to deploy labor on less immediately necessary jobs. These jobs weren't created by the technology, they were always there, but society did not have the resources to staff them (think education, research, academia, merchants, etc.)
This dynamic has been true since pre-historic times, so you'll need some extraordinary evidence if you want us to believe this time is different.
Things like Unions, Wars, etc.
What comes after new technology has always been the elite class owning them all and forcing everybody else to suffer until something managed the distribution of resources slightly better (War forces that).
Avoiding a repeat of that while also increasing productivity would be great.
The Luddites were all for saving labor, but not if enshittified products and slavery to unreliable machines were the price.
Sounds pretty familiar to me.
Destroying the machines was a way to gain leverage for a class of people who had none. People had been using looms for centuries. It wasn't the technology that was the problem... that's what the victors, the capitalists, have written was the reason.
Well, yeah.
Or, alternatively, that we need the humanities today in a fundamental, possibly existential, way. If AI is another Industrial Revolution, rise to be our Sinclairs, Dickens and Tolstoys.
Hmm, how would we measure and confirm this hypothesis?
Anyone can pick up a pencil and practice for hours a day! You can look out a window for inspiration! There is no "gatekeeping" in art, only people upset it doesn't come as easily to them as B2B SAAS, confusing real effort and introspection with "gatekeeping".
The AI art people were so happy to rub it in artists' faces that finally, without effort or appreciation, they no longer had to pay a skilled person for an image.
The More Young People Use AI, the More They Hate It
https://news.ycombinator.com/item?id=47963163
Study found that young adults have grown less hopeful and more angry about AI
Somehow I have a feeling that the reaction would have been totally different if it had been the EECS graduates.
Fear and rejection in certain professions is real and maybe even understandable.
I imagine 25 years ago someone telling music graduates “streaming is the future of music distribution” would have received the same reaction.
However, there was a feeling that "the job" is radically changing right now.
Still, just because a technology facilitates something does not make their distaste any less potent. If anything, they recognize how much of the world's building blocks are a fancy facade (mild alliteration intended).
> in US mind you
That is my only reference.
> Still, just because a technology facilitates something does not make their distaste any less potent.
Sure, I agree once again. I may not have explained my position well initially. I just cannot help but feel it's a little hypocritical. And again, hypocritical might be a poor word to use.
We have kids booing a commencement speaker after her AI comment (which I think was a distasteful comment), but at UCLA's graduation a few days ago, we had this: https://www.youtube.com/shorts/zSqOPOzrIig
(Student's explanation: https://www.youtube.com/shorts/rswUgIfj1YU)
I think why I am having difficulty describing what I am thinking is because there is not one homogeneous group of students. There is clearly a subset of students that oppose AI's current and future costs/benefits. Though, at the same time, there is a different subset of students that heavily rely on AI. Some to even a problematic degree.
I have a few friends who are professors at a prestigious, private university in my city. They have all shared their little tricks for combating AI usage in academics. Some put hidden white text in the margins of their assignments. When citations are submitted with work, they look for the 'utm_source=chatgpt' parameter in the URLs. Some of the foreign language professors craft writing prompts with words that they know LLMs tend to translate incorrectly.
Based on the research I can find via a few quick searches, it appears that in the populations of the studies, AI usage is far more common than AI abstinence. I imagine these students want to use AI to benefit themselves but not harm themselves in the future. I do not fault them for that in the slightest, but I do not think that is how things are going to end up working out. I strongly believe the students that misuse AI to do their work for them -- not help them -- will be in for a rude awakening.
As I am reading the source, it is weirder than I initially accounted for. The speech she gave was fairly benign compared to some of the bigger quotables from Musk, Altman, or other AI industry figures. Basically, march of time and 'I remember when' kinda nostalgia.
But given how weirdly benign the speech was, I have to ask. Why the boos? Is there some context I am missing? Was the speaker recently on the wrong side of history?
I am asking half-jokingly, but it seems like there is a giant part that is missing somewhere and I have no reasonable way of explaining it.
I feel mentioning AI in a commencement speech would be like me stating something in a graduation speech like, "Congratulations, class of 2026. The Carolina Hurricanes have swept their opponents in both rounds of the NHL Stanley Cup playoffs. May your future be as bright as theirs."
No telling though. I am completely unaware of who the speaker even is.
"Passion--let's go!" Lady read the room.
The message to a group of graduating artists should have been about the literature, art and public works that turned the Industrial Revolution's hyper-concentrated gains into broadly-felt benefits. (And then, after WWII and the Green Revolution, encouraged us to start reckoning with its environmental cost.)
AI is potentially, and with increasing confidence day by day, showing itself to be useful. That deserves neither worship nor demonization. Yet history (told by the humanities!) tells us it probably hasn't started out in the right leaders' hands. It is the role of the humanities to show and guide the public through that debate and reconciliation.
Look how genuinely surprised she was by the audience's reaction. In their world, AI is an unambiguous good.
Clearly people don't consider it obvious, considering my comment got flagged.