It's been incredibly blackpilling seeing how many intelligent professionals and academics don't understand this, especially in education and academia.
They see work as the mere production of output, without ever thinking about how that work builds knowledge and skills and experience.
Students, who know least of all and don't understand the purpose of writing or problem solving or the limitations of LLMs, are currently wasting years of their lives letting LLMs pull them along as they cheat themselves out of an education. Some spend hundreds of thousands of dollars letting their brains atrophy, only to collect a piece of paper and face a real world where problems get massively more open-ended and LLMs massively decline in meeting the required quality of problem solving.
Anyone who actually struggles to solve problems and learn themselves is going to have massive advantages in the long term.
Using an LLM to do schoolwork is like taking a forklift to the gym.
If all we were interested in was moving the weights around, you’d be right to use a tool to help you. But we’re doing this work for the effect it will have on you. The reason a teacher asks you a question is not because they don’t know the answer.
Compare: My piano teacher doesn't give diplomas because none of her students would care, her students actually want to learn. When my piano teacher cancels class, I am disappointed because I wanted to learn. My piano teacher doesn't need to threaten me with bad grades to get me to practice outside of class (analogous to homework), because I actually want to learn.
There are many college students for whom none of these tests would pass. They would not attend if there was no diploma, they're relieved when their professors cancel class, and they need to be bullied into studying outside of class.
What made us think these students were ever interested in learning in the first place? Instead, it seems more likely that they just want a degree because they believe that a degree will give them an advantage in the job market. Many people will never use the information that they supposedly learn in college, and they're aware of this when they enroll.
Personally, the fact that they can now get a degree with even less wasted effort than before doesn't bother me one bit. People who want to learn still have every opportunity to.
If students find a way to get a diploma without doing the work, it will soon be worth less than the paper on which it is printed.
If employers no longer look for the diploma-signal in a candidate, what reason will they have to hire one candidate over another?
I think this story will become true, and society will radically shift into one where critical thinking skills will actually be the only skills employers look for in employees, since the grunt work can be automated.
What becomes the signal then? Will we shift back into apprenticeship based employment? How do potential laborers display their critical thinking skills apart from displaying them in person?
In the computer engineering industry, you increasingly have to demonstrate the same: either as a part of your prior work for hire, or a side project, or a contribution to something open-source.
A diploma is still a useful signal, but not a sufficient one, except maybe for very junior positions straight out of college. These are exactly the positions most under pressure from automation.
So degrees have been a weak signal for a long time. Several of the best developers I've worked with had no CS degree at all. As a result we have interview processes that are baffling to people from other industries. Imagine a surgeon having to do interview surgery, or an accountant having to solve accounting puzzles. AFAIK we are very unusual in this regard and I think it's because degrees are such a weak indicator and the same is true for certificates in our industry.
I strongly disagree, that’s the intent not a side effect.
It’s IMO a common misconception that early algorithm classes are just designed around learning algorithms. Instead basic algorithms are the simplest thing to turn abstract requirements into complex code. The overwhelming majority of what students learn are the actual tools of programming, debugging, etc while using the training wheels of a problem already broken up into bite sized steps.
Ramping up the complexity is then more about learning tradeoffs and refining those skills than writing an ever more efficient sorting algorithm or whatnot.
My experience is that graduates of schools focused on the more practical aspects tend to make better junior developers on day one, but then stagnate. Meanwhile, graduates of the more theoretical programs pick up those same practical skills on the job, leaving them better prepared for more demanding assignments.
This then feeds into the common preference for CS degrees even if they may not actually be the best fit for the specific role.
IMO one of the big problems is that we’ve gone too far with the assumption that learners can’t be valuable until after they’re done learning. Partly a cultural shift around the role of children and partly the reality that knowledge work doesn’t require much unskilled labor compared to physical industries.
But I wasn't aware of the master piece. Thank you for sharing that!
We simply stopped doing that. A decade of apprenticeship reduced down to 3 months of shadowing during summer.
Compare that with today: by their early 20s, people are only just getting out of undergrad, about to start real-world job training.
So much wasted time.
More reason to vie for labor protections. If they realize they can't just rotate out people every 6-20 months, they may actually go back to fostering talent instead of treating academia like a cattle farm.
LLM cheaters might incidentally be doing society a service.
I graduated university around the turn of the century, long before the current AI boom started, and the majority of my classmates were like that. Learning the bare minimum to escape a class isn't new, especially if you're only taking that class because every adult in your life drilled into you that you'll be a homeless failure if you don't go to college and get a degree. The LLMs make that easier, but if the university's goal weren't just to take your tuition dollars to enrich a vast administrator class instead of covering the costs of employing the professors teaching you, it could offset that with more rigorous testing or oral exams at the end of the class.
The real lesson I learned during my time in university is that the two real edges that elite universities give you (as a student) are 1) social connections to the children of the rich and leaders in the field that you can mine for recommendations and 2) a "wow" factor on your resume. You can't really get the first at a state school or community college, and you definitely can't get the second at a state school or community college, despite learning similar if not the same material in a given field of study.
It hasn't been about (just) the learning for a long time.
It's not so much what you learn, but that you train it hard.
Anyway, any advantage is entirely offset by having to live in a world with LLMs. I'd prefer the tradition of having to educate retarded college graduates. At least they grow into retarded adults. What are we gonna do about chatbots? You can't even educate them, let alone Pinocchio them.
This is already true to some extent. Not apprenticeship taking the place of college, but at the last couple of places I worked, hiring generally happened based on: I already know this person from open source projects, from working with them at a company, etc.
In certain companies, degrees were already unimportant even before LLMs because they generally do not provide a very good signal.
Now if students can shortcut the education process, they can spend less time in it and this may force colleges to reinvent themselves and actually rethink what education looks like in the new era.
The Harvards will be fine, though. But I guess that will raise the standards naturally.
Four year degree is a very expensive investment in the current environment. We should push younger people to face the real world as soon as possible. Apprenticeship is indeed a great way to achieve that IMO. As a great side effect the young people won’t have to start out their careers saddled with huge debt.
Amen.
I look forward to the era where we train professionals the old-fashioned way: apprenticeships. It sure worked for blacksmiths and artisans for hundreds of years.
In many countries, regardless of how the learning was achieved, you still need a paper to prove that you actually did it.
And in countries like Germany, you'd better keep all those job evaluations close at hand, because they get asked for as part of many job interview processes. It's also worth having them reviewed by lawyers: since employers legally can't say anything negative, there is a hidden language for expressing negativity that sounds positive on first read.
The employers that do use those hidden phrases just hope they aren't challenged, or that the employee doesn't notice.
That's also why most evaluations are written entirely in the superlative.
Many employers profit from foreigners who aren't well versed in these nuances and have to be educated that this is a thing.
An example for others not used to the German job market:
https://www.karriereakademie.de/arbeitszeugnis-formulierunge...
Just go through the site with your favourite translation tool.
When one gets to have a good works council, there are plenty of stories that go around.
That's not a standard at all. You usually can't graduate without at least one peer-reviewed publication, but beyond that, as far as number of publications goes, it varies a lot from institution to institution. The biggest standard is that you complete a dissertation and defend it.
That's why the universities of Oxford and Cambridge give a Master's degree to everyone who gets a Bachelor's degree, after five years and without further examination or coursework (note that these are MAs only, not MRes, MPhil or MBA degrees, which typically require 1-2 years of study, exams and theses).
Historically, the academic Master was seen as equivalent to a Master in a craft (e.g. philosophy <=> carpentry).
They can't have it both ways
I hire people now, and where they went to school means little to me. The first priority is “can they do the work?”, which in our case is a niche kind of programming. After that is established, I barely take note of school.
I don’t personally count a CS degree as an indication the person is a good programmer, or thinks logically, or has good work ethic.
The company I work for hires about one dev per year, and I believe last year we did not hire a single person.
So I do have time to vet each candidate: three rounds of one hour each, so that candidates can see what kind of company we are and we can see who the candidate is. We are also a small company, so we don't get that many applicants anyway.
I cannot imagine how it goes when someone needs to hire 20 devs in one quarter, especially at a company that is in any way known and can get 1,000 CVs for a single position. They need to filter somehow.
In my experience, only very large and bureaucratic institutions (governments, schools) really demand degrees and certifications.
One could in theory establish the value of a thing or person by comparing it with similar things but if everyone does that the process becomes senseless.
If a working car is worth 5000, the same car from the same year with a defect that costs 1000 to fix should be worth 4000. If the repair costs 6000 there is no car.
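The arithmetic in the car analogy can be made explicit with a tiny sketch (`defect_adjusted_value` is a hypothetical helper named here for illustration, not a real library function):

```python
def defect_adjusted_value(working_value, repair_cost):
    # A comparable working car sets the price ceiling; the repair bill
    # is subtracted from it. If the repair costs more than the working
    # car is worth, the car is effectively worthless ("there is no car").
    return max(0, working_value - repair_cost)

print(defect_adjusted_value(5000, 1000))  # 4000
print(defect_adjusted_value(5000, 6000))  # 0
```

The point of the comment survives the sketch: the comparison only works if someone, somewhere, priced the working car on its own merits.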
No, it has value because it gatekeeps class mobility. Degrees haven't signified learning or problem solving longer than I've been alive. The attitude is that if you pay for an education, you're entitled to a degree. The education aspect is optional.
I think this is already the case.
No. It has value, because companies value it. It sets the starting point for your first salary and then for every salary negotiation moving forward.
As someone who did not go to university but has the same knowledge self-taught, I can tell you that this piece of paper would have opened so many doors and made life so much easier. It took me 15 years to get a salary that people get with the piece of paper right after graduating. And not because it took me 15 years to reach that level of knowledge; I had that by the time the others graduated.
I had a friend who always cheated in school, and now he works for a big car company and earned a fuck ton of money.
Life is unfair and companies only care about the paper your diploma is printed on. If students would ask me for advice, I would tell them to cheat whenever possible.
I am not arguing that having a diploma is a bad thing. The colleges these days have turned their backs on the people as well.
It averages $100k to get a piece of paper (LLMs are not the issue here; the useless degrees that colleges offer are). Invest that $100k at an average 15% return instead, and it becomes a lot of money 25 years from now.
Or get a piece of paper (if your major is useless) and pay the banks 100k + interest.
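For the curious, the compounding claim above is easy to check (15% is the commenter's assumed return, which is optimistic by historical standards; this is a back-of-the-envelope sketch, not advice):

```python
def future_value(principal, annual_rate, years):
    # A lump sum compounded once per year at a fixed rate.
    return principal * (1 + annual_rate) ** years

# $100k at 15%/year for 25 years
print(f"${future_value(100_000, 0.15, 25):,.0f}")  # roughly $3.3 million
```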
And even if you racked up $50k of debt, it's a government loan where the interest rates are low, your monthly payment is based on your salary, and if you are not able to pay off the debt within a certain period, it's wiped away.
This isn't exactly news, or even a recent phenomenon. My peers who finished their CS degree circa 2004 ish were a broad mix of utter idiots who shouldn't get a job in that field, or damn sharp (if green) technically-minded people, or somewhere in between.
The industry will sort them out; the ones that can, do, the ones that can't, will do something else of use, or find another job.
No. That's how it should be, but the reality looks different: it has value because it shows that someone spent 3+ years doing what they were told to do, enduring all the absurdities they were subjected to in the meantime. Whatever means they used to cheat don't matter, since they still worked on what someone told them to and produced results satisfying the expectations.
There are, perhaps, institutions where learning and problem-solving are seen as the most important while "following orders" and "staying in line" are deemphasized. For the students of all the others, putting up with an utterly absurd environment is often one of the biggest barriers to learning. Yet, it's a requirement without fulfilling which you can forget about graduating. Hence my conclusion: the diploma from most learning institutions certifies you as a good corporate drone - and that's enough of a signal in many situations, so why bother trying to fix it?
From what I remember it was 4 years of learning stuff I signed up to learn, occasionally being quizzed on said stuff, and then they gave me a paper that claims I know the stuff.
Some college students may be genuinely interested in one particular subject, but they're required to take a bunch of other courses, and consider those to just be hurdles.
I still think they're better off at least making an effort and trying to learn something, but I do think it's important to note that just because a student has no interest in one particular class, doesn't mean they have no interest in any class.
The course I'm interested in gets kinda hard, and my "just pull up an LLM" muscle is very, very strong, (and besides, I'm not used to struggling! and why should I get used to it in the classes i like?! I can't afford a C in my major!) so ... I use LLMs on my "I'm interested in it" class too and... we're back to the original argument.
I find a lot of these comments more disturbing than the concerns about AI.
They are in some countries: you get vocational programs or apprenticeships alongside high school, and at the end you might get the opportunity to apply to university, or just carry on with your job.
That is how I did mine in the 1990s Portuguese education system, and how I was already coding and understanding the big boys' computer world at 16.
The music industry is built on the back of people with music degrees. They don't get the name recognition of headliners. But song writers, arrangers, and session musicians are all very likely to have formal training in theory and maybe performance.
Producers and engineers less so. Those are more of a track record who-you've-worked-with occupation.
Music is no different from software in that regard.
But she is a good example of degrees not equaling skill
For the exact reasons you state, pre-AI homework was often copied and then individuals just crammed for the tests.
AI is not the problem for these students, it's that many students are only in it for the diploma.
If it wasn't AI it would just be copying the assignment from a classmate or previous grad.
And I imagine the students who really want to learn are still learning because they didn't cheat then, and they aren't letting AI do the thinking for them now.
In some ways offering the diploma and all the requirements that go with that take the joy out of the learning for me.
1. Students are given bad incentives and thrown into a system with a completely different purpose than their main goal. Then those employers turn around to say "schools teach you nothing" and even refuse to hire the newest generation.
2. Students in general not being stimulated by primary school and given direction and vision on what to do in life. Simply being pushed by parents to "be successful".
3. The crippling reality as of late that a job doesn't even guarantee keeping a roof over your head anymore, leading to discouragement to even bother trying.
4. Connected to #2, the decline of various apprenticeships, internships (which are now a college recruiting pipeline), and other ways to invest in employees. Even if they complain about new grad output, they are still content outsourcing such training instead of investing in their employees for a career.
There's a lot of systems failing which can arguably cause an entire collapse in the country. Then no one will get an opportunity to properly learn.
You have this option with things like MIT's OpenCourseWare. Some colleges are OK with you just wanting to learn.
Your piano teacher does not give a diploma because she is not offering a university education. If she worked with a few other experts and they designed a coordinated curriculum and shepherded students through it over the course of two to four years, and documented that process to the point where they could file with an accrediting agency, then she could issue a degree in piano.
> then she could issue a degree in piano.
It's worth noting, plenty of universities do this. You can get a degree "in piano". Being able to play moderately hard pieces from dots and to sight-read is the entry-level requirement. It's taken for granted you can already do that.
The degree part means learning music history, theory, and performance styles, working on performance projects, solo and with other musicians.
The analogy with ChatGPT is that it's taking over the entry-level part of the process. You can't expect to get onto a music degree if you only know how to prompt ChatGPT to produce a MIDI file for your entrance exam.
And in CS, you can't produce good code if you barely know what a server is.
It's all very Dunning Kruger. If you use an LLM to produce course work to get your piece of paper at the end, you don't even know what prompts you should use to do an unfamiliar job, never mind having the skills to do it yourself.
This is much larger than a cultural problem with the students of today. They believe, rightfully and accurately, that the university degree is yet another part of the machine that they will become a cog in.
What should be alarming to everyone is that these students will graduate without having learned anything and then go into the workplace where they will continue to not use their atrophied critical thinking skills, to simply do yet more, as a cog in the machine.
* according to the UCLA CIRP freshman survey
I wanted to learn a new language and I wanted to take some history courses that covered regions and eras not well covered in my high school courses. But despite my university having a significant "elective" component to my degree path, none of those courses were on the list of allowed electives for my degree. So in this case, the university was failing at a focus on education by hindering my ability to branch out away from my core studies and requiring that I take "electives" that were more closely associated with the imagined career path my degree would provide.
On the flip side, the "core" courses for my degree were bogged down in academic minutia and exercises that bore only the most surface-level resemblance to the things I've done in my actual career. Often the taught material was out of date relative to the state of the industry. Other times the material was presented with philosophical reasons for learning it, but with no practical application to make that philosophy complete. And very little material (if any) covered the usage of the tools of the industry. In this case, we're failing the goal of setting people up for their careers by not teaching the practical applications of the knowledge.

And to be clear, this isn't just a "learning examples are by necessity simplified examples" problem. I later went back to school at a local community college for different material, and from day one those courses were more relevant and more up to date. They provided material that was immediately useful in real-world applications of the underlying knowledge. And I think some of that was because many of the courses at that community college were taught by industry veterans, either part time or as a "retirement" gig.
In short, my experience at a large university was indeed a series of boxes to be checked, ostensibly to provide me a "well-rounded" education, but practically all narrowly focused on getting me a job in the field. Yet the boxes also failed at being relevant enough to the state of the industry to actually give me a foundation to work from when starting my career.
Always follow the money
I found this quite striking, since only something like 10% of undergrads go into research. Most people really are there because it helps them get a job.
So the program is designed not to meet the needs of 90% of its "customers".
The stated rationale at the time was that degree holders earn more, so if everyone gets a degree, everyone will earn more. I am doubtful that was the true rationale but it's how the policy was sold to people.
These days you need a college degree just to afford a 1 bedroom apartment by the time you are 40.
When we're talking about college costs, public schools are really the only institutions that matter. The Ivies (or even Ivy+) are a rounding error compared to the big midwestern land-grant universities and the UC and Cal State systems.
States have substantially reduced their per-student support for universities: https://www.ppic.org/publication/higher-education-funding-in...
This coincides with federal funding programs, but student loans are famously not dischargeable, which makes them more instruments of social control than conventional financial vehicles.
Yeah, that's when the great "push for education" came, as well as neoliberalism which preached continuous hustling and individuality. And in the 90s, the ADA and other anti discrimination laws hit, and requiring a college degree was and still is a very useful pre-screening filter for HR to continue discrimination.
For me the impact of the university administrators as they chased higher endowments for more buildings with naming rights and expanded their own bureaucracies with direct hires that did not directly contribute to the faculty mission did more to alter the university experience than anything else.
Me and most of my peers in college had the choice between two courses. Course A was interesting, yet vastly more challenging and therefore time consuming, with the additional downside of lower grade expectation. Course B was boring, a gentle breeze in comparison, yet with an almost guaranteed perfect grade.
Imagine which course most students choose?
Even if a student wants to take the more interesting course, incentives matter, and the incentive is: better grades qualify you for better-compensated positions and prestigious degrees. Only students who didn't care about this, or were confident enough in their ability, chose Course A. In the end, barely a handful of students out of hundreds went with A.
One analogy I use a lot: if I have a professor sitting next to me, what is the best way to learn a topic?
Struggle through it on my own and I won't be leveraging the professors knowledge.
Ask the professor to do everything for me and I won't be learning anything at all.
Now if the professor is an AI, the same trade-offs hold.
For example, I will have back-and-forth conversations with AI to explain subjects to me. I ask questions, push back, ask for examples, and so on.
If I do ask the AI to answer something for me, I then ask it to break down the answer for me so I can make sure I understand it deeply.
And of course, none of this matters if I don't want to learn something :)
>And of course, none of this matters if I don't want to learn something :)
Society makes people do a lot of things they don't want. I wonder if we're going to hit a breaking point this generation.
A decent number of my professors don't know the answers because they bought the course, test questions, and lectures from Cengage. During exam review, they just regurgitate the answer justification that Cengage provided. During lectures, they struggle to explain certain concepts since they didn't make the slides.
Professors automate themselves out of the teaching process and are upset when students automate themselves out of the learning process.
I can tell when the faculty views teaching as a checkbox that they officially have to devote 40% of their time to. I can tell when we are given busywork to waste our time instead of something challenging.
To use your analogy, I'm being told to move 1000 plush reproductions of barbells from Point A to B by hand because accreditation wants to see students "working out" and the school doesn't want high failure rates.
We are all pulling out the forklift. Some of us are happy because we don't have to work as hard. Others are using the forklift so we can get in a real workout at home, as school is not a good use of our time. Either way, none of us see value moving paperweights all day.
edit:
My favourite course during my Computer Engineering degree was Science Fiction because that professor graded us on substance instead of form. It was considered a hard class because one would get good marks on the essays by focusing on building substantive points instead of strict adherence to the form of a five-paragraph hamburger essay.
The call to action is to make courses harder and stop giving students plush barbells.
For example, University of Toronto Engineering Science (hardest program in Canada) gives first-year students a "vibe coding" lab in which students learn how to solve a problem that AI cannot.
https://www.cs.toronto.edu/~guerzhoy/vibecoding/vibecoding.h...
It is also hard to evaluate university teaching because there are no benchmarks for it (compared with high school, for example), and it is hard to judge whether teaching is good from student feedback. You can only tell when someone fucked up or did really well, and those are outliers.
There are other issues as well. Professor is, IMO, a ridiculous job: you are supposed to be an expert in the field, a researcher, a manager, a teacher, and a salesman, all at the same time. There are people who can excel at all of these, but they are probably just outliers. It doesn't help that PhD training doesn't train you to be a proper manager and teacher. While there is some teaching training, I think we are not held to a high enough standard. E.g., one can pass the teaching course just by showing up and putting in some time, even if their teaching is horrible.
But sure, you're always going to find a few meh or bad professors. And they will stick out as much as the great professors.
Does the use of a quantifiable metric like a GPA not exacerbate this? In a world where people take a GPA seriously, you'd have to be irrational to not consider cheating a viable option.
You could say the same about credit score and dating apps. These institutions assist the most predatory and harm the most vulnerable.
The current and old school way is a proctored exam.
Also, always keep in mind the most underrated kind of cheater: the clearly smart kid who cheats to go from an A- to an A. That level of student can't be "caught" with tests because by all accounts they already know and have even mastered the material. The pressure at that level of competitiveness simply requires zero room for doubt. Not really what the article is about, but some food for thought.
I remember illustrating a point to a class by posing a question and then calling on a student I figured wasn't smart enough to answer correctly so that everyone could see her make the mistake.
The ethics of that still bother me.
> like taking a forklift to the gym.
First, you will have excellent forklift skills in the end. A real profession! Second, girls dig forklift operators, or so I was told.
This has a negative feedback loop where universities have to lower standards to bring dumber and lazier students to compete with other diploma mills.
But the gym isn't the best place to engage in forklift training. And if you engage in forklift training at the gym, expect to learn how to use a forklift to lift gym weights. Don't expect to also get the benefits that the gym is designed to impart.
I related to that analogy too; in fact, that whole piece is worth reading. I can't seem to find its link though!
Essentially, since they are a summary of "the" state of knowledge, the teacher should be able to ask them to put a number on how novel a piece of text is.
Once LLMs are able to evaluate, independently, the soundness of an argument... (Hopefully, this will be achieved AFTER $5 H100s reach the average consumer)
They are the wrong tool for pedagogy.
Look, we have no idea what the feedback is like that this grad student gives, what the class sizes are like, what the cadence is, what the grade percentages are, etc. All we know is that Clayton Ramsey is a grad student at Rice in the robotics department and that he wrote a hot take here.
For me, the most important thing is whether this grader is bothering to really grade at all. I think we've all had a harried grad student just dash off a few red lines on the week-one homework about a week before the final exam. That's not a two-way street, and if the feedback isn't as in-depth as he wants the work to be, well, he shouldn't be surprised. He can't expect students to put in the time unilaterally. But we don't know any of that, really.
Personally, I think that before the decade is out, we're not going to be talking about this at all. Because the students will be adept enough at using the LLMs to make it look like their own writing anyways. This is a problem that experience will solve for them.
And also, I think that the days of the massive lectures and essays are pretty much cooked. That 'cheap' model of education can't survive this LLM revolution. We obviously have to change what the heck higher education is trying to do.
My take is that we're going to go to smaller class sizes like those at St. John's or Oxbridge. Under 10 people, you have to have done the reading or look like a fool, all with a PhD in the subject as a guide/teacher. Large classes weren't cutting it for decades (ask any Frat about their test banks), and now the veil is just ripped off.
I'm sure the time has come for college students to master using LLMs. It's just as important as grammar or basic math now. The software I build (and the entire tech industry) automates huge swaths of business processes with AI. Students need to be able to understand, work with, and manage swarms of AI agents doing work.
To stick to the analogy:
I need skilled forklift drivers, not big buff workers like I used to.
Someone with years of coding experience is going to be able to laser-guide an AI agent to the answer/result far better than someone who has muddled their way through comp sci 101 using an AI chatbot.
This isn't even an opinion on LLMs, it's recruiting 101. You're free to convince the gym to train forklift drivers, but don't be surprised when you're laughed out of the room.
Sure, you should lift them yourself too. But using an AI teaches you a shit-ton more about any field than your own tired brain was going to uncover. It's a very different but powerful educational experience.
If you never learn to research, sure. Otherwise, you should be worried about accuracy, up to date information, opinionated takes, and outright lies/misinformation. The tool you use doesn't change these factors.
Besides, those are incredibly short-term concerns. Recent models are a whole lot more trustworthy and can search for and cite sources accurately.
Firstly, these prompts tend to be shockingly close in behavior. Secondly, Google tends to rank reputable or self-curated sites which have some accountability. It can be wrong, but you know the big news sites tend to at least defer to interviews to back up facts. Wikipedia has an overly strict process to prevent blatant, sourceless information.
There's room for error, but there's at least more accountability compared to what an LLM is going through.
> Recent models are a whole lot more trustworthy and can search for and cite sources accurately.
Lastly, prompts are still treated as black boxes, which is a whole other issue. For the above reasons I still would simply defer to human curated resources. That's what LLMs are doing anyway without transparency.
People want to give up transparency for speed? It seems completely counter to hacker culture.
I'm there for the degree. If I wanted to learn and engage with material, I could save $60,000 and do that online for free, probably more efficiently. The purpose of a class writing exercise is to get the university to give me the degree, which I cannot do by actually learning the material (and which, for classes I care about, I may have already done without those exercises), but can only do by going through the hoops that professors set up and paying the massive tuition cost. If there were a different system where I could just actually learn something (which probably wouldn't be through the inefficient and antiquated university system) and then get a valid certificate of employability for having actually learned it, that would be great. Unfortunately, however, as long as university professors are gatekeepers of the coveted certificate of employability, they're going to keep dealing with this incentive issue.
Not to burst yours or anyone else's bubble, but no, probably not.
The hard part of learning isn't access to content, it's discipline and dedication. School provides structure, goals, timelines, and deliverables. The value of this cannot be overstated.
I've heard from many people how they're going to learn programming online and then get a job as a developer. Almost all of them fail.
I learned programming online and got jobs as a developer (I did later study CS at a university though). In my experience the best developers are those who taught themselves. Admittedly this may have been more the case for my older generation where formal education for programming wasn't that great nor widely available.
The simple question to ask is, when you go home, what do you do? If the answer is learn how to sew or work on your project car you've had for 10 months, you can probably learn on your own. If your answer is watch TV, play video games, go on a walk - then you can't, and you should go to university. Some people have told me this question is unfair. I mean, they're so tired from work, of course they want to relax. Well, guess what - your life doesn't stop if you're learning how to code on your own or whatever. If that's all it takes for you to not do it, then you don't have what it takes.
How often are people picking up new and complex skills that take years to get the hang of? Almost never. So there you go, most people require a formal, structured education to pull that off.
Highly self-driven students are a minority, but not a rarity. People do things out of genuine interest and enjoyment of learning. That shouldn't be a surprise on a website called Hacker News.
How do you know? It's easy enough to assert, but what kind of proof can there be for this assertion? Obviously the students are enrolled in university, and their accomplishments without it are only hypothetical.
Universities have all sorts of pathologies, from academic fraud to parasitic admins, but they also have people with deep knowledge of their field and who occasionally are even good teachers, and undergrad courses at least leave you enough time to explore and direct your own learning.
They also put you in an environment where you can measure yourself against others, which you sure don't get sitting in your bedroom hacking your games. As a consequence, your head doesn't get inflated so much (unless you're top of the class, which kind of naturally resolves itself when you get your first job where everyone thinks you're a useless moron with no life experience).
Also: university libraries.
Edit: oh shit. I just realised. I did learn to code on my own O.o
I don't understand how you can make this claim based on observing students who are in an environment with discipline and structure.
I thoroughly believed this to be true when I was younger. I thought the explosion of the internet and availability of free course materials, videos, MOOCs, and any information you want was going to change the education game forever.
What finally changed my mind was when I became a hiring manager. I decided I'd give an interview to almost every self-taught developer who applied. If someone didn't have a college degree listed on their resume, I'd schedule a call to hear their story. I thought I was going to be uncovering diamonds in the rough that other companies overlooked.
With a few notable exceptions, it did not work out that way. Don't get me wrong: A couple of the self-taught developers were absolutely brilliant. However, I found that most were, to be blunt, not even progressing their intra-career knowledge as fast as peers with traditional backgrounds. We hired a few, but a common theme was that they needed more guidance for dealing with the structure and expectations of an office job.
I also had a few very above-average friends in high school who went the self-education route. "College is a waste of money" mindset. Voracious readers in their youth. Last I checked, both of them were bouncing from entry-level job to job.
Of course, there are students who go to paper-mill colleges who also learn very little.
I think the value of a demanding, structured college education is partially the education, but largely about learning how to learn. Learning how to deliver, learning how to operate on a schedule, and having some structure to check your understanding relative to peers. Almost everyone I know (including me!) who does self-studying reading thinks their understanding is better than it is right up until they have to apply it, at which point they realize they didn't understand it as deeply as they thought. It's easy to read course materials and think "That makes sense" because it's logically consistent, but integrating the knowledge in a way that you can apply it and reason about it is harder. Structured learning forces people to do the latter, whereas self-guided learning leaves it as an exercise for the reader. An exercise that many don't follow up on.
If I wanted to learn JavaScript or .NET or CSS or whatever I could easily do so online. But that's different from becoming a software developer. The important thing is that university doesn't focus on one topic, it teaches a variety of topics that they think will be useful for your career. You can do this without uni, but you need to be good at figuring out what to learn, not just how. And of course the discipline to complete your goals by yourself, like you mention.
Although maybe something you could do would be to look at a university's course structure and copy it.
That _may_ be true for the vast majority, but it's criminal to waste the time of bright young people by putting them through hoops. I would even speculate that the phoney goals, timelines, and deliverables in school actually damage kids.
Also, do I really need to remind people here of the "resources" used when you struggle and need help while self-guided?
- I probably don't need to rant about StackOverflow. Discord can be incredibly hit or miss, and many forum pleas go unanswered. It can be really hard to get unstuck compared to asking a teacher about their own assignment.
- Worse than asking questions, forget getting high-quality feedback on your project: getting people to do more than a quick skim takes effort in and of itself. You'd truly have found an angelic soul if anyone decided to dissect and correct your source code.
- there's also so, so, so many domains to explore. How will you specialize without knowing they exist? And if you've dug deep into any domain, you know that this is where the publicly free knowledge truly dries up. You won't find a nifty course on low level optimization hacks, nor network architecture (beej gets close, but only touches the surface), nor modern rendering techniques. You'll find some 300 level material, but 400 level stuff will likely require a mix of cobbling project ideas together that's within your reach but also pushing you. Scoping is always hard to do, and nearly impossible while still a student.
And Software is one of the easier domains to self learn. Good luck with the lab based STEM, getting proper feedback in art while learning theory, taste testing as a cook, using power tools in any given blue collar work, etc.
I would argue that if it costs $60,000, both your education system and the recruitment in those companies that require this degree are broken. It's not the case in all countries though.
Not that it is your fault, just stating the obvious.
But that's just the job market. The other elephants in the room are inflation and the housing market. People who don't have top-notch jobs (that require degrees) can't afford to buy a house. They can hardly afford rent. Cities don't want to build more housing because that will undermine the equity growth of homeowners.
We are a society of ladder-pullers.
> We are a society of ladder-pullers.
I don't disagree, but often we complain about people pulling up ladders and, when faced with the same decision, we follow suit. Ultimately we can't change this behavior if no one is willing to defect from "conventional wisdom".
> We can't fix the problem by making better choices as individuals
That's wildly inaccurate. Your logic necessitates that "the system" is not composed of individuals. The problem itself was created through individual actions...
I would love to live in a world where everyone was altruistic and made correct choices for the long term good of society, but I don't. And there are limits to how much I'm willing to act as if I do, when in practice it just means I'm giving away resources to people who are purely (thinkingly or unthinkingly) selfish.
> The problem of individuals making optimal choices on an individual scale being sub-optimal on a society scale
The problem is people think this is a remotely accurate statement. These things only work if you use very low-order approximations, like being the only person and time not existing. As soon as you build in any accuracy, your net benefit aligns more with society's. The classic example of this is the marshmallow experiment. It's myopic. We do it all the time, but frequency doesn't make an action intelligent. https://news.ycombinator.com/item?id=43873275
This isn't a tragedy of the commons issue. There's no finite resource we're all trying to draw upon in this case. The supply is generated by ourselves and it could be infinite if we chose to.
To me it reads as if you're contradicting yourself. It can be plainly seen that almost nobody is willing to defect. So change is impossible, you claim. Also change is possible, you claim. Which is it? You think people will change their choices as a result of you writing this comment and "winning" the argument?
I agree that people are responsible for the current situation, since their choices brought it about. That does not constitute a solution, though.
One or a thousand individuals changing their choice by themselves will do nothing. For tens of millions to change some external factor is required. If it was not, then it would have already happened. You say no external factor is required, so why do you think it didn't happen already, and why would it happen in the future?
> So change is impossible, you claim
I claim the opposite. Just because few people defect doesn't mean that rate is fixed and immutable. That is where you misunderstand. Really, my comment is a "call to arms". It is literally a plea asking the people of HN, including you, to become the change we all want to see. Follow your own logic. You have given up and are trying to justify it. You are not mindless automata.
You have a choice.
Your actions matter.
Things have changed before, they can change again.
Every big problem is composed of many small problems.
We programmers are experts in breaking complex things down. Sure, fixing one small problem doesn't solve the big problem, but they do add up. That's all I'm arguing. I'm asking that others stop being apathetic and defeatist, to get up, and continue. I'm extending my hand, will you take it? There's more of us making effort, will you help?
While I get where you're coming from -- I think the correct thing to do is to both move for systemic change and attempt to live the life you advocate for -- I think the position of "I'm moral, why do other people need the system change in order to be 'moral' as well?" more totally abandons the actual goal (fixing things) than the other way around. Fundamentally, things tend to change for material, systemic reasons, and so most often the best way to get at issues is not to go after individuals (whose behavior is more a symptom than the disease) but the root cause, the systemic influences that cause them to act that way.
> I think the correct thing to do is to both move for systemic change and attempt to live the life you advocate for
You'll get no disagreement from me[0,1,2].
> Fundamentally, things tend to change for material, systemic reasons, and so most often the best way to get at issues is not to go after individuals (whose behavior is more a symptom than the disease) but the root cause, the systemic influences that cause them to act that way.
This is the part I disagree with (as seen in my linked comments). This is a defeatist attitude that acts as if people are mindless automata. We forge our own reality. No, we don't have complete control, but we have some control. We cannot directly control the large system, but we can control ourselves and we can strongly influence those around us. Ultimately this is the root. There's no magic wizard in the sky making people do things, there is only us. Those "systematic reasons" are a bullshit excuse to pass blame[3]. All those things are created by us. The only reason we pretend otherwise is because the results of our actions are only observed after long periods of time. It's those small decisions that add up over time. With each action we choose a better future, a worse one, or a neutral one. No one can predict the future, but we have a lot of evidence that short-term thinking leads to worse results. I'm not asking anyone to analyze every move and overload themselves with the infinite chaos. But I am saying we all need to think a few steps ahead and consider unintended consequences. To not be so rash. If things were easy, they would have already been resolved, so we should really take a moment to consider more than our gut reaction.
But ultimately, only you can control you. I hope you advocate for others around you to make good and thoughtful decisions, but there's no dragon in a cave that we can all get together and defeat. The dragon is only a thing of our collective consciousness. Each person who decides to defect makes the dragon a little weaker, and each person who decides to believe in the dragon's power makes it a little stronger. That's the choice.
[0] https://news.ycombinator.com/item?id=43809695
[1] https://news.ycombinator.com/item?id=43852024
Lead by example. Mimicry is real, we all do it whether or not we are aware. Every node in the graph influences others. If you must exhort, it works better if you follow your own advice.
Of course, be discerning as you do this, and don't expend your energy or goodwill where it will be wasted. Be like Gandalf.
Any dogmatic system is fault-tolerant, in that it will "route around" some amount of internal dissent, but this does not make it impregnable.
The ruts in our minds steer us just as much as those in the ground. But earth turns and so can we.
I'm not even sure people are aware that inflation/housing is a completely solvable issue for the govt. I guess it's because most people are clueless on how it would be solved.
>We are a society of ladder-pullers.
It's by design, to serve the rulers. It's an assembly line of slaves who are given some freedoms and are put through various stages of school, university, work and retirement. When most people retire they are left with little to nothing.
Broken? Saddling individuals with a quarter million in debt when they are just starting life is absolutely broken. That they must indenture to be a modern professional (and buy hope for at least a middle class landing) is broken.
The notion that everything must return a (generally, near-term) accounting profit is on its face stupid.
Even today, that university is considered expensive for the state at ~$8,200/semester.
Ideally maybe employers ought to rely on more targeted selection mechanisms. But this would be extremely expensive (and potentially legally risky due to equal opportunity laws) so most don't bother.
As I said, the only country I know where it is like that is the US.
For a true solution, the entire taxation and monetary system will have to overhauled. It's of course not going to happen.
Transactions outside of the govt monetary system is effectively illegal or taxed so people are forced to participate by applying for jobs for their livelihood.
Meh, academic degrees don't come for free, someone has to pay for universities, staff and other expenses. In the US it's everyone for themselves by student loans that can't be discharged in bankruptcies, in Europe it's the tax payers.
The problem is, the ones profiting from the gatekeeping (aka employers) aren't the ones paying for it in either system. If employers had to pay, say, $10,000 for each job listing that requires an academic degree without an actual valid reason, guess how fast that incentive would lead to employers not requiring academic degrees for paper-pusher bullshit jobs.
But how do you get all students to agree with this in principle when someone is in more rush to start earning an income than others?
However, employers would then look to only hire from universities that do good teaching, so maybe it's a win-win?
I can’t imagine this in my own life. I use concrete things and ways of thinking and working I learned in my CS degree _all the time_.
> I'm there for the degree
Would you hire someone without a degree? When you're in a position to hire or influence hiring, will you consider those without degrees?
I ask because I hear this sentiment a lot but we still have a system becoming more reliant on degrees. The universities may be the gatekeepers of those degrees but they're not the ones gatekeeping the jobs. They have no influence there. They were not the ones who decided degree = credentials. I ask because many people eventually grow in their jobs to a point where they have significant influence over hiring. So when that time comes will you perpetuate the system you criticize or push against it? Truthfully, this is a thing that can be done with little to no risk to your own employment.
Yes.
A person is an idiot if he/she takes someone's competence at face value because of a degree. (Jobs aside - don't assume your doctor is competent because he has an MD; it will cost you your life.)
I see your point, but the issue is that it's quite futile to shame students for playing the game that people in the industry have set up. It doesn't help that in the past, college degrees were in fact more relevant than today, especially before the era of the internet and Wikipedia, so if older people who are currently in charge of hiring aren't aware of these changes, they might just apply their outdated personal experience and assume college degrees hold the same weight as they did in the past.
I'm pretty certain when kids these days eventually become responsible for hiring decisions, they probably will handle things differently since their experiences are different.
But I think the second part is amiss. I do agree that educators (being one) need to recognize the reality of the environment. But I think you also can't ignore that the reason the degrees are used to signal qualification is because the degree is intended to signal that some knowledge is held by a specific person. Yes, things have changed. It did used to be that interviews were much shorter, with a degree being a strong signal (at minimum, it is a signal that someone can sufficiently follow directions and manage their time). Should that mean those tasked with ensuring a student gets their education does not get their education? I get your point, but I think it is barking up the wrong tree. Asking academia to change only furthers the problem as it lets employers shift blame. It is like getting upset at a minimum wage worker for increased prices due to tariffs.
The really weird thing is we're now in this bizarroland setting where employers will filter for degree and then spend months and a lot of time and money putting candidates through testing rounds. Making them do tons of leetcode and other things to evaluate their performance. Where the candidates spend months studying to pass the interviews. And the complaints here are probably more apt, about how the material they need to study is not significantly relevant to the skills needed for a job[0]. Worse, it doesn't seem to be achieving the results it is setting out to seek.
As they say about the stock market, "the market can stay irrational longer than you can stay solvent". My point is that some academics are making classes easier. Grade inflation is a measurable phenomenon, and it sure got worse with Covid. My point is that employers are acting irrationally.
[0] I'll give an example. I had an interview a few weeks ago where we had run a bit longer than expected prior to getting to the coding portion, and their coding software crashed a few times, giving us 10 minutes to do what was expected to take 30. Finally coding, the program crashed and I said "I don't use a lot of Jupyter Notebooks, does the kernel crash when OOM?" I'm explaining my thought process aloud and frankly, I'd never hit this on the job. No answer. I quickly scroll through logs, say "I'm going to guess that's the issue; I'd normally guess and if wrong google it". Yes, it was the right answer. But there was no reason to do this, especially with me guessing right. This wasn't some dinky startup, it was a multi-trillion dollar company...
Academia put itself as a gateway and barrier to the middle class. Why would we be surprised when people with no interest in anything but the goal are not enthralled by the process?
How did academia do that? It doesn’t seem like universities would have the power to do that. More likely, either employers put academia as a gateway. Or even: the culture at large misunderstood what pathways existed to middle class life. Or even: pathways to middle class life became scarcer and more insecure, and the real gatekeepers (hiring managers) had no good ways to select which of the many people at the door to let through.
Why don't employers recognize the credentials of a MOOC to the same degree that they would a university degree?
We could similarly ask why employers value the degrees of some universities more than others.
I think it's important to realize that ultimately the decisions come from the employers, not the universities. No one is making the employers do anything. But at least the second question might have a clearer partial answer. In part, there is a selection of a tribe, an implicit "culture fit" that's happening. It isn't uncommon to see employer bias towards specific universities. This is especially true with prestigious universities.
But it's not the universities that are making anyone do anything and that's an important distinction.
It is a confounding variable[0]. The problem with trying to go after the confounding variable is that you 1) don't solve the problem by fixing it and 2) let the current negative feedback loop continue growing.
- ebiester claimed academia made itself the gatekeeper of credentials
- alpinisme suggested that was silly. Employers have autonomy
- you asked why MOOC classes don't get the same credentials
- I said you missed alpinisme's point. The employer is the one that is deciding those credentials are not equal. I also pointed out how the same is true about different universities. Pointing out that you didn't need to say "MOOC" when "Stanford vs University of Wyoming" also leads to the same conclusion (which makes it weirder to point to academia as a whole instead of the prestigious unis)
- you act like I was supposed to have read a reply to a different person while not addressing my point
- I double down saying you're barking up the wrong tree. Take the finger and point it at employers
- you say you never claimed employers aren't the problem and I'm creating a narrative
Well you're right, you never claimed that. But ebiester did, and when alpinisme called bullshit you came to ebiester's defense. Forgive me if I got confused. Regardless, it still doesn't change the fact that the employers have complete autonomy. They can make whatever rules they want. There's no use pointing at academia because ultimately they have no say. Do they want to be the credential keepers? Yeah! Which also means they'll be happy to be the ones being yelled at if the result is that employers keep using them as credential makers.
Ultimately academia is about prestige. Ultimately it's about far more than "the product" (the credential). But who decides what credential is best? The fucking employers. No one is holding a gun to their head. There's no front door or back door dealings.
You have concluded that since academia benefits from the employers selecting them as credential makers that they are the problem, or a meaningful part. The issue is, if you take away academia, the problem doesn't get resolved. Nor does it improve. Arguably, it becomes more noisy until employers converge onto some other arbitrary credential
There is certainly room for employers to stop over-credentialing jobs, but there’s also room for universities to improve their role in the process. Employers don’t have “complete autonomy”; we can’t just suddenly decide we’re going to create more nurses, doctors, engineers, and lawyers because we’ve found a loophole in the pipeline.
Depending on the type of nurse, you do not need a degree. Some will need to pass an exam though. You can become a lawyer without a degree, only needing to pass the bar. An engineer doesn't need a license. You are thinking of a Professional Engineer, which is a specific job title with legal ramifications. That requires a degree, but it also requires working under another PE for some time and then passing state competency exams.
The cases we're talking about here are jobs where if you do something wrong you can kill hundreds or thousands of people and destroy millions or billions of dollars worth of assets. *You've just completely moved the goalpost to try to make your point.*
Do you really honestly believe that these jobs shouldn't require a formal education?
I'm not going to take anyone seriously that is going to argue that medical doctors shouldn't receive formal educations. Come on, don't be dumb. Just have a normal fucking conversation. You don't have to double down and back yourself into a corner.
I know you're not that dumb, so stop being disingenuous.
We agree on most of this; I just think you tend to think in absolutes and I do not. I don't think most current degree progressions need degrees, but I'm willing to recognize there is a large subset that do. And colleges still hold a monopoly on those degrees. But the fact that you can't follow HN guidelines because you're prone to argue makes further discussion boring and fruitless.
It also recognized the GI Bill as an opportunity and also sent lobbyists for the HEA in 1965.
We could make it less meaningful if employers weren’t so keen on using credentials as their own gateway. That may have more of a chance of happening if the OPs perspective becomes more prevalent and the credential becomes an increasingly worse signal for meaningful skills.
But it wasn't about credentials even. It was about inculcating a culture. You knew that someone had the knowledge and ability to reach university, and you knew that they had a shared common culture with you. Shared common culture and norms increases trust. Credentials mattered for doctors, but universities, in the end, were selling something far more intangible: culture.
You can start issuing your own credentials tomorrow.
Accreditation is regulated by NGOs who need government approval and without that you cant received financial aid (or participate in some programs) — but you can hand out pieces of paper for completing your program.
https://www.ed.gov/laws-and-policy/higher-education-laws-and...
Further, some high school graduates (like myself at the time) literally don't know HOW to learn on their own. I thought I did, but college humbled me; it made me realize that suddenly I'm in the driver's seat and my teachers won't be spoon-feeding me knowledge step by step. It's a really big shift.
If you were the perfect high school graduate, then congrats, you’re like the 0.01%! And you should be proud (no sarcasm). This doesn’t describe society at large though.
For the very few who are extremely motivated and know exactly what job they want, I do think we need something in between self-guided and college. No BS - strictly focused on job training. Like a boot camp, but one that's not a scam haha.
The other aspect of college you ignore is, it is a way to build a network prior to entering the workforce. It’s also one of the best times to date, but that’s another story.
Completely agree that the cost of college in the US is ridiculous though.
I don’t know how generalizable this is. I remember reading a few studies trying to assess whether an Ivy League education was really more valuable than a state school. The result (IIRC) was that it only matters for students who came from the lowest economic strata; the authors presumed it was due to the network effect. But that also means the network effect was negligible for the majority of students.
I think you underestimate how bad some high schools really are.
Citation needed. There's great books out there that provide a lot of guidance down a particular path. I'd say a lot of them do, and I can't imagine online learning sources would be worse. There's online communities for learners for specific subjects that are full of people offering good advice.
I'll leave you with this thought though. Of all the professions, tech is probably the one where this is the easiest to do. There are many companies that don't require a bachelor's. For most of the last 15-20 years tech was in a "boom" cycle, and yet the vast majority of software engineers I met DID have a bachelor's degree.
Why? If it’s as easy as “pick up a book”, then why didn’t more people take that path? I think very few people have the drive and discipline to accomplish a full career prep on their own.
If your hypothesis was true, wouldn’t tech be mostly filled with self-taught engineers?
1) The cultural zeitgeist around higher education (at least in the US) is and has been "you must go to college and get a degree". It's been over 20 years since I dropped out of my first university. I'm doing just fine, and yet to this day, I will be asked by older members of my extended family if or when I'm going back to "finish" my degree, or whether my company will offer tuition assistance to help me go back. If you graduate high school these days (and for at least the past few decades), the expectation is that you go to college and get a degree. And if you're going to have to do that anyway, you might as well get the degree in the field you want to go into.
2) As a corollary to that, while new / younger companies might have set aside the degree requirements, the big tech houses definitely still preferred them and having the paper was still a leg up in the hiring and recruitment process. And even in companies without an explicit preference for a degree, it's often listed as a requirement on the job posting. "BS or Equivalent Experience" is easier to match both as a candidate and as an employer if the candidate has the BS as that is an objectively verifiable fact.
When I was a kid and got an assignment to write an essay on "why the good forces prevailed in The Lord of the Rings" as a gate check to see whether I had actually read the novel, I had three choices: (a) read the novel and write the essay myself; (b) find an already-written essay - not an easy task in the pre-internet era, but we had books with essays on most common topics you could "copy-paste" - and risk that the professor was familiar with the source or that someone else had used the same one; (c) ask a classmate for their essay as a template and rephrase it as my own.
A and C would let me learn about the novel and let me polish my writing skills.
Today I can ask ChatGPT to write me a four-page essay about a novel I've never heard of and call it a day. There's no value gained in the process.
That's a simple example, but the same applies to programming. Novice programmers will claim that LLMs give them the power to take on hard tasks and program in languages they weren't familiar with before. But they aren't gaining any skill or knowledge from that experience.
If I ask Google Maps to plot directions from Prague to Brussels, it will yield a list of turns that will guide me to my destination, but by no means can I claim to have learned the topography of Germany in the process.
If we take the original article at face value, no, you can't do that. ChatGPT will apparently produce something that is obviously ChatGPT-produced and fail to fool even the most absent-minded of instructors into believing you have read the material. So even with an LLM to help you out, you're largely going to have to do a modified version of (c), replacing your classmate with the LLM and adding in the need to do your own reading and validation to ensure that the text matches the actual book contents.
> If I ask Google Maps to plot directions from Prague to Brussels, it will yield a list of turns that will guide me to my destination, but by no means can I claim to have learned the topography of Germany in the process.
I would argue that even if you plotted a route by hand reading maps, you can't claim to have learned the topography of Germany either. "The map isn't the territory" after all.
So, you can usually get away with it if there is not some way the professor/TA can prove it.
As things change, this will change, but that's the situation the author of the original article finds themself in, because it's the current situation.
Only if you don't have any custom instructions about style and don't proofread it afterwards. All the usual "tells" of ChatGPT are very obvious to scrub out, and you don't have to use OpenAI's chat wrapper to begin with.
Not true.
Using an LLM to quickly learn a new programming language while staying productive is the best method ever. If you pay attention, you rapidly acquire new skills and knowledge, and ones that are relevant to your job.
Using an LLM is MUCH more efficient than reading a book that goes through all the minute details of the language before telling you how to use it. It's like learning a language from your parents compared to learning it in a class. You might not know all the grammar rules, but you'll be way more proficient. And nothing prevents you from learning the grammar later on.
I tend to disagree. Maybe that works for you, but my mind does not work that way.
As a teacher at a university, I have come to see that students "learn" by asking an LLM, but they never come to understand the content the LLM produces, because the LLM actually solves the assignment (mostly) for them. One may say it's the teacher's task to produce better questions, but the thing I most "struggle" with is that getting educated as a student seems to have become a game of "gaming the system". Yes, it was similar during my time (learning how to reach your goal with as little effort as possible is a good part of the "education" at a university, IMHO), but we actually had to think and understand, while today it just seems to be prompt-and-copy.
There are multiple ways you can use such technology, too. If you use Google Maps with its out-of-the-box configuration for turn-by-turn directions, with it oriented in the direction of travel, you won’t learn so much; but if you change it to always display the map north-up, and look at the map it shows you—inferior though it be to good paper maps, in most cases—it’s easier to develop a feel for layouts and geography.
(I don't usually do that, but it appears so many times in the first few sentences that I had to do it here)
I do agree with your points, though I think they are in agreement with the comment you are replying to...
And yeah, and revisiting the OP we're on the same track.
It sounds like you agree with GP.
I agree that this situation that the author outlines is unsatisfactory but it's mostly the fault of the education system (and by extension the post author). With a class writing exercise like the author describes, of course the students are going to use an LLM, they would be stupid not to if their classmates are using it.
The onus should be on the educators to reframe how they teach and how they test. It's strange how the author can't see this.
Universities and schools must change how they do things with respect to AI, otherwise they are failing the students. I am aware that AI has many potential and actual problems for society but AI, if embraced correctly, also has the potential to transform the educational experience in positive ways.
Why would they be stupid? Were people before LLMs stupid for not asking smarter classmate/parent/paid contractor to solve the homework for them?
A large part of education is learning things that can be easily automated, because you can't learn hard things without learning easy things. Nothing has conceptually changed in this regard, just as Wolfram Alpha didn't change the way differentiation is taught.
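The Wolfram Alpha point generalizes: differentiation has been trivially automatable for decades (a numerical sketch below, purely for illustration), yet it is still taught by hand because the concept, not the mechanics, is the goal.

```python
def numeric_derivative(f, x, h=1e-6):
    # Central-difference approximation: the kind of mechanical step a tool
    # automates, even though the underlying idea is still taught by hand.
    return (f(x + h) - f(x - h)) / (2 * h)

print(numeric_derivative(lambda x: x**2, 3.0))  # ~6.0, since d/dx x^2 = 2x
```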
I agree that it's not ideal for assignments that weren't designed with external sources in mind to significantly impact the final grade. But I think this is a minor and easily fixable point rather than some failure of the whole education system.
In American universities where your GPA from your in-class assessments forms part of your final grade? Yes, absolutely.
Where I came from you do your learning in class and your assessment in a small, short set of exams (and perhaps one graded essay) at the end of each year. That seems far more conducive to learning things without having to juggle two competing objectives the whole time.
Whether not doing everything to maximize your GPA is "stupid" (literally or figuratively) is a good question too.
But even if your assignments influence your GPA it's rarely the only thing that does, and not doing assignments will harm your ability to perform in midterm/exam/whatever.
https://chatgpt.com/share/6817fe76-973c-8011-acf3-ef3138c144...
https://www.reddit.com/r/ChatGPT/comments/1hun3e4/my_little_...
I don't know what the answer is. I'm old school, if it was up to me I'd bring back slide rules and log tables, because that's such a visual and tactile way of getting to know mathematics and numbers.
It's interesting to consider how AI is affecting humans' cognition skills. Is it going to make us stupid or free us up to use our mental capacities for higher level activities? Or both?
It's only stupid if you try to optimize for the wrong things (finishing quickly, just getting a pass).
I'd say it's very smart if you don't rely on LLMs, copy the homework from someone else, or similar; because you're optimizing for learning, which will help you more than the various shortcuts.
> Universities and schools must change how they do things with respect to AI, otherwise they are failing the students.
Hard disagree.
Students need to answer a fundamental question of themselves;
Am I here to learn or to get a passing grade?
If it is the former, the latter doesn't really matter. If it is the latter, the former was not the point to begin with.
For most continued learning it's better if the university uses calculators, compilers, prepared learning materials, and other things that do stuff on behalf of the students instead of setting the bar permanently to "the student should want to engage everything at a base level or they must not be here to learn". It allows much more advanced learning to be done in the long run.
IMHO, the example of using calculators in a learning environment is a great topic to explore.
Using calculators in a university setting is entirely reasonable, as it is expected that students have already mastered the math calculators automate. Formula calculators are also included since, again, the expectation is that a student capable of defining them understands what they are and when to use them.
Now, contrast the above with using calculators in elementary school, where basic math is an entirely new concept and the subject being taught. Here, the expectation is students learn how to perform the operations themselves through varied exercises, questions to the instructor, and practice.
> It allows much more advanced learning to be done in the long run.
Only if the fundamentals have already been established. Which leads back to my original question:
Am I here to learn or to get a passing grade?
University is like a supermarket.
For some, they go there with a loose idea of what they want only to find ingredients not previously considered, often ending up with a better dining experience because of it.
For others, it is aisle after aisle of crap "Uber Eats" can deliver already made and without the hassle of having to cook it.
To each their own.
1. Students are a captive audience. They don't want to be there. It's the law that makes them be there. Even once you're beyond mandatory education this holds true: they were just carried into further education by momentum. They didn't realize they had a real choice or what alternatives were available.
2. A lot of the skills you build in classes aren't useful to you. I spent a lot of time in my English (second language) classes, but it was my use of the internet that really taught me the language. The later years of English classes were just busywork.
In my native language classes I had to write a fair number of essays. The only time this was useful was the final exam of that class. I haven't written a "real" essay since. Even if I did, it would probably be in English and use a different style - something taught to me by forum posts.
During the final exam, a national exam (all the students did it at the same time in the country) you were given blank pieces of paper, a pen, 10 different topics to choose from, and 6 hours.
You pick a topic, write a draft, then write the essay. ~700 words.
Bonus points if you use relevant literature or science quotes in it. Quotes you had to memorize, without knowing the topics.
None of the topics resonated with you? Tough luck, buddy.
Don't have any 'novel' ideas? Figure something out!
Cited something the reader didn't know about? Should've known better!
Quoted someone the person grading doesn't know? Better hope they're in a good mood!
There's a reason they got rid of this style of exam a few years later. Doesn't change that this one thing dominated my native language and literature classes.
My biggest takeaway from it is how much I hate my native language class and literature class, but that might've happened even without the essay.
Exactly. I tend to think that the role of a teacher is to get the students to realise what learning is all about and why it matters. The older the students get, the more important it is.
The worst situation is a student finishing university without having had that realisation: they got through all of it with LLMs, and probably didn't learn how to learn or how to think critically. Those who did, on the other hand, didn't need the LLMs in the first place.
I figured this out in high school. It can’t be all that uncommon of a thought that if you are already in school and paying and given time to learn, you might as well do so?
Young kids don't get it, they just do what they're asked. That's okay. University students graduating without having figured it out is a problem. And somewhere in the middle is when the average student gets there, hopefully?
My high-school age daughter told me how her small private school solved this problem:
They brought back oral exams.
There aren't a lot of other good options. Written take-home work and online tests have always been fertile ground for cheating. Another benefit of oral exams: you learn to communicate under stress.
Different courses and universities vary in teaching quality greatly. Often the examination criteria is loosely correlated with knowledge or skill and students end up studying 'around' the examination process, rather than learning for the sake of it or for enjoyment.
Someone mentioned oral exams - this is the way to do it, but I only had the pleasure of sitting a few in all my years. Probably because they're seen as 'too time-consuming' or 'time-wasteful', so the burden shifts to the students to waste their time instead.
And then you get the occasional course with a lecturer everyone just pays 100% attention to and engages with, where you almost don't need an exam in the first place.
'The problem of LLMs in academia' is a symptom. You get what you measure.
I myself went to college to get the meal ticket, not to learn. But since the system was entirely exam based, I was forced to learn.
I'd like to look forward to that, but I fear it might be wishful thinking in part.
Also, pre-LLMs, I saw too many deep thinkers fail and pretenders succeed. I don't see how LLMs can change that, unless we all collectively grow tired of pretenders and fakers and come to value deep understanding. I just don't see many indications of that.
There should be zero tolerance for these types of behaviors, in my opinion. I see zero evidence of these behaviors even being identified by most, let alone any thought of calling them out or stopping them.
You've helped me realize that folks using LLMs in place of learning to write themselves are almost certainly giving up all thought of nuance on a topic and are, without realizing it, letting the LLM either ignore or add nuance based on its training data and random chance.
Hopefully the pendulum will swing the other way and there will be a public epiphany, but given the loss of nuance over the decades, I'm not betting on it.
So, unfortunately the student's behavior is somewhat rational given the incentive structure they operate in.
But a huge amount of "ugh I'm too smart for this assignment" complaining that students do is just kids being immature rather than an honest attempt at learning through other means.
It means that you are losing your time. If you are a university student and use LLMs for your classes while "challenging your mind" for stuff outside of class, maybe you should just not be studying there in the first place.
While we're talking about things we're grateful for, I am so glad that we've structured the education and employment systems such that not having a degree puts you at significant risk of unemployment, prevents you from ever immigrating anywhere for the first decade of your working life, and generally marks you as a failure.
I thought I was pretty clear when I told you I got marked down for taking the stance that one should not commit crimes when asked to by one's boss.
The problem isn't that I was asked to produce an essay as part of the process of teaching me about ethics. The problem is that the entire class was pointless busywork taught by lecturers who weren't particularly ethical and who failed to teach any of us about ethics. ChatGPT could have done the entire thing and the class would still have failed to leave me with anything other than lingering resentment.
It's not like this in every country, though.
If you want to make your own certificates, good luck getting them on the trusted list.
Companies need to bring folks in on a probation period and actually test the skills are there.
Writing is hard. Sometimes it means sitting with yourself, for hours, without any progress. Leaning on an LLM to ease through those tough moments is 100% short circuiting the learning process.
To your point, maybe you're learning something else instead, like when/how to prompt an LLM or something. But you're definitely not learning how to write. Whether that's relevant is a separate discussion.
Sounds like "back in my days" type of complaining. Do you have any evidence of this "100% reduction" or is it just "AI bad" bandwagoning?
> But you're definitely not learning how to write.
How would you know? You've never tested him. You're making a far-reaching assumption about someone's learning based on using an aid. It's the equivalent of saying "you're definitely not learning how to ride a bicycle if you use training wheels".
It's basically adults producing slop messages for each other. It is actually atrophying.
You might be in a circle of people that wants to know "why" things work. For example, when there's a bug, we go through several processes of:
There's a bug...why does it happen? What were they thinking when they wrote this? How to prevent this from happening?
This is true even for simple bugs, but nowadays you just vibe-code your way to the solution, asking the AI to fix it over and over without ever understanding how it works.
Perhaps it's just the way things are. I mean, who uses their head to do calculations nowadays? Who knows how to create a blurring effect in a physical drawing?
Regardless of the existence of other ways to exercise your legs which you also will not do, because you're a person with working legs who chooses to use a wheelchair.
And so often also on wildly tangential subjects that are purely academical artifacts.
Cheating with LLMs is the inevitable conclusion of being subjected to a dragged-out, must-have education system that mostly just cheats the students of their time and money. That's the friendly way to put it.
I jumped through all the fucking hoops and now I'm paid handsomely, at every corner of the road leading here you see some pompous academic wankers with more medals than a photoshopped North Korean general.
Even if one won't need that specific know-how after the exams, just realizing how much one can memorize, and trying out approaches to optimize it, is how people grow and learn.
You might have attended a good degree program, where the information learned was actually beneficial. But I'd bet that for most degrees out there, rote memorization is a consequence of professors wanting easily gradable exams, existing for their benefit, not the students'.
Which means the actual problem is low quality education and degrees and we might find common ground here.
I believe that the same holds true for other facts one might memorize. Yes, the fact may seem like meaningless trivia (and might even be so at times), but in the right situation knowing that fact can help with understanding. You can certainly spend too much time on memorization of facts, but that doesn't mean it has no place either.
Facts and knowledge act as scaffolding, making it easier to absorb more information and knowledge the more you have.
But don't worry: worst case scenario, all of the kids growing up in this environment who are actually learning will build structures to exploit the prompters. I suspect the present situation, where prompters can accidentally find themselves in real jobs, is transient, and building better filters will become a survival imperative for businesses and institutions.
The current situation is that people need to pass exams, get certain GPA's, etc. to have opportunities unlocked to them. Education today is largely about collecting these "stamps" that open doors and not about actual learning.
this isn't the root of the problem
the root of the problem is that higher education has become, for the most part, an exercise in getting a piece of paper, so that you can check a box on a form or pass first level screening for a job
My students, however, don't understand that the importance lies in the process, not the result. My colleagues do.
Source: I used it to write this post.
Employees are increasingly expected to use personal devices in work settings.
The vast majority of companies still issue laptops and other hardware.
those who use the tools to accelerate their learning will do so and others who use it just to get by will see their skills atrophy and become irrelevant.
> how many intelligent professionals and academics don't understand this
Mastery of a discipline does not imply any pedagogical knowledge, despite anything one of my childhood heroes, Richard Feynman, might have claimed.
Despite frequent claims otherwise, in my experience and sampling of PhDs and Masters of different sorts and grad students working toward those degrees, an advanced degree does not teach anyone how to lead or teach. This is true of even some of the folks I knew studying Education itself who were a little too focused on their own research to understand anything "so simple."
> cheat themselves out of an education
What's "an education," though? For some people, education is focused on how to learn. For others, it's focused on some kind of certification to get a job. Some of us see value in both. And I'm sure there are other minority opinions as well. We, as a society, can't agree. The only thing we can seem to agree on in the US is that college should be expensive and saddle students with ridiculous debt.
I think that's the answer:
LLMs are primarily useful for data and text translation and reduction, not for expansion.
An exception is repetitive or boilerplate text or code where a verbose format is required to express a small amount of information.
If you aren't aware: (high-parameter-count) LLMs can be used pretty reliably to teach yourself things.
LLM base models "know things" to about the same degree that the Internet itself "knows" those things. For well-understood topics — i.e. subjects where the Internet contains all sorts of open-source textbooks and treatments of the subject — LLMs really do "know their shit": they won't hallucinate, they will correct you when you're misunderstanding the subject, they will calibrate to your own degree of expertise on the subject, they will make valid analogies between domains, etc.
Because of this, you can use an LLM as an infinitely-patient tutor, to learn-through-conversation any (again, well-understood) topic you want — and especially, to shore up any holes in your understanding.
(I wouldn't recommend relying solely on the LLM — but I've found "ChatGPT in one tab, Wikipedia open in another, switching back and forth" to be a very useful learning mode.)
See this much-longer rambling https://news.ycombinator.com/item?id=43797121 for details on why exactly this can be better (sometimes) than just reading one of those open-source textbooks.
* Not provide background information and let people figure it out for themselves. This will not help me achieve my goals.
* Link them to Google's SRE book and hope they read it. Still not achieving my goals, because they won't.
* Spend 3 hours writing the relevant background information out for them to read as part of my proposal. This will achieve my goals, but take an extra 3 hours.
* Tell the LLM what I'm looking for and why, then let it write it for me in 2 minutes, instead of 3 hours. I can check it over, make sure it's got everything, refine it a little, and I've still saved 2.5 hours.
So for me, I think the author has missed a primary reason people use LLMs. It saves a bunch of time.
But ultimately, getting the concise summary for a complex topic (like SLIs and SLOs are) is brilliant, but would be even better if it was full of back-links to deeper dives around the Internet and the SRE book.
If they won't read a relevant section of Google's book, why would they read an LLM-written version?
It feels like the information is there, strewn across the internet in forums, Reddit posts, Stack Overflow, specs, and books. But to trawl through it all was so time-consuming. With an LLM you can quickly distill it down to just the information you need.
That said, I do feel like reading the full spec for something is a valuable exercise. There may be unknown unknowns that you can't even ask the LLM about. I was able to become a subject expert in different fields just by sitting down and reading through the specs / RFCs, while other colleagues continued to struggle and guess.
If an LLM can help you understand an RFC, it's great. You're now relying on the RFC.
If an LLM can help you not rely on the RFC, you're doing it wrong.
Why is the angle called m? Why is a combination nPr * (1/r!)? What is the 1/r! doing there?
I use mathacademy.com as my source of practice. Usually that’s enough but I tend to fall over if small details aren’t explained and I can’t figure out why those details are there.
In high school this was punished. With state of the art LLMs, I have a good tutor.
Also, it's satisfying to just upload a page in my own handwriting and have it understand what I did and correct me there.
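For what it's worth, that particular detail has a one-line answer: dividing by r! cancels the orderings of the r chosen items, turning ordered selections (permutations) into unordered ones (combinations), i.e. C(n, r) = P(n, r) / r!. A quick check with Python's math module:

```python
from math import comb, perm, factorial

n, r = 10, 3
# P(n, r) counts ordered selections; each unordered selection of r items
# corresponds to r! orderings, so dividing by r! yields C(n, r).
assert comb(n, r) == perm(n, r) // factorial(r)
print(comb(n, r))  # 120
```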
Now, I know psychologically it's not as simple, and both society and ourselves equate academic (and professional, later on) success with personal worth, but that's a deeper, harder topic.
This can go beyond just specific documentation but also include things like "common knowledge" which is what the other poster meant when they talked about "teaching you things".
need is a strong word. Do you need to be able to do ctrl+F? Not really, you can just read it all. But maybe it's easier to do ctrl+F. Same with LLM. Just imagine it as a fuzzy ctrl+F. Can be useful.
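The "fuzzy ctrl+F" framing can be made concrete with Python's stdlib difflib (a rough analogy only; LLM retrieval does far more than string similarity):

```python
from difflib import get_close_matches

# A toy index of documentation headings.
headings = ["configure logging", "connection pooling", "error handling"]

# A literal ctrl+F for the misspelled query finds nothing;
# a fuzzy match still surfaces the intended section.
print(get_close_matches("conection pooling", headings, n=1, cutoff=0.6))
# ['connection pooling']
```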
Being really good at ctrl+F / LLM is not the same. I learn a lot just browsing through documentation, without searching anything in particular.
That's the big issue with LLMs as of now: they reflect their American creators and never want to admit when they just can't answer a question. Ctrl+F will, in fact, give me 0 results, which is more useful than a wrong result.
However, as a non-native speaker, I really like the fact that I can give the LLM some description of a word that I don't know and it'll figure it out. "those fruits that are kind of bitter but also make my tongue feel like it wants to curl" (= "adstringend" [sic] or so, but who remembers that word or can spell it correctly?)
LLMs are basically like all those typo-correction algorithms on steroids. Very helpful sometimes, even if it means I have to double-check their output.
Producing text as output is not the way
Note the preamble, FAQs, and that all of the winning entries are now neural networks.
The funny thing is that people use LLMs to do the opposite instead of what is implied to be a smarter thing to do with the above quote.
[1] https://quoteinvestigator.com/2012/04/28/shorter-letter/
Part of my performance review is indirectly using bloat to seem sophisticated and thorough.
Documentation is an interesting use case. There are various kinds of documentation (reference, tutorial, architecture, etc.) and LLMs might be useful for things like
- repetitive formatting and summarization of APIs for reference
- tutorials which repeat the same information verbosely in an additive, logical sequence (though probably a human would be better)
- sample code (though human-written would probably be better)
The tasks that I expect might work well involve repetitive reformatting, repetitive expansion, and reduction.
I think they also might be useful for systems analysis, boiling down a large code base into various kinds of summaries and diagrams to describe data flow, computational structure, signaling, etc.
Still, there is probably no substitute for a Caroline Rose[1] type tech writer who carefully thinks about each API call and uses that understanding to identify design flaws.
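The "repetitive formatting and summarization of APIs for reference" case is partly mechanical even without an LLM. A minimal sketch using Python's inspect module (format_api_reference and connect are made-up names for illustration); the LLM's job would only be to expand each extracted entry into prose:

```python
import inspect

def format_api_reference(obj) -> str:
    """Render a minimal reference entry: signature plus first docstring line."""
    sig = inspect.signature(obj)
    doc_lines = (inspect.getdoc(obj) or "").splitlines()
    summary = doc_lines[0] if doc_lines else "(undocumented)"
    return f"{obj.__name__}{sig}: {summary}"

def connect(host: str, port: int = 5432) -> None:
    """Open a connection to the given host."""

# Prints the signature followed by the docstring summary.
print(format_api_reference(connect))
```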
Any documentation they write at best re-states what is immediately obvious from the surrounding code (useless: I need it to explain why), or is some hallucination pretending the project is a React app.
To their credit they've slowly gotten better now that a lot of documentation already exists, but that was me doing the work for them. What I needed them to do was understand the project from existing code, then write documentation for me.
Though I guess once we're at the point AI is that good, we don't need to write any documentation anymore, since every dev can just generate it for themselves with their favorite AI and in the way they prefer to consume it.
* They'll pretend they understand by re-stating what is written in the README, then proceed to produce nonsense.
Without that effort it's a useless sycophant and is functionally extremely lazy (i.e., takes shortcuts all the time).
Don't suppose you've tried that particular model, after getting it to be thorough?
You don't have to play the game the same way to work there. But it helps to accept that others will play it, and manage your own expectations accordingly.
I don't have tons of examples, but in my experience:
* This worked in toxic environments. They deserve it.
* This doesn't work in a functional environment, because they don't have those bullshit metrics.
If you have to rely on those tricks, it's time to look for another job.
> Depends on what you are looking for. I’ve turned half baked ideas into white papers for plenty of praise. I’ve used them to make my Jira tickets seem complicated and complete. I’ve used them to get praised for writing comprehensive documentation.
This is about giving people a good impression of you so they'll write strong peer feedback.
I would take 60k at this point. I've been living on half of that for almost 2 years now.
I have no idea how anyone is navigating this job market. Maybe it's just 10x worse and most people here are in the Bay area that's a tiny bit more shielded from this.
>Their number is absolutely miniscule compared to the number of big-tech jobs.
give it another year of layoffs. We'll get there.
I feel for you, and for anyone surrounded by such others, but it is most definitely not like that everywhere; that belief just gets used to justify staying in a place of work you should not be in.
Over-fitting proxy measures is one of the scourges of modernity.
The only silver lining is if it becomes so wide spread and easy it loses the value of seeming sophisticated and thorough.
Maybe we should let/encourage this to happen. Maybe letting bloated zombie-like organisations bloat themselves to death would thin the herd somewhat, to make space for organisations that are less “broken”.
At the same time, I strive really hard to influence the environment I am in so it does not value content bloat as a unit of productivity, so hopefully there are at least some places where people can have their sanity back!
If your organisation is functional and you are abusing it by doing that, then you deserve to get fired.
(See also the famous Pascal quote: “I would have written a shorter letter, but I did not have the time.”)
P.S. For reference, I asked an LLM to compress what I wrote above. Here is the output:
When I have a murky idea that’s hard to articulate, I find it helpful to ramble—typing or dictating a stream of semi-structured thoughts—and then ask an LLM to summarize. It often captures what I mean, but more clearly and effectively.
Elsewhere in this comment section there's discussion about the importance of original thought - which is exactly what the summarized text lacks, having leeched it away.
The parent comment has actually made the case against the summarized text being "better" (if we're measuring anything that isn't word count).
An LLM can make something sound articulate even if your input is useless rambling containing the keywords you want to think about. Having someone validate a lack of thought as something useful doesn't seem good for you in the long term.
Learning how to condense what I say forces me to think about what is and isn't important - and it also forces me to think in terms of "style" and "audience".
(My natural writing style is much more verbose - I want to address all sorts of branching objections and tangential concepts. I find parentheses really useful, because I can dump a bunch of stuff there, and it's a clear marker that you can safely skip it all.)
LLMs are also useful, because I can ramble, work out my own summary, and then compare to the LLM. Or, when I was just starting out, ramble, get an LLM to summarize, and then try to work out my own summary that captures what it missed.
Aside from practice being inherently beneficial, I also find that being able to form my own summaries helps me catch when the LLM has misunderstood, hallucinated, or just subtly changed the emphasis - for instance, your original example was indeed much cleaner, but I wouldn't have felt like you were really truly a fellow rambler just from reading that.
Hopefully you don't mind a rambling post. If you want a TL;DR an LLM can probably do a decent job ;)
(ChatGPT Summary: Practicing summarization improves clarity, audience awareness, and writing focus—especially for naturally verbose thinkers. LLMs are helpful tools for this, both as a comparison point and a learning aid. Writing your own summaries sharpens understanding and helps catch LLM misinterpretations or emphasis shifts.)
(Yeah, that seems pretty accurate)
Done; now AI is just lossy pretty-printing.
I have it write doc strings. I later ask it to explain a section of code, wherein it uses the doc strings to understand and explain the code to me.
A less lossy way to capture this will probably emerge at some point.
It's much more useful for answering questions that are public knowledge since it can pull from external sources to add new info.
Ideally there's some selection done, and the fact that you're receiving it means it's better than the average answer. But sometimes they haven't even read the LLM output themselves :-(
As long as the text isn't at risk of being written up by HR, I don't particularly care about the tone of the message.
At some point we will stop making decisions about what future readers want. We will just capture the concrete inputs and the reader's LLM will explain it.
A piece of text does not have a single inherently correct interpretation. Its meaning is a relation constructed at run- (i.e. read-)time between the reader, the writer, and (possibly) the things the text refers to, that is if both sides are well enough aligned to agree on what those are.
Words don't speak, they only gesture.
What if we could persist this final LLM context state? Think of it like a queryable snapshot of the 'why' behind code or docs. Instead of just reading a comment, you could load the associated context and ask an LLM questions based on the original author's reasoning state.
Yes, context is model-specific, a major hurdle. And there are many other challenges. But ignoring the technical implementation for a moment, is this concept – capturing intent via persistent, queryable LLM context – a valuable problem to solve? I feel it would be.
So perhaps switching to open-source models of sufficient "power" will obsolete that particular concern (they would be a "development dependency", just like a linter, compiler or code formatter are today).
What about a tool that simply allows other developers to hover over some code and see any relevant conversations the developer had with a model? Version the chat log and attach it to the code basically.
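A minimal sketch of what that attachment might look like (every name here is made up for illustration): a record tying a chat transcript to a file, a line range, and the commit it referred to, which a hover tool could then look up by cursor position.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ChatAnnotation:
    """Hypothetical record linking an LLM conversation to a span of code."""
    file: str
    start_line: int
    end_line: int
    commit: str  # version of the code the conversation was about
    messages: list = field(default_factory=list)  # [(role, text), ...]

    def to_json(self) -> str:
        # Serialize for storage alongside the repo (e.g. in a sidecar file)
        return json.dumps(asdict(self), indent=2)

def annotations_at(annotations, file, line):
    """Return every annotation whose line range covers the cursor position."""
    return [a for a in annotations
            if a.file == file and a.start_line <= line <= a.end_line]

ann = ChatAnnotation("src/parser.py", 40, 58, "a1b2c3d",
                     [("user", "why recursion here?"),
                      ("assistant", "the grammar is itself recursive...")])
print(len(annotations_at([ann], "src/parser.py", 45)))  # → 1
```

The versioning question (what happens when the code moves or changes under the annotation) is the hard part, and this sketch punts on it by just pinning a commit hash.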
In my experiments with LLMs for writing code, I find that the code is objectively garbage if my prompt is garbage. If I don't know what I want, if I don't have any ideas, and I don't have a structure or plan, that's the sort of code I get out.
I'd love to hear any counterpoints from folks who have used LLMs lately to get academic or creative writing done, as I haven't tried using any models lately for anything beyond helping me punch through boilerplate/scaffolding on personal programming projects.
I pointed this out a few weeks ago with respect to why the current state of LLMs will never make great campaign creators in Dungeons and Dragons.
We as humans don't need to be "constrained" - ask any competent writer to sit quietly and come up with a novel story plot and they can just do it.
https://news.ycombinator.com/item?id=43677863
That being said - they can still make AMAZING soundboards.
And if you still need some proof, crank the temperature up to 1.0 and pose the following prompt to ANY LLM:
Come up with a self-contained single room of a dungeon that involves an
unusual puzzle for use with a DND campaign. Be specific in terms of the
puzzle, the solution, layout of the dungeon room, etc. It should be totally
different from anything that already exists. Be imaginative.
I guarantee 99% of the returns will be a very formulaic physics-based puzzle like "The Resonant Hourglass" or "The Mirror of Acoustic Symmetry", etc.
Some examples:
- "Don't include pointless comments." - The model doesn't keep track of what it's doing as well, I generally just do another pass after it writes the code to simplify things.
- "Keep things simple" - The model cuts corners(often unnecessarily) on things like type safety.
- "Allow exceptions to bubble up" - Claude deletes existing error handling logic. I found that Claude seems to prefer just swallowing errors and adding some logging, instead of fixing the underlying cause of the error, but adding this to the prompt just caused it to remove the error handling that I had added myself.
Haha, I was suspicious, so I tried this, and I indeed got an hourglass-themed puzzle! Though it wasn't physics-based - characters were supposed to share memories to evoke emotions, and different emotions would ring different bells, and then you were supposed to evoke a certain type of story. Honestly, I don't know what the hourglass had to do with it.
The output is pretty non-sensical: https://pastebin.com/raw/hetAvjSG
## Room Layout
The room is a simple 30-foot square with a single exit door that's currently sealed. In the center sits a large stone cube (roughly 5 feet on each side) covered in various textured surfaces - some rough like sandpaper, others smooth as glass, some with ridged patterns, and others with soft fabric-like textures.
Around the room, six distinct scent emitters are positioned, each releasing a different aroma (pine, cinnamon, ocean breeze, smoke, floral, and citrus). The room is otherwise empty except for a small stone pedestal near the entrance with a simple lever.
## The Puzzle Concept
This puzzle operates on "synesthetic translation" - converting sensory experiences across different senses. The core concept is entirely verbal and tactile, making it fully accessible without visual components.
## How It Works
When players pull the lever, one of the scent emitters activates strongly, filling the room with that particular aroma. Players must then approach the central cube and touch the texture that corresponds to that smell according to a hidden synesthetic logic.
The connection between smells and textures follows this pattern: - Pine scent → ridged texture (like tree bark) - Cinnamon → rough, granular texture (like spice) - Ocean → smooth, undulating surface (like waves) - Smoke → soft, cloudy texture (like mist) - Floral → velvet-like texture (like petals) - Citrus → bumpy, pitted texture (like orange peel)
After correctly matching three smell-texture pairs in sequence, the door unlocks. However, an incorrect match causes the lever to reset and a new random smell to emerge.
## Communication & Accessibility
The DM describes the smells verbally when they're activated and can describe the various textures when players explore the cube by touch. The entire puzzle can be solved through verbal description, touch, and smell without requiring sight.
For extra accessibility, the DM can add: - Distinct sounds that play when each scent is released - Textured surfaces that have subtle temperature differences - Verbal clues discovered through successful matches
## What Makes This Unique
This puzzle uniquely relies on cross-sensory associations that aren't commonly used in dungeons. It: - Doesn't rely on visuals at all - Uses smell as a primary puzzle component (rare in D&D) - Creates unusual connections between different senses - Has no mathematical, musical, or traditional riddle elements - Can be experienced fully regardless of vision status - Creates interesting roleplaying opportunities as players discuss how different scents "feel" texturally
For the DM, it's easy to describe and implement while still being conceptually unique. Players solve it through discussion, exploration, and experimentation rather than recalling common puzzle patterns.
https://old.reddit.com/r/singularity/comments/1andqk8/gemini...
As a side note, I find that the way you interact with an LLM when doing creative writing is generally more important than the model. I have been having great results with LLMs for creative writing since ChatGPT 3.5, in part because I approach the model with a nucleus of a chapter and a concise summary of relevant details, then have it ask me a long list of questions to flesh out details, then when the questions stop being relevant I have it create a narrative outline or rough draft which I can finish.
At the same time, however, the people who need to use an LLM for this are going to be the worst at identifying the output’s weaknesses, eg just as I couldn’t write Spanish text, I also couldn’t evaluate the quality of a Spanish translation that an LLM produced. Taken to an extreme, then, students today could rely on LLMs, trust them without knowing any better, and grow to trust them for everything without knowing anything, never even able to evaluate their quality or performance.
The one area where I do disagree with the author, though, is coding. As much as I like algorithms, code is written to be read by computers, and I see nothing wrong with computers writing it. LLMs have saved me tons of time writing simple functions, so I can speed through a lot of the boring legwork in projects and focus on the interesting stuff.
I think Miyazaki said it best: “I feel… humans have lost confidence.” I believe that LLMs can be a great tool for automating a lot of boring and repetitive work that people do every day, but thinking that they can replace the unique perspectives of people is sad.
I'm not making a judgement about the use of LLMs for writing code, just that I do think that code serves the purpose of expressing meaning to machines as well as humans.
Unless you're the sole contributor, code is a collaborative effort and will be reviewed by peers - to make sure you don't hit any landmines at best, or ruin the codebase at worst. Unless you're writing codegen itself, I would very much consider writing code as if a human is going to read it.
>“I feel… humans have lost confidence”
Confidence in their fellow man? Yes. As the author said, a lot of this reliance on AI without proper QA comes down to "nobody cares", or at least that mentality. And apathy is just as contagious in an environment as passion. If we lose that passion and are simply doing a task to get by and clock out, we're doomed as a species.
For the structure, they are barely useful: writing is about having such a clear understanding that the meaning remains when reduced to words, so that others may grasp it. The LLM won't help much with that, as you say yourself.
They’re great at proofreading. They’re also good at writing conclusions and abstracts for articles, which is basically synthesising the results of the article and making it sexy (a task most scientists are hopelessly terrible at). With caveats:
- all the information needs to be in the prompt, or they will hallucinate;
- the result is not good enough to submit without some re-writing, but more than enough to get started and iterate instead of staring at a blank screen.
I want to use them to write methods sections, because that is basically the exact same information repeated in every article, but the actual sentences need to be different each time. But so far I don’t trust them to be accurate with technical details. They’re language models, they have no knowledge or understanding.
LLMs may seem like magic, but they aren't. They operate within the confines of the context they're given. The more abstract the context, the more abstract the results.
I expect to need to give a model at least as much context as a decent intern would require.
Often asking the model "what information could I provide to help you produce better code" and then providing said information leads to vastly improved responses. Claude 3.7 sonnet in Cline is fairly decent at asking for this itself in plan mode.
More and more I find that context engineering is the most important aspect of prompt engineering.
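To make "context engineering" concrete, here's a rough sketch of how I think about it (the section headers are purely my own convention, not anything a model requires): assembling a prompt from explicit, labeled blocks instead of one undifferentiated blob, so it's obvious what context the model does and doesn't have.

```python
def build_prompt(task, constraints=(), examples=(), data=None):
    """Assemble a prompt from labeled sections, omitting any that are empty.

    Illustrative only - the '##' headers are an arbitrary convention.
    """
    parts = [f"## Task\n{task}"]
    if constraints:
        parts.append("## Constraints\n" +
                     "\n".join(f"- {c}" for c in constraints))
    if examples:
        parts.append("## Examples\n" + "\n\n".join(examples))
    if data is not None:
        parts.append(f"## Data\n{data}")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Summarize the changelog for end users.",
    constraints=["plain language", "no internal ticket IDs"],
    data="- Fixed crash on startup\n- JIRA-123: faster sync",
)
print(prompt)
```

The payoff of structuring it this way is that when an answer is bad, you can usually point at the missing section rather than rewriting the whole prompt.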
Doesn’t this match pretty much all human creation? It’s easier to judge a book than to write it, easier to watch a rocket go up into space than to build it, easier to appreciate a Renaissance painting or sculpture than to actually make it.
I commented in another thread. We're using image and video diffusion models for creative:
https://www.youtube.com/watch?v=H4NFXGMuwpY
Still not a fan of LLMs.
I personally tend not to use AI this way. When it comes to writing, that's actually the exact inverse of how I most often use AI, which is to throw a ton of information at it in a large prompt, and/or use a preexisting chat with substantial relevant context, possibly have it perform some relevant searches and/or calculations, and then iterate on that over successive prompts before landing on a version that's close enough to what I want for me to touch up by hand. Of course the end result is clearly shaped by my original thoughts, with the writing being a mix of my own words and a reasonable approximation of what I might have written by hand anyway given more time allocated to the task, and not clearly identifiable as AI-assisted. When working with AI this way, asking to "read the prompt" instead of my final output is obviously a little ridiculous; you might as well also ask to read my browser history, some sort of transcript of my mental stream of consciousness, and whatever notes I might have scribbled down at any point.
It sounds to me that you don't make the effort to absorb the information. You cherry-pick stuff that pops in your head or that you find online, throw that into an LLM and let it convince you that it created something sound.
To me it confirms what the article says: it's not worth reading what you produce this way. I am not interested in that eloquent text that your LLM produced (and that you modify just enough to feel good saying it's your work); it won't bring me anything I couldn't get by quickly thinking about it or quickly making a web search. I don't need to talk to you, you are not interesting.
But if you spend the time to actually absorb that information, realise that you need to read even more, actually make your own opinion and get to a point where we could have an actual discussion about that topic, then I'm interested. An LLM will not get you there, and getting there is not done in 2 minutes. That's precisely why it is interesting.
Synthesizing large amounts of information into smaller more focused outputs is something LLMs happen to excel at. Doing the exact same work more slowly by hand just to prove a point to someone on HN isn't a productive way to deliver business value.
You prove my point again: it's not "just to prove a point". It's about internalising the information, improving your ability to synthesise and be critical.
Sure, if your only objective is to "deliver business value", maybe you make more money by being uninteresting with an LLM. My point is that if you get good at doing all that without an LLM, then you become a more interesting person. You will be able to have an actual discussion with a real human and be interesting.
We were talking about writing, not about vibe coding. We don't use calculators for writing. We don't use API requests for writing (except when we make an LLM write for us).
> Using the right tool for the job is just doing my job well.
I don't know what your job is. But if your job is to produce text that is meant to be read by humans, then it feels like not being able to synthesise your ideas yourself doesn't make you excellent at doing your job.
Again maybe it makes you productive. Many developers, for instance, get paid for writing bad code (either because those who pay don't care about quality or can't make a difference, or something else). Vibe coding obviously makes those developers more productive. But I don't believe it will make them learn how to produce good code. Good for them if they make money like this, of course.
We do however use them to summarize and transform data all the time. Consider the ever present spreadsheet. Huge amounts of data are thrown into spreadsheets and formulas are applied to that data to present us with graphs and statistics. You could do all of that by hand, and you'd probably have a much better "internalization" about what the data is. But most of the time, hand crafting graphs from raw data and internalizing it isn't useful or necessary to accomplish what you actually want to accomplish with the data.
Do you actually think that an LLM can take, say, a Harry Potter book as an input, and give it a grade in such a way that everybody will always agree on?
And to go further, do you actually use LLMs to generate graphs and statistics from spreadsheets? Because that is probably a bad idea, given that there are tools that actually do it right.
No, but I also don't think a human can do that either. Subjective things are subjective. I'm not sure I understand how this connects to the idea you expressed that doing various tasks with automation tools like LLMs prevent you from "internalizing" the data, or why not "internalizing" data is necessarily a bad thing. Am I just misunderstanding your concern?
Many of the posts I find here defending the use of LLMs focus on "profitability". "You ask me to give you 3 pages about X? I'll give you 3 pages about X and you may not even realise that I did not write them". I completely agree that it can happen and that LLMs, right now, are useful to hack the system. But if you specialise in being efficient at getting an LLM to generate 3 pages, you may become useless faster than you think. Still, I don't think that this is the point of the article, and it is most definitely not my point.
My point is that while you specialise in hacking the system with an LLM, you don't learn about the material that goes into those 3 pages.
* If you are a student, it means that you are losing your time. Your role as a student is to learn, not to hack.
* More generally as a person, "I am a professional in summarising stuff I don't understand in a way that convinces me and other people who don't understand it either" is not exactly very sexy to me.
If you want to get actual knowledge about something, you have to actually work on getting that knowledge. Moving it from an LLM to a word document is not it. Being knowledgeable requires "internalising" it. Such that you can talk about it at dinner. And have an opinion about it that is worth something to others. If your opinion is "ChatGPT says this, but with my expertise in prompting I can get it to say that", it's pretty much worthless IMHO. Except for tricking the system, in a way similar to "oh my salary depends on the number of bugs I fix? Let me introduce tons of easy-to-fix bugs then".
No one said anything about vibe coding. Using tools appropriately to accomplish tasks more quickly is just common sense. Deliberately choosing to pay 10x the cost for the same or equivalent output isn't a rational business decision, regardless of whether the task happens to be writing, long division, or anything else.
Just to be clear, I'm not arguing against doing things manually as a learning exercise or creative outlet. Sometimes the journey is the point; sometimes the destination is the point. Both are valid.
I don't know what your job is.
Here's one: prepping first drafts of legal docs with AI assistance before handing them off to lawyers for revision has objectively saved significant amounts of time and money. Without AI this would have been too time-consuming to be worthwhile, but with AI I've saved not only my own time but the costs of billable hours on phone calls to discuss requirements, lawyers writing first drafts on their own, and additional Q&A and revisions over email. Using AI makes it practical to skip the first two parts and cut down on the third significantly.
Here's another one: doing security audits of customer code bases for a company that currently advertises its use of AI as a cost-saving/productivity-enhancing mechanism. Before they'd integrated AI into their platform, I would frequently get rave reviews for the quality and professionalism of my issue reports. After they added AI writing assistance, nothing changed other than my ability to generate a greater number of reports in the same number of billable hours. What you're suggesting effectively amounts to choosing to deliver less value out of ego. I still have to understand my own work product, or I wouldn't be able to produce it even with AI assistance. If someone thinks that somehow makes the product less "interesting", well then I guess it's a good thing my job isn't entertainment.
I'd be curious to know whether your legal documents are as good as they would be without LLMs. I wouldn't be surprised at all if they were worse, but cheaper. Speaking of security audits, that's actually a problem I've seen: LLMs make it harder to detect bad audits, and in my experience I have more often been confronted with bad security audits than with good ones.
For both examples, you say "LLMs are useful to make more money". I say "I believe that LLMs lower the quality of the work". It's not incompatible.
As much as the author rightfully complains about the example in the post, a version that only said "explain the downsides of Euler angles in robotics and suggest some alternatives" would obviously be far worse. In this case, the AI helped elevate clear F-level work to maybe a C. That's not an indictment of AI; it's an indictment of low-quality work. LLMs lower the bar to produce passable-looking bad work, but they also lower the bar to produce excellent work. The confirmation bias here is that we don't know how many cases of B-level work became A papers with AI assistance, because those instances don't stand out in the same way.
In the audit example, LLMs aren't doing the audit. They synthesize my notes into a useful starting point to nullify writer's block, and let me focus more of my time on the hard or unique aspects of a given report. It's like having an intern write the first draft for me, typically with some mistakes or oversights, occasionally with a valuable additional insight thrown in, and often with links to a few helpful references for the customer that I wouldn't necessarily have found and included on my own. That doesn't lower the quality; it improves it.
As far as the legal example, it really depends on the complexity of a given instance and the guidance you've provided to your lawyers. A good lawyer won't sign off on something that fails to meet the requested quality bar (if anything, the financial incentive would be for them to err on the side of conservatism and toss out the draft you'd provided). But of course this all depends on you having a clear enough understanding of what you're trying to accomplish, and enough familiarity with legal documents and proficiency with language to shape everything into a passable first draft. AI speeds this up, but if you don't know what you're doing then the AI won't solve that for you. It's a tool like any other, and can be used properly or improperly.
I think that mindset directly correlates with the kind of AI use that prompted this article: "It doesn't matter" in your eyes. You don't see the task as important, only the output and that it makes you money. The craft is less important than what you can sell it for.
Fairly or unfairly, people (including you) will inexorably come to see anything done with AI as ONLY done with AI, and automatically assume that anyone could have done it.
In such a world, someone could write the next Harry Potter and it would be lost in a sea of a million mediocre works that are roughly similar. Hidden in plain sight forever. There would be no point in reading it, because it is probably the same slop I could get by writing a one-paragraph prompt. It would be too expensive to discover otherwise.
I'm expanding on the author's point that the hard part is the input, not the output. Sure someone else could produce the same output as an LLM given the same input and sufficient time, but they don't have the same input. The author is saying "well then just show me the input"; my counterpoint is that the input can often be vastly longer and less organized or cohesive than the output, and thus less useful to share.
To be fair, the first Harry Potter is a kinda average British boarding school story. Rowling is barely an adequate writer (and it shows badly in some of the later books). There was a reason she got rejected by so many publishers.
However, Netscape was going nuts and the Internet was taking off. Anime was going nuts and produced some of the all time best anime. MTV animation went from Beavis and Butthead to Daria in this time frame. Authors were engaging with audiences on Usenet (see: Wheel of Time and Babylon 5). Fantasy had moved from counterculture for hardcore nerd boys to something that the bookish female nerds would engage with.
Harry Potter dropped onto that tinder and absolutely caught fire.
It certainly wasn't the writing that elevated it. I think it was as simple as tapping into an audience who for once wasn't raised in some nuclear family: a Cinderella-esque tale of being whisked away from abuse, mixed with a hero's journey toward his inevitable clash with the very evil that set it all in motion.
The movies definitely helped too. The first few were very well done, with excellent child actors. Watching many other fantasy adaptations try to replicate that really shows just how the stars aligned to make HP a success.
The world-building is meh at best. The magic system is perfunctory. But the characters are strong and the plot is interesting from beginning to end.
This has already been the case for decades. There are probably brilliant works sitting out there on AO3 or whatnot. But you'll never find them because it's not worth wading through the junk. AI merely accelerates what was already happening.
I think "merely" is underselling the magnitude of the effect this can have. Asset stores overnight went from "okay, I need to dig hard to find something good" to outright useless, as they're flooded with unusable slop. Google somehow got worse overnight for technical searches that aren't heavily queried.
I didn't really desire such an acceleration of slop, thanks. At least I could feel good knowing that human-made slop was sometimes learned from.
I think you are overestimating the people who submit this slop. It’s more like “here’s my assignment, what’s the answer”
I agree, it's weird for parents to say, "Jump through these hoops, and for every dollar you earn grinding sesame for some company, we'll give you an additional two."
Working and educating yourself is decent and dignified, no? Is this a bad deal?
I think that depends radically on the nature of the work. I hold a BS in Computer Science but am at an organization that requires me to use LLMs as part of my performance evaluation; I could protest, but it puts my immigration status at risk (my employer has sponsored me into my current country). I view the things asked of me (using LLMs) as degrading, but I'm unable to effectively protest that despite being well-regarded as an engineer (by peers and past employers) and credentialed (BS in CS).
Put differently, most people do A Job because they need to put food on the table. One of my partners used to work in the veterinary field, which took an immense physical toll on them. They're much happier being (f)unemployed currently, being able to work in the garden and make good food and produce art, but our finances are suffering for it; they're hunting for jobs, but most of the current openings are pretty bad in terms of work/life balance and future opportunity.
Working is not inherently necessary; in our current economic system it's exploitatively-required in order to live any sort of decent and dignified life, and there's loads of stories about people who work but aren't treated with dignity (thru healthcare or housing or food strife).
Clearly the billionaires who made it so a decent job isn't even guaranteed to cover rent.
It's not like professors get real training either, but the guy doesn't seem to have had any real pedagogy.
I guess that I'm driving at that this guy is awfully young and the essay was a hot take. We should judge it accordingly.
In that way, the prompt is more interesting, and I can’t tell you how many times I’ve gone to go write a prompt because I dunno how to write what I wanna say, and then suddenly writing the prompt makes that shit clear to me.
In general, I’d say that AI is way more useful for compressing complex ideas into simple ones than for expanding simplistic ideas into complex ones.
I can describe a novel physics model for a video game. I can do a refresher on concepts like friction, air resistance, gravity, etc. that I don't remember well from school. Then I can describe the constraints and generate code to satisfy it.
If I were to go and learn the physics really in depth and then code it myself, it would take 10x longer.
In a world where the LLM can do the building, the engineer is no longer required.
In my experience, an ideas leader (you know the type) will fail at telling a machine exactly what to do and get bored with the inevitable edge cases, computers saying no, and non-ideas drudgery. This is where I believe every no-code and low-code and WYSIWYG platform and now LLMs fall apart.
A major aspect of programming is translating the messy meatspace into something an extremely fast moron (a computer - and I wish I had coined this term) understands. And as much of a step change as LLMs for writing code are, I have yet to see them take this step.
If I spend 1 hour writing 500 words of prompt, attach X additional rows of data (e.g. rows from a table), and the LLM returns X rows of perfect answers, it shouldn't matter that the output ratio is worse than if I had typed those characters myself.
The important thing is whether within that 1 hour (+ few minutes of LLM processing) I managed to get the job done quicker or not.
It's similar to programming: the point of using LLMs is not necessarily to write better code than I personally could, but to write good-enough code much faster than I ever would.
Bingo. It can be a rubber duck that echoes your mistakes back. Unfortunately, as other commenters have pointed out, the prompt may not be as interesting/iterative as we might suppose: "Here's the assignment, what's the answer".
The issue, IMO, is that some people throw in a one-shot, short prompt, and get a generic, boring output. "Garbage in, generic out."
Here's how I actually use LLMs:
- To dump my thoughts and get help organizing them.
- To get feedback on phrasing and transitions (I'm not a native speaker).
- To improve tone, style (while trying to keep it personal!), or just to simplify messy sentences.
- To identify issues, missing information, etc. in my text.
It’s usually an iterative process, and the combined prompt length ends up longer than the final result. And I incorporate the feedback manually.
So sure, if someone types "write a blog post about X" and hits go, the prompt is more interesting than the output. But when there are five rounds of edits and context, would you really rather read all the prompts and drafts instead of the final version?
(if you do: https://chatgpt.com/share/6817dd19-4604-800b-95ee-f2dd05add4...)
Thanks for the example chat, it was a valuable lesson for me!
Ironically enough, as I was reading your post this is what convinced me it was written by ChatGPT.
FWIW, the initial draft you gave to chatgpt is better than what you posted.
I think you missed the point of the article. They did not mean it literally: it's a way to say that they are interested in what you have to say.
And that is the point that is extremely difficult to make students understand. When a teacher asks a student to write about a historical event, it's not just some kind of ceremony on the way to a degree. The end goal is to make the student improve in a number of skills: gathering information, making sense of it, absorbing it, being critical about what they read, eventually building an opinion about it.
When you say "I use an LLM to dump my thoughts and get help organising them", what you say is that you are not interested in improving your ability to actually absorb information. To me, it says that you are not interested in becoming interesting. I would think that it is a maturity issue: some day you will understand.
And that's what the article says: I am interested in hearing what you have to say about a topic that you care about. I am not interested in anything you can do to pretend that you care or know about it. If you can't organise your thoughts yourself, I don't believe that you have reached a point where you are interesting. Not that you will never get there; it just takes practice. But if you don't practice (and use LLMs instead), my concern is that you will never become interesting. That time is wasted; I don't want to read what your LLM generated from stuff you didn't care to absorb in the first place.
Maybe an LLM could have helped the writer say what they meant to.
- To "Translate to language XYZ", which is sometimes not straightforward and needs iterating, like "Translate to language <LANGUAGE> used by <PERSON ROLE> living in <CITY>" and so on.
And the author is right. I use it as a second-language user, so the LLM produces better text than I do myself. However, I am not going to share the prompt, as it is useless (foreign language) and too messy (bits of draft text) for the reader. I would compare it to passing a book draft through an editor and a translator.
You speak English? Write and send your message in English. The receiver can copy-paste it in a translator. This way, they will know that they are not reading the original. So if your translated message sounds inaccurate, offensive or anything like that, they can go back to your original message.
The world will be consumed by AI.
Once upon a time only the brightest (and / or richest) went to college. So a college degree becomes a proxy for clever.
Now since college graduates get the good jobs, the way to give everyone a good job is to give everyone a degree.
And since most people are only interested in the job, not the learning that underpins the degree, well, you get a bunch of students that care only for the pass mark and the certificate at the end.
When people are only there to play the game, then you can't expect them to learn.
However, while 90% will miss the opportunity right there in front of them, 10% will grab it and suck the marrow. If you are in college I recommend you take advantage of the chance to interact with the knowledge on offer. College may be offered to all, but only a lucky few see the gold on offer, and really learn.
That's the thing about the game. It's not just about the final score. There's so much more on offer.
Learning is not just a function of aptitude and/or effort. Interest is a huge factor as well, and even for a single person, what they find interesting changes over time.
I don't think it's really possible to have a large cohort of people pass through a liberal arts education, with everyone learning the same stuff at the same time, and have a majority of them "suck the marrow" out of the opportunity.
For us the curriculum was the start of the learning, not the end. We'd get a weekly assignment that could be done in an afternoon. Most of the class did the assignments, and that was enough.
There was a small group of us that lived (pretty much) in the lab. We'd take the assignment and run with it, for days, nights, spare periods, whatever. That 10 line assignment? We turned it into 1000 lines every week.
For example the class on sorting might specify a specific algorithm. We'd do all of them. Compete against each other to make the fastest one. Compare one dataset to another. Investigate data distributions. You know, suck the marrow.
(Our professors would also swing by the lab from time to time to see how things were going, drop the odd hint, or prod the bear in a direction and so on. And this is all still undergrad.)
I can imagine a History major doing the same. Researching beyond the curriculum. Going down rabbit holes.
My point is though is that you're right. You need to be interested. You need to have this compulsion. You can't tell a person "go, learn". All you can do is offer the environment, sit back, and see who grabs the opportunity.
I get that you can't imagine this playing out. To those interested only in the degree, it's unimaginable. And no, as long as burning desire is not on the entry requirements, it most certainly will not be the majority.
In truth, the lab resources would never have coped if the majority did what we did.
By 'liberal arts' I meant the common 4 year, non-vocational education. My major was CS too, but well over half of the time was spent on other subjects.
> I get that you can't imagine this playing out. To those interested only in the degree, it's unimaginable
I can easily imagine what you describe playing out. I just wouldn't call it 'sucking the marrow' (unless you were equally avid in all your classes, which time likely would not permit).
But as you allude to in your last point, the system isn't really designed for that. It's nice when it does effectively support the few who have developed the interest, and have extra time to devote to it, as it did for you.
I'd rather see systems that were designed for it though.
This is because that is what companies care about. It's not a proxy for cleverness or intelligence - it's a box to check.
If you accept that the degree/job relationship is the start, not end, of the reason for being there, then you see other things too.
There are opportunities around the student which are for them, not for their degree, not for their job. There are things you can learn, and never be graded. There are toys to play with you'll never see again. There are whole departments of living experts happy to answer questions.
For example, (this is pre google) I wrote a program and so needed to understand international copyright. I could have gone to the library and read about it. Instead I went to the law faculty, knocked on the door, and found their professor who specialized in intellectual property.
Since the program I wrote was in the medical space, I went to the medical campus, to the medical research library, and found tomes that listed researchers who might benefit. I basically learned about marketing.
If all you care about is the company job, then all you'll see is the degree.
Then they fail to actually learn anything, apply for jobs, and try to cheat the interviewers using the same AI that helped them graduate. I fear that LLMs have already fostered the first batch of developers who cannot function without them. I don't even mind that you use an LLM for parts of your job, but you need to be able to function without it. Not all data is allowed to go into an AI prompt, some problems aren't solvable with LLMs, and you're not building your own skills if you rely on generated code/configuration for the simpler issues.
That is, the job of a professional programmer includes having produced code that they understand the behavior of. Otherwise you’ve failed to do your due diligence.
If people are using LLMs to generate code, and then actually doing the work of understanding how that code works… that’s fine! Who cares!
If people are just vibe coding and pushing the results to customers without understanding them, they are wildly unethical and irresponsible. (People have been doing this for decades; they didn't have AI to optimize the situation, but they managed to do it by copy-pasting from Stack Overflow.)
I have met maybe two people who truly understood the behaviour of their code, and both employed formal methods. Everyone else, including myself, is at varying levels of confusion.
Playing the contrarian here, but I'm from a batch of developers that can't function without a compiler, and I'm at 10% of what I can do without an IDE and static analysis.
Sure, there's a huge jump from a line editor like `ed` to a screen editor like `vi` or `emacs`, but from there on it was diminishing returns really (a good debugger was usually the biggest benefit next). I've also had the "pleasure" of having to use `echo`, `cat` and `sed` to edit complex code in a restricted, embedded environment, and while it made iterations slower, not that much slower than if I had a full IDE at my disposal.
In general, if I am in a good mood (and thus not annoyed at having to do so many things "manually"), I am probably only 20% slower than with my fully configured IDE at coding things up, which translates to less than 5% of slow down on actually delivering the thing I am working on.
Same with more advanced editors and IDEs. They help with tediousness, which can hinder insight, but they do not provide insight if you do not have the foundation.
A compiler translates _what you have already implemented_ into another computer runnable language. There is an actual grammar that defines the rules. It does not generate new business logic or assumptions. You have already done the work and taken all the decisions that needed critical thought, it's just being translated _instruction by instruction_. (btw you should check how compilers work, it's fun)
Using an LLM is more akin to copying from Stackoverflow than using a compiler/transpiler.
In the same way, I see org charts that put developers above AI managers, which are above AI developers. This is just smoke. You can't have LLMs generating thousands of lines of code independently. Unless you want a dumpster fire very quickly...
(Yes, these are people with developer jobs, often at "serious" companies.)
Maybe you mean people who are bad at interviews? Or people whose job isn't actually programming? Or maybe "lots" means "at least one"? Or maybe they can strictly speaking do fizzbuzz, but are "in any case bad programmers"? If your claim is true, what do these people do all day (or, let's say, did before LLMs were a thing...)?
> Maybe you mean people who are bad at interviews?
No, the opposite. These developers learn the relevant buzzwords and can string them together convincingly, but fail to actually understand what they're regurgitating. (Very similar to an LLM, actually.)
E.g., these people will throw words like "Dunder method" around with great confidence, but then will completely melt down for fifteen minutes if a function argument has the same name as a module.
When on the job these people just copy-paste existing code from the "serious company" monorepo all day, every day. They call it "teamwork".
I’ve met some really terrible programmers, and some programmers who freeze during interviews.
Predictably they end up with some people on the range from "can't code at all" to "newbie coder without talent"
I don't. I think the world is falling into two camps with these tools and models.
> I now circle back to my main point: I have never seen any form of creative generative model output (be that image, text, audio, or video) which I would rather see than the original prompt. The resulting output has less substance than the prompt and lacks any human vision in its creation. The whole point of making creative work is to share one’s own experience
Strong disagree with Clayton's conclusion.
We just made this with AI, and I'm pretty sure you don't want to see the raw inputs unless you're a creator:
https://www.youtube.com/watch?v=H4NFXGMuwpY
I think the world will be segregated into two types of AI user:
- Those that use the AI as a complete end-to-end tool
- Those that leverage the AI as tool for their own creativity and workflows, that use it to enhance the work they already do
The latter is absolutely a great use case for AI.
I am not a creator but I am interested in generative AI capabilities and their limits, and I even suffered through the entire video which tries to be funny, but really isn't (and it'd be easier to skim through as a script than the full video).
So even in this case, I would be more interested in the prompt than in this video.
"Tall man, armor that is robotic and mechanical in appearance, NFL logo on chest, blue legs."
And so on, embedded in node wiring diagrams, fiddly configs, specialized models for bespoke purposes, "camera" movements, etc.
Seeing this non-compelling prompt would tell me right off the bat that I wouldn't be interested in the video either.
The video is not exactly great, IMO.
Because those who recruit based on the degree aren't worth more than those who get a degree by using LLMs.
Maybe it will force a big change in the way students are graded. Maybe, after they have handed in their essay, the teacher should just have a discussion about it, to see how much they actually absorbed from the topic.
Or not, and LLMs will just make everything worse. That's more likely IMO.
To actually teach this, you do something like this:
"Here's a little dummy robot arm made out of Tinkertoys. There are three angular joints, a rotating base, a shoulder, and an elbow. Each one has a protractor so you can see the angle.
1. Figure out where the end of the arm will be based on those three angles. Those are Euler angles in action. This isn't too hard.
2. Figure out what the angles should be to touch a specific point on the table. For this robot geometry, there's a simple solution, for which look up "two link kinematics". You don't have to derive it, just be able to work out how to get the arm where you want it. Is the solution unambiguous? (Hint: there may be more than one solution, but not a large number.)
3. Extra credit. Add another link to the robot, a wrist. Now figure out what the angles should be to touch a specific point on the table. Three joints are a lot harder than two joints. There are infinitely many solutions. Look up "N-link kinematics". Come up with a simple solution that works, but don't try too hard to make it optimal. That's for the optimal controls course."
This will give some real understanding of the problems of doing this.
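For the curious, step 2 above can be sketched in a few lines. Link lengths and names here are my own assumptions (unit-length links), and the two returned solutions are the elbow-down and elbow-up branches the hint alludes to:

```python
import math

L1, L2 = 1.0, 1.0   # link lengths (arbitrary units)

def forward(t1, t2):
    """Forward kinematics: joint angles -> end-effector position."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def inverse(x, y):
    """Inverse kinematics: returns both (shoulder, elbow) solutions."""
    # Law of cosines gives the elbow angle from the target distance.
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    solutions = []
    for t2 in (math.acos(c2), -math.acos(c2)):   # the two elbow branches
        t1 = math.atan2(y, x) - math.atan2(L2 * math.sin(t2),
                                           L1 + L2 * math.cos(t2))
        solutions.append((t1, t2))
    return solutions
```

Running `forward` on each pair that `inverse` returns should land back on the target point, which also answers the ambiguity question: for a reachable interior point there are exactly two solutions.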
(I know jack all about robotics but that sounds like a pretty common assignment, the kind an LLM would regurgitate someone else's homework.)
The answer might be bogus, but the AI will sound confident all the way through.
No wonder sales and upper management love AI
One of the most fun classes I took in undergrad had people complaining about the professor’s teaching capabilities because it was too hard. We shouldn’t cater to the poor performers.
As long as LLM output is what it is, there is little threat of it actually being competitive on assignments. If students are attentive enough to paraphrase it into their own voice I'd call it a win; if they just submit the crap that some data labeling outsourcer has RLHF'd into a LLM, I'd just mark it zero.
If you’re not willing to cross out an entire assignment and return it to the student who handed it in with “ChatGPT nonsense, 0” written in big red letters at the top of it, you should ask yourself what is the point of your assignments in the first place.
But I get it, university has become a pay-to-win-a-degree scheme for students, and professors have become powerless to enforce any standards or discipline in the face of administrators.
So all they can do is give the ChatGPT BS the minimum passing grade and then philosophize about it on their blog (which the students will never read).
I would have thought that giving 0s to correct solutions would lead to successful complaints/appeals.
"No worthy use of an LLM involves other human beings reading its output."
If you use a model to generate code, let it be code nobody has to read: one-off scripts, demos, etc. If you want an LLM to prove a theorem, have it generate some Coq and then verify the proof mechanically. If you ask a model to write you a poem, enjoy the poem, and then graciously erase it.
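The theorem-proving case generalizes to any proof assistant; here is a minimal sketch in Lean 4 (Lean rather than Coq purely for brevity, and the theorem is a toy) of what "verify the proof mechanically" means:

```lean
-- Whether a human or an LLM wrote this proof term, the Lean kernel
-- checks it mechanically; a bogus proof simply fails to compile.
theorem sum_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A wrong claim cannot be smuggled through: uncommenting the line
-- below would be rejected by the type checker, no human review needed.
-- theorem bogus (a b : Nat) : a + b = a * b := Nat.add_comm a b
```

This is the one setting where nobody needs to read the model's output at all: the checker either accepts the proof or it doesn't.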
> Since this is a long thread and we're including a wider audience, I thought I'd add Copilot's summary...
Someone called them out for it, several others defended it. It was brought up in one team's retro and the opinions were divided and very contentious, ranging from, "the summary helped make sure everyone had the same understanding and the person who did it was being conscientious" to "the summary was a pointless distraction and including it was an embarrassing admission of incompetence."
Some people wanted to adopt a practice of not posting summaries in the future but we couldn't agree and had to table it.
If I were to include AI generated stuff into my communication I'd also make it clear as people might guess it anyway.
Rewriting it in my own words would clear the issue, but then why am I even using an AI in the first place?
Nobody will call you a lazy and incompetent coward for taking the default option: Hit reply-all, write your one-sentence response above all 50 quoted emails, hit send.
I'm going to call out what I see as the elephant in the room.
This is brand new technology and 99% of people are still pretty clueless at properly using it. This is completely normal and expected. It's like the early days of the personal computer. Or Geocities and <blink> tags and under construction images.
Even in those days, incredible things were already possible by those who knew how to achieve them. The end result didn't have to be blinking text and auto-playing music. But for 99% it was.
Similarly, with current LLMs, it's already more than possible to use them in effective ways, without obscuring meaning or adding superfluous nonsense, in ways to which none of the author's criticisms apply. People just don't know how to do it yet. Many never will, just like many never learnt how to actually use a PC past Word and Excel. But many others will learn.
No, this is just the de-facto "house style" of ChatGPT / GPT models, in much the same way that that particular Thomas Kinkade-like style is the de-facto "house style" of Stable Diffusion models.
You can very easily tell an LLM in your prompt to respond using a different style. (Or you can set it up to do so by telling it that it "is" or "is roleplaying" a specific type-of-person — e.g. an OP-ED writer for the New York Times, a textbook author, etc.)
People just don't ever bother to do this.
I feel like for most of my audiences it provides the proper anchor points for effective skimming while still giving me room to include further detail and explanation so that it's there as desired by the reader.
(And responding to my sibling comment, I also use em dashes and semicolons all the time. Has my brain secretly always been an LLM??)
https://chatgpt.com/share/6817c9f4-ed48-8010-bc3e-58299140c8...
In the real world I would at least remove the em dashes. They're a dead giveaway for LLM-generated text.
You can't just say "don't sound like an LLM." The LLM does not in fact know that it is "speaking like an LLM"; it just thinks that it's speaking the way the "average person" speaks, according to everything it's ever been shown. If you told it "just speak like a human being"... that's what it already thought it was doing!
You have to tell the LLM a specific way to speak. Like directing an image generator to use a specific visual style.
You can say "ape the style of [some person who has a lot of public writing in the base model's web training corpus — Paul Graham, maybe?]". But that coverage will be spotty, and it's also questionably ethical (just like style-aping in image generation.)
But an LLM will do even better if you tell it to speak in some "common mode" of speech: e.g. "an email from HR", "a shitpost rant on Reddit", or "an article in a pop-science magazine."
Every day I'm made more aware of how terrible people are at identifying AI-generated output, but also how obsessed with GenAI-vestigating things they don't like or wouldn't buy because they're bad.
I've used LLM before to document command-line tools and APIs I've made; they aren't the final product since I also tweaked the writing and fixed misunderstandings from the LLM. I don't think the author would appreciate the original prompts, where I essentially just dump a lot of code and give instructions in bullet point form on what to output.
This generated documentation is immensely useful, and I use it all the time myself. I prefer the documentation to reading the code because finding what I need at a glance is not trivial, nor is remembering all the conditions, prerequisites, etc.
That being said, the article seems to focus on a use case where LLM is ill-suited. It's not suited for writing papers to pretend you wrote a paper.
> I say this because I believe that your original thoughts are far more interesting
Looking at the example posted, I'm not convinced that most people's original thoughts on gimbal lock will be more interesting than a succinct summary by an LLM.
What about using LLMs to refine or sharpen your existing work? Similar to a Rubber ducky? If you're intentional about maintaining and understanding the theory behind the work, I've found it a useful tool.
> I believe that the main reason a human should write is to communicate original thoughts.
in fairness to the students, how does the above apply to school work?
why does a student write, anyway? to pass an assignment, which has nothing to do with communicating original thoughts-- and whose fault is that, really?
education is a lot of paperwork to get certified in the hopes you'll get a job. it's as bereft of intellectual life as the civil service examinations in imperial china. original thought doesn't enter the frame.
For example if you already have a theory of your code, and you want to make some stuff that is verbose but trivial. It is just more efficient to explain the theory to an LLM and extract the code. I do like the idea of storing the underlying prompt in a comment.
Same for writing. If you truly copy paste output, it's obviously bad. But if you workshop a paragraph 5 or 6 times that can really get you unstuck.
Even the Euler angles example: that output would be a good starting point for an investigation.
The most obvious ChatGPT cheating, like that mentioned in this article, is pretty easy to detect.
However, a decent cheater will quickly discover ways to coax their LLM into producing text that is very difficult to detect.
I think if I was in the teaching profession I'd just leave, to be honest. The joy of reviewing student work will inevitably be ruined by this: there is 0 way of telling if the work is real or not, at which point why bother?
Quite the assertion. If anything the evidence is in favor of the other direction.
It was eye opening to see that most students cheat. By the same token, most students end up successful. It’s why everyone wants their kids to go to college.
They don’t get an exemption if the parents don’t care.
Or: bad money chases out good. Idiots who cheat will get the job recommendations by maxing the grade. The person who actually works gets set back. Even worse, society at large loses an actually educated person. And lastly, a school is going to attempt to protect its name by preventing cheating.
There always was a bunch of realistic options for not actually doing your submitted work, and AI merely makes it easier, more detectable and more scalable.
I think it moves the needle from 40 to 75, which is not great, but you'd already be holding your nose at student work half of the time before AI, so teaching had to be about more than that (and TBH it was, when I was in school teachers gave no fuck about submitted work if they didn't validate it by some additional face to face or test time)
Do you have any examples of this? I've never been able to get direct LLM output that didn't feel distinctly LLM-ish.
A study on whether LLMs can influence people on r/changemymind
Teachers will lament the rise of AI-generated answers, but they will only ever complain about the blatantly obvious responses that are 100% copy-pasted. This is only an emerging phenomenon, and the next wave of prompters will learn from the mistakes of the past. From now on, unless you can proctor a room full of students writing their answers with nothing but pencil and paper, there will be no way to know for certain how much was AI and how much was original/rewritten.
Rule 3 of the subreddit quite literally bars people from accusing posts of being AI-generated. I have only visited it a few times in recent times, but I noticed quite a few GPT-speak posts with comments calling it out getting removed and punished.
But I know it's easier said than done: if you get a student to realise that the time they spend at school is a unique opportunity for them to learn and grow, then your job is almost done already.
I might argue you couldn't really tell if it was "real" before LLMs, either. But also, reviewing work without some accompanying dialogue is probably rarely considered a joy anyway.
Talk to the student, maybe?
I have been an interviewer in some startups. I was not asking leetcode questions or anything like that. My method was this: I would pretend that the interviewee is a new colleague and that I am having coffee with them for the first time. I am generally interested in my colleagues: who are they, what do they like, where do they come from? And then more specifically, what do they know that relates to my work? I want to know if that colleague is interested in a topic that I know better, so that I could help them. And I want to know if that colleague is an expert in a topic where they could help me.
I just have a natural discussion. If the candidate says "I love compilers", I find this interesting and ask questions about compilers. If the person is bullshitting me, they won't manage to maintain an interesting discussion about compilers for 15 minutes, will they?
It was a startup, and the "standard" process became some kind of cargo culting of whatever they thought the interviews at TooBigTech were like: leetcode, system design and whatnot. Multiple times, I could obviously tell in advance that even if this person was really good at passing the test, I didn't think it would be a good fit for the position (both for the company and for them). But our stupid interviews got them hired anyway and guess what? It wasn't a good match.
We underestimate how much we can learn by just having a discussion with a person and actually being interested in whatever they have to say. As opposed to asking them to answer standard questions.
This so much. A writing exercise sharpens your mind, it forces you to think clearly through problems, gives you practice in both letting your thoughts flow onto paper, and in post-editing those thoughts into a coherent structure that communicates better. You can throw it away afterwards, you'll still be a better writer and thinker than before the exercise.
I can’t imagine disincentivising actually getting stuck into programming and incentivising being good at regurgitating info in an exam room being a good thing for CS students.
As a professor today, assignments are the place where I’m happy to throw my students into the “deep end” (go learn a new language and a set of library toolkits while also learning this skill.) Exams just don’t provide that experience. Worse, students tend to cram for exams which is the worst way to retain information. I can’t even imagine thinking that the two are comparable in terms of retention and skill-building.
1. Take home projects where we programmed solutions to big problems. 2. Tests where we had to write programs in the exam on paper during the test.
I think the take home projects are likely a lot harder to grade without AI being used. I'd be disappointed if schools have stopped doing the programming live during tests though. Being able to write a program in a time constrained environment is similar to interviewing, and requires knowledge of the language and being able to code algorithms. It also forces you to think through the program and detect if there will be bugs, without being able to actually run the program (great practice for debugging).
I genuinely believe I had many excellent learning experiences at university, and I can assure you none of them were the times I had to re-write course info and hand it back to them in order to check off a box.
Maybe if one student does something, they might be wrong; but if 90% of students do something, perhaps the assignment is wrong? Doubling down and saying "we'll force them to do it by hand then!" is rather blindly missing the point here, no?
Those classes are what taught me how to study and really internalize the material. Helped me so much later in college too. I really can't imagine how kids these days are doing it.
Yeah, to recycle a comment [0] from a few months back:
> Yeah, one of their most "effective" uses is to counterfeit signals that we have relied on--wisely or not--to estimate deeper practical truths. Stuff like "did this person invest some time into this" or "does this person have knowledge of a field" or "can they even think straight." [...]we might have to cope by saying stuff like: "Fuck it, personal essays and cover letters are meaningless now, just put down the raw bullet-points."
In other words, when the presentation means nothing, why bother?
Asking students for regurgitated info and then being annoyed because they supplied generic regurgitated info is a somewhat telling attitude, no?
You're confusing the artifact with the purpose. Teachers across the nation are not trying to accumulate the largest corpus of distinct human-written reviews of The Great Gatsby.
The goal is to elicit some kind of mental practice, and the classic request is for something that helps prove it occurred. The issue is that such proofs are now being counterfeited with unprecedented scale and ease.
When those indicators become debased and meaningless, we need to look for other ways of motivating and validating.
Pre-AI, homework was often copied and then individuals just crammed for the tests.
AI is not the problem for these students, it's that many students are only in it for the diploma.
If it wasn't AI it would just be copying the assignment from a classmate or previous grad.
And I imagine the students who really want to learn are still learning because they didn't cheat then, and they aren't letting AI do the thinking for them now.
I believe that it has improved my writing productivity somewhat, especially when I'm tired and not completely on the ball. Although I don't usually reach for this most of the time (e.g. not for this comment).
Using LLMs to achieve this is just another step in the evolution of a broken education system. The fix? IMO, make the exams for the courses delayed by one semester. So during the exam study-period, the students have to 'catch up' on the lectures they had a few months ago.
> You only have to read one or two of these answers to know exactly what’s up: the students just copy-pasted the output from a large language model, most likely ChatGPT. They are invariably
This is validating. Your imitation completely fooled me (I thought it really was ChatGPT and expected to be told as much in an entirely unsurprising "reveal") and the subsequent description of the style is very much in agreement with how I'd characterize it.
In previous discussions here, people have tried to convince me that I can't actually notice these obvious signs, or that I'm not justified in detecting LLM output this way. Well, it may be the case that all these quirks derive from the definitely-human training data in some way, but that really doesn't make them Turing-test-passing. I can remember a few times that other people showed me LLM prose they thought was very impressive and I was... very much not impressed.
> When someone comments under a Reddit post with a computer-generated summary of the original text, I honestly believe that everyone in the world would be better off had they not done so. Either the article is so vapid that a summary provides all of its value, in which case, it does not merit the engagement of a comment, or it demands a real reading by a real human for comprehension, in which case the summary is pointless. In essence, writing such a comment wastes everyone’s time.
I think you've overlooked some meta-level value here. By supplying such a comment, one signals that the article is vapid to other readers who might otherwise have to waste time reading a considerable part of the article to come to that conclusion. But while it isn't as direct as saying "this article is utterly vapid", it's more socially acceptable, and also more credible than a bald assertion.
The question is: Should we limit AI to keep the old way of learning, or use AI to make the process better? Instead of fixing small errors like grammar, students can focus on bigger ideas like making arguments clearer or connecting with readers. We need to teach students to use AI for deeper thinking by asking better questions.
We need to teach students that asking the right questions is key. By teaching students to question well, we can help them use AI to improve their work in smarter ways. The goal isn’t to go back to old methods of iterating but to change how we iterate altogether.
> We need to teach students to use AI for deeper thinking by asking better questions.
Same thing here: the whole point of learning critical thinking is that you don't need to ask someone/something else. Teaching you how to ask the LLM to do it for you is not the same as teaching you how to actually do it.
In my opinion, we need to make students realise that their goal is to learn how to do it themselves (whatever it is). If they need an LLM to do it, then they are not learning. And if they are not learning, there is no point in going to school, they can go work in a field.
My take is: teach them to get better at asking questions, then teach them when to use their own understanding to change the answer for the better. How many times has an AI’s answer been a 5/10, and with a few fixes it’s a 9/10? That comes with time. Getting them asking questions first and learning the “when” later is better, at least to me.
Depends, I think. If we are talking about writing an essay (and I believe we are), then the LLM is somewhere between useless and counter-productive.
Of course, if the LLM is used to understand an RFC (I would debate how useful it is for that, but that's another discussion), then it's different. The goal was to understand the RFC, it doesn't really matter how you did it. But the goal of writing an essay is not to end up with a written essay at all. Nobody cares about it, you can burn it right after it's graded.
Whether it be writing or computer programming, or exercising, for that matter, if you aren't willing to put in the work to achieve your goals, why bother?
I couldn’t agree more with the sentiment of this article.
Writing yourself, _writing manually_, is much nicer for hearing your unfiltered thoughts than condensing them through an LLM and getting average-sounding sentences with no soul. To me, LLM writing is soulless. I even started turning off Grammarly and Copilot, as these were a mere distraction from the actual task at hand: writing. Instead of writing, I was constantly fixing grammar, and ultimately nothing got done. I love the gym analogy https://news.ycombinator.com/item?id=43888803 gave.
Maybe the problem is that the professor doesn't want to read the student work anyway, since it's all stuff he already knows. If they managed to use their prompts to generate interesting things, he'd stop wanting to see the prompts.
It can be used as a personal tutor. How awesome is it to have a tutor always available to answer almost any question from any angle to really help you understand? Yes, AI won't get everything right 100%, but for students who are still learning basics, it's fair to assume that having an AI tutor can yield far better results than having no tutor at all.
It can also be used as a tool for doing mundane work, so you can focus more on the interesting and creative work. Kind of like a calculator or a spreadsheet. Would math majors become better mathematicians if they had to do all calculations by hand?
I think instead of banning AI, education needs to reform. Teaching staff should focus less time on giving lectures and grading papers (those things can be recorded and automated) and more time on ORAL EXAMS where they really probe student's knowledge and there's no possibility of cheating.
Students can and should use AI to help them prepare. E.g. don't ask AI to write an essay for you, write it yourself and ask it to critique it. Don't ask it to give you answers for a test, ask it to ask you questions on the topic and find gaps in your knowledge. Etc.
And about oral exams ... I agree that these are amazing. In the seventies and eighties, when I was in school, most exams were oral. But our society is really afraid of them, because oral exams are always subjective.
I wish it was only at stoplights. But just a few days ago, I witnessed a totally unnecessary accident. The left-turn lane got a green light, and someone in the straight lane noticed the movement but didn't look up and drove right into the car in front of them...
The hardest hit industry by AI has been essay writing services.
If anything, it seems they're noticing because the AI is doing a worse job.
I almost got headaches after thinking intensely about problems and ways to solve them in lambda Prolog. It was most interesting and satisfying to physically feel the effect of high focus combined with applying what was, to me, a new logic.
Computer science at university taught me how to learn and explore new ideas. I might sound like my grandpa, who told me when I was 8 years old that using a calculator would lead to people being unable to count... and here I am saying that LLMs might lead to people who do not know how to write.
Actually, I am a bit concerned that we might produce more text in the short term, because it is becoming cheap to write tons of documentation with LLMs. But those feel like death by Terms and Conditions, i.e. text that no one reads. So not only would we lose our ability to write, we could also seriously damage our ability to read. Sure, an LLM can summarize as well, but then we lose the nuances.
Nature is lazy, but should we be lazy and delegate our ability to think (read/write) to software? Think about it :)
I agree with the broader point of the article in principle. We should be writing to edify ourselves and take education seriously because of how deep interaction with the subject matter will transform us.
But in reality, the mindset the author cites is more common. Most accounting majors probably don't have a deep passion for GAAP, but they believe accounting degrees get good jobs.
And when your degree is utilitarian like that, it just becomes a problem of minimizing time spent to obtain the reward.
I can’t be the only student who had both the experience of wonderful learning moments AND could see a badly designed assignment a mile off and wasn’t motivated to give such a thing my full attention, no?
As a side note, if you want the prompt, simply ask for it in the assignment. Asking students for one thing and then complaining when you don’t get another is insanity.
At first, I thought they didn't care. However, it was so pervasive that it couldn't be the only explanation. I was forced to conclude they trusted ChatGPT more than themselves to argue their case... (Some students did not care, obviously.)
Can't agree more, but let's take a step back and ask why somebody uses an LLM to generate content in the first place. IMO, everyone feels it's grunt work that doesn't deserve their full attention; they write it off as low-hanging fruit and let it be automated for them. It's a very thin line between automating everything mundane and losing actual creativity.
So it's the author's own ranking of whether writing the content with original thought has more value (a research report) or the LLM brings more value (e.g. very basic but heavy content like admission LoRs and essays).
I feel it's subjective to the situation being handled at the moment.
This is especially the case when you are about to complain about style, since that can easily be adjusted by simply telling the model what you want.
But I think there is a final point that the author is also wrong about, but that is far more interesting: why we write. Personally I write for 3 reasons: to remember, to share and to structure my thoughts.
If an LLM is better than me at writing (and it is), then there is no reason for me to write to communicate - it is not only slower, it is counterproductive.
If the AI is better at wrangling my ideas into some coherent thread, then there is no reason for me to do it. This one I am least convinced about.
AI is already much better than me at strictly remembering, but computers have been since forever; the issue is mostly convenient input/output. AI makes this easier thanks to speech-to-text input.
[0]: See eg. https://www.oneusefulthing.org/p/centaurs-and-cyborgs-on-the....
This is especially true for students.
Anyway, it's like trying to get better at running because bicycles became a thing: a) pretty soon you are not going to be able to keep up, and b) you are better off buying one anyway.
I think this will be no more of a contest than playing chess has been: humans don't stand a chance, but it also doesn't matter, because being better or worse than the AI is beside the point.
This is ridiculous. Even if the author has never typed a single character into a prompt box, he can still come to perfectly valid conclusions about the technology just by observing patterns in the outputs that are shoved into his face.
"I wish these astrophysicists had stated up front that they've never created a galaxy. How can they have a well-formed opinion on cosmic structures if they only ever observe them?"
The school should be drilling into students, at orientation, what some school-wide hard rules are regarding AI.
One of the hard rules is probably that you have to write your own text and code, never copy&paste. (And on occasions when copy&paste is appropriate, like in a quote, or to reuse an off-the-shelf function, it's always cited/credited clearly and unambiguously.)
And no instructors should be contradicting those hard rules.
(That one instructor who tells the class on the first day, "I don't care if you copy&paste from AI for your assignments, as if it's your own work; that just means you went through the learning exercise of interacting with AI, which is what I care about"... is confusing the students, for all their other classes.)
Much of society is telling students that everything is BS, and that their job is to churn BS to get what they want. Popular early "AI" usage practices so far look to be accelerating that. Schools should be dropping a brick wall in front of that. Well, a padded wall, for the students who can still be saved.
Even when no errors are introduced in the process, the outcome is always bad: 3 full paragraphs of text with bullets and everything where the actual information is just the original 1-2 sentences that the model was prompted with.
I'm never happy reading one of those; it's just a waste of time. A lot of the folks doing it are not native English speakers. But for their use case, older tools like Grammarly that help improve English writing are effective without the problematic decompression downsides of this class of LLM use.
Regardless of how much LLMs can be an impactful tool for someone who knows how to use one well, definitely one of the impacts of LLMs on society today is that a lot of people think that they can improve their work by having an LLM edit it, and are very wrong.
(Sometimes, just telling the LLM to be concise can improve the output considerably. But clearly many people using LLMs think the overly verbose style it produces is good.)
More than communicate, I would say to induce thoughts.
I write poetry here and there (on paper, just for me). I like how exploration through lexical and syntactic spaces can be intertwined with semantic and pragmatic matters. More importantly, I appreciate how careful thoughts play with attention and other uncharted thoughts. The invisible side effects on mental structures that happen in the creation of expression can largely outweigh the importance of what is left as a publicly visible artefact.
For a far more trivial example, we can think about how notes in the margin of a book can radically change the way we engage with the reading. Even a careful spare word highlight can be a world of difference in how we engage with the topic. It's the very opposite of "reading" a few pages before realizing that not a single thought percolated into consciousness as it was wandering on something else.
I don't understand this either. I use it a lot, but I never just use what an LLM says verbatim. It's so incredibly obvious it's not written by a human. Most of the time I write an initial draft, ask Claude to check it and improve it, and then I might touch up a few sentences here and there.
> Vibe coding; that is, writing programs almost exclusively by language-model generation; produces an artifact with no theory behind it. The result is simple: with no theory, the produced code is practically useless.
Maybe I still don't know what vibe coding is, but for the few times when I _can_ use an LLM to write code for me, I write a pretty elaborate instruction on what I want, how it should be written, ... Most of the time I use it for writing things I know it can do and seem tedious to me.
IMO the core problem is that in many cases this typical belief holds true.
I went to university to get a degree for a particular field of jobs. I'd generously estimate that about half of my classes actually applied to that field of jobs. The other half were required to make me a more "well rounded student" or something like that. But of course they were just fluff to maximize my tuition fees.
There was no university that offered a more affordable program without the fluff. After all, the fluff is a core part of the business model. But there isn't much economic opportunity without a diploma so students optimize around the fluff.
EDIT: Not a jab at the author per se, more that it's a third or fourth time I see this particular argument in the last few weeks, and I don't recall seeing it even once before.
To play devil's advocate: original code alienates you from many programming jobs. This was true before LLMs and remains true now. Many developers abhor original code. They need frameworks or packages from Maven, NPM, pip, or whatever. They need to be told exactly what to do in the code, but copy/paste is better, and a package that already does it for you is better still. In these jobs, yes, absolutely let a computer write it for you (or at least let anybody who is an untrusted outside stranger). Writing the code yourself will often alienate you from your peers and violate some internal process.
AI usage is a lot higher in my work experience among people who no longer code and are now in business/management roles or engineers who are very new and didn't study engineering. My manager and skip level both use it for all sorts of things that seem pointless and the bootcamp/nontraditional engineers use it heavily. Our college hires we have who went through a CS program don't use it because they are better and faster than it for most tasks. I haven't found it to be useful without an enormous prompt at which point I'd rather just implement the feature myself.
As it turns out, a well written ticket makes a pretty good input into an LLM. However, it has the added benefit of having my original thought process well documented, so sometimes I go through the process of writing a ticket / subtask, even if I ended up giving it to an AI tool in the end.
I actually don't think that it is good at that. I have heard of language teachers trying to use it to teach languages (it's a language model, it should be good at that, right?) and realising that it isn't.
Of course I understand the point of your message, which is that you feel your teachers were not helpful and I have empathy for that.
The point is: how good is it at teaching you how the language works? Like the grammar concepts, etc. I don't think it excels at that.
I have to admit I was a bit surprised how bad LLMs are at the continue-this-essay task. When I read it in the blog, I suspected this might have been a problem with the prompt or with using one of the smaller variants of Gemini. So I tried it with Gemini 2.5 Pro and iterated quite a bit, providing generic feedback without offering solutions. I could not get the model to form a coherent, well-reasoned argument. Maybe I need to recalibrate my expectations of what LLMs are capable of, but I also suspect that current models have heavy guardrails, use a low temperature, and have been specifically tuned to solve problems and avoid hallucinations as much as possible.
Off-topic, but on your parenthetical about SMS "wrong number" texts ("That scam doesn’t even make sense..."), part of why they do it is what's called "warming up" their sending number so that it's seen as legit by carriers and SMS filters. They're also seeing whether you're a real person who responds, in which case they can come back later with a more sophisticated scam (or re-sell your number, which is now more valuable, to another scammer for that purpose).
But you're right that it doesn't make much sense as a text you might receive naturally. Best thing is to not reply so that you're not feeding the beast.
Incidentally, I used ChatGPT to refresh my memory about how this works, and in its initial response it got it backwards, saying that "warming up" is what it does to your number. You can't trust these things one bit! Your post calls it "automated irresponsibility", and I like that.
Thanks for reading my post.
Your benchmark for "long flowing beautiful content" is apple.com? It's competing with Hemingway?
Can you share a link to what you mean?
Am I alone with this?
As always, I reject wholeheartedly what this skeptical article has to say about LLMs and programming. It takes the (common) perspective of "vibe coders", people who literally don't care what code says as long as something that runs comes out the other side. But smart, professional programmers use LLMs in different ways; in particular, they review and demand alterations to the output, the same way you would doing code review on a team.
The implication there is that this is acceptable to pass a robotics class, and potentially this gives them more information about students' comprehension to further improve their instruction and teaching ("...that they have some kind of internal understanding to share").
On that second point, I have yet to see someone demonstrate a "smart, professional programmer use LLMs" in a way where it produces high quality output in their area of expertise, while improving their efficiency and thus saving time for them (compared to them just using a good, old IDE)!
https://x.com/adamwathan/status/1911845073286803923
Armin Ronacher also talks about using LLMs quite a bit, but I don't have as good of an example from his tweets of him straightforwardly saying "yes, they are useful to me!"
It's not just nitpicking: maybe it's only their perception they are more productive? Or maybe they are. But saying it is not convincing enough for me.
I am not looking for formal study level of trust (though even that is frequently debatable), but multiple accounts of this where there is clear quality of output and significant (estimated) time savings would be wonderful.
I don't need much convincing that for many engineers, an LLM can bring incremental speed up (10-20%), though I think that really depends on the personality (eg. do you prefer to fix stuff not created by you or write it nicely from the start?).
So, observing a couple of my colleagues (I am an engineering manager, but have switched back and forth between management and IC roles for the last ~20 years), I've seen them either produce crap or spend so much time tuning the prompts that it would have been faster to do it without an LLM. They mostly used GitHub Copilot or ChatGPT (the most recent versions as of a few months ago).
I am also keeping out a keen eye for any examples of this (on HN in particular), but it usually turns out things like https://news.ycombinator.com/item?id=43573755
Again, I am not saying it's not being done, but I have struggled to find someone who would demonstrate it happen in a convincing enough fashion — I am really trying to imagine how I would best incorporate this into my daily non-work programming activities, so I'd love to see a few examples of someone using it effectively.
You see the same thing in every argument with LLM skeptics. "The code is bad. You don't even know what the code is doing." This is obviously false. A professional reads the code they commit and push. A professional doesn't push code they know to be bad.
Teaching a human with motivation, potential, and desire to learn is both easier and more rewarding (for most humans) than attempting to teach an LLM to write good code every time; humans tend to value their personal experiences more, whereas an LLM relies more on its training corpus. So when I've seen people massage LLM output until it's decent or excellent, it took them more time than it would have taken to write it from scratch without an LLM.
Which makes LLMs mostly a curiosity, and not a productivity booster. Can it get there? I hope it can, because that would be amazing.
> As a team lead, what would you think of a team member who consistently generated "crap" pull requests?
This is directly answered in my first paragraph: that's exactly what I would think of them, and how I would act on it. Your first question was:
> How is that better?
In the second paragraph, I explained why it's better to do a code review for a crappy pull request that's human-produced vs. LLM-generated: it is easier, faster, and more psychologically rewarding. If you are talking about a case where an inexperienced human uses an LLM to start off with a crappy code change, but then adapts the output during the review process and potentially learns through it (though research suggests people learn better when they make the mistakes themselves), they still won't be able to use the LLM to produce comparable code the next time. They'll have to review and improve the output by hand before putting it up for review by somebody else, thus negating any productivity gain (which was the original premise) and likely reducing the learning potential.
If there was a question I misinterpreted, please enlighten me. Thanks! :)
That's the answer you just gave me. Good note! (Darryl didn't get the job.)
You're dodging my point. If you are managing a team where people are using LLMs to generate pull requests full of "crap" code (your word), you have a mismanaged team, and would with or without the LLMs, because on a well-managed team people don't create PRs full of crap code.
I'm fine if you want to say LLMs are dangerous tools in the hands of unseasoned developers. Fine, you can have a rule where only trusted developers get to use them. That actually seems pretty sane!
But a trustworthy developer using an LLM isn't going to be pushed by the LLM into creating "crap" PRs, because the LLM doesn't make the PRs, the developer does. If the developer isn't reading the code the LLM is producing, they're not doing their job.
Sometimes you get people saying "ok but reading that code is work so how is the LLM saving any time", which is something you could also say about adding any human developer to a team; their code also has to get reviewed.
So help me understand how your concerns here cohere.
I don't have any issue with someone using an LLM but I have not observed any efficiency gain from those who do — that's my entire point, and the biggest selling point for using coding assistants. I've either seen them produce "crappy code" faster (which, ultimately, they could do by hand as well), or be slower than doing their work "manually".
At the same time, I disagree about teams producing lousy PRs being mismanaged by definition: there are circumstances where doing that is warranted (LLM or no LLM), as long as the long term direction is improving (less crappy code over time). There are plenty of nuances there too.
I’d argue that making students give generic regurgitated info as an assignment is the actual issue. Make a good assignment…
Yes, I know the subject area for which I write assessments, and I know whether what is generated is factually correct. If I’m not sure, I ask for web references using the web search tool.
https://chatgpt.com/share/6817c46d-0728-8010-a83d-609fe547c1...
> I didn’t realize how much that could throw things off until I saw an example where the object started moving in a strange way when it hit that point.
Would feel off, because why change the person? And even if it's intended, I'd say it's not formal to do in an assignment.
Also the point about default LLM settings not doing that is a good point.
I used to work at AWS (Professional Services), and there were a few different writing styles depending on who your audience was. I learned how to write in the different “house styles” before LLMs were a thing. So I know when something doesn’t sound right.
I use LLMs all of the time to write. I’m 99% certain that no one can tell the difference between my writing 100% without an LLM and my writing with one.
How about an emoji-like library designed exclusively for LLMs, so we can quickly condense context and mood without having to write a bunch of paragraphs, or the next iteration of “txt” speak for LLMs? What does the next step of users optimising for LLMs look like?
I miss the 80's/90's :-(
That part caught my attention. As an English-as-a-second-language speaker myself, I find it so difficult to develop any form of "taste" in English the same way I have in my mother tongue. A badly written sentence in my mother tongue feels painful in a sort of physical way, while bad English usually sounds OK to me, especially when asserted in the confident tone LLMs are trained in. I wish I could find a way to develop such a sense for the foreign languages I currently use.
Interestingly, when it comes to spoken English, I can learn by imitation way faster.
Libraries are still in every campus, often with internet access.
Traditional media have transitioned to become online content media farms. The NYT Crossword puzzle is now online. Millions of people do Wordle every day online.
This is just pushback. Every paradigm shift needs pushback in order to let the dust settle and for society to readjust and find equilibrium again.
if you can one-shot an answer to some problem, the problem is not interesting.
the result is necessary, but not sufficient. how did you get there? how did you iterate? what were the twists and turns? what was the pacing? what was the vibe?
no matter if with encyclopedia, google, or ai, the medium is the message. the medium is you interacting with the tools at your disposal.
record that as a video with obs, and submit it along with the result.
for high stakes environments, add facecam and other information sources.
reviewers are scrubbing through video in an editor. evaluating the journey, not the destination.
And reviewing video would be a nightmare.
more is better.
you can scrub video with your finger on an iphone. serious review is always high effort, video changes nothing.
Video in itself is not more information by definition. Just look at those automatically generated videos when you try finding a review on an unusual product.
books are great.
hundreds of hours of video of the author writing that book, is strictly more information.
Let's be real... Multi-modal LLMs are scrubbing through the journey :P
not every review is important.
No wonder OpenAI accidentally made LLMs even more sycophantic if this is what people want from them.
Personally, I’ve been enjoying using ChatGPT to explore different themes of writing. It’s fun. In my case the goal is specifically to produce artifacts of text that’s different from what I’d normally produce.
If you use AI, all that matters is your ability to specify the problem. Of course, that has always been the case; you can just iterate faster.
People say “I saved so much time on perf this year with the aid of ChatGPT,” but ChatGPT doesn’t know anything about your working relationship with your coworker… everything interesting is contained in the prompt. If you’re brain dumping bullet points into an LLM prompt, just make those bullets your feedback and be done with it? Then it’ll be clear what the kernel of feedback is and what’s useless fluff.
I think AI is good in two ways. One is when you use it as a small helper (basic questions, auto-completion...).
Two is for getting started on something that you have no idea about (not to teach you, but just to give you an idea of what it is and resources to learn more).
The punchline? Bullet point 3 was wrong (it was a PL assignment, and I'm 99% sure the AI was picking up on the word macro and regurgitating facts about LISP). 0 points all around, better luck next time.
There's so much bad writing of valuable information out there. The major sins being: burying the lede, no or poor sectioning, and general verbosity.
In some cases, like in EULAs and patents that's intentional.
It's the old joke of the teacher who wants students to try their best and says that failure doesn't matter. But when a student follows the process to the best of their ability and fails, they are punished, while the student who mostly follows the process and then fudges their answer to the correct one is rewarded.
Simply blaming models is an easy way out and creates little value. Maybe changing the medium and the exercise to which it transfers could be a thing?
It's time to get creative.
I wish to communicate four points of information to you. I’ll ask ChatGPT to fluff those up into multiple paragraphs of text for me to email.
You will receive that email, recognize its length and immediately copy and paste it into ChatGPT, asking it to summarize the points provided.
Somewhere off in the distance a lake evaporates.
The worst was the answer to the question "How can we utilize AI to greater effect in our work?". A nice open-ended question where they had a beautiful opportunity to show off how knowledgeable and forward thinking they are, right? Especially considering they're the ones behind the massive AI push our product has gone with as of late.
"You can ask it to write emails for you!" Was the one and only thing these multi-milli/billionaires could come up with. Our core product itself is literally an email interface, and we have an AI email generation feature built in...
I had to turn my webcam off because I genuinely laughed out loud at that response for how insanely elementary and useless it was as an answer. It also showed me these people do literally nothing other than answer emails - and even then they're too bloody lazy and give so little of a shit they can't even do that part themselves.
It’s like math homework, you always had to show your working not just give the answer. AI gives us an answer without the journey of arriving at one, which removes the purpose of doing it in the first place.
Since there is no prohibition on using an LLM, perhaps it should be mandatory to include the prompt as well when one is used. Feels like that could be the seed that sparks curiosity.
I wish there was some way to do the same for programming. Imagine a classroom full of machines with no internet connection, just a compiler and some offline HTML/PDF documentation of languages and libraries.
~12 years ago I took a course on concepts of programming languages. The exam consisted of programming exercises in a classroom without internet access.
Grading was based on the number of (hidden) unit tests that succeeded.
https://eelcovisser.org/blog/teaching/2014/02/05/programming...
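That hidden-unit-test grading scheme is simple to sketch. Here is a minimal Python illustration, with a hypothetical `median` exercise standing in for the real assignments (the course's actual harness isn't described beyond the pass count, so everything below is an assumption):

```python
import unittest

# The student's submission: for this sketch, assume it defines `median`.
# (In a real harness this would be loaded from the submitted file.)
def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

# Hidden tests: students never see these, only the final pass count.
class HiddenTests(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

    def test_single_element(self):
        self.assertEqual(median([7]), 7)

def grade(test_case):
    """Run a TestCase class and return (passed, total)."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(test_case)
    result = unittest.TestResult()
    suite.run(result)
    total = result.testsRun
    passed = total - len(result.failures) - len(result.errors)
    return passed, total

passed, total = grade(HiddenTests)
print(f"score: {passed}/{total}")  # → score: 3/3
```

Because the tests are hidden, students can't pattern-match their way to the expected outputs; they have to actually solve the exercise.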
It helps me spot the bits that feel flat or don’t add much, so I can cut or rework them—while still getting the benefit of the LLM’s idea generation.
It feels like we are getting to this weird situation where we just use LLMs as proxies, and the long, boring text is just for LLMs to talk to each other.
For example:
Person A to LLM A: Give me my money.
LLM A to LLM B: Long formal letter.
LLM B to Person B : Give me my money.
Hopefully, nothing is lost in translation.
If your assignment can be easily performed by an LLM, it’s a bad assignment. Teachers are just now finding out the hard way that these assignments always sucked and were always about regurgitating information pointlessly and weren’t helpful tools for learning lol. I did heaps of these assignments before the existence of LLMs, and I can assure you that the busywork was mostly a waste of time back then too.
People using LLMs is just proof they don’t respect your assignment - and you know what, if one person doesn’t respect your assignment, they’re probably wrong. But if 90% of people don’t respect your assignment? Maybe you should consider whether the assignment is the problem. It’s not rocket science.
> you're not always going to have a calculator in your pocket was the old fib, now it's s/calculator/llama/
So much of education when I was growing up was pointless box-checking, I'm a little satisfied that, finally, LLMs might swing the pendulum back towards valuable work. If any assignment an LLM could handle is limited to 10%, and 90% of the final grade is determined by oral exams, that seems positive.
Is "2x4" equal to 4 + 4, or 2 + 2 + 2 + 2 ? There's only one correct answer! "You ... pick up that can" wow such learning
If LLMs love to produce 3-bullet-point-bold-font copypasta, it's probably because it's the exact varietal of crap that garnered so many upvotes on Quora. Why would I be asked to write a 500-word essay when 50 words would suffice? Maybe let's move beyond regurgitation & rote drudgery
The very first time I enjoyed talking to someone in another language, I was 21. Then an exchange student, I had a pleasant and interesting discussion with someone in that foreign language. On the next day, I realised that I wouldn't have been able to do that without that foreign language. I felt totally stupid: I had been getting very good grades in languages for years at school without ever caring about actually learning the language. And now, it was obvious, but all that time was lost; I couldn't go back and do it better.
A few years earlier, I had this great history teacher in high school. Instead of making us learn facts and dates by heart, she wanted us to actually get an general understanding of a historical event. Actually internalise, absorb the information in such a way that we could think and talk about it. And eventually develop our critical thinking. It was confusing at first, because when we asked "what will the exam be about", she wouldn't say "the material in those pages". She'd be like "well, we've been talking about X for 2 months, it will be about that".
Her exams were weird at first: she would give us articles from newspapers and essentially ask what we could say about them. Stuff like "Who said what, and why? And why does this other article disagree with the first one? And who is right?". At first I was confused, but eventually it clicked and I started getting really good at this. Many students got there as well, of course. Some students never understood and hated her: their way was to learn the material by heart and recite it to get a good grade. And I eventually realised this: those students who were not good at this were actually less interesting when they talked about history. They lacked this critical thinking; they couldn't form their own opinions or actually internalise the material. So whatever they said on the topic was uninteresting: I had been following the same course, I knew which events happened and in which order. With the other students, where it "clicked" as well, I could have interesting discussions: "Why do you think this guy did this? Was it in good faith or not? Did he know about that when he did it? etc."
She was one of my best teachers. Not only did she get me interested in history (which had never been my thing), but she got me to understand how to think critically, and how important it is to internalise information in order to do that. I forgot a lot of what we studied in her class. I never lost the critical thinking. LLMs cannot replace that.
There’s a lot of “no, it is the children who are wrong” going on in academia right now and it’s an issue.
1. “When copying another person’s words, one doesn’t communicate their own original thoughts, but at least they are communicating a human’s thoughts. A language model, by construction, has no original thoughts of its own; publishing its output is a pointless exercise.”
LLMs, having been trained on the corpus of the web, I would argue communicate other humans' thoughts particularly well. Only in exercising an avoidance of plagiarism are the thoughts of other humans evolved into something closer to "original thought" for the would-be plagiarizer. But yes, at least a straight copy/paste retains the same rhetoric as the original human.
2. I’ve seen a few advertisements recently leverage “the prompt” as a source of visual appeal.
E.g., a new fast food delivery service starting their ad with some upbeat music and a visual presentation of somebody typing into an LLM interface, “Where’s the best sushi around me?” And then cue the advertisement for the product they offer.
I like reading and writing stories. Last month, I compared the ability of various LLMs to rewrite Saki's "The Open Window" from a given prompt.[1] The prompt follows the 13-odd attempts. I am pretty sure in this case that you'd rather read the story than the prompt.
I find the disdain that some people have for LLMs and diffusion models to be rather bizarre. They are tools that are democratizing some trades.
Very few people (basically, those who can afford it) write to "communicate original thoughts." They write because they want to get paid. People who can afford to concentrate on the "art" of writing/painting are pretty rare. Most people are doing these things as a profession with deadlines to meet. Unless you are GRRM, you cannot spend decades on a single book waiting for inspiration to strike. You need to work on it. Also, authors writing crap/gold at a per-page rate is hardly something new.
LLMs are probably the most interesting thing I have encountered since the computer itself. These puritans should get off of their high horse (or down from their ivory tower) and join the plebes.
[1] Variations on a Theme of Saki (https://gist.github.com/s-i-e-v-e/b4d696bfb08488aeb893cce3a4...)
Perhaps that's good, perhaps that's bad, but it certainly doesn't really allow him to see much of the appeal... yet
In a class setting, maybe make the AI-detection an element of take-home assignments - whoever gets the lowest AI-similarity score gets a few points of extra credit or something
As for computer science courses, I'm guessing it's hard to not write simple code that appears AI-generated...so maybe that kind of work needs a written summary to go along with the code as well
I guess you could require a special encrypted keyboard in your plan.
Having spent about two decades reading other humans' "original thoughts", I have nothing else to say here other than: doubt.
Forcing people to do these things supposedly results in a better, more competitive society. But does it really? Would you rather have someone on your team who did math because it let them solve problems efficiently, or did math because it’s the trick to get the right answer?
Writing is in a similar boat as math now. We’ll have to decide whether we want to force future generations to write against their will.
I was forced to study history against my will. The tests were awful trivia. I hated history for nearly a decade before rediscovering that I love it.
History doesn’t have much economical value. Math does. Writing does. But is forcing students to do these things the best way to extract that value? Or is it just the tradition we inherited and replicate just because our parents did?
I remember another parent ranting about their 3rd grade kid's "stupid homework" since it had kids learning different ways of summing numbers. I took a look at the homework and replied "wow, the basics of set theory are in here!" We then had a productive discussion of how that arithmetic exercise led to higher math and ways of framing problems.
Similarly, writing produces a different form of thought than oral communication does.
History is a bit different, but a goal of history and literature is (or at least should be) to socialize students and give them a common frame of reference in society.
Finally there is the "you don't know when you'll need it" defense. I have a friend who spent most of the last 20 years as a roofer, but his body is starting to hurt. He's pivoting to CAD drafting, and he's brushing up on some of those math skills he hated learning in school. And now he's arguing with his son about why they're important.
Those are the fundamental defenses: that we are seeking not skills but ways of viewing the world, plus you don't know what you'll need. There are obviously limits and tradeoffs to be made, but to some degree, yes, we should be forcing students (who are generally children, or at least inexperienced in a domain) to do things they don't like now for benefits later.
One counter argument to yours is that when you do need the skills, you can learn them later. It’s arguably easier than it has been at any point in human history. In that context, why front load people with something they hate doing, just because their parents think it’s a good idea? Let them wait and learn it when they need it.
The "wanting to like things" is a highly undervalued skill/trait. It comes down to building a habit through repetition - not necessarily having fun or getting results, but training your mind like a muscle to think putting in effort isn't that bad an activity.
For those growing up I think this is not something that is taught - usually it is already there as a childlike sense of wonder that gets pruned by controlling interests. If education forcing you to do math removes any enthusiasm you had for math, that's largely determined by circumstance. You'd need someone else to tell you the actual joys of X to offset that (and I'd guess most parents/teachers don't practice math for fun), or just spontaneously figuring out how interesting X is totally on one's own which is even rarer.
I didn't have either so I'm a mathophobe, but I'm alright with that since I have other interests to focus on.
Maybe professors are too stringent with their evaluation, or maybe they are not good at teaching people what a passable writing style is, or maybe students simply don't want to accept that if they don't excel at writing, a D or a C is perfectly fine. Perhaps teachers that look for good writing should have separate tests which evaluate students in both scenarios: with and without LLM help.
The same holds true for math: not everybody needs to know how to deduce a proof for every theorem, but in technical sciences, showing that ability and capability will demonstrate how much they are able to think and operate with precision on abstract concepts, very much like in programming. Even if coursework is a bit repetitive, practice does turn shallow knowledge into operational knowledge.
There are greater difficulties that people will have to do in their daily lives than being "forced" to learn how to read, write and do arithmetic. Maybe learning the lesson of overcoming smaller, difficult tasks will allow them to adapt to greater difficulties in the future.
To quote Seneca:
A gem can not be polished with friction, nor a man perfected without trials.
Pithy and succinct writing takes time.
<https://goodreads.com/book/show/585474.Writing_to_Learn>
The goal is to make something legible, but the reality is we are producing slop. I'm back to writing before my brain becomes lazy.
I've grown to respect typos and slightly misconstructed sentences. It's an interesting dynamic that now what appeared lazy to 2021 eyes actually indicates effort and what appeared polished and effortful in 2021 now indicates laziness.
An example is how the admins of my local compute cluster communicate about downtimes and upgrades etc and they are clearly using AI and it's so damn annoying, it feels like biting into cotton candy fluff. Just send the bullet points! I don't need emojis, I don't need the fake politeness. It's no longer polite to be polite. It doesn't signal any effort.
There's too much information in the world for it all to matter; I think that's the underlying reason.
As an example, most enterprise communication nears the levels of noise in its content.
So, why not let a machine generate this noise, instead?
Yes, totally. Unfortunately, it takes time and maturity to understand how this is completely wrong, but I feel like most students go through that belief.
Not sure how relevant it is, but it makes me think of two movies with Robin Williams: Dead Poets Society and Good Will Hunting. In the former, Robin's character manages to get students interested in stuff instead of "just passing the exams". In the latter, I will just quote this part:
> Personally, I don’t give a shit about all that, because you know what? I can’t learn anything from you I can’t read in some fuckin’ book. Unless you wanna talk about you, who you are. And I’m fascinated. I’m in.
I don't give a shit about whether a student can learn the book by heart or not. I want the student to be able to think on their own; I want to be able to have an interesting discussion with them. I want them to think critically. LLMs fundamentally cannot solve that.
Exploring a concept-space with LLM as tutor is a brilliant way to educate yourself. Whereas pasting the output verbatim, passing it as one’s own work, is tragic: skipping the only part that matters.
Vibe coding is fun right up to the point it isn’t. (Better models get you further.) But there’s still no substitute for guiding an LLM as it codes for you, incrementally working and layering code, committing to version control along the way, then putting the result through both AI and human peer code reviews.
Yet these all qualify as “using AI”.
We cannot get new language for discussing emerging distinctions soon enough. Without them we only have platitudes like “AI is a powerful tool with both appropriate and inappropriate uses and determining which is which depends on context”.
we agree. mixus makes that easy — across teams, classes, and communities.
That said, I myself am increasingly reading long texts written by LLMs and learning from them. I have been comparing the output of the Deep Research products from various companies, often prompting for topics that I want to understand more deeply for projects I am working on. I have found those reports very helpful for deepening my knowledge and understanding and for enabling me to make better decisions about how to move forward with my projects.
I tested Gemini and ChatGPT on “utilizing Euler angles for rotation representation,” the example topic used by the author in the linked article. I first ran the following metaprompt through Claude:
Please prepare a prompt that I can give to a reasoning LLM that has web search and “deep research” capability. The prompt should be to ask for a report of the type mentioned by the sample “student paper” given at the beginning of the following blog post: https://claytonwramsey.com/blog/prompt/ Your prompt should ask for a tightly written and incisive report with complete and accurate references. When preparing the prompt, also refer to the following discussion about the above blog post on Hacker News: https://news.ycombinator.com/item?id=43888803
I put the full prompt written by Claude at the end of the Gemini report, which has some LaTeX display issues that I couldn’t get it to fix: https://docs.google.com/document/d/1sqpeLY4TWD8L4jDSloeH45AI...
Here is the ChatGPT report:
https://chatgpt.com/share/681816ff-2048-8011-8e0f-d8cbad2520...
I know nothing about this topic, so I cannot evaluate the accuracy or appropriateness of the above reports. But when I have had these two Deep Research models produce similar reports on topics I understand better, they have indeed deepened my understanding and, I hope, made me a bit wiser.
The challenge for higher education is trying to decide when to stick to the traditional methods of teaching—in this case, having the students learn through the process of writing on their own—and when to use these powerful new AI tools to promote learning in other ways.
The kids these days got everything...
Sounds to me like they asked the students to just regurgitate generic course info and then complained when that’s what they received. This wasn’t going to lead to an excellent learning moment for these students whether an LLM was used or not.
(AI slop). If it's not worth writing, it's not worth reading.
Perfect.
Back in HS literature class, I had to produce countless essays on a number of authors and their works. It never once occurred to me that it was anything BUT an exercise in producing a reasonably well written piece of text, recounting rote-memorized talking points.
Through-and-through, it was an exercise in memorization. You had to recall the fanciful phrases, the countless asinine professional interpretations, brief bios of the people involved, a bit of the historical and cultural context, and even insert a few verses and quotes here and there. You had to make the word count, and structure your writing properly. There was never any platform for sharing our own thoughts per se, which was sometimes acknowledged explicitly, and this was most likely because the writing was on the wall: nobody cared about these authors or their works, much less enjoyed or took interest in anything about them.
I cannot recall a single thought I memorized for these assignments back then. I usually passed them with flying colors, but even for me, this was just pure and utter misery. Even in hindsight, the sheer notion that this was supposed to make me think about the subject matter at hand borders on laughable. It took astronomical efforts to even retain all the information required; where would I have found the power in me to go above and beyond, and meaningfully evaluate what was being "taught" to me in addition to all this? How would it have mattered (specifically in the context of the class)? Me actually understanding these topics and pondering them deeply is completely unobservable through essay writing, which was the sole method of grading. If anything, it made me biased against doing so, as it takes potentially infinite extra time and effort. And since there was approximately no way for our teacher to make me interested in literature either, he had no chance of achieving such lofty goals with me, if he ever actually aimed for them.
On the other side of the desk, he also had literal checklists. Pretty sure that you do too. Is that any environment for an honest exchange of thoughts? Really?
If you want to read people's original thoughts, maybe you should begin with not trying to coerce them into producing some for you on demand. But that runs contrary to the overarching goal here, so really, maybe it's the type of assignment that needs changing. Or the framework around it. But then academia is set in its ways, so really, there's likely nothing you can specifically do. You don't deserve to have to sift through copious amounts of LLM generated submissions; but the task of essay writing does, and you're now the one forced to carry this novel burden.
LLMs caught incumbent pedagogical practices with their pants down, and it's horrifying to see people still being in denial of it, desperately trying to reason and bargain their ways out of it, spurred on by the institutionally ingrained mutual-hostage scenario that is academia. *
* Naturally, I have absolutely zero formal relation to the field of pedagogy (just like the everyday practice of it in academia to my knowledge). This of course doesn't stop me from having an unreasonably self-confident idea on how to achieve what you think essay writing is supposed to achieve though, so if you want a terrible idea or two, do let me know.
Really? The example used was for a school test. Is there really much original thought in the answer? Do you really want to read the students original thought?
I think the answer is no in this case. The point of the test is to assess whether the student has learned the topic or not. It isn’t meant to share actual creative thoughts.
Of course, using AI to write the answer is contrary to the actual purpose, too, but it isn’t because you want to hear the student's creativity; it's because it fails to serve its purpose as a demonstration of knowledge.
Why else would you become a teacher, if you didn't care about what your students think?
Arguably, that's not what teachers mainly do (and to an ever-increasing extent).
Most knowledge is easily available. A teacher is teaching students to think in productive ways, communicate their thoughts and understand what others are trying to tell them. For this task, it's essential that the teacher has some idea what the students are thinking, especially when it's something original.
Perhaps the problem is that they are "graded", but this is to motivate the student, and runs against the age-old problem of gamification.
Relying on that to automatically detect their use makes no sense.
From a teaching perspective, if there is any expectation that artificial intelligence is going to stick, we need better teachers. Ones that can come up with exercises that an artificial intelligence can't solve, but are easy for humans.
But I don't expect that to happen. I expect instead text to become more irrelevant. It already has lost a lot of its relevancy.
Can handwriting save us? Partially. It won't prevent anyone from copying artificial intelligence output, but it will make anyone who does so think about what is being written. Maybe think: "do I need to be so verbose?"
Make better assignments.