Rather than banning AI, I'm showing students how to use it effectively as a personalized TA. I'm giving them this AGENTS.md file:
https://gist.github.com/1cg/a6c6f2276a1fe5ee172282580a44a7ac
And showing them how to use AI to summarize the slides into a quiz review sheet, generate example questions with answer walkthroughs, etc.
Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves: the projects are designed to draw them into the art of programming and give them decent, real-world coding experience that they will need, even if they end up working at a higher level in the future.
AI can be a very effective tool for education if used properly. I have used it to create a ton of extremely useful visualizations (e.g. how two's complement works) that I wouldn't have otherwise. But it is obviously extremely dangerous as well.
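A tiny sketch of the kind of thing those visualizations explain (my own illustration in Python, not the commenter's actual material), assuming 8-bit values: how two's complement encodes signed integers as bit patterns and decodes them back.

    # Hypothetical illustration: 8-bit two's-complement encode/decode.
    BITS = 8

    def to_twos_complement(value, bits=BITS):
        # Masking to the low `bits` bits makes negative values wrap around 2**bits.
        return format(value & ((1 << bits) - 1), f"0{bits}b")

    def from_twos_complement(pattern, bits=BITS):
        # The top bit carries weight -2**(bits-1), so large raw values become negative.
        raw = int(pattern, 2)
        return raw - (1 << bits) if raw >= (1 << (bits - 1)) else raw

    for v in (5, -5, -1, -128, 127):
        bit_string = to_twos_complement(v)
        print(f"{v:>5} -> {bit_string} -> {from_twos_complement(bit_string):>5}")
    # e.g. -5 -> 11111011: invert the bits of 5 (00000101 -> 11111010) and add 1.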
"It is impossible to design a system so perfect that no one needs to be good."
> Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves
This is the right thing to say, but even the ones who want to listen can get into bad habits in response to intense schedules. When push comes to shove and Multivariate Calculus exam prep needs to happen but you’re stuck debugging frustrating pointer issues for your Data Structures project late into the night… well, I certainly would’ve caved far too much for my own good.

IMO the natural fix is to expand your trusting, “this is for you” approach to the broader undergrad experience, but I can’t imagine how frustrating it is to be trying to adapt while admin & senior professors refuse to reconsider the race for a “””prestigious””” place in a meta-rat race…
For now, I guess I’d just recommend you try to think of ways to relax things and separate project completion from diligence/time management — in terms of vibes if not a 100% mark. Some unsolicited advice from a rando who thinks you’re doing great already :)
This is why I'm moving to in-person written quizzes to differentiate between the students who know the material and those who are just using AI to get through it.
I do seven quizzes during the semester so each one is on relatively recent material and they aren't weighted too heavily. I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz. I hated the high-pressure midterms/finals of my undergrad, so I'm trying to remove that for them.
The pressure was what got me to do the necessary work. Auditing classes never worked for me.
> I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz.
Isn't that what the lectures and homework are for?
Competing with LLM software users, 'honest' students would seem strongly incentivized to use LLMs themselves. Even if you don't grade on a curve, honest students will get worse grades, which will look worse to graduate schools, grant and scholarship committees, etc., in addition to the strong emotional component that everyone feels seeing an A or C. You could give deserving 'honest' work an A, but then all LLM users will get A's with ease. It seems like you need two scales, and how do you know who to put on which scale?
And how do students collaborate on group projects? Again, it seems you have two different tracks of education, and they can't really work together. Edit: How do class discussions play out with these two tracks?
Also, manually doing things that machines do much better has value but also takes valuable time from learning more advanced skills that machines can't handle, and from learning how to use the machines as tools. I can see learning manual statistics calculations, to understand them fundamentally, but at a certain point it's much better to learn R and use a stats package. Are the 'honest' students being shortchanged?
I find, as a parent, when I talk about it at the high school level I get very negative reactions from other parents. Specifically, I want high schoolers to be skilled in the use of AI, and in particular to have critical thinking skills around the tools, while simultaneously having skills assuming no AI. I don’t want the school to be blindly “anti AI” as I’m aware it will be a part of the economy our kids are brought into.
There are some head-in-the-sand, very emotional attitudes about this stuff. (And obviously idiotically uncritical pro-AI stances, but I doubt educators risk having those stances)
That said I agree with all your points too: some version of this argument will apply to most white collar jobs now. I just think this is less clear to the general population and it’s much more of a touchy, emotional subject in certain circles. Although I suppose there may be a point to be made about being slightly more cautious about introducing AI at the high school level, versus college.
No, it's not.
Nothing around AI past the next few months to a year is clear right now.
It's very, very possible that within the next year or two, the bottom falls out of the market for mainstream/commercial LLM services, and then all the Copilot and Claude Code and similar services are going to dry up and blow away. Naturally, that doesn't mean that no one will be using LLMs for coding, given the number of people who have reported their productivity increasing—but it means there won't be a guarantee that, for instance, VS Code will have a first-party integrated solution for it, and that's a must-have for many larger coding shops.
None of that is certain, of course! That's the whole point: we don't know what's coming.
The genie is out of the bottle, never going back
It's a fantasy to think it will "dry up" and go away
Some other guarantees over the next few years we can make based on history: AI will get better, faster, and more efficient, like everything else in CS
Plenty of tech becomes exploitative (or more exploitative).
I don't know if you noticed but 80% of LLM improvements are actually procedural now: it's the software around them improving, not the core LLMs.
Plus LLMs have huge potential for being exploitative. 10x what Google Search could do for ads.
I personally think GSuite is much better today than it was a decade ago, but that is separate
The underlying hardware has improved, the network, the security, the provenance
Specific to LLMs:
1. we have seen rapid improvements and there are a ton more you can see in the research that will impact the next round of model training/release cycles. Both algorithms and hardware are improving
2. Open weight models are within spitting distance of the frontier. Within 2 years, smaller and open models will be capable of what frontier is doing today. This has a huge democratization potential
I'd rather see AI as an opportunity to break the Oligarchy and the corporate hold over the people. I'm working hard to make it a reality (also working on atproto)
We can't fix social problems with technological solutions.
Every scalable solution takes us closer to Extremistan, which is inherently anti democratic.
Read The Black Swan by Taleb.
Show me actual studies that clearly demonstrate that not only does using an LLM code assistant help you write code faster in the short term, it also doesn't waste all that extra benefit by producing code that is that much harder to maintain in the long term.
Clearly AI is much faster and good enough to create new one-off bits of code.
Like I tend to create small helper scripts for all kinds of things, both at work and at home, all the time. Typically these would take me 2-4 hours and, aside from a few tweaks early on, they receive no maintenance as they just do one simple thing.
Now with AI coding these take me just a few minutes, done.
But I believe this is the optimal productivity sweet spot for AI coding, as no maintenance is needed.
I've also been running a couple experiments vibe-coding larger apps over the span of months and while initial ramp-up is very fast, productivity starts to drop off after a few weeks as the code becomes more complex and ever more full of special case exceptions that a human wouldn't have done that way. So I spend more and more time correcting behavior and writing test cases to root out insanity in the code.
How will this go for code bases which need to continuously evolve and mature over many years and decades? I guess we'll see.
Based on past experience entertaining inquisitors who:
1. Opened this line of debate similarly to you (i.e. the way you ask, the tone you use)
2. Were not interested in actual debate
3. Moved the goalposts repeatedly
I will not be doing so this time.
There are a whole range of interesting questions here that it’s possible to have a nuanced discussion about, without falling into AI hype and while maintaining a skeptical attitude. But you have to do it from a place of curiosity rather than starting with hatred of the technology and wishing for it to be somehow proved useless and fade away. Because that’s not going to happen now, even if the current investment bubble pops.
If anything, I see this moment as one where we can unshackle ourselves from the oligarchs and corporate overlords. The two technologies are AI and ATProto, I work on both now to give sovereignty back to we the people
My point is, I can get a somewhat-useful AI model running at slow-but-usable speed on a random desktop I’ve had lying around since 2024. Barring nuclear war there’s just no way that AI won’t be at least _somewhat_ beneficial to the average dev. All the AI companies could vanish tomorrow and you’d still have a bunch of inference-as-a-service shops appearing in places where electricity is borderline free, like Straya when the sun is out.
Yes, you, a hobbyist, can make that work, and keep being useful for the foreseeable future. I don't doubt that.
But either a majority or large plurality of programmers work in some kind of large institution where they don't have full control over the tools they use. Some percentage of those will never even be allowed to use LLM coding tools, because they're not working in tech and their bosses are in the portion of the non-tech public that thinks "AI" is scary, rather than the portion that thinks it's magic. (Or, their bosses have actually done some research, and don't want to risk handing their internal code over to LLMs to train on—whether they're actually doing that now or not, the chances that they won't in future approach nil.)
And even those who might not be outright forbidden to use such tools for specific reasons like the above will never be able to get authorization to use them on their company workstations, because they're not approved tools, because they require a subscription the company won't pay for, because etc etc.
So saying that clearly coding with LLM assistance is the future and it would be irresponsible not to teach current CS students how to code like that is patently false. It is a possible future, but the volatility in the AI space right now is much, much too high to be able to predict just what the future will bring.
The promise of AI is a capitalist's dream, which is why it's being pushed so much. Do more with less investment. But the reality of AI coding is significantly more nuanced, and particularly more nuanced in spaces outside of the SRE/devops space. I highly doubt you could realistically use AI to code the majority of significant software products (like, say, an entire operating system). You might be able to use AI to add additional functionality you otherwise couldn't have, but that's not really what the capitalists desire.
Not to mention, the models have to be continually trained, otherwise the knowledge is going to be dead. Is AI as useful for Rust as it is for Python? Doubtful. What about the programming languages created 10-15 years from now? What about when everyone starts hoarding their information away from the prying eyes of AI scraper bots to keep competitive knowledge in-house? Both from a user perspective and a business perspective?
Lots of variability here that literally nobody has any idea how any of it's going to go.
In spite of obvious contradictory signals about quality, we embrace the magical thinking that these tools operate in a realm of ontology and logic. We disregard the null hypothesis, in which they are more mad-libbing plagiarism machines which we've deployed against our own minds. Put more tritely: We have met the Genie, and the Genie is Us. The LLM is just another wish fulfilled with calamitous second-order effects.
Though enjoyable as fiction, I can't really picture a Butlerian Jihad where humanity attempts some religious purge of AI methods. It's easier for me to imagine the opposite, where the majority purges the heretics who would question their saints of reduced effort.
So, I don't see LLMs going away unless you believe we're in some kind of Peak Compute transition, which is pretty catastrophic thinking. I.e. some kind of techno/industrial/societal collapse where the state of the art stops moving forward and instead retreats. I suppose someone could believe in that outcome, if they lean hard into the idea that the continued use of LLMs will incapacitate us?
Even if LLM/AI concepts plateau, I tend to think we'll somehow continue with hardware scaling. That means they will become commoditized and able to run locally on consumer-level equipment. In the long run, it won't require a financial bubble or dedicated powerplants to run, nor be limited to priests in high towers. It will be pervasive like wireless ear buds or microwave ovens, rather than an embodiment of capital investment.
The pragmatic way I see LLMs _not_ sticking around is where AI researchers figure out some better approach. Then, LLMs would simply be left behind as historical curiosities.
The last part...I'm not sure. The idea that we will be able to compute-scale our way out of practically anything is so much taken for granted these days that many people seem to have lost sight of the fact that we have genuinely hit diminishing returns—first in the general-purpose computing scaling (end of Moore's Law, etc), and more recently in the ability to scale LLMs. There is no longer a guarantee that we can improve the performance of training, at the very least, for the larger models by more than a few percent, no matter how much new tech we throw at it. At least until we hit another major breakthrough (either hardware or software), and by their very nature those cannot be counted on.
Even if we can squeeze out a few more percent—or a few more tens of percent—of optimizations on training and inference, to the best of my understanding, that's going to be orders of magnitude too little yet to allow for running the full-size major models on consumer-level equipment.
I think that Microsoft will not be willing to operate Copilot for free in perpetuity.
I think that there has not yet been any meaningful large-scale study showing that it improves performance overall, and there have been some studies showing that it does the opposite, despite individuals' feeling that it helps them.
I think that a lot of the hype around AI is that it is going to get better, and if it becomes prohibitively expensive for it to do that (ie, training), and there's no proof that it's helping, and keeping the subscriptions going is a constant money drain, and there's no more drumbeat of "everything must become AI immediately and forever", more and more institutions are going to start dropping it.
I think that if the only programmers who are using LLMs to aid their coding are hobbyists, independent contractors, or in small shops where they get to fully dictate their own setups, that's a small enough segment of the programming market that we can say it won't help students to learn that way, because they won't be allowed to code that way in a "real job".
1) AI companies make money on the tokens they sell through their APIs. At my company we run Claude Code by buying Claude Sonnet and Opus tokens from AWS Bedrock. AWS and Anthropic make money on those tokens. The unit economics are very good here; estimates are that Anthropic and OpenAI have a gross margin of 40% on selling tokens.
2) Claude Code subscriptions are probably subsidized somewhat on a per token basis, for strategic reasons (Anthropic wants to capture the market). Although even this is complicated, as the usage distribution is such that Anthropic is making money on some subscribers and then subsidizing the ultra-heavy-usage vibe coders who max out their subscriptions. If they lowered the cap, most people with subscriptions would still not max out and they could start making money, but they'd probably upset a lot of the loudest ultra-heavy-usage influencer-types.
3) The biggest cost AI companies have is training new models. That is the reason AI companies are not net profitable. But that's a completely separate set of questions from what inference costs, which is what matters here.
Our university is slowly stumbling towards "AI Literacy" being a skill we teach, but, frankly, most faculty here don't have the expertise and students often understand the tools better than teachers.
I think there will be a painful adjustment period, I am trying to make it as painless as possible for my students (and sharing my approach and experience with my department) but I am just a lowly instructor.
People need to learn to do research with LLMs, code with LLMs, and how to evaluate artifacts created by AI. They need to learn how agents work at a high level, the limitations on context, and that models hallucinate and become sycophantic; how they need guardrails and strict feedback mechanisms if let loose; AI safety when connecting to external systems, etc.
You're right that few high school educators would have any sense of all that.
I do know people who would get egregiously wrong answers from misusing a calculator and would insist it couldn't be wrong.
Not to mention programming is a meta skill on top of “calculators”
This is my exact experience as well and I find it frustrating.
If current technology is creating an issue for teachers - it's the teachers that need to pivot, not block current technology so they can continue what they are comfortable with.
Society typically cares about work getting done and not much about how it got done - for some reason, teachers are so deep into the weeds of the "how" that they seem to forget that if the way to mend roads since 1926 has been to learn how to measure out, mix and lay asphalt patches by hand, in 2026 when there are robots that do that perfectly every time, they should be teaching humans to complement those robots or do something else entirely.
It's possible in the past, that learning how to use an abacus was a critical lesson but once calculators were invented, do we continue with two semesters of abacus? Do we allow calculators into the abacus course? Should the abacus course be scrapped? Will it be a net positive on society to replace the abacus course with something else?
"AI" is changing society fundamentally forever and education needs to change fundamentally with it. I am personally betting that humans in the future, outside extreme niches, are generalists and are augmented by specialist agents.
I had a discussion with a recruiter on Friday, and I said I guess the issue with AI vs human is, if you give a human developer who is new to your company tasks, the first few times you'll check their work carefully to make sure the quality is good. After a while you can trust they'll do a good job and be more relaxed. With AI, you can never be sure at any time. Of course a human can also misunderstand the task and hallucinate, but perhaps discussing the issue and the fix before they start coding can alleviate that. You can discuss with an AI as much as you want, but to me, not checking the output would be an insane move...
To return to the point, yeah, people will use AI anyway, so why not teach them about the risks. Also, LLMs feel like Concorde: it'll get you where you want to go very quickly, but at tremendous environmental cost (it's also very costly to the wallet, although the companies are now partially subsidizing your use in the hopes of getting you addicted).
I once got "implement a BCD decoder" with about a 1"x4" space to do it.
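For context, here's a rough sketch of what an answer to that kind of question might look like. This is just my illustration (the exam's exact spec isn't given, and it may well have wanted logic gates rather than code); it assumes a BCD-to-decimal decoder that maps 4 input bits to one of 10 output lines.

    # Hypothetical sketch: decode a 4-bit BCD digit into a one-hot list of 10 outputs.
    def bcd_decode(b3, b2, b1, b0):
        value = (b3 << 3) | (b2 << 2) | (b1 << 1) | b0
        if value > 9:
            raise ValueError("not a valid BCD digit")
        return [1 if i == value else 0 for i in range(10)]

    print(bcd_decode(0, 1, 1, 1))  # digit 7 -> [0, 0, 0, 0, 0, 0, 0, 1, 0, 0]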
I'm concerned about handwriting, which is a lost skill, and how hard that will be on the TAs who are grading the exams. I have stressed to students that they should write larger, slower and more carefully than normal. I have also given them examples of good answers: terse and to the point, using bulleted lists effectively, what good pseudo-code looks like, etc.
It is an experiment in progress: I have rediscovered the joys of printing & the logistics of moving large amounts of paper again. The printer decided halfway through one run to start folding papers slightly at the corner, which screwed up stapling.
I suppose this is why we are paid the big bucks.
Oh man, this reminds me of one test I had in uni, back in the days when all our tests were in class, pen & paper (what's old is new again?). We had this weird class that taught something like security programming in unix. Or something. Anyway, all I remember is the first two questions being about security/firewall stuff, and the third question was "what is a socket". So I really liked the first two questions, and over-answered for about a page each. Enough text to both run out of paper and out of time. So my answer to the 3rd question was "a file descriptor". I don't know if they laughed at my terseness or just figured since I overanswered on the previous questions I knew what that was, but whoever graded my paper gave me full points.
So how do you handle kids who can't write well? The same way we've been handling them all along — have them get an assessment and determine exactly where they need support and what kind of support will be most helpful to that particular kid. AI might or might not be a part of that, but it's a huge mistake to assume that it has to be a part of that. People who assume that AI can just be thrown at disability support betray how little they actually know about disability support.
It's embarrassing to see this question downvoted on here. It's a valid question, there's a valid answer, and accessibility helps everyone.
There's no such thing as "disabled people who can't write well"; there are individuals with specific problems and needs.
Maybe there's Jessica who lost her right hand and is learning to write with the left who gets extra time. Maybe there's Joe who has some form of nerve issue and uses a specialized pen that helps cancel out tremors. Maybe Sarah is blind and has an aide who writes it, or is allowed to use a keyboard, or or or...
Reasonable accommodations absolutely should be made for children that need them.
But also just because you’re a bad parent and think the rules don’t apply to you doesn’t mean your crappy kid gets to cheat.
Parents are the absolute worst snowflakes.
This is the key part. I'm doing a part-time graduate degree at a major university right now, and it's fascinating to watch the week-to-week pressure AI is putting on the education establishment. When your job as a student is to read case studies and think about them, but Google Drive says "here's an automatic summary of the key points" before you even open the file, it takes a very determined student to ignore that and actually read the material. And if no one reads the original material, the class discussion is a complete waste of time, with everyone bringing up the same trite points, and the whole exercise becomes a facade.
Schools are struggling to figure out how to let students use AI tools to be more productive while still learning how to think. The students (especially undergrads) are incredibly good at doing as little work as possible. And until you get to the end-of-PhD level, there's basically nothing you encounter in your learning journey that ChatGPT can't perfectly summarize and analyze in 1 second, removing the requirement for you to do anything.
This isn't even about AI being "good" or "bad". We still teach children how to add numbers before we give them calculators because it's a useful skill. But now these AI thinking-calculators are injecting themselves into every text box and screen, making them impossible to avoid. If the answer pops up in the sidebar before you even ask the question, what kind of masochist is going to bother learning how to read and think?
In my first year of college my calculus teacher said something that stuck with me: "you learn calculus getting cramps on your wrists". Yeah, AI can help you remember things and accelerate learning, but if you don't put in the work to understand things you'll always be behind people who know, at least from a bird's-eye view, what's happening.
Depends. You might end up going quite far without even opening up the hood of a car even when you drive the car everyday and depend on it for your livelihood.
If you're the kind that likes to argue for a good laugh, you might say "well, I don't need to know how my car works as long as the engineer who designed it does, or the mechanic who fixes it does" - and this is accurate, but it's also accurate that not everyone ended up being either the engineer or the mechanic. It's also untrue that, if it turned out to be extremely valuable to you to actually learn how the car worked, you wouldn't put in the effort to do so and be very successful at it.
All this talk about "you should learn something deeply so you can bank on it when you will need it" seems to be a bit of a hoarding disorder.
Given the right materials, support and direction, most smart and motivated people can learn how to get competent at something that they had no clue about in the past.
When it comes to smart and motivated people, the best drop out of education because they find it unproductive and pedantic.
My argument is that when you have at least a basic knowledge of how things work (be it as a musician, a mechanical engineer or a scientist) you are in a much better place to know what you want/need.
That said, smart and motivated people thrive if they are given the conditions to thrive, and I believe that physical interfaces have way less friction than digital interfaces, turning a knob is way less work than clicking a bunch of menus to set up a slider.
If I were to summarize what I think about AI it would be something like "Let it help you. Do not let it think for you"
My issue is not with people using AI as a tool, but with people delegating anything that would demand any kind of effort to AI.
If reading an AI summary of readings is all it takes to make an exercise a facade, then the exercise was bad to begin with.
AI is certainly putting pressure on professors to develop better curricula and evaluations, and they don’t get enough support for this, imho.
That said, good instruction and evaluation techniques are not some dark art — they can be developed, implemented, and maintained with a modest amount of effort.
If the sole purpose of college is to rank students, and funnel them to high prestige jobs that have no use for what they actually learn in college then what the students are doing is rational.
If however the student is actually there to learn, he knows that using ChatGPT accomplishes nothing. In fact, all this proves is that most students in most colleges are not there to learn. Which begs the question: why are they even going to college? Maybe this institution is outdated. Surely there is a cheaper and more time-efficient way of ranking students for companies.
College for the "consumer" student isn't worth much in comparison.
This topic comes up all the time. Every method conceivable to rank job candidates gets eviscerated here as being counterproductive.
And yet, if you have five candidates for one job, you're going to have to rank them somehow.
I do not. This is your problem, companies. Now, I am aware that I have to give out grades and so I walk through the motions of doing this to the extent expected. But my goal is to instruct and teach all students to the best of my abilities to try to get them all to be as educated/useful to society as possible. Sure, you can have my little assessment at the end if you like, but I work for the students, not for the companies.
But I would have been pretty angry to have been educated in topics that did not turn out to be useful in industry. I deliberately selected courses that I figured would be the most useful in my career.
I think this is mostly accurate. Schools have been able to say "We will test your memory on 3 specific Shakespeares, samples from Houghton Mifflin Harcourt, etc" - the students who were able to perform on these with some creative dance, violin, piano or cello thrown in had very good chances at a scholarship from an elite college.
This has been working extremely well except now you have AI agents that can do the same at a fraction of the cost.
There will be a lot of arguments, handwringing and excuse making as students go through the flywheel already in motion with the current approach.
However, my bet is it's going to be apparent that this approach no longer works for a large population. It never really did but there were inefficiencies in the market that kept this game going for a while. For one, college has become extremely expensive. Second, globalization has made it pretty hard for someone paying tuition in the U.S. to compete against someone getting a similar education in Asia when they get paid the same salary. Big companies have been able to enjoy this arbitrage for a long time.
> Maybe this institution is outdated. Surely there is a cheaper and more time-efficient way of ranking students for companies
Now that everyone has access to labor cheaper than the cheapest English speaking country in the world, humanity will be forced to adapt, forcing us to rethink what has seemed to work in the past
I didn't get it. How can printing avoid AI? And more importantly is this AI-resistance sustainable?
Does this literally work? It adds slightly more friction, but you can still ask the robot to summarize pretty much anything that would appear on the syllabus. What it likely does is set expectations.
This doesn't strike me as being anti-AI or "resistance" at all. But if you don't train your own brain to read and make thoughts, you won't have one.
Hell, in Italy we used to have a publisher called Bignami that made summaries of every school topic.
In any case, I don't know what to think about all of this.
School is for learning; if you skip the hard part you're not gonna learn. Your loss.
At this point auto AI summaries are so prevalent that they are the passive default. By shifting it to require an active choice, you’ve made it more likely for students to choose to do the work.
It’s not much more effort. The level of friction is minimal. But we’re talking about the activation energy of students (in an undergrad English class, likely teenagers). It doesn’t take much to swing the percentage of students who do the reading.
My strategy was to print out copies of an unassigned shorter poem by an author covered in lecture. Then I’d hand it out at the beginning of class, and we’d spend the whole time walking through a close reading of that poem.
It kept students engaged, since it was a collaborative process of building up an interpretation on the basis of observation, and anyone is capable of noticing patterns and features that can be fed into an interpretation. They all had something to contribute, and they’d help me to notice things I’d never registered before. It was great fun, honestly. (At least for me, but also, I think, for some of them.) I’d also like to think it helped in some small way to cultivate practices of attention, at least for a couple of hours a week.
Unfortunately, you can’t perform the same exercise with a longer work that necessitates reading beforehand, but you can at least break out sections for the same purpose.
I concur completely with Fadiman's comment, as that has been my experience, despite the fact that I have been using computer screens and computers for many decades and am totally at ease with them for reading and composing documentation.
Books and printed materials have physical presence and tactility about them that are missing from display screens. It is hard to explain but handling the physical object, pointing to paragraphs on printed pages, underlining text with a pencil and sticking postit notes into page margins adds an ergonomic factor that is more conducive to learning and understanding than when one interacts with screens (including those where one can write directly to the screen with a stylus).
I have no doubt about this, as I've noticed over the years if I write down what I'm thinking with my hand onto paper I am more likely to understand and remember it better than when I'm typing it.
It's as if typing doesn't provide as tight a coupling with my brain as writing by hand does. There is something about handwriting and the feedback from the motion of my fingers that makes me have a closer and more intimate relationship with the text.
That's not to say I don't use screens—I do but generally to write summaries after I've first worked out ideas on paper (this is especially relevant when mathematics is involved—I'm more cognitively involved when using pencil and paper).
"TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option."
And later the original article states that the cost to a student is $0.12 per double-sided sheet of printing.
In all of my teaching career here in the UK, the provision of handouts has been a central cost. Latterly I'd send a pdf file with instructions and the resulting 200+ packs of 180 sides would be delivered on a trolley printed, stapled with covers. The cost was rounding error compared to the cost of providing an hour of teaching in a classroom (wage costs, support staff costs, building costs including amortisation &c).
How is this happening?
Public universities are always underfunded.
Universities can get more money by putting the cost on the students and then they cover it with gov grants and loans.
> They don't care about students or education, they care about wasting resources and making a lot of money in the process.
This obv isn’t a push by parents, because I can’t imagine the parents I know want their kids in front of a screen all day. At best they’re indifferent. My only guess is the teachers’ unions that don’t want teachers grading and creating lesson plans and all the other work they used to do.
And since this trend started, kids' scores and performance have not gotten better, so what gives?
Can anyone comment if it’s as bad as this and what’s behind it.
The older one has a Chromebook and uses it for research and production of larger written projects and presentations—the kind of things you’d expect. The younger one doesn’t have any school-supplied device yet.
Both kids have math exercises, language worksheets, short writing exercises, etc., all done on paper. This is the majority of homework.
I’m fine with this system. I wish they’d spend a little more time teaching computer basics (I did a lot of touch typing exercises in the ’90s; my older one doesn’t seem to have those kinds of lessons). But in general, there’s not too much homework, there’s good emphasis on reading, and I appreciate that the older one is learning how to plan, research, and create projects using the tool he’ll use to do so in future schooling.
* People needed to be taught digital skills that were in growing demand in the workplace.
* The kids researching things online and word-processing their homework were doing well in class (because only upper-middle-class types could afford home PCs)
* Some trials of digital learning produced good results. Teaching by the world's greatest teachers, exactly the pace every student needs, with continuous feedback and infinite patience.
* Blocking distractions? How hard can that be?
Writing with a word processor that just helps you type, format, and check spelling is great. Blocking distractions on a general-purpose computer (like a phone or a tablet) is as hard as handing out locked-down devices set up for the purpose, and banning personal devices.
Sure, you could get an education for cheap from a community college, or free from various online sources, or for the best education possible, get paid to learn on the job. If you attend a university though, you're getting prestige by showing how much of your money, or someone else's money, you can burn through.
It's not like anyone's taking undergraduate classes at Harvard or Stanford because the teaching assistants actually doing the instructing are going to provide above-average instruction. They aren't even concerned with tenured professors' teaching performance; they put publishing metrics first.
Students have always looked for ways to minimize the workload, and often the response has been to increase the load. In some cases it has effectively become a way to teach you to get away with cheating (a lesson that even has some real-world utility).
Keeping students from wasting their tuition is an age-old, Sisyphean task for parents. School is wasted on the young. Unfortunately youth is also when your brain is most receptive to it.
First, typing is extremely cumbersome and error-prone compared to swipe-typing on a soft keyboard. Even highlighting a few sentences can be problematic when the selection spans a page boundary.
Second, navigation is also painful compared to a physical book. When reading non-fiction, it’s vital to be able to jump around quickly, backtrack, and cross-reference material. Amazon has done some good work on the UX for this, but nothing is as simple as flipping through a physical book.
Android e-readers are better insofar as they’re open to third-party software, but they still have the same hardware shortcomings.
My compromise has been to settle on medium-sized (~Kindle or iPad Mini size) tablets and treat them just as an e-reader. (Similar to the “kale phone” concept ie minimal software installed on it … no distractions.) They are much more responsive, hence fairly easy to navigate and type on.
That said, I always thought exams should be the moment of truth.
I had teachers who spoke broken English, but I'd do the homework and read the textbook in class. I learned many topics without the use of a teacher.
This made sense a couple of decades ago. Today, it's just bizarre to be spending $150 on a phonebook-sized packet of reading materials. So much paper and toner.
This is what iPads and Kindles are for.
To make it more palpable for an IT worker: "It's just bizarre to give a developer a room with a door, so much sheetrock and wood! Working with computers is what open-plan offices are for."
Also, the university isn't covering the cost here. The students are. And buying the Kindle would be cheaper than the printing cost of the packet itself.
So I stand by my point. If you don't want distraction, get Kindles.
And even iPads are pretty good. They tend to sit flat so you're not "hiding" your screen the way you can with a laptop or phone, and people often aren't using messaging or social apps on them so there are no incoming distractions.
You see a policy, and your clever brains come up with a way to get around it, "proving" that the new methodology is not perfect and therefore not valuable.
So wrong. Come on people, think about it -- to an extent ALL WE DO is "friction." Any shift towards difficulty can be gamed, but also nearly all of the time it provides a valuable differentiator in terms of motivation, etc.
> TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option.
Lol $150 for reading packets? Not even textbooks? Seriously the whole system can fuck off.
We continue to teach children (at least in the EU) to write by hand, to do calculations manually throughout their entire schooling, when in real life, aside from the occasional scrap note, all writing is done on computers and calculations are done by machine as well. And, of course, no one teaches these latter skills.
The result on a large scale is that we have an increasingly incompetent population on average, with teaching staff competing to see who can revert the most to the past and refusing to see that the more they do this, the worse the incompetent graduates they produce.
The computer, the desktop, FLOSS: these are the quintessential epistemological tools of the present, just as paper was in the past. The world changes, and those who fall behind are selected out by history; come to terms with that. Not only that: those who lag behind ensure that only a few push the evolution forward, for their own interest, which typically conflicts with that of the majority.
This isn't my article, nor do I know this educator, but I like her approach and the actions she has taken:
https://www.npr.org/2026/01/28/nx-s1-5631779/ai-schools-teac...
1. Instead of putting up all sorts of barriers between students and ChatGPT, have students explicitly use ChatGPT to complete the homework
2. Then compare the diversity in the ChatGPT output
3. If the ChatGPT output is extremely similar, then the game is to critique that ChatGPT output, find out gaps in ChatGPT's work, insights it missed and what it could have done better
4. If the ChatGPT output is diverse, how do we figure out which is better? What caused the diversity? Are all the outputs accurate or are there errors in some?
Similarly, when it comes to coding, instead of worrying that ChatGPT can zero-shot quicksort and memcpy perfectly, why not game it:
1. Write some test cases that could make that specific implementation of `quicksort` or `memcpy` fail
2. Could we design the input data such that quicksort hits its worst case runtime? (A rough sketch of this follows the list.)
3. Is there an algorithm that would sort faster than quicksort for that specific input?
4. Could there be architectures where the assumptions that make quicksort "quick", fail to hold true? Instead, something simpler and worse on paper like a "cache aware sort" actually work faster in practice than quicksort?
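To make point 2 concrete, here is a minimal sketch. It assumes a textbook quicksort with a fixed last-element pivot (an assumption for illustration; a student's or ChatGPT's implementation may pick pivots differently), and shows that already-sorted input drives it to roughly n^2/2 comparisons while random input stays near n log n.

    import random
    import sys

    def quicksort(a, counter):
        # Textbook quicksort with a fixed last-element pivot; counts comparisons.
        if len(a) <= 1:
            return a
        pivot = a[-1]
        left, right = [], []
        for x in a[:-1]:
            counter[0] += 1  # one comparison of x against the pivot
            (left if x < pivot else right).append(x)
        return quicksort(left, counter) + [pivot] + quicksort(right, counter)

    sys.setrecursionlimit(10_000)  # sorted input recurses about n levels deep
    n = 2_000
    for name, data in [("random", random.sample(range(n), n)),
                       ("already sorted", list(range(n)))]:
        comparisons = [0]
        quicksort(data, comparisons)
        print(f"{name:>15}: {comparisons[0]:>9} comparisons")
    # random:         roughly n*log2(n) comparisons (tens of thousands here)
    # already sorted: n*(n-1)/2 comparisons (about two million here)

The same harness doubles as a test case for point 1: any input that blows past the expected n log n comparison count (or the recursion limit) exposes the weakness of that particular pivot choice.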
I have multiple paragraphs more of thought on this topic but will leave it at this for now to calibrate if my thoughts are in the minority
>Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option
Does a student need to print out multiple TYCO packets? If so, only the very rich could afford this. I think education should go back to printed books and submitting your work to the Prof. on paper.
But submitting printed pages back to the Prof. for homework will avoid the school saying "Submit only Word Documents". That way a student can use the method they prefer, avoiding buying expensive software. One can then use just a simple free text editor if they want. Or even a typewriter :)
What could it mean for an "option" to be "required"?