77 points by cmsefton 6 hours ago | 22 comments
  • recursivedoubts 5 hours ago
    I have mentioned this in a few comments: for my CS classes I have gone from a historical 60-80% projects / 40-20% quizzes grade split to a 50/50 split, and have moved my quizzes from being online to being in-person, pen-on-paper, with one sheet of hand-written notes.

    Rather than banning AI, I'm showing students how to use it effectively as a personalized TA. I'm giving them this AGENTS.md file:

    https://gist.github.com/1cg/a6c6f2276a1fe5ee172282580a44a7ac

    And showing them how to use AI to summarize the slides into a quiz review sheet, generate example questions with answer walk-throughs, etc.

    Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves: the projects are designed to draw them into the art of programming and give them decent, real-world coding experience that they will need, even if they end up working at a higher level in the future.

    AI can be a very effective tool for education if used properly. I have used it to create a ton of extremely useful visualizations (e.g. how two's complement works) that I wouldn't have otherwise. But it is obviously extremely dangerous as well.
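    To give a flavor of the two's-complement visualization mentioned above, here is a minimal sketch of my own (not the commenter's actual material) that tabulates how each 4-bit pattern reads as an unsigned vs. a signed value:

```python
# Minimal two's-complement demo: for each n-bit pattern, show the
# unsigned reading and the two's-complement (signed) reading.
def twos_complement_table(bits=4):
    rows = []
    for n in range(2 ** bits):
        pattern = format(n, f"0{bits}b")
        # Patterns with the top bit set represent negative numbers:
        # subtract 2**bits to get the signed value.
        signed = n - 2 ** bits if n >= 2 ** (bits - 1) else n
        rows.append((pattern, n, signed))
    return rows

for pattern, unsigned, signed in twos_complement_table(4):
    print(f"{pattern}  unsigned={unsigned:2d}  signed={signed:3d}")
```

    Running it shows why, e.g., 1111 is both 15 and -1, which is exactly the kind of table a visualization can animate.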

    "It is impossible to design a system so perfect that no one needs to be good."

    • bbor 4 hours ago
      You seem like a great professor (/“junior baby mini instructor who no one should respect”, knowing American academic titles…). Though as someone who's been on the other end of the podium a bit more recently, I will point out the maybe-obvious:

        Of course I can't ensure they aren't just having AI do the projects, but I tell them that if they do that they are cheating themselves
      
      This is the right thing to say, but even the ones who want to listen can get into bad habits in response to intense schedules. When push comes to shove and Multivariate Calculus exam prep needs to happen but you’re stuck debugging frustrating pointer issues for your Data Structures project late into the night… well, I certainly would’ve caved far too much for my own good.

      IMO the natural fix is to expand your trusting, “this is for you” approach to the broader undergrad experience, but I can’t imagine how frustrating it is to be trying to adapt while admin & senior professors refuse to reconsider the race for a “””prestigious””” place in a meta-rat race…

      For now, I guess I’d just recommend you try to think of ways to relax things and separate project completion from diligence/time management — in terms of vibes if not a 100% mark. Some unsolicited advice from a rando who thinks you’re doing great already :)

      • recursivedoubts 4 hours ago
        Yes, I expect that pressure will be there, and project grades will be near 100% going forward, whether the student did the work or not.

        This is why I'm going to in-person written quizzes to differentiate between the students who know the material and those who are just using AI to get through it.

        I do seven quizzes during the semester so each one is on relatively recent material and they aren't weighted too heavily. I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz. I hated the high-pressure midterms/finals of my undergrad, so I'm trying to remove that for them.

        • WalterBright 3 hours ago
          > I hated the high-pressure midterms/finals of my undergrad

          The pressure was what got me to do the necessary work. Auditing classes never worked for me.

          > I do some spaced-repetition questions of important topics and give students a study sheet of what to know for the quiz.

          Isn't that what the lectures and homework are for?

      • analog31 3 hours ago
        The irony is that on-time completion is probably the #1 source of project failure in the real world.
    • mmooss 3 hours ago
      I think that's a great approach. I've thought about how to handle these issues and wonder how you handle several issues that come to mind:

      Competing with LLM users, 'honest' students would seem strongly incentivized to use LLMs themselves. Even if you don't grade on a curve, honest students will get worse grades, which will look worse to graduate schools, grant and scholarship committees, etc., in addition to the strong emotional component everyone feels seeing an A or a C. You could give deserving 'honest' work an A, but then all LLM users will get A's with ease. It seems like you need two scales, and how do you know who to put on which scale?

      And how do students collaborate on group projects? Again, it seems you have two different tracks of education, and they can't really work together. Edit: How do class discussions play out with these two tracks?

      Also, manually doing things that machines do much better has value but also takes valuable time from learning more advanced skills that machines can't handle, and from learning how to use the machines as tools. I can see learning manual statistics calculations, to understand them fundamentally, but at a certain point it's much better to learn R and use a stats package. Are the 'honest' students being shortchanged?
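      The statistics point above can be made concrete. A hypothetical sketch (Python's statistics module standing in for R's stats tooling): compute a sample variance once by hand to understand the formula, then lean on the library afterwards.

```python
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# By hand, to understand the formula: mean, then the average squared
# deviation with the n-1 (sample) denominator.
mean = sum(data) / len(data)
by_hand = sum((x - mean) ** 2 for x in data) / (len(data) - 1)

# With the library: one call, same answer.
with_lib = statistics.variance(data)

assert abs(by_hand - with_lib) < 1e-9
print(f"mean={mean}, sample variance={by_hand:.4f}")
```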

    • softwaredoug 5 hours ago
      Do you find advocating for AI literacy to be controversial amongst peers?

      I find, as a parent, when I talk about it at the high school level I get very negative reactions from other parents. Specifically, I want high schoolers to be skilled in the use of AI, and in particular to have critical-thinking skills around the tools, while simultaneously having skills that assume no AI. I don’t want the school to be blindly “anti-AI”, as I’m aware it will be a part of the economy our kids are brought into.

      There are some head-in-the-sand, very emotional attitudes about this stuff. (And obviously there are idiotically uncritical pro-AI stances too, but I doubt educators risk holding those.)

      • libraryofbabel 4 hours ago
        Not OP, but I would imagine (or hope) that this attitude is far less common amongst peer CS educators. It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, both in industry and academia. The best-positioned students will be the ones who can operate these tools effectively but with a critical mindset, while also being able to do without AI as needed (which of course makes them better at directing AI when they do engage it).

        That said, I agree with all your points too: some version of this argument will apply to most white-collar jobs now. I just think this is less clear to the general population, and it’s much more of a touchy emotional subject in certain circles. Although I suppose there may be a point to be made about being slightly more cautious about introducing AI at the high school level versus college.

        • danaris 4 hours ago
          > It is so clear that AI tools will be (and are already) a big part of future jobs for CS majors now, both in industry and academia.

          No, it's not.

          Nothing around AI past the next few months to a year is clear right now.

          It's very, very possible that within the next year or two, the bottom falls out of the market for mainstream/commercial LLM services, and then all the Copilot and Claude Code and similar services are going to dry up and blow away. Naturally, that doesn't mean that no one will be using LLMs for coding, given the number of people who have reported their productivity increasing—but it means there won't be a guarantee that, for instance, VS Code will have a first-party integrated solution for it, and that's a must-have for many larger coding shops.

          None of that is certain, of course! That's the whole point: we don't know what's coming.

          • verdverm 3 hours ago
            It is clear that AI has already transformed how we do our jobs in CS.

            The genie is out of the bottle, never going back

            It's a fantasy to think it will "dry up" and go away

            Some other guarantees we can make over the next few years, based on history: AI will get better, faster, and more efficient, like everything else in CS.

            • oblio 3 hours ago
              Yeah, like Windows in 2026 is better than Windows in 2010, Gmail in 2026 is better than Gmail in 2010, the average website in 2026 is better than in 2015, Uber is better in 2026 than in 2015, etc.

              Plenty of tech becomes exploitative (or more exploitative).

              I don't know if you noticed but 80% of LLM improvements are actually procedural now: it's the software around them improving, not the core LLMs.

              Plus LLMs have huge potential for being exploitative. 10x what Google Search could do for ads.

              • verdverm an hour ago
                You're conflating products with technology, and cherry-picking some personal perspectives.

                I personally think GSuite is much better today than it was a decade ago, but that is separate

                The underlying hardware has improved, the network, the security, the provenance

                Specific to LLMs

                1. We have seen rapid improvements, and there is a ton more in the research that will impact the next round of model train/release cycles. Both algorithms and hardware are improving.

                2. Open weight models are within spitting distance of the frontier. Within 2 years, smaller and open models will be capable of what frontier is doing today. This has a huge democratization potential

                I'd rather see AI as an opportunity to break the oligarchy and the corporate hold over the people. I'm working hard to make it a reality (also working on atproto).

                • oblio an hour ago
                  Every time I hear "democratization" from a techbro I keep thinking that the end state is technofeudalism.

                  We can't fix social problems with technological solutions.

                  Every scalable solution takes us closer to Extremistan, which is inherently anti democratic.

                  Read the Black Swan by Taleb.

            • danaris 3 hours ago
              OK? Prove it.

              Show me actual studies that clearly demonstrate that not only does using an LLM code assistant help make code faster in the short term, it doesn't waste all that extra benefit by being that much harder to maintain in the long term.

              • jjava an hour ago
                No such studies can exist since AI coding has not been around for a long term.

                Clearly AI is much faster and good enough to create new one-off bits of code.

                Like, I tend to create small helper scripts for all kinds of things, both at work and at home, all the time. Typically these would take me 2-4 hours, and aside from a few tweaks early on, they receive no maintenance as they just do one simple thing.

                Now with AI coding these take me just a few minutes, done.

                But I believe this is the optimal productivity sweet spot for AI coding, as no maintenance is needed.

                I've also been running a couple experiments vibe-coding larger apps over the span of months and while initial ramp-up is very fast, productivity starts to drop off after a few weeks as the code becomes more complex and ever more full of special case exceptions that a human wouldn't have done that way. So I spend more and more time correcting behavior and writing test cases to root out insanity in the code.

                How will this go for code bases which need to continuously evolve and mature over many years and decades? I guess we'll see.

              • verdverm 2 hours ago
                I'll be frank: I tried this with a few other people recently, and they

                1. Opened this line of debate similarly to you (i.e. the way you ask, the tone you use)

                2. Were not interested in actual debate

                3. Moved the goalposts repeatedly

                Based on past experience entertaining inquisitors, I won't this time.

                • libraryofbabel an hour ago
                  Yeah. At this point, at the start of 2026, people that are taking these sorts of positions with this sort of tone tend to have their identity wrapped up in wanting AI to fail or go away. That’s not conducive to a reasoned discussion.

                  There are a whole range of interesting questions here that it’s possible to have a nuanced discussion about, without falling into AI hype and while maintaining a skeptical attitude. But you have to do it from a place of curiosity rather than starting with hatred of the technology and wishing for it to be somehow proved useless and fade away. Because that’s not going to happen now, even if the current investment bubble pops.

                  • verdverm an hour ago
                    wholehearted agreement

                    If anything, I see this moment as one where we can unshackle ourselves from the oligarchs and corporate overlords. The two technologies are AI and ATProto, I work on both now to give sovereignty back to we the people

          • cirrusfan 2 hours ago
            I get a slow-but-usable ~10 tk/s running Kimi 2.5 at a 2b-ish quant on a high-end gaming / low-end workstation desktop (RTX 4090, 256 GB RAM, Ryzen 7950). Right now the price of RAM is silly, but when I built it, it was similar in price to a high-end MacBook, which is to say it isn't cheap, but it's available to just about everybody in Western countries. The quality is of course worse than what the bleeding-edge labs offer, especially since heavy quants are particularly bad for coding, but it is good enough for many tasks: an intelligent duck that helps with planning, generating bog-standard boilerplate, Google-less interactive search/Stack Overflow ("I ran flamegraph and X is an issue, what are my options here?" etc.).

            My point is, I can get somewhat-useful ai model running at slow-but-usable speed on a random desktop I had lying around since 2024. Barring nuclear war there’s just no way that AI won’t be at least _somewhat_ beneficial to the average dev. All the AI companies could vanish tomorrow and you’d still have a bunch of inference-as-a-service shops appearing in places where electricity is borderline free, like Straya when the sun is out.

            • danaris an hour ago
              Then you're missing my point.

              Yes, you, a hobbyist, can make that work, and keep being useful for the foreseeable future. I don't doubt that.

              But either a majority or large plurality of programmers work in some kind of large institution where they don't have full control over the tools they use. Some percentage of those will never even be allowed to use LLM coding tools, because they're not working in tech and their bosses are in the portion of the non-tech public that thinks "AI" is scary, rather than the portion that thinks it's magic. (Or, their bosses have actually done some research, and don't want to risk handing their internal code over to LLMs to train on—whether they're actually doing that now or not, the chances that they won't in future approach nil.)

              And even those who might not be outright forbidden to use such tools for specific reasons like the above will never be able to get authorization to use them on their company workstations, because they're not approved tools, because they require a subscription the company won't pay for, because etc etc.

              So saying that clearly coding with LLM assistance is the future and it would be irresponsible not to teach current CS students how to code like that is patently false. It is a possible future, but the volatility in the AI space right now is much, much too high to be able to predict just what the future will bring.

              • blackcatsec 16 minutes ago
                I never understand anyone's push to throw around AI slop coding everywhere. Do they think in the back of their heads that this means coding jobs are going to come back on-shore? Because AI is going to make up for the savings? No, what it means is tech bro CEOs are going to replace you even more and replace at least a portion of the off-shore folks that they're paying.

                The promise of AI is a capitalist's dream, which is why it's being pushed so much. Do more with less investment. But the reality of AI coding is significantly more nuanced, and particularly more nuanced in spaces outside of the SRE/devops space. I highly doubt you could realistically use AI to code the majority of significant software products (like, say, an entire operating system). You might be able to use AI to add additional functionality you otherwise couldn't have, but that's not really what the capitalists desire.

                Not to mention, the models have to be continually trained, otherwise the knowledge is going to be dead. Is AI as useful for Rust as it is for Python? Doubtful. What about the programming languages created 10-15 years from now? What about when everyone starts hoarding their information away from the prying eyes of AI scraper bots to keep competitive knowledge in-house? Both from a user perspective and a business perspective?

                Lots of variability here that literally nobody has any idea how any of it's going to go.

          • libraryofbabel 4 hours ago
            I agree with you that everything is changing and that we don’t know what’s coming, but I think you really have to stretch things to imagine that it’s a likely scenario that AI-assisted coding will “dry up and blow away.” You’ll need to elaborate on that, because I don’t think it’s likely even if the AI investment bubble pops. Remember that inference is not really that expensive. Or do you think that things shift on the demand side somehow?
            • saltcured 2 hours ago
              I think the "genie" that is out of the bottle is that there is no broad, deeply technical class who can resist the allure of the AI agent. A technical focus does not seem to provide immunity.

              In spite of obvious contradictory signals about quality, we embrace the magical thinking that these tools operate in a realm of ontology and logic. We disregard the null hypothesis, in which they are more mad-libbing plagiarism machines which we've deployed against our own minds. Put more tritely: We have met the Genie, and the Genie is Us. The LLM is just another wish fulfilled with calamitous second-order effects.

              Though enjoyable as fiction, I can't really picture a Butlerian Jihad where humanity attempts some religious purge of AI methods. It's easier for me to imagine the opposite, where the majority purges the heretics who would question their saints of reduced effort.

              So, I don't see LLMs going away unless you believe we're in some kind of Peak Compute transition, which is pretty catastrophic thinking. I.e. some kind of techno/industrial/societal collapse where the state of the art stops moving forward and instead retreats. I suppose someone could believe in that outcome, if they lean hard into the idea that the continued use of LLMs will incapacitate us?

              Even if LLM/AI concepts plateau, I tend to think we'll somehow continue with hardware scaling. That means they will become commoditized and able to run locally on consumer-level equipment. In the long run, it won't require a financial bubble or dedicated powerplants to run, nor be limited to priests in high towers. It will be pervasive like wireless ear buds or microwave ovens, rather than an embodiment of capital investment.

              The pragmatic way I see LLMs _not_ sticking around is where AI researchers figure out some better approach. Then, LLMs would simply be left behind as historical curiosities.

              • danaris an hour ago
                The first half of your post, I broadly agree with.

                The last part...I'm not sure. The idea that we will be able to compute-scale our way out of practically anything is so much taken for granted these days that many people seem to have lost sight of the fact that we have genuinely hit diminishing returns—first in the general-purpose computing scaling (end of Moore's Law, etc), and more recently in the ability to scale LLMs. There is no longer a guarantee that we can improve the performance of training, at the very least, for the larger models by more than a few percent, no matter how much new tech we throw at it. At least until we hit another major breakthrough (either hardware or software), and by their very nature those cannot be counted on.

                Even if we can squeeze out a few more percent—or a few more tens of percent—of optimizations on training and inference, to the best of my understanding, that's going to be orders of magnitude too little yet to allow for running the full-size major models on consumer-level equipment.

            • danaris 3 hours ago
              I think that even if inference is "not really that expensive", it's not free.

              I think that Microsoft will not be willing to operate Copilot for free in perpetuity.

              I think that there has not yet been any meaningful large-scale study showing that it improves performance overall, and there have been some studies showing that it does the opposite, despite individuals' feeling that it helps them.

              I think that a lot of the hype around AI is that it is going to get better, and if it becomes prohibitively expensive for it to do that (ie, training), and there's no proof that it's helping, and keeping the subscriptions going is a constant money drain, and there's no more drumbeat of "everything must become AI immediately and forever", more and more institutions are going to start dropping it.

              I think that if the only programmers who are using LLMs to aid their coding are hobbyists, independent contractors, or in small shops where they get to fully dictate their own setups, that's a small enough segment of the programming market that we can say it won't help students to learn that way, because they won't be allowed to code that way in a "real job".

            • LtWorf 3 hours ago
              If they start charging what it costs them for example…
              • libraryofbabel 3 hours ago
                There is so much confusion on this topic. Please don't spread more of it; the answers are just a quick google away. To spell it out:

                1) AI companies make money on the tokens they sell through their APIs. At my company we run Claude Code by buying Claude Sonnet and Opus tokens from AWS Bedrock. AWS and Anthropic make money on those tokens. The unit economics are very good here; estimates are that Anthropic and OpenAI have a gross margin of 40% on selling tokens.

                2) Claude Code subscriptions are probably subsidized somewhat on a per token basis, for strategic reasons (Anthropic wants to capture the market). Although even this is complicated, as the usage distribution is such that Anthropic is making money on some subscribers and then subsidizing the ultra-heavy-usage vibe coders who max out their subscriptions. If they lowered the cap, most people with subscriptions would still not max out and they could start making money, but they'd probably upset a lot of the loudest ultra-heavy-usage influencer-types.

                3) The biggest cost AI companies have is training new models. That is the reason AI companies are not net profitable. But that's a completely separate set of questions from what inference costs, which is what matters here.
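                As a back-of-envelope illustration of point 1, here is a sketch where the API price is made up and the ~40% gross-margin figure is the estimate quoted above; both inputs are assumptions, not published numbers.

```python
# Hypothetical numbers: treat both inputs as assumptions, not real pricing.
price_per_m_tokens = 15.00   # assumed API price, $ per million output tokens
gross_margin = 0.40          # the ~40% gross-margin estimate quoted above

# Implied cost of serving those tokens (inference only).
inference_cost = price_per_m_tokens * (1 - gross_margin)
print(f"implied inference cost: ${inference_cost:.2f} per million tokens")

# Training the next model is a separate fixed cost that this per-token
# margin does not cover, which is the distinction point 3 makes.
```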

      • recursivedoubts 4 hours ago
        AI is extremely dangerous for students and needs to be used intentionally, so I don't blame people for just going to "ban it" when it comes to their kids.

        Our university is slowly stumbling towards "AI Literacy" being a skill we teach, but, frankly, most faculty here don't have the expertise and students often understand the tools better than teachers.

        I think there will be a painful adjustment period, I am trying to make it as painless as possible for my students (and sharing my approach and experience with my department) but I am just a lowly instructor.

        • softwaredoug 4 hours ago
          Honestly, defining what to teach is hard.

          People need to learn to do research with LLMs, code with LLMs, how to evaluate artifacts created by AI. They need to learn how agents work at a high level, the limitations on context, that they hallucinate and become sycophantic. How they need guardrails and strict feedback mechanisms if let loose. AI Safety connecting to external systems etc etc.

          You're right that few high school educators would have any sense of all that.

          • WalterBright 3 hours ago
            I don't know anyone who learned arithmetic from a calculator.

            I do know people who would get egregiously wrong answers from misusing a calculator and insisted it couldn't be wrong.

            • softwaredoug 3 hours ago
              Yes, but I was also taught to use a calculator, and in particular the advanced graphing calculators.

              Not to mention programming is a meta skill on top of “calculators”

      • subhobroto 4 hours ago
        > I find, as a parent, when I talk about it at the high school level I get very negative reactions from other parents. Specifically I want high schoolers to be skilled in the use of AI, and particular critical thinking skills around the tools, while simultaneously having skills assuming no AI. I don’t want the school to be blindly “anti AI” as I’m aware it will be a part of the economy our kids are brought into.

        This is my exact experience as well and I find it frustrating.

        If current technology is creating an issue for teachers - it's the teachers that need to pivot, not block current technology so they can continue what they are comfortable with.

        Society typically cares about work getting done and not much about how it got done. For some reason, teachers are so deep into the weeds of the "how" that they seem to forget: if the way to mend roads since 1926 has been to learn how to measure out, mix, and lay asphalt patches by hand, then in 2026, when there are robots that do that perfectly every time, they should be teaching humans to complement those robots or to do something else entirely.

        It's possible in the past, that learning how to use an abacus was a critical lesson but once calculators were invented, do we continue with two semesters of abacus? Do we allow calculators into the abacus course? Should the abacus course be scrapped? Will it be a net positive on society to replace the abacus course with something else?

        "AI" is changing society fundamentally forever and education needs to change fundamentally with it. I am personally betting that humans in the future, outside extreme niches, are generalists and are augmented by specialist agents.

        • netsharc 3 hours ago
          I'm also for education for AI awareness. A big part of teaching kids about AI should be how unreliable these tools can be.

          I had a discussion with a recruiter on Friday, and I said I guess the issue with AI vs human is, if you give a human developer who is new to your company tasks, the first few times you'll check their work carefully to make sure the quality is good. After a while you can trust they'll do a good job and be more relaxed. With AI, you can never be sure at any time. Of course a human can also misunderstand the task and hallucinate, but perhaps discussing the issue and the fix before they start coding can alleviate that. You can discuss with an AI as much as you want, but to me, not checking the output would be an insane move...

          To return to the point: yeah, people will use AI anyway, so why not teach them about the risks. Also, LLMs feel like Concorde: they'll get you where you want to go very quickly, but at tremendous environmental cost (and they're very costly to the wallet, although the companies are now partially subsidizing your use in the hopes of getting you addicted).

    • buckle8017 5 hours ago
      Hopefully you've also modified the quizzes to be handwriting-compatible.

      I once got "implement a BCD decoder" with about a 1"x4" space to do it.

      • recursivedoubts 5 hours ago
        We just had our first set of in person quizzes and I gave them one question per page, with lots of space for answers.

        I'm concerned about handwriting, which is a lost skill, and how hard that will be on the TAs who are grading the exams. I have stressed to students that they should write larger, slower and more carefully than normal. I have also given them examples of good answers: terse and to the point, using bulleted lists effectively, what good pseudo-code looks like, etc.

        It is an experiment in progress: I have rediscovered the joys of printing and the logistics of moving large amounts of paper again. The printer decided halfway through one run to start folding papers slightly at the corner, which screwed up stapling.

        I suppose this is why we are paid the big bucks.

        • NitpickLawyer 4 hours ago
          > I have also given them examples of good answers: terse and to the point

          Oh man, this reminds me of one test I had in uni, back in the days when all our tests were in class, pen & paper (what's old is new again?). We had this weird class that taught something like security programming in unix. Or something. Anyway, all I remember is the first two questions being about security/firewall stuff, and the third question was "what is a socket". So I really liked the first two questions, and over-answered for about a page each. Enough text to both run out of paper and out of time. So my answer to the 3rd question was "a file descriptor". I don't know if they laughed at my terseness or just figured since I overanswered on the previous questions I knew what that was, but whoever graded my paper gave me full points.
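          For what it's worth, the terse answer was also technically defensible: on POSIX systems a socket really is handed to the program as a file descriptor. A small sketch (assuming a POSIX-like OS):

```python
import os
import socket

# Create a TCP socket; no network traffic happens until connect/bind.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# The socket is exposed as a plain integer file descriptor...
fd = s.fileno()
print(fd, type(fd))

# ...which works with the same low-level calls as any other fd.
dup_fd = os.dup(fd)
os.close(dup_fd)
s.close()
```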

      • logicchains 5 hours ago
        Was it a Perl exam?
    • thenipper 5 hours ago
      How do you handle kids w/ a learning disability who can't effectively write well?
      • baubino 4 hours ago
        Reasonable accommodations have been made for students with disabilities for decades now. While there might be some cases where AI might be helpful for accommodating students, it is not, nor should it be, a universal application, because different disabilities (and different students) require different treatment and support. There's tons of research on disability accommodations and tons of specialists who work on this. Most universities have an entire office dedicated to supporting students with disabilities, and primary and secondary schools usually have at least one person who takes on that role.

        So how do you handle kids who can't write well? The same way we've been handling them all along: have them get an assessment and determine exactly where they need support and what kind of support will be most helpful to that particular kid. AI might or might not be a part of that, but it's a huge mistake to assume that it has to be a part of that. People who assume that AI can just be thrown at disability support betray how little they actually know about disability support.

      • recursivedoubts 5 hours ago
        We have a testing center at Montana State for situations like this. I deliver my tests in the form of a PDF and the testing center administers it in a manner appropriate for the student.
      • leviathant4 hours ago
        >How do you handle kids w/ a learning disability who can't write effectively?

        It's embarrassing to see this question downvoted on here. It's a valid question, there's a valid answer, and accessibility helps everyone.

        • wredcoll4 hours ago
          It's a question that's too vague to be usefully answered, especially on a forum like this.

          There's no such thing as "disabled people who can't write well"; there are individuals with specific problems and needs.

          Maybe there's Jessica, who lost her right hand and is learning to write with the left, who gets extra time. Maybe there's Joe, who has some form of nerve issue and uses a specialized pen that helps cancel out tremors. Maybe Sarah is blind and has an aide who writes for her, or is allowed to use a keyboard, and so on.

          • zajio1am2 hours ago
            There is a specific condition called dysgraphia that pretty much fits the description "can't write well".
        • ThrowawayR24 hours ago
          In the context of the immediate problems of AI in education, it's not a relevant thing to bring up. Finding ways for students with disabilities to succeed in higher education has been something that institutions have been handling for many decades now. The one I attended had well defined policies for faculty and specialist full time staff plus facilities whose sole purpose was to provide appropriate accommodations to such students and that was long, long ago. There will undoubtedly be some kind of role in the future for AI as well but current students with disabilities are not being left high and dry without it.
        • sjwgjnj4 hours ago
          Because it’s another nonsensical “think of the children” argument for why nothing should ever change. Your comment really deserves nothing more than an eye roll emoji, but HN doesn’t support them.

          Reasonable accommodations absolutely should be made for children that need them.

          But also just because you’re a bad parent and think the rules don’t apply to you doesn’t mean your crappy kid gets to cheat.

          Parents are the absolute worst snowflakes.

          • danadam4 hours ago
            > Your comment really deserves nothing more than an eye roll emoji, but HN doesn’t support them.

            (◔_◔)

            • bmacho4 hours ago
              There is -.-" for exasperation/annoyance
  • ageitgey5 hours ago
    > “Over the years I’ve found that when students read on paper they're more likely to read carefully, and less likely in a pinch to read on their phones or rely on chatbot summaries,” Shirkhani wrote to the News. “This improves the quality of class time by orders of magnitude.”

    This is the key part. I'm doing a part-time graduate degree at a major university right now, and it's fascinating to watch the week-to-week pressure AI is putting on the education establishment. When your job as a student is to read case studies and think about them, but Google Drive says "here's an automatic summary of the key points" before you even open the file, it takes a very determined student to ignore that and actually read the material. And if no one reads the original material, the class discussion is a complete waste of time, with everyone bringing up the same trite points, and the whole exercise becomes a facade.

    Schools are struggling to figure out how to let students use AI tools to be more productive while still learning how to think. The students (especially undergrads) are incredibly good at doing as little work as possible. And until you get to the end-of-PhD level, there's basically nothing you encounter in your learning journey that ChatGPT can't perfectly summarize and analyze in 1 second, removing the requirement for you to do anything.

    This isn't even about AI being "good" or "bad". We still teach children how to add numbers before we give them calculators because it's a useful skill. But now these AI thinking-calculators are injecting themselves into every text box and screen, making them impossible to avoid. If the answer pops up in the sidebar before you even ask the question, what kind of masochist is going to bother learning how to read and think?

    • thomasfortes4 hours ago
      Last weekend I was arguing with a friend that physical guitar pedals are better for creativity and exploration of the musical space than modelers, even though modelers have way more resources for a fraction of the cost. The physical aspect of knobs and cables and everything else leads to something that's way more interactive and prone to "happy mistakes" than any digital interface can offer.

      In my first year of college my calculus teacher said something that stuck with me: "you learn calculus getting cramps on your wrists". Yeah, AI can help you remember things and accelerate learning, but if you don't put in the work to understand things you'll always be behind people who have at least a bird's-eye view of what's happening.

      • subhobroto3 hours ago
        > but if you don't put in the work to understand things you'll always be behind people who have at least a bird's-eye view of what's happening.

        Depends. You might end up going quite far without ever opening the hood of a car, even when you drive it every day and depend on it for your livelihood.

        If you're the kind that likes to argue for a good laugh, you might say "well, I don't need to know how my car works as long as the engineer who designed it does, or the mechanic who fixes it does". That's accurate, but it's also true that not everyone ended up being the engineer or the mechanic. And if it turned out to be extremely valuable for you to learn how the car worked, it's not as if you couldn't put in the effort and be very successful at it.

        All this talk about "you should learn something deeply so you can bank on it when you will need it" seems to be a bit of a hoarding disorder.

        Given the right materials, support and direction, most smart and motivated people can learn how to get competent at something that they had no clue about in the past.

        When it comes to smart and motivated people, the best drop out of education because they find it unproductive and pedantic.

        • thomasfortes3 hours ago
          Yes, you can. I know just enough about cars not to be scammed, but not how the whole engine works, and I also don't think you should learn everything you could learn; there's no time for that. That's why I made the bird's-eye view comment.

          My argument is that when you have at least a basic knowledge of how things work (be it as a musician, a mechanical engineer or a scientist) you are in a much better place to know what you want/need.

          That said, smart and motivated people thrive if they are given the conditions to thrive, and I believe that physical interfaces have way less friction than digital interfaces, turning a knob is way less work than clicking a bunch of menus to set up a slider.

          If I were to summarize what I think about AI it would be something like "Let it help you. Do not let it think for you"

          My issue is not with people using AI as a tool, but with people delegating anything that would demand any kind of effort to AI.

    • csa2 hours ago
      > And if no one reads the original material, the class discussion is a complete waste of time, with everyone bringing up the same trite points, and the whole exercise becomes a facade.

      If reading an AI summary of readings is all it takes to make an exercise a facade, then the exercise was bad to begin with.

      AI is certainly putting pressure on professors to develop better curricula and evaluations, and they don’t get enough support for this, imho.

      That said, good instruction and evaluation techniques are not some dark art — they can be developed, implemented, and maintained with a modest amount of effort.

  • sashank_15094 hours ago
    At some level, this is a problem of unmotivated students and college mostly being just for signaling as opposed to real education.

    If the sole purpose of college is to rank students, and funnel them to high prestige jobs that have no use for what they actually learn in college then what the students are doing is rational.

    If however the student is actually there to learn, he knows that using ChatGPT accomplishes nothing. In fact all this proves is that most students in most colleges are not there to learn. Which raises the question: why are they even going to college? Maybe this institution is outdated. Surely there is a cheaper and more time-efficient way to rank students for companies.

    • rr8084 hours ago
      It starts at admissions, where learning is not a rewarded activity. You should be making an impact in the community, doing some performative task that isn't useful for anything except making you different from your classmates who naively read the books and do the classwork honestly.
    • the_snooze3 hours ago
      College is wildly useful for motivated students: the ones who go out of their way to pursue opportunities uniquely available to them like serving as TAs, doing undergrad research, rising up the ranks in clubs and organizations, etc. They graduate not just with a credential but social capital. And it's that social capital that shields you from ChatGPT.

      College for the "consumer" student isn't worth much in comparison.

    • testfoobar4 hours ago
      For elite colleges, it is a pithy aphorism that the hardest part is getting in.
    • WalterBright3 hours ago
      > Surely there is a cheaper and more time-efficient way to rank students for companies.

      This topic comes up all the time. Every method conceivable to rank job candidates gets eviscerated here as being counterproductive.

      And yet, if you have five candidates for one job, you're going to have to rank them somehow.

      • jrm43 hours ago
        As a college instructor, one issue I find fascinating is the idea that I'm supposed to care strongly about this.

        I do not. This is your problem, companies. Now, I am aware that I have to give out grades and so I walk through the motions of doing this to the extent expected. But my goal is to instruct and teach all students to the best of my abilities to try to get them all to be as educated/useful to society as possible. Sure, you can have my little assessment at the end if you like, but I work for the students, not for the companies.

        • WalterBright3 hours ago
          I didn't suggest you should care about company selection processes.

          But I would have been pretty angry to have been educated in topics that did not turn out to be useful in industry. I deliberately selected courses that I figured would be the most useful in my career.

          • jrm4an hour ago
            Right, but that is the thing I pay attention to. Again, I want to hear from former students that I did right by them, not current companies asking for free screening.
    • subhobroto3 hours ago
      > At some level, this is a problem of unmotivated students and college mostly being just for signaling as opposed to real education.

      I think this is mostly accurate. Schools have been able to say "We will test your memory on 3 specific Shakespeares, samples from Houghton Mifflin Harcourt, etc" - the students who were able to perform on these with some creative dance, violin, piano or cello thrown in had very good chances at a scholarship from an elite college.

      This has been working extremely well except now you have AI agents that can do the same at a fraction of the cost.

      There will be a lot of arguments, handwringing and excuse making as students go through the flywheel already in motion with the current approach.

      However, my bet is it's going to be apparent that this approach no longer works for a large population. It never really did but there were inefficiencies in the market that kept this game going for a while. For one, college has become extremely expensive. Second, globalization has made it pretty hard for someone paying tuition in the U.S. to compete against someone getting a similar education in Asia when they get paid the same salary. Big companies have been able to enjoy this arbitrage for a long time.

      > Maybe this institution is outdated. Surely there is a cheaper and more time efficient way to ranking students for companies

      Now that everyone has access to labor cheaper than in the cheapest English-speaking country in the world, humanity will be forced to adapt, forcing us to rethink what has seemed to work in the past.

  • zkmon5 hours ago
    >This academic year, some English professors have increased their preference for physical copies of readings, citing concerns related to artificial intelligence.

    I didn't get it. How can printing avoid AI? And more importantly is this AI-resistance sustainable?

    • coffeefirst5 hours ago
      The students were reading AI summaries rather than the original text.

      Does this literally work? It adds slightly more friction, but you can still ask the robot to summarize pretty much anything that would appear on the syllabus. What it likely does is set expectations.

      This doesn't strike me as being anti-AI or "resistance" at all. But if you don't train your own brain to read and make thoughts, you won't have one.

      • epolanski5 hours ago
        I was reading summaries online 25 years ago as well.

        Hell, in Italy we used to have an editor called Bignami make summaries of every school topic.

        https://www.bignami.com/

        In any case, I don't know what to think about all of this.

        School is for learning; if you skip the hard part you're not gonna learn, you're lost.

        • skeptic_ai5 hours ago
          Instead of learning the things that can be done by AI, learn how to use the AI, as that's the only edge you've got left.
    • secabeen5 hours ago
      You can't easily copy and paste from a printout into AI. Sure, you can track down the reading yourself online, and then copy and paste in, but not during class, and not without some effort.
      • xigoi5 hours ago
        LLM services have pretty much flawless OCR for printed text.
      • stephenbez5 hours ago
        It’s easy to take a picture of a printout and then ask AI about it. Not that hard even when it’s many pages.
    • Flavius5 hours ago
      This approach is just cheap theater. It doesn't actually stop AI, it just adds a step to the process. Any student can snap a photo, OCR the text and feed it into an LLM in seconds. All this policy accomplishes is wasting paper and forcing students to engage in digital hoop-jumping.
      • mbreese5 hours ago
        It’s not theater. It introduces friction into the process. And when there is friction in both choices (read the paper, or take a photo and upload the picture), you’ll get more people reading the physical paper copy. If students want to jump through hoops, they will, but it will require an active choice.

        At this point auto AI summaries are so prevalent that they are the passive default. By shifting it to require an active choice, you've made it more likely for students to choose to do the work.

        • Flavius4 hours ago
          That friction is trivial. You are comparing the effort of snapping a photo against the effort of actually reading and analyzing a text. If anyone chooses to read the paper, it's because they actually want to read it, not because using AI was too much hassle.
        • blell4 hours ago
          Any AI app worth its salt allows you to upload a photo of something, and it processes it flawlessly in the same amount of time. This is absolutely worthless theater.
          • mbreese4 hours ago
            It’s not the time that’s the friction. It’s the choice. The student has to actively take the picture and upload it. It’s a choice. It takes more effort than reading the autogenerated summary that Google Drive or Copilot helpfully made for the digital PDF of the reading they replaced.

            It’s not much more effort. The level of friction is minimal. But we’re talking about the activation energy of students (in an undergrad English class, likely teenagers). It doesn’t take much to swing the percentage of students who do the reading.

            • blell4 hours ago
              Are you really comparing the energy necessary to read something to taking a photo and having an AI read it for you? You are not comparing zero energy to some energy; you are comparing a whole lot of energy to some energy.
          • LtWorf3 hours ago
            The quotas for summarising text and parsing images and then summarising text aren't the same. As you surely know.
            • blell2 hours ago
              Who’s paying for that? Certainly not the users (yet).
      • ulrashida5 hours ago
        Students tend to be fairly lazy, so this may simply mean another x% of the class reads the material rather than scanning in the 60 pages of reading for the assignment.
      • jrm43 hours ago
        You fundamentally misunderstand the value of friction. The digital hoop-jumping, as you call it, is a very very useful signal for motivation.
  • cbfrench2 hours ago
    Over a decade ago now, I was teaching college English as a grad student, and my colleagues and I were always trying to come up with ways to keep kids from texting and/or being online in class.

    My strategy was to print out copies of an unassigned shorter poem by an author covered in lecture. Then I’d hand it out at the beginning of class, and we’d spend the whole time walking through a close reading of that poem.

    It kept students engaged, since it was a collaborative process of building up an interpretation on the basis of observation, and anyone is capable of noticing patterns and features that can be fed into an interpretation. They all had something to contribute, and they’d help me to notice things I’d never registered before. It was great fun, honestly. (At least for me, but also, I think, for some of them.) I’d also like to think it helped in some small way to cultivate practices of attention, at least for a couple of hours a week.

    Unfortunately, you can’t perform the same exercise with a longer work that necessitates reading beforehand, but you can at least break out sections for the same purpose.

  • hilbert422 hours ago
    "When you read a book or a printed course packet, you turn real pages instead of scrolling, so you have a different, more direct, and (I think) more focused relationship with the words," Fadiman wrote.

    I concur completely with Fadiman's comment, as that has been my experience, even though I have been using computer screens and computers for many decades and am totally at ease with them for reading and composing documentation.

    Books and printed materials have physical presence and tactility about them that are missing from display screens. It is hard to explain but handling the physical object, pointing to paragraphs on printed pages, underlining text with a pencil and sticking postit notes into page margins adds an ergonomic factor that is more conducive to learning and understanding than when one interacts with screens (including those where one can write directly to the screen with a stylus).

    I have no doubt about this, as I've noticed over the years if I write down what I'm thinking with my hand onto paper I am more likely to understand and remember it better than when I'm typing it.

    It's as if typing doesn't provide as tight a coupling with my brain as writing by hand does. There is something about handwriting and the motional feedback from my fingers that gives me a closer and more intimate relationship with the text.

    That's not to say I don't use screens—I do but generally to write summaries after I've first worked out ideas on paper (this is especially relevant when mathematics is involved—I'm more cognitively involved when using pencil and paper).

  • 2b3a514 hours ago
    Quote from OA

    "TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option."

    And later in OA it states that the cost to a student is $0.12 per double-sided sheet of printing.

    In all of my teaching career here in the UK, the provision of handouts has been a central cost. Latterly I'd send a pdf file with instructions, and the resulting 200+ packs of 180 sides would be delivered on a trolley, printed, stapled, with covers. The cost was a rounding error compared to the cost of providing an hour of teaching in a classroom (wage costs, support staff costs, building costs including amortisation, &c).

    How is this happening?
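    Taking the article's own figures at face value, the $150 price only works out if the packet really is phonebook-sized; the arithmetic below is just that sanity check (my numbers derived from the article's quoted rate, nothing from TYCO itself):

```python
# Sanity-checking the article's figures: packets cost up to $150,
# yet the quoted rate is $0.12 per double-sided sheet.
price_per_sheet = 0.12                          # dollars; one sheet = 2 printed sides

sheets_for_150 = round(150 / price_per_sheet)   # sheets in a $150 packet
sides_for_150 = sheets_for_150 * 2              # printed sides

print(sheets_for_150)   # 1250
print(sides_for_150)    # 2500
```

    So a $150 packet implies roughly 1,250 sheets (2,500 printed sides) at the quoted rate, which points at sheer volume plus binding rather than an inflated per-page charge.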

  • anilakar5 hours ago
    At 150 eurobucks apiece, printed freshman coursebooks were prohibitively expensive in uni. We just pirated everything as a consequence.
    • Symbiote5 hours ago
      At my university in actual Europe, many copies of the required textbooks were available in the library. Printing was free.
    • Flavius5 hours ago
      That's the whole point. They don't care about students or education, they care about wasting resources and making a lot of money in the process.
      • mistrial95 hours ago
        Some do and some don't. The "outrage" button is appropriate for the first part (don't care about students; waste resources to increase profits), but destructive for the second (we do care about students; we use resources in the classroom). It is hard to discuss this important topic when things go to "yelling" immediately.

        > They don't care about students or education, they care about wasting resources and making a lot of money in the process.

  • bko5 hours ago
    Who is behind this over-digitization of primary school? My understanding is that in the US pretty much all homework and tests are done on computers or iPads.

    This obv isn’t a push by parents because I can’t imagine parents I know want their kids in front of a screen all day. At best they’re indifferent. My only guess is the teachers unions that don’t want teachers grading and creating lesson plans and all the other work they used to do.

    And since this trend began, kids' scores and performance have not gotten better, so what gives?

    Can anyone comment if it’s as bad as this and what’s behind it.

    • el_benhameen5 hours ago
      My kids are in elementary school in the SF area (although pretty far in the ‘burbs) and this is not my experience.

      The older one has a Chromebook and uses it for research and the production of larger written projects and presentations, the kind of things you'd expect. The younger one doesn't have any school-supplied device yet.

      Both kids have math exercises, language worksheets, short writing exercises, etc., all done on paper. This is the majority of homework.

      I’m fine with this system. I wish they’d spend a little more time teaching computer basics (I did a lot of touch-typing exercises in the 90’s; my older one doesn’t seem to have those kinds of lessons). But in general, there’s not too much homework, there’s a good emphasis on reading, and I appreciate that the older one is learning how to plan, research, and create projects using the tool he’ll use to do so in future schooling.

    • michaelt5 hours ago
      A few decades ago:

      * People needed to be taught digital skills that were in growing demand in the workplace.

      * The kids researching things online and word-processing their homework were doing well in class (because only upper-middle-class types could afford home PCs)

      * Some trials of digital learning produced good results. Teaching by the world's greatest teachers, exactly the pace every student needs, with continuous feedback and infinite patience.

      * Blocking distractions? How hard can that be?

      • nine_k5 hours ago
        Reading with AI summaries jumping into your eyes is like writing in a word processor that completes sentences and paragraphs for you.

        Writing with a word processor that just helps you type, format, and check spelling is great. Blocking distractions on a general-purpose computer (like a phone or a tablet) is hard; in practice it means handing out locked-down devices set up for the purpose, and banning personal ones.

  • dlcarrier5 hours ago
    In pretty much any school system, just complain that the printout is not compatible with your text-to-speech engine, and the instructor will be required to provide an electronic version, no questions asked.
    • berhunter4204 hours ago
      Or you can fold your tuition dollars into cranes and burn them as performance art.
      • dlcarrieran hour ago
        Isn't that the core of how universities operate, in the first place?

        Sure, you could get an education for cheap from a community college, or free from various online sources, or for the best education possible, get paid to learn on the job. If you attend a university though, you're getting prestige by showing how much of your money, or someone else's money, you can burn through.

        It's not like anyone's taking undergraduate classes at Harvard or Stanford because the teaching assistants actually doing the instructing are going to provide above-average instruction. The schools aren't even concerned with tenured professors' teaching performance; they put publishing metrics first.

      • jfengel4 hours ago
        Students have never understood the value of school work. It's a hard thing to understand. None of the assignments are given because the teacher wants to know the answer; they already know. So it all closely resembles busy work. AI is perfectly designed to do busy work.

        Students have always looked for ways to minimize the workload, and often the response has been to increase the load. In some cases it has effectively become a way to teach you to get away with cheating (a lesson that even has some real-world utility).

        Keeping students from wasting their tuition is an age-old, Sisyphean task for parents. School is wasted on the young. Unfortunately youth is also when your brain is most receptive to it.

  • azinman25 hours ago
    Computers have not advanced education — the data shows the opposite. I think we should just go back to physical books (which can be used!), and pen and paper for notes and assignments.
  • edge174 hours ago
    This is a bit off topic, but why are used books on AbeBooks, ThriftBooks, Amazon, etc. so expensive compared to book sales? I recall a time when a lot of these online stores were selling them for a few cents (granted, it was a long time ago and it was still called zShops on Amazon).
    • rr8084 hours ago
      Do you mean a few cents plus $5 shipping? I think they still exist but often results are ranked by total cost now which is clearer.
  • arnavpraneet5 hours ago
    I might be wrong but I fear this strategy might unfairly punish e-readers which imo offer the best of both worlds
    • sodality25 hours ago
      I've brought my Kindle to even the strictest no-technology lectures (with punishments like dropping a letter grade after one violation, and failing you after two), and they've never given me a problem when asked. They realize the issue isn't the silicon or lithium, it's the distractions it enables. I'm sure I could connect to some LLM on it; it's just that no one ever will.
    • mmahemoff4 hours ago
      I’ve tried many e-readers since early Kindle but I keep coming back to two fundamental problems with e-ink, both relevant to education.

      First, typing is extremely cumbersome and error-prone compared to swipe-typing on a soft keyboard. Even highlighting a few sentences can be problematic when the selection spans a page boundary.

      Second, navigation is also painful compared to a physical book. When reading non-fiction, it’s vital to be able to jump around quickly, backtrack, and cross-reference material. Amazon has done some good work on the UX for this, but nothing is as simple as flipping through a physical book.

      Android e-readers are better insofar as open to third-party software, but still have the same hardware shortcomings.

      My compromise has been to settle on medium-sized (~Kindle or iPad Mini size) tablets and treat them just as e-readers. (Similar to the “kale phone” concept, i.e. minimal software installed on it … no distractions.) They are much more responsive, hence fairly easy to navigate and type on.

    • PlatoIsADisease5 hours ago
      It's obvious they don't care.

      That said, I always thought exams should be the moment of truth.

      I had teachers that spoke broken english, but I'd do the homework and read the textbook in class. I learned many topics without the use of a teacher.

  • crazygringo5 hours ago
    > TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option.

    This made sense a couple of decades ago. Today, it's just bizarre to be spending $150 on a phonebook-sized packet of reading materials. So much paper and toner.

    This is what iPads and Kindles are for.

    • nine_k5 hours ago
      No, the cost of the paper, toner, and binding is the cost of providing a provably distraction-free environment.

      To make it more palpable for an IT worker: "It's just bizarre to give a developer a room with a door, so much sheetrock and wood! Working with computers is what open-plan offices are for."

      • crazygringo2 hours ago
        What kind of distraction are you getting on your Kindle...?

        Also, the university isn't covering the cost here. The students are. And buying the Kindle would be cheaper than the printing cost of the packet itself.

        So I stand by my point. If you don't want distraction, get Kindles.

        And even iPads are pretty good. They tend to sit flat so you're not "hiding" your screen the way you can with a laptop or phone, and people often aren't using messaging or social apps on them so there are no incoming distractions.

  • jrm43 hours ago
    College instructor here. One thing I'm seeing here that's kind of funny is how badly so many of you are misunderstanding the value of "friction."

    You see a policy, and your clever brains come up with a way to get around it, "proving" that the new methodology is not perfect and therefore not valuable.

    So wrong. Come on people, think about it: to an extent ALL WE DO is "friction." Any shift towards difficulty can be gamed, but nearly all of the time it also provides a valuable differentiator in terms of motivation, etc.

  • raincole5 hours ago
    If textbooks weren't so expensive I'd be cheering them on more.

    > TYCO Print is a printing service where professors can upload course files for TYCO to print out for students as they order. Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option.

    Lol $150 for reading packets? Not even textbooks? Seriously the whole system can fuck off.

  • kkfxan hour ago
    Nothing strange nor new: the average teacher is reactionary, even at top universities, and generally incapable of evolving, much like the stereotypical vegetable seller.

    We continue to teach children (at least in the EU) to write by hand, to do calculations manually throughout their entire schooling, when in real life, aside from the occasional scrap note, all writing is done on computers and calculations are done by machine as well. And, of course, no one teaches these latter skills.

    The result on a large scale is that we have an increasingly incompetent population on average, with teaching staff competing to see who can revert the most to the past and refusing to see that the more they do this, the worse the incompetent graduates they produce.

    The desktop computer running FLOSS is the quintessential epistemological tool of the present, just as paper was in the past. The world changes, and those who fall behind are selected out by history; come to terms with that. Worse still, those who lag behind ensure that only a few push evolution forward, and they do so in their own interest, which typically conflicts with that of the majority.

  • Mathnerd314 5 hours ago
    If you are flipping through the reading to find a quote, printed readings are hard to beat, unless you can search for a word digitally. But RSVP (rapid serial visual presentation) speed reading beats any kind of print reading by a mile if you are aiming for comprehension. So it is hard to say where the technology is going.

    Nobody has put in the work to make reading on an iPad as smooth and fluid as print in terms of rapid page flipping, but the potential is there. It is kind of laughable how the salesman will say, oh, it has a fast processor, and then you open up a PDF, scroll a few pages fast, and they show up blank instead of actually having text.
  • 6stringmerc 4 hours ago
    My thesis paper, about a Freshman Composition course that stressed writing fundamentals by way of quill, pencil, pen, and finally a typewriter, was written 20 YEARS AGO in response to the Spell Check and Auto Predict of the time...2006...

    This isn't my article nor do I know this Educator but I like her approach and actions taken:

    https://www.npr.org/2026/01/28/nx-s1-5631779/ai-schools-teac...

  • subhobroto 4 hours ago
    I have been thinking about this, and it seems like the fact that students want to do as little work as possible for course credit is actually an asset. They also love playing games of various sorts. So instead of killing trees, printing out pages of material, and having students pay substantial sums to the printing press just to put distance between students reading the material and ChatGPT, why not turn it around completely?

    1. Instead of putting up all sorts of barriers between students and ChatGPT, have students explicitly use ChatGPT to complete the homework

    2. Then compare the diversity in the ChatGPT output

    3. If the ChatGPT output is extremely similar, then the game is to critique that ChatGPT output, find out gaps in ChatGPT's work, insights it missed and what it could have done better

    4. If the ChatGPT output is diverse, how do we figure out which is better? What caused the diversity? Are all the outputs accurate or are there errors in some?

    Similarly, when it comes to coding, instead of worrying that ChatGPT can zero shot quicksort and memcpy perfectly, why not game it:

    1. Write some test cases that could make that specific implementation of `quicksort` or `memcpy` fail

    2. Could we design the input data such that quicksort hits its worst case runtime?

    3. Is there an algorithm that would sort faster than quicksort for that specific input?

    4. Could there be architectures where the assumptions that make quicksort "quick" fail to hold? Could something simpler and worse on paper, like a cache-aware sort, actually run faster than quicksort in practice?
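    Point 2 above is easy to demonstrate concretely. A minimal Python sketch, assuming a deliberately naive first-element-pivot quicksort (real library sorts choose pivots more carefully, so this is illustrative only): already-sorted input drives this variant to its quadratic worst case, which an instrumented comparison counter makes visible.

```python
import random

def quicksort(items):
    """Return (sorted copy, number of comparisons made).

    Naive first-element pivot: every partition of already-sorted
    input is maximally unbalanced, giving n*(n-1)/2 comparisons.
    """
    comparisons = 0

    def sort(xs):
        nonlocal comparisons
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        comparisons += len(rest)  # one comparison per element vs pivot
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return sort(left) + [pivot] + sort(right)

    return sort(list(items)), comparisons

n = 500
sorted_out, worst = quicksort(range(n))   # adversarial: sorted input

random.seed(0)
shuffled = list(range(n))
random.shuffle(shuffled)
_, typical = quicksort(shuffled)          # random input: ~n log n

print(worst, typical)  # worst is n*(n-1)//2 = 124750; typical is far smaller
```

    A student exercise in the spirit of the list above: given a quicksort with a median-of-three pivot instead, construct an input that still triggers the quadratic case.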

    I have multiple paragraphs more of thought on this topic but will leave it at this for now to calibrate if my thoughts are in the minority

  • jmclnx 5 hours ago
    While I fully agree with this, this quote bothers me:

    >Shorter packets can cost around $20, while longer packets can cost upwards of $150 when ordered with the cheapest binding option

    Does a student need to print out multiple TYCO packets? If so, only the very rich could afford this. I think education should go back to printed books and submitting your work to the Prof. on paper.

    Submitting printed pages to the Prof. for homework also avoids the school saying "Submit only Word documents". That way a student can use whatever method they prefer and avoid buying expensive software. One can use a simple free text editor if they want. Or even a typewriter :)

  • everybodyknows 5 hours ago
    > This semester, she is requiring all students to have printed options.

    What could it mean for an "option" to be "required"?
