We already had AI-proof education.
It's a shame that they are also way more susceptible to cheating with AI.
Assignments and projects are great for learning, but suck for evaluation.
Another example: lit classes where the grade is based on time-limited, open-book exams, handwritten in "blue books."
Read the book, pay attention in class, spend 90 min writing an essay, and you are done.
The other thing that feedback feeds into is credentials. I realize that some people are dismissive of this aspect of the degree, but it is important to pursue further studies or secure a job. While you can argue that these people are only cheating themselves, and some of them are cheating themselves, a great many will continue to cheat as they advance in academia or the workforce. In other words, they are cheating others out of opportunities.
However I suspect that there are many who 1) are more concerned about the short term outcome, 2) consider the degree/diploma to be little more than a meal ticket or arbitrary gatekeeping without any connection to learning, 3) view the work as a pointless barrier to being handed said diploma, and/or 4) don't see the value of human learning in a world where jobs are done by AI and AI systems routinely outperform humans on complex tasks.
In a different one she just said that as long as you disclose that AI was used, you're fine to use it.
In the rest of them AI is considered cheating.
To say we have discrepancies in the rules is an understatement. No one seems to have the exact answer on how to do it. I personally feel like expecting Ph.D-level work is the best method as of now; I've learned more by using AI to do things over my head than from hardcore studying for a semester.
How do you know you actually learned, instead of being fed slop by the AI that isn't true at all? If you didn't study, then I doubt you'll really know whether the AI is lying to you. I have to wonder if your teacher will either; it sounds like they have kind of checked out from actually teaching.
My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
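The idea can be sketched as event sourcing: instead of storing the text, you store the edits and rebuild any intermediate state by replaying them. This is a minimal toy illustration, not Google's actual revision format (which is more complex); the `Edit` structure here is a made-up stand-in.

```python
# Toy sketch of a document stored as an event log of edits
# (hypothetical format -- Google's real revision protocol differs).
from dataclasses import dataclass

@dataclass
class Edit:
    op: str    # "insert" or "delete"
    pos: int   # character offset the edit applies at
    text: str  # text inserted, or the text being deleted

def replay(events, upto=None):
    """Rebuild the document by applying the first `upto` events in order."""
    doc = ""
    for e in events[:upto]:
        if e.op == "insert":
            doc = doc[:e.pos] + e.text + doc[e.pos:]
        elif e.op == "delete":
            doc = doc[:e.pos] + doc[e.pos + len(e.text):]
    return doc

log = [
    Edit("insert", 0, "Helo world"),
    Edit("delete", 3, "o"),   # backspace over the typo...
    Edit("insert", 3, "lo"),  # ...and retype it
]

# Stepping through intermediate states is the "playback": real writing
# shows many small edits, while a pasted-in essay appears as one giant insert.
for i in range(len(log) + 1):
    print(replay(log, i))
```

The tell for a grader is exactly that shape difference: hundreds of tiny inserts and deletes versus a single bulk paste.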
I only mention this because given the AIs, I'm sure even with a typewriter, it's more efficient to have the AI do the work, and then just "type it in" to the typewriter, which kind of invalidates the entire purpose of it in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in an IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!
Another way to automate this particular task: some typewriters have (serial/parallel) ports to connect to a computer. It's not a daunting task at all for a student skilled in the art of using the bot to make one of these typewriters the output target.
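A minimal sketch of that output path, pacing characters so a mechanical carriage could keep up. The device path `/dev/ttyUSB0` and the characters-per-second figure are assumptions for illustration; a real typewriter's manual would dictate the port settings.

```python
# Sketch: paced character-by-character output, as you might send to a
# typewriter's serial port. Device path and speed below are assumptions.
import io
import time

def type_out(text, port, cps=5.0, pace=True):
    """Write text one character at a time, roughly `cps` chars/second,
    so a mechanical typewriter's carriage can keep up."""
    delay = 1.0 / cps
    for ch in text:
        port.write(ch.encode("ascii", errors="replace"))
        port.flush()
        if pace:
            time.sleep(delay)

# Real use might look like:
#   type_out(essay, open("/dev/ttyUSB0", "wb", buffering=0))
# For illustration, write into an in-memory buffer instead:
buf = io.BytesIO()
type_out("Dear Professor,\n", buf, pace=False)
```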
Like this: https://chatgpt.com/share/69e405db-1b44-83ea-baf3-6af41fe577...
However, they didn’t remove the embedded revision history in the .docx file they submitted, so that went about as well as you can expect.
Oh look, there's an LLM trained on keylogger data to spew slop at your personally predicted error rate; bonus if it identifies over USB as a keyboard.
In some of the later Loebner competitions, when text was transmitted to the human judge character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.
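The trick above is easy to sketch: emit the message as a character stream, occasionally "fat-fingering" a wrong key and immediately backspacing over it. This is an illustrative toy, not any actual Loebner entrant's code; the error rate and key choice are made up.

```python
# Sketch: a character stream with simulated typos and corrections,
# as a watcher seeing character-by-character transmission would observe.
import random

BACKSPACE = "\b"

def humanize(text, error_rate=0.05, rng=None):
    """Return the character stream a watcher would see, typos included."""
    rng = rng or random.Random()
    out = []
    for ch in text:
        if ch.isalpha() and rng.random() < error_rate:
            out.append(rng.choice("abcdefghijklmnopqrstuvwxyz"))  # wrong key
            out.append(BACKSPACE)                                  # backspace over it
        out.append(ch)  # then type the intended character
    return "".join(out)

def render(stream):
    """Apply backspaces to recover what ends up on screen."""
    shown = []
    for ch in stream:
        if ch == BACKSPACE:
            if shown:
                shown.pop()
        else:
            shown.append(ch)
    return "".join(shown)

stream = humanize("hello judge", error_rate=0.3, rng=random.Random(42))
```

Since every fake typo is immediately corrected, `render(stream)` always equals the intended text; only the live playback looks human.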
Participants spent more time polishing up the natural language parsing aspects in conjunction with pre-programming elaborate backstories for their chatbot's bios, among other psychological tricks. In the end, the whole competition was more impressive as a social engineering exercise, since the real goal kinda became: how can I trick people into thinking my chatbot is a human?
But the chatbot transcripts from previous competitions still make for fascinating reading.
Isn't that really what all these AI companies are doing too? It sure seems like it is.
I also use low-point bonus questions to test general knowledge (huge variation on subjects I thought everyone knew).
It's a shame that we humans find ways to cheat ourselves out of things that benefit us by "optimizing" for the wrong things.
Maybe the medical profession is a counter example.
I’d argue that dealing with any high criticality operational incident is like an in person exam (maybe even the most difficult kind, the open book one) if you are the one responsible for fixing it. Everyone is looking at you, you have time pressure to solve it ASAP and you can’t afford the time to dig through all the docs on the spot. So there’s at least some similarity with some real life situations.
I now do 50% project work, 50% in-person quizzes: pencil on paper, one page of notes allowed.
I'm increasingly going to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in university might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-ai-prompting in the future.
We'll see.
The reason was less for myself and more that anything group-related suddenly shot up in quality once the individual work classmates were graded on couldn't be fudged.
* It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
* It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources. This puts a pretty low ceiling on the level of complexity you can actually throw at them.
Whether it's good or bad I don't know, I think US higher education focuses too much on ability to produce huge amounts of mediocre work, but that's the idea behind exams.
That's probably a good thing to filter on for, say, the navigation role on all kinds of crafts (from land to sea to space). There are naval roles where navigating with a sextant and memory is an important skill to have, and to test for.
But that operating-in-a-vacuum skill doesn't relate well to roles that don't need to exist in a vacuum. In most of the jobs in the real world, we get to use tools -- and when the tools go out to lunch, we don't revert to the Old Ways.
When an accountant's computer dies, they don't transition back to written arithmetic and paper ledgers. Instead, someone who fixes computers gets it going again, and they get back to work as soon as that's done.
The point is more about whether the graded work is actively reviewed than about which individual format is ideal, though. Electronic or written, remote or in person, weighted toward exams vs. continuous assessment: these are all debates orthogonal to the problem of cheating/falsely claiming work.
I had attended a few courses over a decade ago and just completed a degree recently. The methods of cheating have changed, but not because of pencils vs keyboards.
I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
What a narrow set of skills to send into your economy.
Context helps immensely, for example. Think of what you can do that someone outside tech can't.
What is the "it" that AI does for you?
This is assuming you know how to get good work out of AI in the first place. But even that is turning out to be a skill in and of itself.
We wrote assignments by hand using a pencil or pen.
Is that really complicated?
When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
Not sure anyone even attempted to cheat in that scenario. And the conversations were usually great, although very stressful for us cramming types.
If you don’t pass after 3 tries, commission is mandatory.
You also have a paper trail of written exams and midterms to back you up. If you keep getting good grades and failing the oral, people will find that obviously suspicious.
Honestly the only times I had any trouble in the orals were the exams where I baaaaarely passed the written. Usually oral feels like the chill easy part compared to written because you can have a back-n-forth with the professor.
Still concerning from a statistical/psych fairness aspect.
There's a famous example of the Boston Symphony trying to fairly judge unseen applicants in 1952, and their results kept getting gender-skewed until they adjusted for the fact judges were reacting to the sound of shoes (e.g. high heels) when the candidate moved around behind the divider.
If you don't get one job you should have - there are others - it's unfortunate but not life altering.
If 3 years into your marine biology program a professor who always teaches a mandatory course fails you because you're a woman who wears non-traditional dress, you're not graduating and now there are no jobs. (And this is an example that actually happened to someone I know -- not in a western country.)
If you're not interested in learning the course content, then what are you doing there? Pretty expensive waste of time.
I very fondly recall many of the courses I did at university. The exams were a helpful motivating factor even for the interesting courses.
One of my best college professors would review such essays in-person, one-on-one twice each semester.
Former (second-generation) college professor, here. I find it almost impossible to be cynical enough about the US education industry.
This statement is more defensible after removing “only”. If it “only” hurt the cheaters, there would be no need to police cheating at all.
And they'll do it with all the 'unnecessarily high stakes' and 'risk of unconscious bias' and 'not truly representative' problems that written exams have; and a bunch of extra problems too.
Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there.
Gyms aren't redundant because tractors exist.
We're doing these students a major disservice making them live in the old world. It's our fault for being inflexible, but their world is going to be wholly different and we should just embrace that.
LLMs are also making a public repo code portfolio much less meaningful as a sign of legitimacy.
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
My mentor, a PhD in classics, told me it was never about outcomes and only about improvement. I suppose that answers my question. If your AI gets you an A at the start of the course and an A at the end, then, in the sense that you have not succeeded over anything, you have failed.
Testing and instruction should be modified to account for AI. If a student uses an agentic AI for work, learning, or research, then when test time comes, the student should be required to stand at the front of the class and teach what they have learned, i.e. "teach back" the material to the whole class and the instructor. The entire class, instructor included, would then participate in a Q&A session to make sure the student's learning is not just memorization, e.g. by having them restate the information in different words, apply it to different scenarios, etc.
It reminds me of a family friend who's a bit older and did their scuba certification using dive tables, whereas when I did my PADI, I was able to use a dive computer.
Oh
You just said that it was a waste of time. So was it or not?
> that option is also trivially available outside of college, it's called “email”.
How many experts have you cold emailed over the years and how much of their time have you taken?