The purpose of examination in a college or university setting is to empirically verify that a student has his own genuine understanding and can actually apply it. It is that simple.
This is why some courses even allow students to bring a textbook to an exam: if one does not already understand the principles and techniques, the textbook is of no use.
However, bringing pocket calculators or smartphones is considered cheating, precisely because by using these devices one could appear to possess one’s own understanding.
This is exactly why using GPTs feels like cheating: it actually is. It is even cheating squared, because neither you nor the LLM actually possesses any genuine knowledge.
Neither has genuine understanding of its own, built up from first principles like a rigorous mathematical proof or a logical deduction; each somehow comes up with a mere appearance of it.
And this is basically it. Any educator worth the name would confirm this simple thesis.
When people write on social media that they “felt left behind” and “overwhelmed”, and that “it is immoral to show GPT’s output to people”, this is exactly why. Because it is cheating.
Again, it is not just like bringing a pocket calculator to a high-school arithmetic test and then mogging all one’s classmates, except that this particular calculator is inexact most of the time and occasionally gives a wrong answer.
It is, literally, producing a very confident, apparently knowledgeable stream of pretentious and self-assertive verbiage which, after careful examination, turns out to be vague hand-waving based on lousy thinking, full of nonsense, contradictions and subtle bullshit.
Or code that neither compiles, nor runs, nor passes all the tests (which, by the way, would not be enough anyway – the crucial property of good code is a one-to-one correspondence with the conceptual hierarchy of the inherent complexity of the problem domain, with its structure properly captured as a set of layers of abstractions).
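To make that property concrete, here is a deliberately tiny, hypothetical sketch (not from the original text): an evaluator for arithmetic expressions in which each function corresponds one-to-one to a conceptual layer of the problem domain, and each layer is built only on the one beneath it.

```python
# A toy illustration of layered abstractions: the problem domain
# (arithmetic expressions) naturally has three conceptual layers,
# and the code mirrors them one-to-one.

# Layer 1: lexical structure (characters -> tokens)
def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

# Layer 2: syntactic structure (tokens -> nested lists)
def parse(tokens):
    token = tokens.pop(0)
    if token == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return node
    return token

# Layer 3: semantics (syntax tree -> value)
def evaluate(node):
    if isinstance(node, list):
        op = node[0]
        left, right = evaluate(node[1]), evaluate(node[2])
        return left + right if op == "+" else left * right
    return int(node)

# Each call is justified purely in terms of the layer beneath it.
print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # 7
```

The point is not the fifteen lines themselves but the shape: if the domain’s structure changed (say, a new layer of operator precedence), exactly one layer of code would have to change with it.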
This is why the task of writing an essay (which is to express one’s own genuine understanding of a subject) is completely ruined by generative AI. Neither the student nor the LLM has any understanding whatsoever.
There is no other way but a bottom-up build-up from first principles, where each step is justified (and verified) by the previous results, like a rigorous mathematical proof. Neither you nor an LLM has such a capacity.
It is that simple.