slop noun
Cambridge Dictionary
- food that is more liquid than it should be and is therefore unpleasant
- liquid or wet food waste, especially when it is fed to animals
Oxford Learner’s Dictionary
- waste food, sometimes fed to animals
- liquid or partly liquid waste, for example urine or dirty water from baths
There is also a closely related term, “goyslop”, from the internet sewers (losers are always looking for someone to blame and hate [instead of themselves]).
AI slop
The term “AI slop” seems to have originated in the digital-creator (so to speak) communities, where software-generated images were labelled as such.
According to some digital creators, these images are “soulless”, glossy and ultimately inartistic (which is a separate question – they mimic common art forms very well, and sometimes strike an emotion, which is an operational definition of art).
Anyway, the meaning is clear and unambiguous, and the term itself is already a well-established part of the Internet culture.
The problem is that AI-generated texts are much worse. They dilute and even destroy the very notion that we call “knowledge”, which used to be the ultimate power.
The fundamental problem is that any LLM or an “AI assistant” is ultimately an impostor by the very definition of this word:
impostor noun
- a person who pretends to be someone else in order to deceive others, especially for fraudulent gain.
This is exactly what any “reasoning LLM” or any other model is. Not because “I think so” (because I am smart), but precisely due to the actual [inference] algorithms and data structures being used.
There should be a reference to the famous Dijkstra paper – “how do we know that the adder will work for all possible inputs?”. The only way to know is to understand how the adder has been built, and that, by design, it works the same for all numbers [as long as the representation is correct].
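Dijkstra’s point can be made concrete with a toy sketch (Python here, purely illustrative, not anything he wrote): a ripple-carry adder built from one-bit full adders. Its correctness for all inputs follows from the construction itself – induction over bit positions – not from testing every case; the exhaustive check at the end is just a sanity check for a small width.

```python
def full_adder(a, b, carry_in):
    # One-bit full adder: the textbook boolean equations.
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def ripple_add(x, y, width=8):
    # Chain full adders from the least significant bit upward.
    # Correct by construction for ALL inputs (modulo 2**width),
    # because each stage provably handles its bit and carry.
    carry = 0
    result = 0
    for i in range(width):
        a = (x >> i) & 1
        b = (y >> i) & 1
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result

# Sanity check for a tiny width -- the understanding, not this test,
# is what guarantees correctness for wider adders.
assert all(ripple_add(x, y, 4) == (x + y) % 16
           for x in range(16) for y in range(16))
```

No such argument from construction exists for an LLM’s outputs, which is exactly the asymmetry being described here.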
- We know that, by design, any LLM is, in principle, an impostor.
- This is the only (in principle) way to know [not to believe or guess].
The closely related Dijkstraesque question is “How do we know that the AI slop which an LLM spews out is correct?”.
It isn’t. It only looks convincing and coherent (because it has been carefully engineered to look that way).
Yes, idiots will send (and are already sending) pages and pages of such AI slop to each other as an act of virtue signalling and to justify (and reinforce to themselves) their assumed and self-proclaimed “intelligence”, while demanding a [“deserved”] position of an “elite knowledge worker” (hi, Cal) among mere mortals. Yes, this is what it is.
This shift to AI slop is a sort of outsourcing of understanding to an LLM, obtaining the instant “gratification” of possessing “knowledge” (that other people [presumably] lack) which is not actually your own.
Paradoxically, developing a proper understanding – tracing everything back to, and then rebuilding it bottom-up from, first principles (despite the fact that we forget, and our cognitive abilities decline sharply with age) – is already the most valuable skill.
And yes, to spot subtle bullshit in seemingly convincing and coherent AI slop, one has to be a true expert (being a mere impostor won’t do) with exactly this kind of infallible, bottom-up-built knowledge (which is, still, power).