Okay, let’s look at the “better side” of things.

The good thing about using LLMs is that you do not have to deal with Google Search or any fucking social media.

Imagine a painfully typical scenario – you want to clarify or better understand something you already vaguely knew, or were at least aware of. You type a query into Google Search, and you get… a fucked-up, SEO-gamed list of ad-infested links – either to the largest social media containment boards (StackOverflow, Reddit, Medium) or to SEO’d blogs, where you are greeted either with a wall of text (usually pasted straight from tutorials and docs), ads, pop-ups, and other distractions, or with some narcissistic asshole’s low-effort, over-verbose crappy verbiage about “how fucking smart he is”.

This is only the tip of the iceberg (a stupid fucking cliche, I know). When you land on some HN or Plebdit page, what exactly happens? You get a wall of text, full of “insights” from various random assholes who are either clueless or just want to show off their self-proclaimed “expertise”. Most of the content amounts to “look, ma, I know this and that”; in general it is about “I” and “me” and “my experience”.

Recall that focused attention is a limited resource (just like physical endurance). If you spend it on reading all that emotionally charged crap, you have almost none of it left for actually concentrating on and thinking about the subject you are trying to learn. This is not just a “theory” – emotionally salient content is well known to capture attention ahead of anything neutral. When you read all that self-centered bullshit, your brain gets hijacked by the emotional content, and you end up focusing on the wrong things. And the worst part is that the dopamine/cortisol rollercoaster you get from reading all that crap leaves you mentally exhausted and drained of willpower and motivation, and those levels have to return to their baselines before you can feel (and actually be) motivated and focused again.

With LLMs, hopefully you can avoid all that crap. You can just ask a question, and get a concise, focused answer (slop), without any distractions. You can get the information you need, without having to wade through all the bullshit. At least, this is the theory.

The best part of all this is that one can refine the queries by feeding back to the model some of its own formulations, terminology, and even code snippets, thus progressively refining (up to some theoretical “fixed point”) the quality of the slop being generated. This is something that is simply not possible with traditional search engines.
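To make that loop concrete, here is a minimal sketch in Python of what such progressive refinement might look like. `ask_llm` is a hypothetical stand-in for whatever chat API you happen to use, and the “fixed point” check is just a naive stopping heuristic – nothing the providers actually expose.

```python
# Minimal sketch of the "progressive refinement" loop described above.
# `ask_llm` is a hypothetical stand-in for any chat-completion client;
# plug in whichever API you actually use.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire up your LLM client of choice here")

def refine(question: str, max_rounds: int = 5) -> str:
    """Feed the model's own formulations and snippets back to it
    until the answer stops changing (a crude 'fixed point')."""
    answer = ask_llm(question)
    for _ in range(max_rounds):
        follow_up = (
            "Using your own terminology and code from the answer below, "
            "restate it more precisely and more minimally:\n\n" + answer
        )
        refined = ask_llm(follow_up)
        if refined.strip() == answer.strip():  # nothing new came back - stop
            break
        answer = refined
    return answer
```

In practice you would, of course, steer it by hand instead of looping blindly, but the mechanism is the same: the model’s own terminology becomes the next query.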

At a “meta-level”, all the LLM providers, of course, collect all users’ queries and interactions and feed them back as new training data – literally feeding the model its own slop – thus “improving” the models over time.

This, by the way, is the answer to the common question “how exactly do LLMs improve over time?”, given that content which pairs high-quality text with closely related high-quality code is very rare and scarce – just a few good old books by distinguished authors here and there, no more than 50 or 60 in total. Recall that LLMs operate at the level of “tokens” (mere “morphology” and syntax, without any “semantics” or “understanding” of any kind), and the amount of high-quality token streams (in a proper order, so that the “distances” are “short”) is very limited.

By feeding back the user queries and the resulting slop, which is already a mix of [somehow] related text and code, the LLM providers can “amplify” the amount of “high-quality” token streams, thus improving the models over time, while charging you for every token.
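Nobody outside the providers knows what the actual pipelines look like, so treat the following purely as an illustration of the claimed amplification loop; every name in it is made up.

```python
# Illustrative sketch of the claimed "slop amplification" loop.
# All names here are hypothetical; real provider pipelines are not public.

from dataclasses import dataclass

@dataclass
class Interaction:
    query: str      # what the user asked
    response: str   # the slop the model produced

def looks_high_quality(item: Interaction) -> bool:
    """Stand-in for whatever filtering a provider might apply
    (user accepted the answer, the code compiled, a thumbs-up, ...)."""
    return bool(item.response.strip())  # trivially permissive placeholder

def harvest(logs: list[Interaction]) -> list[str]:
    """Turn accepted query/response pairs back into training text,
    i.e. feed the model its own (filtered) output."""
    return [it.query + "\n" + it.response for it in logs if looks_high_quality(it)]
```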

And the results are absolutely amazing, at least for the top-tier models like Gemini or Grok or Claude. When over-constrained by sophisticated prompt-“engineering” techniques (spelling out which principles and techniques of non-bullshit computer science from the last 60 years or so they are required to apply), they can spew out some really good slop, often indistinguishable from the best FP textbook code examples – the kind a distinguished author would craft with principle-guided (mathematical) understanding, careful attention to detail, and minimal representations.
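For what it’s worth, the “over-constraining” mentioned above is nothing exotic – it is just a pile of explicit, classical rules in the system prompt. A made-up toy example:

```python
# A toy example of the kind of over-constraining "system prompt" meant above.
# The wording is made up; the point is piling on explicit, classical rules
# so the model has little room to produce anything but minimal, principled code.

SYSTEM_PROMPT = """
You are writing code as it appears in a careful FP textbook.
Rules:
  - total functions only; no partial pattern matches
  - model the domain with precise algebraic data types first
  - derive each definition from its type, stepwise and minimally
  - no stateful hacks, no stringly-typed data, no cleverness
"""
```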

The only question is – now what? Who will pay me for doing it?

It is easy to imagine that all this will quickly turn into a much steeper “race to the bottom”, where for $20/hour or even less you will be asked to turn out, by the end of the day, an amount of “high-quality” slop that originally took weeks or months to produce with full understanding – and years of studying and practice before that – a trick which is actually possible nowadays. An experienced “Senior Software Engineer” can literally bootstrap (or at least prototype) a complex system (more realistically – a distinct layer of it) in a single day, and then polish the whole thing in a week or two.

I am not completely sure that I really want to participate in such a race, after everything I have seen (I’ve seen things you people wouldn’t believe).