This is how things work: you, being a “literally who?” nobody, wrote roughly 13 pages’ worth of articles about exactly how and why the current incarnation of AI is only an appearance of “intelligence” and a textbook cognitive illusion, and then some fuck wrote a single blog post and got the whole fucking HN front page and whatnot. Life is harsh; the competition for being [recognized as] “smart” is cut-throat, you know.
https://nooneshappy.com/article/appearing-productive-in-the-workplace/
Well, his main thesis is “Oh look, I have some prior experience and studied the subject, and now someone is just trying to vibecode me (ME!) out”. This is only the beginning, lmao.
Ask yourself: what exactly does it amount to – to “study” and to “know” a subject? We all know the “10,000 hours of deliberate practice” meme, which came from the realm of mastering an instrument, ballet, applied math and Thai boxing, but there are other ways too.
I have “studied” in every possible way, from boring university courses (they were teaching C instead of Fortran for the first time, and I read K&R right along with them). I have read tons of classic CS books, like those of Barbara Liskov or Richard Bird, I have watched half of early Coursera and MIT OpenCourseWare, and a fucking shitton of crappy narcissistic YouTube tutorials (just a few of them were any good), and so on.
And I have read some low-IQ crap on /g/. A lot of it.
Here is the catch. Instead of watching a narcissistic, “very smug” tutorial which explains basic CS concepts in some modern context (let’s say it is Rust), extracting 3 minutes of worthy content from an hour of crap, instead of parsing through Stack Overflow and, god forbid, HN, you can learn from a decent LLM, provided you know what to prompt for. (Asking the right question is half of the correct answer, you know.)
And, of course, you can extract these little “pieces of advice” and “programming tricks” into a single, coherent prompt or a “skill”. It is not that simple, however – you have to already know half of the answer so that it will autocomplete the other half more-or-less correctly.
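What does such a “skill” amount to in practice? Nothing mystical: you fold the hard-won tricks into one reusable prompt and bolt your pointed question on the end. A minimal sketch – every name and every snippet of advice below is hypothetical, just to show the shape:

```python
# Hypothetical collected "tricks" -- the half of the answer you already know.
TRICKS = [
    "Prefer iterators over index loops; they compose and avoid off-by-one bugs.",
    "Reach for Result/Option combinators before writing an explicit match.",
    "Let the borrow checker drive the design; clone only as a last resort.",
]

def build_skill_prompt(question: str, tricks: list[str] = TRICKS) -> str:
    """Fold the collected tricks into a single reusable prompt,
    then append the question you already half-know how to ask."""
    rules = "\n".join(f"- {t}" for t in tricks)
    return (
        "You are reviewing Rust code. Apply these house rules:\n"
        f"{rules}\n\n"
        f"Question: {question}"
    )

prompt = build_skill_prompt(
    "Is this iterator chain clearer than the for loop it replaced?"
)
print(prompt)
```

The point is only the structure: the accumulated rules do the filtering that an hour of YouTube would not, and the question supplies the half you must already know.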
No, it won’t be the way Trinity learned to fly a chopper; it is actually the opposite – a good LLM, like Gemini-3-pro, is like a hose aimed at your brain. The content is so dense and intense that the brain gets overwhelmed in minutes, trying to see the whole picture and to reconcile what it spouts out with what you already know.
This is the most crucial activity in the LLM feedback loop – reconciling what it spouts out with what you already know. The most difficult and overwhelming challenge is figuring out which part of your map of the territory needs to be updated, and why.
But once you get used to this, there are no limits. Instead of endlessly digging for rare tiny grains of knowledge in vast bare fields of sun-dried manure (social media), one can just prompt out the right answer by asking the right question.
I do not have to be you and “study the field” (I have already studied several). Now I can just bend the slop generator my way and outperform you.
There is much more to it than the mere appearance of being “productive”. Yes, it is, and will be, systematically and grossly misused most of the time, but it is also possible to do the good old things in new ways.
The only thing the blog post got right is that one absolutely should not try to “design” something in a field one is completely ignorant of. But this is kind of obvious, because it boils down to the fundamental question any intelligent person asks all the time – “how do I know this isn’t total bullshit?” – or (a much more difficult one) – which part of this verbiage is subtle bullshit? Without these two being second nature, you cannot even begin to see things as they really are.