So, it seems this is the time to somehow sum up the current AI hype (way through the roof) and its immediate and long-term consequences.
First of all, a proper education – studying the fundamental underlying principles instead of particulars, which used to be the unofficial mantra of MIT – pays off again.
One just sets particular constraints for a coding LLM and uses it as a whole-data-center-powerful constraint-satisfaction engine that spews out slop, which can then be used for rapid prototyping and minimum viable products. Properly constrained slop can even serve as the basis of a project, which then undergoes proper continuous improvement and refinement by a human expert (who knows the whys).
The key skill is knowing what to ask. This is what proper, non-bullshit CS and PL theory is for. One has to be very specific, asking for particular common idioms, design patterns and proper abstract interfaces, forcing the generator to implicitly (through your specific requirements) “apply” all the fundamental principles discovered over the last 70 years of CS research (Michael Jackson and Barbara Liskov, mostly).
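To make “knowing what to ask” concrete, here is a minimal sketch (in Java; every name in it is hypothetical, invented purely for illustration) of the kind of constraint one hands to a generator: a small, precise abstract interface whose behavioral contract is written down by the human first, so that any generated implementation is forced into Liskov-style substitutability instead of improvising.

```java
// A hypothetical constraint handed to a generator, written by the human first;
// the interface and its contract are fixed, only the implementation is generated.
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class ConstraintDemo {

    /**
     * The "what to ask", stated up front:
     *  - get(k) after put(k, v) yields Optional.of(v)   (read-your-writes);
     *  - get(k) for an absent key yields Optional.empty(), never null;
     *  - implementations are interchangeable (Liskov): callers may rely
     *    only on this contract, never on implementation details.
     */
    interface KeyValueStore<K, V> {
        void put(K key, V value);
        Optional<V> get(K key);
    }

    // The part one would let the generator produce, boxed in by the contract.
    static final class InMemoryStore<K, V> implements KeyValueStore<K, V> {
        private final Map<K, V> map = new HashMap<>();
        public void put(K key, V value) { map.put(key, value); }
        public Optional<V> get(K key)   { return Optional.ofNullable(map.get(key)); }
    }

    public static void main(String[] args) {
        KeyValueStore<String, Integer> store = new InMemoryStore<>();
        store.put("answer", 42);
        System.out.println(store.get("answer"));  // Optional[42]
        System.out.println(store.get("missing")); // Optional.empty
    }
}
```

Asking for “an implementation of KeyValueStore that preserves the stated contract” narrows the generator’s search space enormously compared to “write me a cache” – that is the whole trick.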
Otherwise, if you do not care and just want to get shit done – which is what 99% of coders do (do not confuse them with programmers, given that proper programming is applied mathematics) – then you are lucky. You are getting some 90% cost reduction and god knows how many orders of magnitude of “productivity” – the dream of being “productive” without [wasting time on] understanding of any kind is at the core of so-called creativity and so-called creative people.
The best use case is “filling up the gaps” in crappy, verbose, convoluted, stateful imperative OO APIs, such as fucking J2EE, Android, everything from MS, webshit in general, especially the abomination called React, and so on. This is, indeed, just coding, not programming (which begins with understanding at many levels), and it is exactly what the data-center-sized LLMs (which charge you per token and use your prompts and results for free to improve their training data) are already good at.
The convoluted, verbose shit designed by brain-dead people, which people of refinement and good taste wouldn’t touch even for money – things like pre-Kotlin Android apps or framework-based webshit – can now be generated in minutes instead of months (back when at least some understanding of this crap was required, together with reading through a ton of irrelevant technical and implementation details).
Again, filling in the parameters of API calls without understanding is what LLMs really excel at, and the “productivity gains” are of orders of magnitude. Those who are already in a position to sell such crap (contractor coding, remote coders and whatnot) have had their once-in-a-lifetime moment – a literal free lunch.
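To be concrete about what “filling in the parameters” means, here is a representative specimen – a sketch against the standard java.net.HttpURLConnection API, with nothing project-specific in it. Every line is rote configuration of a stateful object, and not a single one involves a design decision; this is precisely the stratum of code a token predictor reproduces flawlessly.

```java
// Rote, stateful API plumbing: the kind of parameter-filling LLMs excel at.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FetchExample {
    // Fetch a URL body; every line below just fills in a parameter of a
    // mutable connection object -- no design decisions anywhere.
    public static String fetch(String address) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(address).openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(5_000);
        conn.setReadTimeout(5_000);
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) body.append(line).append('\n');
            return body.toString();
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetch(args[0]));
    }
}
```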
For the rest of us, this is an “adapt quickly or go extinct” moment. The careful, slow-paced, and thus very expensive craft, based on proper understanding and disciplined attention to detail (just like in any art or craftsmanship), will quickly die off and become a niche hobby activity, just like most of the real crafts of the past that were replaced by mass production of cheap slop. This is just basic economics.
Are you sure that it will be you who survives producing, let’s say, katanas or anything custom? There are now hundreds of millions of people with [mostly crappy] CS degrees, and even more half-assed “cowboy coders” (who swarm /g/ and HN), who can produce crappy code fast and cheap – but, of course, not as cheap, let alone as fast, as the top-tier LLMs already do. So there will be a large socioeconomic “disruption” and “shift”, comparable, perhaps, only to what happened to the people whose occupations were centered around horses, in the 1920s and 1930s.
None of these considerations are hypothetical or just an opinion. At least 5 major, mega-corp-backed LLMs are already here, and the demoralization, demotivation and apathy they cause are very real. One has no choice but to ask what is to be done and how to adapt in order to survive.
There are, at least, some hypothetical and subtle benefits too. The lowest-quality imperative crap which “just works” will probably get washed out, even from GitHub, since modern LLMs will spew out more-or-less “standardized” (a sort of “common sense”) code after many, many iterations of feedback loops into the training data. Then again, this will probably never happen, since the training data is GitHub itself, not the ML or Haskell (or Scala 3) standard libraries, nor Google’s monorepo – the code which has been really crafted and continuously refined by a lot of actually qualified people.
By the way, there is a real empirical test for the LLM technology: a Google model trained on its monorepo and all the accompanying verbiage should, at least in theory – if their engineering practices are worth a dime (and they are) – outperform everything in existence in the context of C++ (only) coding, since what Google does is the world’s most refined software engineering process out there, coping with scale and with an in-principle crappy language (due to really stupid, imperative-and-then-OO “design” decisions frozen by the promise of backward compatibility).
Training a model on Google’s monorepo should be the killer app for the LLM tech. They have done this already, of course, but something seems to have gone wrong (which is understandable, because C++ is indeed a fucking abomination, and dealing with it at the level of mere syntax is nearly impossible).
Anyway, this is (more or less) where we are. Welcome to the Brave New World of cheap and crappy (but who cares?) automation and dramatic cost-cutting via coding-slop generation at scale. Enjoy the ride.