There is something that has to be realized as soon as possible (and which I recently experienced directly): the conditional-probability-based “glorified autocomplete” can generate small pieces of code (with intelligent prompting) that are simply above, in intrinsic quality, 90% of all the related crap that can be found on GitHub – which easily extrapolates to 90% of all the people in existence who identify themselves as “programmers”. Period.
The code will be better than what high-dopamine, “laser-focused” (on Adderall or just amphetamines), but not so well-educated in the fundamentals, zealots could produce. Better than what academics inexperienced in the realities of actual open-source software ecosystems could even hope to write, no matter how good their theoretical understanding is. With very specific and properly constrained prompting, which takes into account the major results of the last 50 or 60 years of PL and CS research, especially in the field of math-based Functional Programming and Immutable Data Structures, one can get almost optimal (approaching perfection) small pieces of code – provided you understand what theoretically-limited perfection actually looks like in this particular case, and why.
Code that just uses well-understood and familiar mathematical concepts is the obvious “case study”. It can spew out stuff that is on par with the Haskell, OCaml/SML, or Scala 3 standard libraries, which took decades to evolve from serious research into abstract algebraic structures and non-bullshit, semantics-based, FP-centered PL theory, along with years and years of trial and error on the implementation side. (No webshit or async crap, of course – the garbage-in, garbage-out principle shines like the moon.)
This fact renders obsolete the subpar “programmers” – the ones who lack the necessary deep understanding of what they are doing and why (an understanding which goes all the way back, via CS and its underlying mathematics, via the fundamental abstractions, via universal recurring patterns, back to What Is) – who [correctly] call themselves [mere] “coders”. They are already as obsolete as the “computers” of the ancient past: the people who memorized and applied mathematical formulas to calculate stuff for rich people. As obsolete as the “translators”: the people who memorized the rules of two languages and could “interpret” between them using pen and paper, and so on. Film-based photo shops and mechanical typography also come to mind. It is actually over; no more “two more weeks”.
Again, we are not talking here about webshit or async crap. We are talking about Algebraic Data Types, proper non-leaking ADTs (abstract data types), modern advanced static typing with proper sum types, Domain-Driven and Test-Driven development, rigorous formal modeling in pure functional languages (or the pure subsets of good, math-based languages), and stuff like that – for when normies ask “Where is the code?”. Yes, all serious, non-bullshit Computer Science itself indeed converges to just a few “things”: Algebraic Types, ADTs, Abstraction and Specification (as per Barbara Liskov), The Lambda Calculus augmented with advanced types, and the algebraic structures which arise [back] from the dots-between-arrows of functional composition. Remember that everything had already been solved in the golden age of CS, which culminated, via Scheme, SML, and Miranda, in the Haskell 98 report; the last good things, like Clojure, Scala, or Haskell 2010, were just direct consequences – applications of the accumulated math-based theoretical knowledge.
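To make the sum-type point concrete, here is a minimal Haskell sketch (the payment domain and every name in it are mine, purely illustrative):

```haskell
-- A sum type makes illegal states unrepresentable: a value of type
-- Payment is exactly one of the listed alternatives, and the compiler
-- forces every function to handle all of them.
newtype CardNumber = CardNumber String deriving Show
newtype Iban       = Iban String deriving Show

data Payment
  = Cash
  | Card CardNumber
  | Wire Iban
  deriving Show

describe :: Payment -> String
describe Cash     = "cash on delivery"
describe (Card c) = "card " ++ show c
describe (Wire i) = "wire transfer to " ++ show i
```

Add a fourth alternative and every non-exhaustive describe becomes a compile-time warning – which is exactly what proper static typing on sum types buys.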
Yes, yes – strong claims, so let’s unpack a little.
There actually exist such things as optimal pieces of code – individual code blocks that approach a limit of perfection – simply because there is such a thing as the Universe (Objective Reality, or What Is). There is literally nothing to remove, simplify, or clarify (or otherwise significantly improve) in, say, pure functional library code which uses a generalized ADT for sequences. Precisely because a sequence is a properly generalized abstraction from What Is (there is a “reason” why mRNA is a sequence: it is a minimal (not just in terms of “stuff being used”, but as a universal principle), good-enough, and thus “optimal” structure), the code which has been abstracted over a proper tail-recursive fold (which uses the accumulator pattern), and which thus pays the inevitable price of an extra reverse (itself also defined in terms of fold), implicitly uses the Monoidal and Functorial algebraic structure of a sequence (again, abstracted out from What Is), which the best guys, like Bartosz, have studied for years and years – and it cannot be improved any further in principle.
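Here is a minimal Haskell sketch of exactly that shape of code (the names foldLeft, reverse', and map' are mine, purely for illustration):

```haskell
-- A proper tail-recursive left fold: the accumulator pattern,
-- with the accumulator forced at each step to keep the loop strict.
foldLeft :: (b -> a -> b) -> b -> [a] -> b
foldLeft _ acc []     = acc
foldLeft f acc (x:xs) = let acc' = f acc x in acc' `seq` foldLeft f acc' xs

-- reverse, itself defined in terms of the very same fold.
reverse' :: [a] -> [a]
reverse' = foldLeft (flip (:)) []

-- A map built on the tail-recursive fold pays the inevitable price
-- of one extra reverse to restore the original ordering.
map' :: (a -> b) -> [a] -> [b]
map' f = reverse' . foldLeft (\acc x -> f x : acc) []
```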
The same could be said about pre-sorted or otherwise constrained tree-like structures, tables, and some directed acyclic graphs (which, again, are proper captures of some aspects of What Is), and which are, for exactly the same reasons, good enough for everything. (And yes, there is a non-commutative monoidal structure which underlies your [GPU-executed] Linear Algebra for AI, and even the weighted sums which actually underlie all your silly probabilistic models, because the very notion of a weighted sum [of causal factors] is a universal concept – which, in turn, partially captures Causality itself.)
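A minimal Haskell sketch of both claims (weightedSum and M2 are my own illustrative names):

```haskell
-- A weighted sum of causal factors is a plain monoidal fold:
-- accumulation under (+) with unit 0, one (weight, factor) at a time.
weightedSum :: Num a => [(a, a)] -> a
weightedSum = foldl (\acc (w, x) -> acc + w * x) 0

-- 2x2 matrices under multiplication: a monoid that is not commutative,
-- the very structure a GPU chains together in linear algebra.
newtype M2 = M2 (Double, Double, Double, Double) deriving Show

instance Semigroup M2 where
  M2 (a, b, c, d) <> M2 (e, f, g, h) =
    M2 (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

instance Monoid M2 where
  mempty = M2 (1, 0, 0, 1)  -- the identity matrix
```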
Thus, a proper fixed-point-like process has converged to a global optimum (at least for sequences) with respect to a “good-enough epsilon” of implementation constraints. The code retains the fundamental referential transparency property, and all the nice things that are direct corollaries of being math or logic follow. By the way, non-bullshit “math and logic” are also grounded in the very same What Is.
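As a tiny illustration of what referential transparency buys – the standard map-fusion law, stated here as a checkable equation:

```haskell
-- Equals can be substituted for equals, so algebraic laws hold
-- as rewrites: map f (map g xs) == map (f . g) xs.
mapFusionHolds :: Bool
mapFusionHolds =
  map (+ 1) (map (* 2) [1 .. 5 :: Int]) == map ((+ 1) . (* 2)) [1 .. 5]
```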
To understand and to be able to realize such things, people spent decades studying and reading. To come up with the actual code used to require years before one could formulate all the constraints and representation invariants clearly, taking into account the underlying algebraic structure and the fact that there is “nothing more out there” to it. (The fact that one could “construct” way more abstract, purely theoretical crap, like lattices or full Categories, is irrelevant, and should merely mark the threshold where abstractions captured from Reality cross into the realm of rigorous abstract delirium.)
And now such code (for prototyping purposes) can be generated in seconds. Seconds. An orders-of-magnitude change in “productivity”. Yes, one has to understand some concepts (here and there) before putting them into a prompt as explicit constraints and requirements, but prompts can simply be copied and collected from the inputs of other people, who will even pay you for the privilege (hi, big tech), and then sold for profit and reused.
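For illustration, a properly constrained prompt of the kind meant here could read (the wording is mine – a sketch, not a recipe): “Write a pure, total Haskell function over an algebraic data type for sequences; use only a tail-recursive fold with an explicit accumulator; no partial functions, no mutation, no IO; state the representation invariant in a comment and preserve referential transparency.”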
The years of education at a proper non-bullshit tech school like MIT (one can study by oneself using their study materials and courses) will never be wasted, but the $100k of financial debt probably won’t be repaid so easily.
Amazingly (or rather, as a direct corollary, or even an implication, of how it actually works), it cannot fix technical debt, which, again, is just a polite name for stupid and lame early decisions. So no GPT will fix piles of crap like WinAPI or Ethereum (lmao). But with rapid prototyping from first principles (avoiding imperative crap), focused on the proper fundamentals of CS (yes, there are such things: the principles which connect properly captured abstractions back to What Is), prototyping of way better things can be done at a tiny fraction of the costs and manpower. I shit you not.
Conversely, it cannot generate any actually novel solution – say, yet another crapto with the proper algebraic structures underlying an almost optimal chain implementation (it is just trees, not even graphs, with the additional constraint of consistent nested hashes), with the G-Machine (which implements the Haskell evaluation strategy) as its runtime, and with the smallest possible superset of some typed lambda calculus, not even required to be as general as System F-omega, as its contract language (what else could possibly be better suited for so-called “smart contracts”?). This is the thing at which Cardano and Charles (with all the hired academic superstars) have spectacularly failed, and which narcissistic, arrogant, bullshit-fountaining vitaliks could not even begin to understand. It cannot generate it simply because there is not enough – or no – training data for G-Machines and properly constrained trees.
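For concreteness, here is a minimal Haskell sketch of “trees with the constraint of consistent nested hashes” (a Merkle tree), parameterized over the hash function so that no particular crypto library is assumed; all names are mine:

```haskell
-- Every node stores the hash of its contents or children, so the
-- nested hashes stay consistent by construction.
data Merkle h a
  = Leaf   h a
  | Branch h (Merkle h a) (Merkle h a)

rootHash :: Merkle h a -> h
rootHash (Leaf h _)     = h
rootHash (Branch h _ _) = h

leaf :: (a -> h) -> a -> Merkle h a
leaf hashLeaf x = Leaf (hashLeaf x) x

branch :: (h -> h -> h) -> Merkle h a -> Merkle h a -> Merkle h a
branch combine l r = Branch (combine (rootHash l) (rootHash r)) l r
```

Tampering with any leaf changes every hash on the path up to the root – the whole “chain” invariant in a single constrained tree.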
Notice that by going slowly, carefully, and very specifically – prompting all the way back from the relevant math, with all the possible theoretical constraints and the strictest possible static typing on algebraic data types – a qualitatively superior prototype (due to a substantially smaller stupidity debt, which is the proper term) would take months instead of years, and just a couple of MIT-grade smart guys instead of an office full of “hires” and interns. A Jane Street-like trading establishment could be prototyped at a fraction of the costs and manpower, provided you know what to ask – that is, you already know half of the right answer, as the ancient (but even more valid today) maxim goes.
So by now, with this generative technology literally everywhere amid a massive bubble, we ought to have already been literally flooded with all kinds of actually useful, well-designed, properly implemented software for every domain and problem imaginable. But we aren’t, because there is no demand. The real demand is to keep the old crap working, with maybe a few little changes here and there; no one really wants anything new (due to the complexity bias and the really high costs).
If you are “in the IT field” and have any difficulty comprehending and understanding this simple article, which I wrote on a whim in one sitting without any “assistants”, it is time to panic.