Today’s bullshit was Surge CEO Says ‘100x Engineers’ Are Here. Yesterday it was “Gemini at IMO Gold level”. Again, mere appearances are not the facts of reality, but this realization requires a bit more education and old-fashioned intelligence, similar to that of Hesse or Sartre.
What does it mean nowadays to be an engineer, and to be a 100x one? It seems that what they mean is "productivity": the time it takes to slap together some spaghetti webshit, without any understanding whatsoever, out of hundreds of lowest-quality amateur node_modules, in a few minutes. This is what 100x means for them, and this is, of course, bullshit.
Here is what may be a real 100x:
- find an optimal set of generalized monadic interfaces to clearly separate "effects" from "pure" code, so that a compiler can check and enforce these abstraction barriers, and so that much simpler (less complex), safer (via non-overlapping guarantees) language runtimes can be developed.
- write a close-to-optimal automatic differentiation engine, in Rust, let's say (because the more constraints are satisfied at compile time, the better), so it can be interfaced from all higher-level scripting languages; yes, yes, we have PyTorch, but it is cowboy-coded crap (a sketch of the mathematical core follows this list).
- OK, let's just evolve Octave (which has an absolutely amazing, "proper" high-level DSL for matrix manipulation) to support modern JIT tech, like Intel SYCL and whatnot.
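On the second item: wherever it lives, the mathematical core of forward-mode automatic differentiation fits on a page, which is exactly why a careful, close-to-optimal engine is a well-posed engineering task rather than open research. Below is a minimal sketch of that core, using dual numbers and written in Haskell purely for illustration; a real engine would be reverse-mode, in Rust as suggested above, and vastly more involved.

```haskell
-- Forward-mode AD via dual numbers: every value carries its derivative along.
data Dual = Dual { primal :: Double, tangent :: Double }
  deriving Show

instance Num Dual where
  Dual x dx + Dual y dy = Dual (x + y) (dx + dy)
  Dual x dx - Dual y dy = Dual (x - y) (dx - dy)
  Dual x dx * Dual y dy = Dual (x * y) (x * dy + dx * y)  -- product rule
  negate (Dual x dx)    = Dual (negate x) (negate dx)
  abs    (Dual x dx)    = Dual (abs x) (dx * signum x)
  signum (Dual x _)     = Dual (signum x) 0
  fromInteger n         = Dual (fromInteger n) 0

-- Differentiate any function written against the generic Num interface.
diff :: (Dual -> Dual) -> Double -> Double
diff f x = tangent (f (Dual x 1))

main :: IO ()
main = print (diff (\x -> x * x * x + 2 * x + 1) 2.0)  -- 3*2^2 + 2 = 14.0
```

Everything written against the generic numeric interface gets derivatives for free; deciding and "perfecting" that interface, its numerics, and its memory behaviour is precisely the kind of foundational work a genuine 100x would go into.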
In general, if the current megacorp-level (datacenter-sized) models possess any 10x capability (which they do only in terms of the speed of generating what merely appears to be "good" code), the obvious choice is to apply it to "perfecting" (nothing more to simplify or take away) the foundational parts: standard libraries and compilers, and the underlying mathematical models and representations (expression graphs, IRs, etc.).
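To make "expression graphs, IRs" slightly less abstract: the representation plus its simplification passes is a small, well-specified core where "nothing more to take away" is an actually checkable property. A toy sketch of such an IR with a single folding pass; the four-constructor type is made up for illustration.

```haskell
-- A toy expression IR with one "perfecting" pass:
-- constant folding plus algebraic identity elimination.
-- (Ignores NaN and signed-zero subtleties, as befits a toy.)
data Expr
  = Lit Double
  | Var String
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Show, Eq)

simplify :: Expr -> Expr
simplify (Add a b) = case (simplify a, simplify b) of
  (Lit 0, y)     -> y            -- 0 + y  =>  y
  (x, Lit 0)     -> x            -- x + 0  =>  x
  (Lit x, Lit y) -> Lit (x + y)  -- fold constants
  (x, y)         -> Add x y
simplify (Mul a b) = case (simplify a, simplify b) of
  (Lit 0, _)     -> Lit 0        -- 0 * y  =>  0
  (_, Lit 0)     -> Lit 0        -- x * 0  =>  0
  (Lit 1, y)     -> y            -- 1 * y  =>  y
  (x, Lit 1)     -> x            -- x * 1  =>  x
  (Lit x, Lit y) -> Lit (x * y)  -- fold constants
  (x, y)         -> Mul x y
simplify e = e                   -- literals and variables are already minimal

-- simplify (Add (Mul (Lit 1) (Var "x")) (Lit 0))  ==  Var "x"
```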
Have we ever seen any compiler/stdlib ABI improvements due to application of mega AI models? Nope. Zero.
Here are some more meme-benchmarks (tasks) for coding AI, so we could all say "wooooow" in awe and bow to the models as to gurus and masters actually superior to us in capacity and refinement.
Take the best results from the most difficult parts of classic CS and PL theory and make some progress towards reducing unnecessary complexity and removing the "technical debt" of ignorance and over-zealousness.
The best results, of course, are the math-based, referentially transparent, mostly-functional languages (Haskell got almost everything right). The task is well-defined and well-understood: add the minimal "necessary" imperative features in the only proper way, so that referential transparency is retained and all imperative features are clearly separated by abstraction barriers that are checked, enforced, and even inferred by a compiler. And this is not even "that difficult": the "what color is your function" debate is an intuitive step in the right direction. No? Why not?
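This is not hypothetical: plain Haskell already enforces that barrier today, with the IO tag in the type acting as the compiler-checked boundary; the open task is making such tags minimal, fine-grained, and inferred. A sketch of the existing discipline, with made-up function names and file name:

```haskell
-- Pure code: referentially transparent, no effect tag in the type.
mean :: [Double] -> Double
mean xs = sum xs / fromIntegral (length xs)

-- Effectful code: the IO tag in the type is the abstraction barrier.
readNumbers :: FilePath -> IO [Double]
readNumbers path = map read . lines <$> readFile path

report :: FilePath -> IO ()
report path = do
  xs <- readNumbers path                  -- effects are sequenced explicitly
  putStrLn ("mean = " ++ show (mean xs))  -- pure code is freely usable inside effectful code

-- The reverse direction is rejected by the compiler:
-- broken :: [Double]
-- broken = readNumbers "data.txt"        -- type error: IO [Double] is not [Double]
```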
If the models are so good at code generation, and the productivity gains are that high, why not just rewrite from scratch the best codebases we have ever had, making them "just right" by removing accidental complexity and "stupidity debt"?
- Haskell (effects in the type system, a better set of libraries, removal of unnecessary, redundant abstractions; see the sketch after this list),
- OCaml (cleaning up all the mess at the level of macros, taking in the best emergent syntactic sugar),
- Scala 3 (even more refinement, more removal of clutter, even more convergence to the local optima).
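To make the first item concrete: "effects in the type system" would mean tags finer than one blanket IO, so that a function's type states exactly which effects it may perform and nothing else. A toy capability-style sketch; the Teletype type and its operations are invented for illustration, and a real module would not export the constructor.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- A computation tagged Teletype may only read and write lines of text;
-- the type itself is the enforced abstraction barrier.
newtype Teletype a = Teletype (IO a)
  deriving (Functor, Applicative, Monad)

sayLine :: String -> Teletype ()
sayLine = Teletype . putStrLn

askLine :: Teletype String
askLine = Teletype getLine

-- The only way back to full IO is the explicit interpreter at the boundary.
runTeletype :: Teletype a -> IO a
runTeletype (Teletype io) = io

-- greet's type guarantees it talks to the terminal and does nothing else:
-- no file system, no network, no process spawning.
greet :: Teletype ()
greet = do
  sayLine "Name?"
  name <- askLine
  sayLine ("Hello, " ++ name)

main :: IO ()
main = runTeletype greet
```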
And above all, let's unify core languages and stdlibs based on the useful accidental findings of other, rival "sects". There is a set of features that is "just right": orthogonal yet complementary to one another. The classic FP/PL world has plenty of discoveries, and the intuitive notion that "all modern imperative languages tend to become more and more similar in their features" is the right one; indeed, an optimum is "Out There", just like, let's say, the proper way to use heat to cook food "just right".
But they just cannot. The models do not possess the required capacities. They cannot come up with novel solutions by combining the best parts already "known". They can only re-create new slop from the old slop. Enough slop, like webshit or amateur Python that does not even use the most idiomatic and optimal features of the language in the right way, will only give rise to more slop of just the same low-effort crap.
And yes, being able to generate low-effort imperative slop at 100x speed is exactly what your "100x engineers" really amount to. This has something to do with the enshittification of knowledge in general, of which "pronouns" and other "modern progressive social constructs" are a mere direct consequence.