https://twitter.com/ID_AA_Carmack/status/1925710474366034326
I have read the notes. They are a mess.
For me, Carmack, aside from being a legend, is a sort of Goggins of imperative procedural programming, someone who learned everything by doing, without studying the theories first.
His ultimate strength, it seems, is focused doing: ploughing through a problem, if you will, without being exceedingly dramatic.
Learning from experience (actual trial and error with quick feedback loops) and gradually improving his own “emergent” intuitive understanding – one's own mental model of how things should be done.
And he is, indeed, what he is. He missed the essence of FP (the referential transparency property and the resulting equational reasoning) while arguing about pure functions as if they were a sort of esoteric stuff.
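To make the missed point concrete, here is a minimal sketch (my own illustration, in Haskell) of referential transparency and the equational reasoning it enables: a pure expression can be replaced by its value anywhere, so one reasons about programs by substitution, as in algebra.

    -- 'double' is pure: any call can be replaced by its result (and back)
    -- without changing the meaning of the program.
    double :: Int -> Int
    double x = x + x

    -- Equational reasoning, step by step, like school algebra:
    --   double (double 3)
    -- = double (3 + 3)
    -- = double 6
    -- = 6 + 6
    -- = 12
    main :: IO ()
    main = print (double (double 3))   -- prints 12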
He acknowledged the power of high-level languages and, implicitly, that a proper level of abstraction should match the level of the concepts in the problem domain, a principle that math-inclined FP folks discovered long ago.
There are a lot of deep insights in serious FP, including the fact that the shape of the data (of an algebraic data type) determines the shape of the functions (operations) over it, and, even more generally, that algebraic data types capture some universal structures or common patterns.
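A minimal sketch of that claim (again my own illustration, in Haskell): the two constructors of a list type force exactly two clauses in any structurally recursive operation over it, and the fold captures that shape once and for all.

    -- The shape of the data...
    data List a = Nil | Cons a (List a)

    -- ...determines the shape of the operations: one clause per constructor,
    -- recursion exactly at the recursive position.
    total :: List Int -> Int
    total Nil         = 0
    total (Cons x xs) = x + total xs

    -- The same shape, abstracted once and for all as the fold of this type.
    foldList :: b -> (a -> b -> b) -> List a -> b
    foldList n _ Nil         = n
    foldList n c (Cons x xs) = c x (foldList n c xs)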
Another of the most fundamental aspects is that in FP one never “sees” the actual values: one only declares what has to be done, eventually, using structural pattern matching and case analysis. It is not a random coincidence that patterns can be used everywhere, or that individual clauses define partial functions, or that everything can be uniformly “curried” and partially applied, even at the type level. FP goes as deep as the Curry-Howard correspondence, which connects FP, through properly captured recurrent universal patterns, back to What Is. FP, like math itself, is about recurring patterns.
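A short sketch of those points (my own illustration, in Haskell): clauses perform case analysis on patterns, a missing clause yields a partial function, and both value-level functions and type constructors can be partially applied.

    -- Structural case analysis: one clause per pattern; omitting the [] clause
    -- would silently make the function partial.
    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x

    -- Every function is curried, so partial application is uniform:
    addThree :: Int -> Int -> Int -> Int
    addThree a b c = a + b + c

    addTen :: Int -> Int -> Int
    addTen = addThree 10              -- partially applied at the value level

    -- The same works at the type level: 'Either String' is the type
    -- constructor 'Either' partially applied to 'String'.
    type Result = Either String

    parsePositive :: Int -> Result Int
    parsePositive n
      | n > 0     = Right n
      | otherwise = Left "not positive"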
Anyway, he is the champion of procedural imperative programming (and concrete data structs), and not particularly abstract-mathematics inclined, so be it.
As for AI, just observability is not enough. There is much more to the learning process of humans and animals than the visual system. There are emotions and feelings, which can be traced back to the levels of certain neuromodulators; these play a crucial role in learning, both as motivation and in habitual repetition, which leads, via cognitive feedback loops, to various forms of reinforcement.
It has to be at least as complex as Skinner’s behaviorism (which is a gross oversimplification in itself), and mere LLMs (with all the memes, like “attention” and “reasoning”) are obviously not enough – they lack the necessary machinery of emotions and feelings, let alone actual (non-abstract) awareness and self-awareness (for which yet another kind of specialized machinery, evolved in biology, is required).
So, no, it is not as simple as the current mass hysteria about LLMs and “AI” suggests. A whole set of specialized brain areas, and especially the well-defined signaling among them, is simply “missing” from the current models. It takes a different kind of mind to see this clearly.
Reinforcement learning, as a systematic way of “strengthening” connections within a NN, and thus presumably capturing feedback to build frequency-based probabilistic abstract models, is not enough either.
Frequency-based probabilistic “knowledge” of events in the environment is different from conditional probabilities of words (or tokens) about the environment, even though ultimately both are incomplete views of The Same.
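For concreteness, here is a minimal sketch (my own illustration, in Haskell, not anything from the notes) of what counting-based “knowledge” of tokens amounts to: a bigram model that estimates P(next | previous) purely from co-occurrence counts.

    import qualified Data.Map.Strict as M

    -- Count how often each ordered pair of adjacent tokens occurs.
    bigramCounts :: [String] -> M.Map (String, String) Int
    bigramCounts ws =
      M.fromListWith (+) [ ((a, b), 1) | (a, b) <- zip ws (drop 1 ws) ]

    -- Estimate P(next | prev) as a relative frequency of the counts.
    condProb :: [String] -> String -> String -> Double
    condProb ws prev next =
      let counts  = bigramCounts ws
          pairCnt = fromIntegral (M.findWithDefault 0 (prev, next) counts)
          prevCnt = fromIntegral (length [ () | (a, _) <- zip ws (drop 1 ws), a == prev ])
      in if prevCnt == 0 then 0 else pairCnt / prevCnt

    main :: IO ()
    main = print (condProb (words "the cat sat on the mat") "the" "cat")  -- 0.5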
Ultimately, evolution builds biological structures within the brain – from the amygdala to specialized “areas” and “cortices” (which are replicated in the developmental process from a single cell to an organism) – and those structures reflect the constraints of the particular environment in which the organism happened to evolve. This is the central principle (not dissimilar to the shape of the data determining the shape of the code).
Again, there are actual evolved structures, not just probabilities. The whole probabilistic approach is wrong. It is a gross oversimplification, built on the simple abstraction of counting. Mother Nature does not count. She builds structures.
We are nowhere near understanding what the “I” in “AI” is. As for the games, there is the notion of an “analogy” and of similarity (which is a high-level notion and seemingly can be captured by the mathematical notion of an isomorphism, while at the biological level it is not just “looks the same” but, especially, “feels the same”) – a notion that the current AI completely lacks.
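As a footnote, the mathematical notion mentioned above can be stated very compactly; here is a minimal sketch (my own illustration, in Haskell): two representations are isomorphic when there are two maps between them that are inverse to each other, i.e. they “look the same” up to renaming, which is exactly the part that “feels the same” goes beyond.

    -- An isomorphism between a and b: a pair of mutually inverse maps.
    data Iso a b = Iso { to :: a -> b, from :: b -> a }

    -- 'Maybe a' and 'Either () a' carry exactly the same information.
    maybeEither :: Iso (Maybe a) (Either () a)
    maybeEither = Iso toE fromE
      where
        toE Nothing     = Left ()
        toE (Just x)    = Right x
        fromE (Left ()) = Nothing
        fromE (Right x) = Just x

    -- The laws (not checked by the compiler): to . from = id and from . to = id.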