Given the US government security agencies' recommendations for using Memory Safe Languages (as if there were more than one!), let's bootstrap the right understanding of what "memory safety" means from first principles. The "first principles" will indeed be the first ones, because we will go all the way back to the underlying recurring patterns in What Is, from which the fundamental mathematical notions have been captured, properly generalized, abstracted away, named, and had their properties systematically studied. ...
The Rust Cheat Sheet
An intuitive understanding – approximate, imprecise, but not wrong in principle. A level of vague abstractions which captures the right underlying relationships – nothing essential is missed, nothing imaginary or non-existent is added. Everything "just naturally follows" from (can be explained by) the "right [intuitive] understanding". This is what our ancient ancestors used before they evolved the "scientific method". Almost everything, however, can be "explained" (to a fool) by arbitrary supernatural make-believe nonsense, so we have to be very suspicious of what the "talking heads" are saying and question everything. ...
On Rust (again)
Reading Rust threads on /g/ is an endless source of astonishment. We live in the age of enshittification of knowledge and even of information, where literal degens are obsessed with emotionally charged narratives about make-believe problems. Anyway, whatever. The real and fundamental innovation behind Rust is this. The great "old-timers", well-educated by studying the underlying principles of everything around them, like Joe Armstrong (R.I.P.), discovered the mathematical fact that sharing and mutation cannot coexist in principle. The fundamental mathematical property of "referential transparency" is necessary to make any guarantee about the correctness of a program. The notion is that everything is an expression and any expression is "pure" and only denotes a value (to which it can eventually be reduced or evaluated). Thus, any expression can be substituted by the value it denotes without changing the meaning (semantics) of any other expression within a program. This is the essence of functional programming, and it is a fundamental property of all correct programs. ...
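The "sharing and mutation cannot coexist" principle is enforced directly by Rust's borrow rules. A minimal sketch (a hypothetical example, not from the post): any number of shared borrows, or exactly one exclusive borrow, but never both alive at once.

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Many shared (read-only) borrows may coexist:
    let a = &v;
    let b = &v;
    assert_eq!(a.len() + b.len(), 6);

    // An exclusive borrow is allowed once the shared ones are no
    // longer used (non-lexical lifetimes end them at their last use):
    let m = &mut v;
    m.push(4);
    assert_eq!(v.len(), 4);

    // The following would NOT compile: a shared and an exclusive
    // borrow of `v` alive at the same time ("aliasing XOR mutation").
    // let a = &v;
    // let m = &mut v;
    // println!("{} {}", a.len(), m.len());
}
```

Within the exclusive borrow no other reference can observe the vector, so mutation never invalidates a shared view – the mechanical form of referential transparency the paragraph describes.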
LLM-generated Rust code
Rust code is the ultimate evidence of the principled inability of a probability-based generating algorithm (based on sampling from a "learned" probability distribution over "tokens") to come up with something that passes the type checker, except for the most trivial cases. The "causality" is that generating complex syntactic forms without an actual, proper understanding of the principles and heuristics is, well, "problematic". The running example is subtle "already borrowed" panics, where the issue is with the underlying semantics while the syntax is "correct". The problem is that a recursive function has to drop all its borrows before a recursive call, and this constraint cannot be expressed in the syntax except by redundant bindings which will automatically be dropped at the end of their scope. ...
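A minimal sketch of that failure mode and its fix (hypothetical code, not from the original thread): the `RefMut` guard returned by `borrow_mut` is bound inside an inner block so it is dropped before the recursive call; holding it across the call would panic at runtime with "already mutably borrowed", even though the program type-checks.

```rust
use std::cell::RefCell;

// Shared, mutable state threaded through a recursive walk.
fn count_calls(n: u32, counter: &RefCell<u32>) {
    {
        // Bind the guard in an inner scope: it is dropped at the
        // closing brace, *before* we recurse.
        let mut c = counter.borrow_mut();
        *c += 1;
    }
    // Had we kept `c` alive here, the nested borrow_mut() inside the
    // recursive call would panic: "already mutably borrowed".
    if n > 0 {
        count_calls(n - 1, counter);
    }
}

fn main() {
    let counter = RefCell::new(0);
    count_calls(3, &counter);
    assert_eq!(*counter.borrow(), 4); // 1 initial call + 3 recursive calls
}
```

The inner block is exactly the "redundant binding dropped at the end of the scope" the paragraph mentions: semantically essential, yet invisible to anything that only models syntax.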
Software In The Era of AI
https://www.youtube.com/watch?v=LCEmiRjPEtQ and, of course, the No.1 spot on the Chuddie safe space https://news.ycombinator.com/item?id=44314423. Karpathy is shilling "Cursor" and other cloud-based metered AI services (which have to pay back their debts). He probably has an interest in it and in some other meme AI startups. Nothing to see here. Someday we should learn which marketing "genius" came up with this "winning strategy" – to meter every single token (byte) and try to sell this to corporations. Corporations do not want to be metered like that; they want to meter normies, the way cellular operators do, and they never use any normie plans themselves. ...
Forecast Bitcoin Future Price
DESCRIPTION: Read some Shiller, for Christ's sake. I am so tired. These things, when I suddenly stumble upon them, leave me speechless, wasted, and burned out: https://www.kaggle.com/code/vanpatangan/forecast-bitcoin-future-price This is a Jupyter notebook (an .ipynb), originally developed on Google Colab, where they have all the datasets and the tools to run notebooks quickly. It uses some pre-defined package called Prophet (just imagine!), which I assume contains some basic "state-of-the-art" statistical inference crap. ...
Yes, it is time to scream and panic
There is something that has to be realized as soon as possible (and which I recently experienced directly) – the conditional-probability-based "glorified autocomplete" can generate some small pieces of code (with intelligent prompting) that are simply above (in intrinsic quality) 90% of all the related crap that can be found on Github, which easily extrapolates to 90% of all the people in existence who identify themselves as "programmers". Period. The code will be better than what high-dopamine, "laser-focused" (on Adderall or just amphetamines) but not-so-well-educated-on-fundamentals zealots could produce. Better than academics inexperienced in the realities of actual open-source software ecosystems could even hope to write, no matter how good their theoretical understanding is. With very specific and properly constrained prompting, which takes into account the major results of the last 50 or 60 years of PL and CS research, especially in the field of math-based Functional Programming and Immutable Data Structures, one can get almost optimal (approaching perfection) small pieces of code, provided you understand what theoretically-limited perfection actually looks like in this particular case, and why. ...
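The kind of "Immutable Data Structures" constraint one would put into such a prompt can be illustrated with the canonical FP example – a persistent singly-linked list, where `cons` shares the unchanged tail instead of copying it. A minimal sketch (hypothetical code, not from the post):

```rust
use std::rc::Rc;

// A persistent (immutable) list: once built, a node is never mutated,
// so any number of lists may safely share the same tail.
#[derive(Debug)]
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

use List::{Cons, Nil};

// Prepend without copying: the new node merely points at the old list.
fn cons(head: i32, tail: &Rc<List>) -> Rc<List> {
    Rc::new(Cons(head, Rc::clone(tail)))
}

fn sum(list: &List) -> i32 {
    match list {
        Nil => 0,
        Cons(h, t) => h + sum(t),
    }
}

fn main() {
    let nil = Rc::new(Nil);
    let xs = cons(1, &cons(2, &cons(3, &nil)));
    // Structural sharing: `ys` reuses all of `xs`'s nodes unchanged.
    let ys = cons(0, &xs);
    assert_eq!(sum(&xs), 6);
    assert_eq!(sum(&ys), 6);
}
```

Because nothing is ever mutated, `xs` remains valid and unchanged no matter how many extended lists are built on top of it – which is what makes such small pieces of code provably correct.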
Systematic testing and non-bullshit TDD
Testing interacts with your dopamine system, so you get a small "yay!" every time all tests pass. This is crucial, because motivation tends to decay exponentially and to experience inevitable "crashes" after spikes. TDD is, in a sense, a direct consequence of a type-driven (or "types first") approach to prototyping. Ideally, each type is associated (in a one-to-one correspondence) with a distinct concept in the problem domain, at an appropriate level of abstraction. ...
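The "one type per domain concept" idea can be sketched with newtypes (a hypothetical example, not from the post): two physically different quantities get two distinct types, so the compiler rejects mixing them up, and the tests only have to check the arithmetic.

```rust
// "Types first": each domain concept gets its own type, so a Celsius
// value can never be passed where Fahrenheit is expected.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Celsius(f64);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Fahrenheit(f64);

fn to_fahrenheit(c: Celsius) -> Fahrenheit {
    Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
}

fn main() {
    // The tests pin down the two well-known fixed points.
    assert_eq!(to_fahrenheit(Celsius(100.0)), Fahrenheit(212.0));
    assert_eq!(to_fahrenheit(Celsius(0.0)), Fahrenheit(32.0));
    // to_fahrenheit(Fahrenheit(32.0)) would not even compile.
}
```

Here the type checker eliminates the whole class of "wrong unit" bugs up front, and the test suite is left to verify only the genuinely behavioral part.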
Coding LLM failures
Maybe it is time to settle this discourse once and for all, and move on. Recently I asked Deepseek (otherwise amazing), Grok (a meme), and Gemini about the idiomatic way of enabling syntax highlighting in blocks of code which may appear in eww or nov mode buffers. What they all gave me was some stylized pasta from StackOverflow and Github about polymode and mmm-mode and how to use regexps to find particular blocks and apply some low-level function to a range. ...
Vibe coding explained
Reading less social media bullshit and using one's own well-trained mind sometimes helps to clarify complex and noisy topics. Here is why, in principle, any LLM coding (I cannot call this crap "programming" because there is no understanding involved) will always yield sub-par bullshit. LLMs operate on tokens, which are abstract numbers, if you will. Parts of words of a human language are associated with distinct tokens, and then a probabilistic graph-like structure over these tokens is trained, using the fundamental back-propagation algorithm. ...
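The "sampling from a learned distribution over tokens" step reduces, at the very end, to something this small – a toy sketch (illustrative only; real models compute the logits with a neural network conditioned on the whole context):

```rust
// Turn raw scores ("logits") over candidate tokens into probabilities.
fn softmax(logits: &[f64]) -> Vec<f64> {
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|l| (l - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

// Greedy "sampling": pick the id of the most probable token.
fn argmax(probs: &[f64]) -> usize {
    probs
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

fn main() {
    let logits = [1.0, 3.0, 0.5];
    let probs = softmax(&logits);
    // The probabilities form a distribution (sum to 1)...
    assert!((probs.iter().sum::<f64>() - 1.0).abs() < 1e-12);
    // ...and the "chosen token" is just the largest number's index.
    assert_eq!(argmax(&probs), 1);
}
```

Nothing in this loop knows what the tokens mean – which is precisely the point the paragraph makes: the machinery selects plausible numbers, not understood programs.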