Neurons "reuse"

There is an important subtlety when one is trying to interpret what a Neural Network actually does – each neuron, it seems, gets activated on a different set of inputs, which corresponds to a very different set of features. This is most prominent in computer vision settings, where a selected neuron reacts to completely unrelated parts of inputs, say of cats and of cars. Let’s see what is going on out there....

October 8, 2023 · <lngnmn2@yahoo.com>

Solving async-await for Rust

So, you want to add these async/await keywords? First of all, this has already been seriously researched by the C#/F# .NET guys. Just learn what they have come up with. One’s own principle-guided reasoning could proceed like this: the fundamental difference between ordinary procedures and async procedures is the whole protocol for calling and returning values, and dealing with the actual implementation of the corresponding mechanisms (abstract at this point, but it has to reuse what is already out there)....
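
To make that protocol difference concrete, here is a minimal sketch of how it looks in today's Rust (assuming the `futures` crate, used only for its simple `block_on` executor; the names are illustrative):

```rust
// An ordinary procedure: calling it runs the body to completion
// and hands the value straight back to the caller.
fn plain_answer() -> u32 {
    42
}

// An async procedure: calling it does NOT run the body. It returns
// a suspended state machine (an `impl Future<Output = u32>`) that
// some executor must drive to completion.
async fn async_answer() -> u32 {
    42
}

fn main() {
    let x = plain_answer(); // a value, right now

    let fut = async_answer(); // a future, not yet a value
    // Assumption: the `futures` crate provides this minimal executor.
    let y = futures::executor::block_on(fut);

    assert_eq!(x, y);
}
```

The whole calling convention changes: the caller of an async procedure receives a suspended computation instead of a result, so an executor (the "mechanism that is already out there") has to enter the picture.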

October 6, 2023 · <lngnmn2@yahoo.com>

Transformers bullshit everywhere

There is another meme “scientific” paper (well, it is a “research paper”, which does not have to be correct, lmao) about trying to interpret what transformers actually do. When the hype was at its peak, I wrote an article about “handwaving with too abstract math” or “sweeping the meaning under the rug”. I had a very strong intuition that I have seen this before, and now I will show it. Where have we all seen this kind of sophisticated bullshitting, with abstract entities taken out of context (from other, highly remote and ephemeral levels of abstraction) being used to explain a natural phenomenon?...

October 6, 2023 · <lngnmn2@yahoo.com>

High Level

I finally found a well-written no-bullshit book about CS. It says, among other things: There is no need to define a representation of the values False and True in terms of values of some other type. Conceptually, values of type Bool are simply (denoted by) the expressions False and True. Of course, the computer’s internal representation of Bool and all other data is in terms of bits, but we don’t need to know any of the details of how that works to write programs....
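
The book's examples are presumably in Haskell; here is the same idea as a minimal Rust sketch (the type and names are my own illustration, not the book's):

```rust
// A Bool whose values ARE its two constructors. Nothing here refers
// to the values of any other type; the bit-level representation the
// machine uses is invisible at this level.
enum Bool {
    False,
    True,
}

// Programs manipulate the constructors (the denotations) directly.
fn not(b: Bool) -> Bool {
    match b {
        Bool::False => Bool::True,
        Bool::True => Bool::False,
    }
}

fn main() {
    // Pattern matching on the expressions False and True themselves,
    // never on bit patterns.
    match not(Bool::False) {
        Bool::True => println!("not False = True"),
        Bool::False => println!("not False = False"),
    }
}
```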

October 5, 2023 · <lngnmn2@yahoo.com>

The Junk Foods of Programming

A small disclaimer: I’ve lived in India for a few years, I have some good friends there, and I think I am beginning to really understand some of the cultural aspects which govern this vastly complex and spontaneous society. Nowadays everyone, it seems, is either a programmer or an AI researcher, or both. When they are not crapto “engineers”, of course. Just like chef Gusteau from the Ratatouille movie famously proclaimed – “Anyone can cook”....

October 3, 2023 · <lngnmn2@yahoo.com>

LLM Bullshit-3

It is more or less obvious why the AI and LLM bubble is so huge - imagine just charging money for every https request to a RESTful API without, literally, being responsible for the quality of the response (it is not our fault if an LLM returned bullshit to you or, which is much worse – highly sophisticated, convincing, subtle bullshit). Again, there is not enough good code to train a model on....

October 2, 2023 · <lngnmn2@yahoo.com>

More Whys

Have you ever thought about why Set Theory and Predicate Logic look “the same” when visualized using Venn and Euler diagrams? Are these partitions the most fundamental abstract building blocks? Most of the examples used to explain logic have been drawn from “natural categories” of biological species - mammals, reptiles, men. These are distinct partitions indeed, but how did they come to be as they are? It is because somewhere in the past a literal “fork”, a mutation (or a whole set of these), occurred (and the resulting population survived)....
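
The reason the diagrams coincide is the standard correspondence between operations on sets and logical connectives on their membership predicates (writing $x \in A \iff P(x)$ and $x \in B \iff Q(x)$):

```latex
\[
\begin{aligned}
A \cap B      &= \{\, x \mid P(x) \land Q(x) \,\} \\
A \cup B      &= \{\, x \mid P(x) \lor  Q(x) \,\} \\
A^{c}         &= \{\, x \mid \lnot P(x) \,\} \\
A \subseteq B &\iff \forall x\,\bigl(P(x) \rightarrow Q(x)\bigr)
\end{aligned}
\]
```

Every region of a Venn diagram is exactly one Boolean combination of the predicates, which is why the two pictures cannot help but look alike.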

September 29, 2023 · <lngnmn2@yahoo.com>

The Way

There is something (a generalized class of algorithms) called “backtracking search”. The main property is that the algorithm goes back once a dead end is reached, or once a certain threshold of maximum steps is exceeded. There are two “strategies” for these algorithms (of how to expand the “fringe”) - one is called “depth-first”, the other – “breadth-first”. The first one goes “fast” and “narrow” (informally), while the other goes “slowly”, “layer after layer”....
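
A minimal depth-first backtracking sketch, with subset-sum as the toy problem (the problem choice and names are my own illustration): the “fringe” lives implicitly on the call stack, and “going back” is simply returning from a failed branch.

```rust
// Find a subset of `xs` that sums to `target`, depth-first.
fn backtrack(xs: &[i32], target: i32, chosen: &mut Vec<i32>) -> bool {
    if target == 0 {
        return true; // goal reached: the chosen items sum to the target
    }
    if target < 0 {
        return false; // dead end (overshot) – go back
    }
    match xs.split_first() {
        None => false, // dead end: no candidates left
        Some((&x, rest)) => {
            chosen.push(x); // commit to x and go "deep" and "narrow"
            if backtrack(rest, target - x, chosen) {
                return true;
            }
            chosen.pop(); // undo the choice – this is the backtracking
            backtrack(rest, target, chosen) // try the branch without x
        }
    }
}

fn main() {
    let xs = [3, 9, 8, 4, 5, 7];
    let mut chosen = Vec::new();
    if backtrack(&xs, 15, &mut chosen) {
        println!("found: {:?}", chosen); // e.g. [3, 8, 4]
    }
}
```

A breadth-first variant would keep an explicit queue of partial choices and expand them layer by layer instead of diving down one branch at a time.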

September 25, 2023 · <lngnmn2@yahoo.com>

Notes on proper abstractions and ADTs

Special concern for precise definitions, clarity and brevity (omitting what is clear from the context), a high level of abstraction, and proper generality (just like Sets or Numbers). Both kinds of algebraic types (“products” and “sums”) could be used “like tables”, with new columns being added without affecting any code that is already out there. This implies access by name instead of position-based (offset) access. This, in turn, is the fundamental, definitive property of structs over tuples....
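
A minimal Rust sketch of that definitive property (the type and field names are illustrative):

```rust
// A product type with named fields: a new "column" (say `z: f64`)
// could be added below without touching `norm` or the construction
// site in `main`, because all access is by name.
#[derive(Default)]
struct Point {
    x: f64,
    y: f64,
}

fn norm(p: &Point) -> f64 {
    (p.x * p.x + p.y * p.y).sqrt() // access by name, not by offset
}

fn main() {
    // Struct-update syntax keeps this construction valid when new
    // fields (with defaults) are added later.
    let p = Point { x: 3.0, y: 4.0, ..Default::default() };

    // Contrast: a tuple is accessed by position, so inserting a new
    // component silently shifts the meaning of .0 and .1 everywhere.
    let pair = (3.0_f64, 4.0_f64);
    let by_offset = (pair.0 * pair.0 + pair.1 * pair.1).sqrt();

    assert_eq!(norm(&p), by_offset); // both are 5.0
}
```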

September 24, 2023 · Ln Gnmn

Formulating the problem

These are just assorted notes for now, which shall become something ready to be formalized. Non-bullshit: the objective is to train a NN which captures subtle recurrent patterns among many well-chosen (and well-defined) features. The proper set of features that, in turn, captures the most relevant aspects of reality is what determines the distinction between a modest success and a total failure of this ML approach. All the features should be actual “measurements” of something real, like “Open Interest” or the “Long/Short ratio”, and other obvious measurements like “Volume”....
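
A minimal sketch of such a feature record in Rust (the field names follow the measurements mentioned above; everything else is an illustrative assumption, not the actual feature set):

```rust
// One row of real "measurements" sampled at a point in time.
#[derive(Debug)]
struct FeatureRow {
    open_interest: f64,    // "Open Interest"
    long_short_ratio: f64, // "Long/Short ratio"
    volume: f64,           // "Volume" over the sampling window
}

impl FeatureRow {
    // Flatten into the numeric vector a NN input layer expects.
    fn to_input(&self) -> [f64; 3] {
        [self.open_interest, self.long_short_ratio, self.volume]
    }
}

fn main() {
    let row = FeatureRow {
        open_interest: 1.25e9,
        long_short_ratio: 1.8,
        volume: 3.4e8,
    };
    println!("{:?}", row.to_input());
}
```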

September 11, 2023 · Ln Gnmn