Before one begins to write any code, one must understand the domain, its concepts, and the relationships between them. This is exactly what mathematics is all about – generalized abstract concepts, their properties and relationships.

The best way to understand something is to “hack” the “models” that experts carry inside their heads (literally, conditioned neural structures) and articulate using their specific language (slang, jargon), contexts, and idioms.

There is a hard way – become an expert, build or grow your own understanding, and then write it down as simple, well-understood mathematics – sets, functions, universal algebraic structures. This is a much better way, and, perhaps, the only way.

This is how I avoided writing a lot of useless and complex trading-bot code: while trying to understand (and model) the domain, I realized that it has absolutely nothing to do with math and rationality. It is not even “Irrational Exuberance”, or even “Narrative-based economics”.

Anyway, here is “the right way” to program anything in modern Rust, because the more type constraints – the better (you will realize this at some point and thank me later):

Abstraction and Modularity are the most fundamental principles. Abstraction barriers are “membranes”.

Representation and implementation details must be hidden behind such barriers, so they can be changed.

This is how Evolution Itself works at the cellular level: membranes separate the inside from the outside, so that changes (favorable mutations) can occur.

This enables the fundamental principle of not committing early to any choices, to avoid technical debt.

Evolution has a lot of “technical debt”, exactly because what is already “out there” cannot be changed without breaking everything that depends on it.

The ability to “delay or postpone decisions” and to “change your mind” without breaking the “contract” is the most fundamental principle of programming.

The “extreme late binding” principle of Kay and Steele is an instance of this more general principle of “postponing the decisions” and enables “loose coupling”.

This, in turn, emphasizes the view of method calls as a form of “message passing”, bound by a “contract”.
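A minimal sketch of what “message passing” bound by a “contract” can look like in Rust (the Notifier trait and its implementors here are hypothetical illustrations): the caller depends only on the trait, and which concrete method runs is bound late, at runtime, via dynamic dispatch.

```rust
// The trait is the "contract"; the concrete receiver of the "message"
// (the method call) is chosen late, via dynamic dispatch (`dyn`).
trait Notifier {
    fn notify(&self, message: &str);
}

struct Email;
struct Sms;

impl Notifier for Email {
    fn notify(&self, message: &str) {
        println!("email: {message}");
    }
}

impl Notifier for Sms {
    fn notify(&self, message: &str) {
        println!("sms: {message}");
    }
}

// The caller is written against the contract only; which `notify` runs is
// decided at runtime, not at the call site.
fn broadcast(channels: &[Box<dyn Notifier>], message: &str) {
    for channel in channels {
        channel.notify(message);
    }
}

fn main() {
    let channels: Vec<Box<dyn Notifier>> = vec![Box::new(Email), Box::new(Sms)];
    broadcast(&channels, "hello");
}
```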

“Loose coupling” is paramount. The Class-Subclass relation is a rigid set-subset relation, while Composition of Traits, including composition with extension methods, is a set-union operation.

Some limited form of “inheritance” for traits is OK, because it captures the fundamental “is a” relation itself.

This is exactly why the Haskell type-classes are called “type-classes” – they capture the notion of “is a”, so a Monoid is a Semigroup with something extra.
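A minimal Rust sketch of this “is a” relation via a supertrait (the Semigroup and Monoid names mirror the Haskell classes; the String instance is just an illustration):

```rust
// A Monoid *is a* Semigroup with an identity element.
trait Semigroup {
    fn combine(self, other: Self) -> Self;
}

trait Monoid: Semigroup {
    fn empty() -> Self;
}

impl Semigroup for String {
    fn combine(self, other: Self) -> Self {
        self + &other
    }
}

impl Monoid for String {
    fn empty() -> Self {
        String::new()
    }
}

fn main() {
    let s = String::from("foo").combine(String::from("bar"));
    assert_eq!(s, "foobar");
    assert_eq!(String::empty(), "");
}
```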

Clearly separating interfaces from implementations is an instance of the separation of concerns principle.

Start with types and traits, which must be in a 1-to-1 correspondence with the concepts of the domain.

Make sure these types and traits are purely abstract and do not contain any implementation details.

All types can be reduced to distinct “shapes” – algebraic data types: sum, product, and function types.

All types can be parameterized by other types, and thus nested, and even recursive, which is what a List is.
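A minimal sketch of these “shapes” in Rust (Point, Shape, and List are hypothetical examples; function types are just fn types and closures): a struct is a product, an enum is a sum, and a parameterized, recursive enum gives a List.

```rust
// A product type: a Point is an x AND a y.
struct Point {
    x: f64,
    y: f64,
}

// A sum type: a Shape is a Circle OR a Rectangle.
enum Shape {
    Circle { radius: f64 },
    Rectangle { width: f64, height: f64 },
}

// A parameterized, recursive sum-of-products: a List<T> is either empty,
// or a head of T AND the rest of the list.
enum List<T> {
    Nil,
    Cons(T, Box<List<T>>),
}

fn main() {
    let _origin = Point { x: 0.0, y: 0.0 };
    let _disk = Shape::Circle { radius: 1.0 };
    let _numbers: List<i32> =
        List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil))));
}
```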

The “Abstraction by Parameterization” principle is universal: it applies not just to functions, but also to types and modules.

Type-classes (and traits) define the “bounds” (the mathematical “such that”) on behaviors of nested types.

Thus “uniform” polymorphic types are the “right answer”, not just polymorphic functions (so-called “generics”).

So, not just “inheritance” but trait composition; not mere “generics” but polymorphic types with trait bounds.
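A minimal sketch of the difference (the Labeled type is a hypothetical example): not merely a generic function, but a polymorphic type whose parameter carries a trait bound, the mathematical “such that”.

```rust
use std::fmt::Display;

// A polymorphic type: Labeled<T> for any T *such that* T implements Display.
struct Labeled<T: Display> {
    label: String,
    value: T,
}

impl<T: Display> Labeled<T> {
    fn new(label: &str, value: T) -> Self {
        Labeled { label: label.to_string(), value }
    }

    fn describe(&self) -> String {
        format!("{}: {}", self.label, self.value)
    }
}

fn main() {
    let price = Labeled::new("price", 9.99);
    let name = Labeled::new("name", "widget");
    println!("{}", price.describe());
    println!("{}", name.describe());
}
```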

Write the “stubs” and “tests” at the level of such abstract interfaces first, before any implementation.

The “tests before code” mantra works only when one stays at the same level of abstraction by composing (chaining) methods, which are implicitly at the same level.
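A minimal sketch of such an “interface-first” test, assuming a hypothetical Counter trait: the test chains only the trait’s methods, and the stub exists merely so the test compiles and runs before any real implementation (a library-style snippet, exercised with cargo test).

```rust
// The abstract interface comes first.
trait Counter {
    fn increment(self) -> Self;
    fn value(&self) -> u64;
}

// A throwaway stub implementation, to be replaced later.
struct StubCounter(u64);

impl Counter for StubCounter {
    fn increment(self) -> Self {
        StubCounter(self.0 + 1)
    }
    fn value(&self) -> u64 {
        self.0
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // The test only chains methods of the trait: it stays at one level of abstraction.
    #[test]
    fn incrementing_twice_adds_two() {
        let c = StubCounter(0).increment().increment();
        assert_eq!(c.value(), 2);
    }
}
```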

Thinking about the problem at the level of high-level, abstract interfaces is “the only right way” to end up with code that is not bloated with unnecessary, useless, and error-prone crossings of abstraction barriers.

When one stays at the same level, above an appropriate abstraction barrier – using only what is one level below – one “naturally” ends up with a sort of DSL, which is implemented using the level below it.

Layered DSLs, or a hierarchy of layers of DSLs, is the most fundamental concept at the level of systems (out of sub-systems) design. This is [structurally] how all Biology Is.

So, it is not an arbitrary requirement that the hierarchical structure of the domain’s complexity should match 1-to-1 the layers of DSLs and modules. This is the only right way (yes, yes, I know).

The “empty” tests shall use the module’s public interfaces only, and thus enforce modularity early.

Use composable abstract interfaces exported from a module to enforce a clear abstraction barrier.

Modularity: Each module must manage its own ADTs and provide a high-level, stateless interface.

Composition: Chaining of such interfaces forces one to stay at the same high level of abstraction.

Immutability: Data is immutable. Functions that “modify” data will return a new, “updated” instance.

Avoid mut (and mutable references) entirely by always returning a new value. Prototyping done this way (by enforcing Referential Transparency) yields an executable mathematical model, just like every Haskell program.
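A minimal sketch, with a hypothetical Account type: no mut and no mutable references, so “updating” returns a fresh value.

```rust
struct Account {
    owner: String,
    balance: i64,
}

impl Account {
    // "Modifying" consumes the old value and returns a new, "updated" one.
    fn deposit(self, amount: i64) -> Account {
        Account { balance: self.balance + amount, ..self }
    }
}

fn main() {
    let opened = Account { owner: "alice".to_string(), balance: 0 };
    let funded = opened.deposit(100).deposit(50);
    assert_eq!(funded.balance, 150);
    assert_eq!(funded.owner, "alice");
}
```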

Functional Pipelines: chain (compose) methods (abstract, high-level interfaces) and use higher-order functions to process data without mutable state.
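A minimal sketch of such a pipeline (the order amounts are made up): iterator adapters and higher-order functions, with no mutable accumulator anywhere.

```rust
fn main() {
    let orders = vec![120u32, 30, 75, 200, 15];

    // A chain of higher-order functions: filter, then map, then fold into a sum.
    let discounted_total: u32 = orders
        .iter()
        .copied()
        .filter(|&amount| amount >= 50)   // keep only the "large" orders
        .map(|amount| amount * 90 / 100)  // apply a 10% discount
        .sum();

    assert_eq!(discounted_total, 355);
}
```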

Algebraic Data Types: struct for product types (a combination of fields, an “each-of” type) and enum for sum types (a “one-of” type for several possibilities).

Use sum types and product types for modeling domain concepts, as Abstract Data Types (ADTs).

Yes, Algebraic Data Types, encapsulated inside Abstract Data Types, packaged into a set of orthogonal, self-contained, loosely-coupled modules exporting high-level, abstract interfaces.
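A minimal sketch, with a hypothetical inventory module: the representation is a private field behind the abstraction barrier, and the module exports only a small, high-level interface that returns new values instead of mutating.

```rust
mod inventory {
    // The representation is private: callers cannot depend on it,
    // so it can change without breaking anything above the barrier.
    pub struct Stock {
        units: u32,
    }

    impl Stock {
        pub fn new(units: u32) -> Stock {
            Stock { units }
        }

        // No mutation: "receiving" stock returns a new value.
        pub fn receive(self, units: u32) -> Stock {
            Stock { units: self.units + units }
        }

        pub fn available(&self) -> u32 {
            self.units
        }
    }
}

fn main() {
    let stock = inventory::Stock::new(10).receive(5);
    assert_eq!(stock.available(), 15);
}
```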

Modern Rust Features: Traits (as type classes/for duck typing), pattern matching, Option/Result, the ? operator, iterators, higher-order functions, new-types, smart constructors.

No nulls (or so-called nullable types) in principle. One will always forget to check for a null.

Use pattern matching consistently on enum and struct types; it forces one to always exhaustively handle all the possible cases.
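A minimal sketch (PaymentMethod is a hypothetical example): the compiler rejects any match that does not handle every variant, so adding a new variant later becomes a compile error rather than a silent bug.

```rust
enum PaymentMethod {
    Cash,
    Card { last_four: u16 },
    Voucher(String),
}

// The match is exhaustive: leaving out a variant is a compile-time error.
fn describe(method: &PaymentMethod) -> String {
    match method {
        PaymentMethod::Cash => "paid in cash".to_string(),
        PaymentMethod::Card { last_four } => format!("card ending in {last_four:04}"),
        PaymentMethod::Voucher(code) => format!("voucher {code}"),
    }
}

fn main() {
    println!("{}", describe(&PaymentMethod::Card { last_four: 1234 }));
}
```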

“New-Types” are structs with a single field that wrap a primitive type to give it domain-specific meaning and constraints, preventing “naked primitive types.”
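A minimal sketch (Meters and Seconds are hypothetical examples): the new-types make it impossible to pass a duration where a distance is expected.

```rust
// New-types: single-field structs that give a bare f64 domain meaning.
struct Meters(f64);
struct Seconds(f64);

fn speed(distance: Meters, time: Seconds) -> f64 {
    distance.0 / time.0
}

fn main() {
    let v = speed(Meters(100.0), Seconds(9.58));
    println!("{v:.2} m/s");
    // speed(Seconds(9.58), Meters(100.0)); // would not compile: arguments swapped
}
```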

“Smart Constructors” are functions, part of a module’s public interface, that create instances of a type while enforcing the specification’s representation invariants.

“Smart constructors” should fail rather than return a “meaningless” None, and must never return an invalid or partially valid state. Invalid states must be unrepresentable.
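A minimal sketch, with a hypothetical Percentage new-type: the field is private, so the only way to obtain a value is through the smart constructor, and an out-of-range percentage is simply unrepresentable.

```rust
pub struct Percentage(f64);

impl Percentage {
    // Fails loudly (with an error, not a silent None) when the invariant is violated.
    pub fn new(value: f64) -> Result<Percentage, String> {
        if (0.0..=100.0).contains(&value) {
            Ok(Percentage(value))
        } else {
            Err(format!("{value} is not a valid percentage"))
        }
    }

    pub fn value(&self) -> f64 {
        self.0
    }
}

fn main() {
    let discount = Percentage::new(15.0).expect("valid percentage");
    println!("discount: {}%", discount.value());
    assert!(Percentage::new(150.0).is_err());
}
```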

An Option is a redundant wrapper at the level of “smart constructors”. It is useful only when the absence of a value is a valid, expected outcome of an operation.

Use Result types only when failure is a valid, expected outcome of an operation. Otherwise, use panic for unexpected errors. The “fail fast” principle.

Use the “question mark” operator within a Result-returning function to propagate errors.
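A minimal sketch (parse_pair is a hypothetical function): each fallible step returns a Result, and ? propagates the error to the caller instead of nesting match expressions.

```rust
use std::num::ParseIntError;

// Both parses can fail; `?` returns early with the Err instead of nesting matches.
fn parse_pair(a: &str, b: &str) -> Result<(i64, i64), ParseIntError> {
    let x = a.parse::<i64>()?;
    let y = b.parse::<i64>()?;
    Ok((x, y))
}

fn main() {
    assert_eq!(parse_pair("2", "40"), Ok((2, 40)));
    assert!(parse_pair("2", "forty").is_err());
}
```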