I have mentioned three possible failure points, but none of them is well described.

There are some interesting issues here.

How can failure be predicted with any certainty?

We know that a system must be capable of failure and that a set of propositions must be refutable to have any meaning.

Following Urbas, if a set of assumptions has ⊥ as a consequence (the ⊥ sign is often dropped), then anything follows.

Δ ⊢ ⊥ / Δ ⊢ α (Weak-R)
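This is the classical explosion principle, *ex falso quodlibet*, and it can be checked mechanically. A minimal sketch in Lean (my own illustration, not from Urbas):

```lean
-- Ex falso quodlibet: from a proof of ⊥ (False), any proposition α follows.
example {α : Prop} (h : False) : α :=
  False.elim h
```

The point for what follows is that a theory from which ⊥ is derivable proves everything, and therefore distinguishes nothing.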

Can we conclude, in Radix's terms, that merely not failing counts as success?

When I consider the $400T bandied about as the potential market for DeFi, I see only a very large, ultimately comical discrepancy between the reality of now and this vague vision.

Repeating that vague vision in the form of "we're gonna make it" is just gamblers' sloganeering.

You can take it as fun if you like, but one thing I am sure of is that it (that sort of activity) will never approach the idealised goal.

I comment on the ill-conceived nature of this elsewhere.

There are further questions:

Why has the route never been mapped out in terms of models, suppositions and range of metrics?

Is it because:

1. The route is unknown and, therefore, cannot be mapped.
2. No one has known how to map out this route.
3. No one has wanted to map out the route for fear of the conclusion that would be drawn.

I am thinking about point 2 at the moment.

I think the basic layer uses mechanisms that have resulted in an intellectual impasse or blindness.

That impasse is built into the model, which is finally a mathematical and logical model.

Allowing the logic to be **limited** to the idea that the DLT (blockchain) solves problems forces the idea that the DLT can solve **all** associated problems.

That assumption is false and therefore very limiting.

It is possible to logically build on the logic of the DLT without contradicting it.

Now, we already know that assumption is false, as we see many other blockchains concentrating on building specific solutions.

Those solutions cannot be evaluated and understood unless seen in relation to their own ecosystem as well as the surrounding one.

In Radix, comparisons with other chains have been made in terms of their base layer abilities.

That is insufficient to grasp the value of what might be created on top, irrespective of the base layer.

The thrust of Radix is the innovation that exists on top of the base layer, the programmatic tools that work with assets’ intrinsic definitions.

I have seen very little, if any, methodical discussion in Radix circles of these properties and attributes, and of how they can be composed or combined, compared with other DeFi implementations that address the same problems.

There seems to be a hitch: people are still thinking in terms of finance, the finance they think they know, magically transferred to the DLT. Those are not the correct criteria.

DeFi is not the finance that people think they know, and I suspect that most making that evaluation do not know much about TradFi either.

What is a measure of value?

A measure of value for any technology must be how and to what extent it is used.

A road’s technology is measured in terms of the vehicles that enter and leave it, combined with the quality of the journey.

A single vehicle goes from A to B.

Many vehicles go from *Aᵢ* to *Bₙ*.

Lights on the road direct, slow or stop that traffic.

There is a huge disparity between the small amount of energy needed to signal a stop to traffic in a particular area and the large amount of energy consumed by stopping and then restarting those vehicles.

What if that energy could be harnessed?

Some of the math involved in LLMs, RAG and associated technology, such as vector databases and knowledge graphs, involves understanding multidimensionality and computing proximity, surface area, and volume in many dimensions.

In multidimensional space, the terms adjacent, near and distant have different meanings from three-dimensional space.
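A quick numerical sketch of this point (my own illustration, using uniformly random points): as the dimension grows, the nearest and farthest neighbours of a query point become almost equidistant, so "near" and "distant" lose their ordinary contrast.

```python
import math
import random

random.seed(0)

def dist(u, v):
    """Euclidean distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def contrast(dim, n=200):
    """Relative spread (max - min) / min of distances from a random
    query point to n random points in the unit cube of dimension dim.
    Small values mean all neighbours look roughly equidistant."""
    query = [random.random() for _ in range(dim)]
    ds = [dist(query, [random.random() for _ in range(dim)]) for _ in range(n)]
    return (max(ds) - min(ds)) / min(ds)

for d in (2, 10, 1000):
    print(d, round(contrast(d), 3))
```

In low dimensions the ratio is large; in high dimensions distances concentrate around a common value, which is why vector databases rely on relative rankings rather than absolute notions of "close".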

But we are still concerned with limits.

Looking at category theory, we are also concerned with delineation and limits.

Why does this matter?

Because when we have a clear, logical model in general terms of what is happening in the domain of interest, the DLT, we can begin to look at each constituent element.

For instance, as mentioned, what is meant by value?

What is finance, and what is money in this context?

Perhaps neither of these is a single well-known and agreed-upon concept.

(Obviously, they are not)

It is a bit tedious of me to quote from Wikipedia here, but I just want to get across the idea of looking at fundamental structures.

### The Terms Functor and Adjunction

An **adjunction** is a relationship that two functors may exhibit, intuitively corresponding to a weak form of equivalence between two related categories. Two functors that stand in this relationship are known as **adjoint functors**, one being the **left adjoint** and the other the **right adjoint**.

By definition, an adjunction between categories C and D is a pair of functors (assumed to be covariant)

F : D → C and G : C → D

and, for all objects X in C and Y in D, a bijection between the respective morphism sets

hom_C(FY, X) ≅ hom_D(Y, GX)

such that this family of bijections is natural in X and Y. Naturality here means that there are natural isomorphisms between the pair of functors C(F–, X) : D → Set^op and D(–, GX) : D → Set^op for a fixed X in C, and also between the pair of functors C(FY, –) : C → Set and D(Y, G–) : C → Set for a fixed Y in D.

The functor F is called a **left adjoint functor** or **left adjoint to G**, while G is called a **right adjoint functor** or **right adjoint to F**. We write F ⊣ G.

An adjunction between categories C and D is somewhat akin to a “weak form” of equivalence between C and D, and indeed, every equivalence is an adjunction. In many situations, an adjunction can be “upgraded” to an equivalence, by a suitable natural modification of the involved categories and functors.

What has been arrived at here is the mathematical definition of a limit, a boundary, and its counterpart, equivalence.
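A concrete, programmer-level instance of an adjunction (my own illustration, not part of the quoted definition): currying exhibits the product functor (– × B) as left adjoint to the exponential functor (–)^B, via the natural bijection hom(A × B, C) ≅ hom(A, C^B).

```python
# Currying as an adjunction: hom(A × B, C) ≅ hom(A, C^B).
# `curry` and `uncurry` are the two directions of the bijection;
# they are mutually inverse, which is the heart of (– × B) ⊣ (–)^B.
def curry(f):
    """hom(A × B, C) → hom(A, C^B): a function on pairs becomes
    a function returning a function."""
    return lambda a: lambda b: f((a, b))

def uncurry(g):
    """hom(A, C^B) → hom(A × B, C): the inverse direction."""
    return lambda ab: g(ab[0])(ab[1])

add = lambda ab: ab[0] + ab[1]
assert curry(add)(2)(3) == 5
assert uncurry(curry(add))((2, 3)) == add((2, 3)) == 5
```

Every functional programmer uses this adjunction daily without naming it; the "weak equivalence" in the definition above is exactly the interchangeability of the two calling conventions.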

Analysis can become more precise when considering the modelling provided by monads (and comonads), given that a monad is a generalisation of the concept of a monoid.

A monoid corresponds to a one-object category: the morphisms (arrows or, in the formalism of string diagrams, lines) from the object to itself correspond to the monoid's elements, and their composition corresponds to the binary operation. In string diagrams, the horizontal direction represents consecutive transformations and the vertical direction parallel ones.
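A minimal sketch of a monad in code (my own illustration): the list monad, with the three monad laws checked on sample values.

```python
# The list monad: `unit` wraps a value, `bind` sequences computations
# that each return a list. The assertions check the three monad laws.
def unit(x):
    return [x]

def bind(xs, f):
    """Apply f (which returns a list) to each element and flatten."""
    return [y for x in xs for y in f(x)]

f = lambda x: [x, x + 1]
g = lambda x: [x * 2]

assert bind(unit(3), f) == f(3)                     # left identity
assert bind([1, 2], unit) == [1, 2]                 # right identity
assert bind(bind([1, 2], f), g) == \
       bind([1, 2], lambda x: bind(f(x), g))        # associativity
```

The laws are the monoid laws in disguise: `unit` plays the role of the identity element and `bind` of the associative operation, which is the sense in which a monad generalises a monoid.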

We have just seen that boundaries look very different in multidimensional vector space; however, they must still correspond to the same categorical principles.

We have the verifiability of transactions on the DLT; these exist as bounded elements. It can be seen that, conceptually, this can be projected into sparse vector space.

In practice, what would be achieved is the proper return of those projected elements after the transformations performed in vector space. In other words, a series of permissions would be passed into the vector space and reflected back after data queries and transformations; without them, the information passed back from the AI model would be rejected.
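A hypothetical sketch of this idea (entirely my own illustration; the store layout, token scheme, and function names are invented, not an existing API): ledger-style permissions travel with each stored vector, and only results the caller's token is entitled to are reflected back from a similarity query.

```python
# Hypothetical sketch: ledger-issued permissions gating vector-store
# results. Each stored vector carries the set of tokens allowed to
# read it; `query` drops anything the caller may not see.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

store = [
    {"id": "tx-1", "vec": [1.0, 0.0], "allowed": {"alice"}},
    {"id": "tx-2", "vec": [0.9, 0.1], "allowed": {"alice", "bob"}},
    {"id": "tx-3", "vec": [0.0, 1.0], "allowed": {"bob"}},
]

def query(vec, token, top_k=2):
    """Return the ids of the nearest items visible to `token`."""
    visible = [r for r in store if token in r["allowed"]]
    ranked = sorted(visible, key=lambda r: cosine(vec, r["vec"]), reverse=True)
    return [r["id"] for r in ranked[:top_k]]

print(query([1.0, 0.0], "bob"))
```

The design point is that the permission check happens at the boundary of the vector space, so results lacking the right credential never reach the model at all.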

Coming back to earth for a second, it is very strange that the domain of AI has the fuzziest of boundaries, which lend themselves to exploiting people through various forms of imitation and coercion.

At the same time, something similar may be said of DeFi, but it occurs in a very different way.

There is a project called GEO, written up recently in *Building a Decentralized Brain with AI & Crypto* by StreamingFast, which is looking closely at these issues and building a decentralised knowledge graph.

And yes, if you follow the links to the GeoBrowser, you will see a category for crypto assets. Radix is listed at the top of panel five.

The clarity of the boundaries, permissions, and locks of the DLT can be used to protect people and constrain the exploitation that is an intrinsic danger of AI.

The modest per-transaction cost but high-volume revenue from this would allow the DLT to create value and prove its worth.