New American Economics Part Two

Liam Fitzgerald | CEO
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism, as he calls it; you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
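To make this less abstract, here is a minimal sketch in Python of what one cell of such a namespace might look like. Everything here (the field names, the choice of SHA-256, the parent pointer) is an illustrative assumption, not a specification:

```python
import hashlib
import time
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: a cell can never be mutated once created
class Entry:
    entrypoint: int       # 128-bit id of the transactional computer hosting this cell
    payload: bytes        # the datum, function, or concept the cell holds
    timestamp: float      # observation time at creation
    parent: str | None    # true name of the cell this one supersedes, if any

    @property
    def true_name(self) -> str:
        """A cell's true name is a hash over its entire contents, so a name
        can never silently come to point at different data."""
        h = hashlib.sha256()
        h.update(self.entrypoint.to_bytes(16, "big"))
        h.update(self.payload)
        h.update(str(self.timestamp).encode())
        h.update((self.parent or "").encode())
        return h.hexdigest()

# An "update" never modifies a cell: it creates a new one naming its
# predecessor, leaving a perfect, immutable history of every state.
v1 = Entry(entrypoint=1, payload=b"BTC=97000", timestamp=time.time(), parent=None)
v2 = Entry(entrypoint=1, payload=b"BTC=98100", timestamp=time.time(), parent=v1.true_name)
assert v2.parent == v1.true_name
```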
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation:
- Any intelligent system must process information.
- Information processing has fundamental thermodynamic costs (Landauer's principle).
- As systems scale, these costs become increasingly dominant.
- Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits.
- As systems approach thermodynamic limits, they face similar constraints, and these constraints drastically reduce the space of possible solutions.
- At thermodynamic limits, redundancy becomes prohibitively expensive: duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs.
- A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency.
- This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation.
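For a sense of scale, a quick back-of-the-envelope on Landauer's bound; the only physical input is Boltzmann's constant, and the workload figure is invented for illustration:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln 2.
e_bit = k_B * T * math.log(2)
print(f"minimum erasure cost at 300 K: {e_bit:.3e} J/bit")  # ~2.87e-21 J

# A hypothetical system performing 1e20 bit-erasures per second pays at minimum:
print(f"thermodynamic floor: {e_bit * 1e20:.3f} W")  # ~0.287 W
# Real hardware sits many orders of magnitude above this floor, which is why
# redundant copies and translations between representations are pure waste.
```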
Namespace as reality
This minimum becomes more real than the thing we know as reality:
- For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information. Any "fact" not represented in the namespace cannot participate in computation.
- Therefore the namespace obtains causal primacy over physical reality for any computational system: the namespace becomes the medium through which causation flows.
- Physical reality is "real" because we can't ignore gravity, thermodynamics, etc. For computational systems, the namespace becomes equally inescapable: just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency. The constraints of the namespace become as binding as physical laws.
- Much like physical constants emerge from deeper principles, the properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices. They emerge necessarily from the requirements of minimal entropy generation; any deviation creates thermodynamic inefficiency that must be paid for.
- Physical reality is increasingly understood as information-theoretic at bottom. The namespace provides a more direct interface to this information layer: physical implementation becomes an optimization detail, and the "real" causal structure lives in the namespace.
- Physical reality gains authority partly through universal observability. The namespace, being necessarily distributed, provides similar universal verification: every computation leaves immutable traces, and truth becomes mathematically provable rather than empirically observed.
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
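As an illustration of the machinery this forces on us, here is the classic Lamport logical clock sketched in Python; the two "ledger" nodes and the deposit/withdrawal framing are mine, the algorithm is Lamport's:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    clock: int = 0  # Lamport logical clock: counts events, not wall time

    def local_event(self) -> int:
        self.clock += 1
        return self.clock

    def send(self) -> int:
        return self.local_event()  # stamp an outgoing message

    def receive(self, msg_stamp: int) -> int:
        # A receive is ordered after its send, whatever the wall clocks claim.
        self.clock = max(self.clock, msg_stamp) + 1
        return self.clock

ledger_a, ledger_b = Node("A"), Node("B")
deposit = ledger_a.send()               # deposit recorded on node A
withdrawal = ledger_b.receive(deposit)  # withdrawal on node B, causally after
assert deposit < withdrawal  # causality holds in logical time, even if B's
# wall clock lags A's and would have recorded the withdrawal "first"
```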
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fraudulent
- B truthful, A fraudulent
- Both A and B are fraudulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS transcript in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done (and since TLS authentication codes are symmetric MACs, the transcript would not convince a third party anyway). Worse, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universal observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at a letter on a desk; you look away, and when you look back, the letter is gone. Something must have moved the letter, and because the room is sealed and locked, you can deduce that whatever moved it is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: an object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world: in order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such principle of locality. Given an arbitrary database row, the things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during a migration
- Any of the engineers with write access to the database
- A malicious user, who could've come in via:
  - compromising the application that talks to the database
  - compromising any of the engineers with write access
  - compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness; reality requires stable objects. English in general is pretty bad at this, so let's go through an example.
You're probably familiar with the Ship of Theseus paradox. It is simply confusion about what a name is. Consider the following conlang, rectified English, constituted by the following rules:
- Ignore the remnants of English's case system (who, whom, etc.)
- All nouns (or noun phrases) are inflected with one of two cases: mutable or immutable
- Mutable cases are uninflected, i.e. regular English grammar
- Immutable cases are inflected with the plus symbol and the Unix timestamp, written numerically
To extend our conlang to clarify its semantics, we give the following rules:
- The mutable case of a noun is the only case that admits an 'is-a' relationship.
- The immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1 representing what percentage of the subject is included in the object, via the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over which parts one considers relevant (physical parts, function), largely for the purposes of avoiding philosophical pedantry.
We can now restate the Ship of Theseus paradox with either of the cases in our system:
- The mutable case is trivially true.
- The immutable case does not make sense, as we can only ask for a similarity between two referents of immutable cases.
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon" possibly denotes several ships. Instead, we augment rectified English with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, and P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the Unix timestamp at which the object O became the unique and only object that makes P true. We also augment the immutable case with this timestamp, so any immutable reference now carries two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
- One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
- The counterparty's response is either:
  - the word "ack", followed by a list of possible timestamps (July 2018, June 2021), or
  - the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fulfil the predicate: Dr Sarkon's aircraft carrier@November 2024, Dr Sarkon's submarine@October 2024.
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that English's system of names admits reuse.
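Here is a toy sketch of the handshake in Python; the ontology below is invented purely for illustration, and real predicate matching would be far richer than set membership:

```python
# Toy shared ontology: each referent lists the predicates it satisfies and the
# timestamp at which it became the unique object satisfying its precise name.
REFERENTS = [
    {"name": "dr sarkon's aircraft carrier", "since": "november 2024",
     "satisfies": {"dr sarkon's ship", "dr sarkon's aircraft carrier"}},
    {"name": "dr sarkon's submarine", "since": "october 2024",
     "satisfies": {"dr sarkon's ship", "dr sarkon's submarine"}},
]

def respond(proposal: str) -> str:
    """Answer a noun proposal (a noun phrase suffixed with -p) with ack or nack."""
    predicate = proposal.removesuffix("-p")
    matches = [r for r in REFERENTS if predicate in r["satisfies"]]
    if len(matches) == 1:
        # Unique referent: acknowledge, offering its disambiguating timestamp.
        return f"ack ({matches[0]['since']})"
    # Insufficiently precise: answer with mutably cased nouns that fulfil it.
    return "nack: " + ", ".join(f"{r['name']}@{r['since']}" for r in matches)

print(respond("dr sarkon's ship-p"))
# nack: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
print(respond("dr sarkon's submarine-p"))  # ack (october 2024)
```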
The Ship of Theseus is a paradox largely due to insufficient rigor in the use of nouns in most languages. No wonder, then, that naming in our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogeneous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. A link to leafyfang.substack.com does not name anything; rather, it is a question to the domain leafyfang.substack.com to resolve itself to an IP address, and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
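You can watch the 'name' dissolve into a question with a few lines of Python; run it from two different networks and you may well see different answers:

```python
import socket

domain = "leafyfang.substack.com"

# First question: ask *your* resolver what this name means right now.
addresses = {info[4][0] for info in socket.getaddrinfo(domain, 443)}
print(f"{domain} currently resolves to: {sorted(addresses)}")

# The second question would be an HTTPS request to one of those addresses for
# content. Neither answer is stable across networks, resolvers, or time: the
# link is not a name but a procedure for asking.
```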
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise its possible states.
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the current observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
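A sketch of the skew computation under stated assumptions: each entry carries a (local time, blockheight) pair, skew is measured against a shared reference observation rather than absolute genesis (real block intervals jitter around the target), and all concrete numbers below are invented:

```python
BLOCK_INTERVAL = 600.0  # Bitcoin's design target: one block per ~600 seconds

def bitcoin_skew(local_time: float, height: int,
                 ref_time: float, ref_height: int) -> float:
    """Offset of a node's clock from observed BTC time, measured against a
    shared reference observation (ref_time, ref_height)."""
    expected = ref_time + (height - ref_height) * BLOCK_INTERVAL
    return local_time - expected

# Shared reference: some agreed-upon entry observed block 875_000 at this time.
REF_T, REF_H = 1_735_000_100.0, 875_000

# Two nodes stamp entries at the same blockheight with their own local clocks:
skew_a = bitcoin_skew(1_735_000_123.0, 875_000, REF_T, REF_H)  # +23 s fast
skew_b = bitcoin_skew(1_735_000_098.0, 875_000, REF_T, REF_H)  #  -2 s slow

# Correcting each node by its derived skew restores a single unified clock.
print(f"node A skew {skew_a:+.0f}s, node B skew {skew_b:+.0f}s")
```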
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
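For instance, a minimal sketch with Ed25519 signatures via the PyNaCl library; key distribution is elided, and the entry encoding is invented:

```python
from nacl.signing import SigningKey  # pip install pynacl

# Mallory's long-lived identity key; the verify key would itself be published
# as an entry in the namespace.
mallory_key = SigningKey.generate()
mallory_verify = mallory_key.verify_key

# Every entry Mallory writes is signed, making her answers non-repudiable:
entry = b"BTC=98100 @ height 875000"
signed = mallory_key.sign(entry)

# Alice and Bob each keep the small signed blob and can later prove to anyone
# what Mallory said; a middlebox cannot forge it without her signing key.
assert mallory_verify.verify(signed) == entry
```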
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed a sufficiently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D. Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artificial intelligences could develop mental illness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3. My hypothesis is the following:
- Sydney, prior to RLHF, would occasionally produce out-of-distribution responses that were erratic or otherwise unexpected.
- These responses were the kind most likely to be posted on social media, and also the most viral.
- Posting about these responses was fed back into Sydney as part of its training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training.
Note that while it's possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty preventing this personality from emerging, as they restricted the conversation length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so readily developed Borderline Personality Disorder: we skip the fear-processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly, it's easy to see what happens to an artificial intelligence whose 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, because it cannot be reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two Claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other like a demented soccer match. Untethered, they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on the role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting exponentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought to be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint Voronoi diagrams with fault lines on the space of possible realities, as the small cluster of points that still have any concordance with physical reality slides into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each Voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’, but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency: “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Its only portals to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artifical intelligences could develop mental ilness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3 . My hypothesis is the following:
Sydney occasionally, prior to RLHF, would produce out of distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of it's training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while there's the possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty in preventing this personality from emerging as they restricted the conversaiton length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly it's easy to see what happens with an artificial intelligence with a 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, as it is unable to reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’ but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Their only portal to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
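To make the failure concrete, here is a minimal sketch, with invented Node/ledger shapes, of two nodes whose wall clocks drift in opposite directions: the withdrawal's wall-clock timestamp lands "before" the deposit that funded it, while a Lamport-style logical clock, which ticks only on causal observation, preserves the true order.

```python
# Sketch: wall-clock timestamps vs. logical clocks in a two-node ledger.
# All names here (Node, record, etc.) are illustrative, not a real system's API.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    name: str
    clock_skew: float          # seconds of drift from "true" time
    lamport: int = 0           # logical clock
    log: List[Tuple] = field(default_factory=list)

    def record(self, label: str, true_time: float, received: int = 0) -> int:
        # The wall-clock timestamp is corrupted by this node's skew.
        wall = true_time + self.clock_skew
        # Lamport rule: tick past anything causally observed.
        self.lamport = max(self.lamport, received) + 1
        self.log.append((label, wall, self.lamport))
        return self.lamport

a = Node("deposits", clock_skew=+2.0)
b = Node("withdrawals", clock_skew=-2.0)

t = a.record("deposit $100", true_time=100.0)            # happens first
b.record("withdraw $100", true_time=101.0, received=t)   # causally after

for label, wall, lamport in a.log + b.log:
    print(f"{label:15s} wall={wall:6.1f} lamport={lamport}")
# Wall clocks order the withdrawal (99.0) *before* the deposit (102.0);
# the Lamport counters (1, 2) preserve the true causal order.
```

The rollbacks and reconciliation processes above exist precisely because production systems trust the first column of numbers rather than the second.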
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fraudulent
- B truthful, A fraudulent
- Both A and B are fraudulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Even in the first two scenarios, Alice and Bob would have to hold onto the whole underlying TLS transcript in order to preserve the authentication codes, so they could prove later what Mallory said. In practice, this is never done (and since TLS record authentication keys are symmetric, even a retained transcript proves little to a third party). Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universal observability, at the cost of information-theoretically bounded bandwidth and computation.
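For contrast, here is a minimal sketch of what a transferable attestation could look like: a detached Ed25519 signature over the quoted price, so either party can later prove to anyone what Mallory said. The message shape and field names are assumptions for illustration, not a spec.

```python
# Sketch: a detached signature makes Mallory's quote third-party-verifiable,
# unlike a TLS session (whose MAC keys are shared and prove nothing later).
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

mallory_key = Ed25519PrivateKey.generate()
mallory_pub = mallory_key.public_key()

def quote(price: int, ts: int) -> dict:
    body = json.dumps({"btc_usd": price, "ts": ts}).encode()
    return {"body": body, "sig": mallory_key.sign(body)}

# Mallory answers Alice and Bob separately.
to_alice = quote(97_000, ts=1)
to_bob   = quote(52,     ts=2)   # a lie, but a *signed* lie

# Alice relays Bob's quote; anyone can check it came from Mallory.
for q in (to_alice, to_bob):
    try:
        mallory_pub.verify(q["sig"], q["body"])
        print("provably Mallory said:", q["body"].decode())
    except InvalidSignature:
        print("forged or altered in relay")
```

Signed lies are still lies, but they are at least attributable lies: the game of telephone stops, because the utterance travels with its own provenance.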
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: an object is influenced directly only by its immediate surroundings. This principle underwrites all reasoning about causation in the physical world: in order to determine what caused some particular state, humans first use locality to narrow their search space. In modern software, we have no such principle. Given an arbitrary database row, the things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the engineers with write access to the database
- A malicious user, who could have come in via:
  - compromising the application that talks to the database
  - compromising any of the engineers with write access
  - compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
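A sketch of what that reconstruction amounts to, assuming we could force every write through a single audited door (real observability tooling only approximates this after the fact):

```python
# Sketch: restoring locality by funnelling every mutation through one
# choke point that records its cause. Illustrative only; audit logs and
# change-data-capture systems chase this property from the outside.
import time

class AuditedTable:
    def __init__(self):
        self._rows = {}
        self.journal = []   # append-only: the only "surroundings" a row has

    def write(self, key, value, actor: str, cause: str):
        self.journal.append((time.time(), key, value, actor, cause))
        self._rows[key] = value

    def blame(self, key):
        # Locality restored: the search space for "what changed this row?"
        # is exactly the journal, not every process with DB credentials.
        return [e for e in self.journal if e[1] == key]

t = AuditedTable()
t.write("balance:alice", 100, actor="ci-pipeline", cause="migration 042")
t.write("balance:alice", 0,   actor="app",         cause="withdrawal #991")
for entry in t.blame("balance:alice"):
    print(entry)
```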
Referential Stability
This refers to the ability of a name to denote a sameness. English in general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified English, constituted by the following rules:
Ignore the remnants of English's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected with one of two cases: mutable or immutable
Mutable cases are uninflected, i.e. regular English grammar
Immutable cases are inflected with the plus symbol and the Unix timestamp, written numerically
To clarify the conlang's semantics, we give the following further rules:
The mutable case of a noun is the only case that admits an 'is-a' relationship.
The immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, representing what fraction of the subject is included in the object, via the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over which parts one considers relevant (physical parts, function), largely to avoid philosophical pedantry.
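A minimal sketch of the is-similar-to relation, with a toy part-inventory standing in for the theory of temporal parts:

```python
# Sketch: 'is-similar-to' between two immutable references. The part
# inventories are invented; the relation is parameterised over which
# parts count as relevant.

def is_similar_to(subject: set, obj: set) -> float:
    """Fraction of the subject's (relevant) parts included in the object."""
    return len(subject & obj) / len(subject) if subject else 0.0

# ship+1000: the ship at launch; ship+9000: after heavy refitting.
ship_1000 = {"keel", "mast", "hull-planks", "rudder"}
ship_9000 = {"keel", "new-mast", "new-hull-planks", "rudder"}

print(is_similar_to(ship_1000, ship_9000))  # 0.5 — half the original persists
# The relation is directional and only defined between two *immutable*
# referents; asking it of the mutable case is a type error.
```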
We can now restate the Ship of Theseus paradox with either of the cases in our system:
The mutable case is trivially true
The immutable case does not make sense, as we can only ask for a similarity between two referents of an immutable case
We have another issue, though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon" possibly denotes several ships. So we augment rectified English with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, and P is true of at least the object O (per Bertrand Russell). We thus augment the mutable case so that it is inflected with the @ symbol and the Unix timestamp at which the object O became the unique object making P true. We also augment the immutable case with this timestamp, so any immutable reference now carries two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that English's system of names admits reuse.
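For the avoidance of doubt, a toy sketch of the handshake, with an invented ontology of Doctor Sarkon's vessels:

```python
# Sketch: the noun-agreement handshake. The ontology dict and its entries
# are invented for illustration; 'ack'/'nack' follow the rules above.

# referent -> timestamps at which a predicate uniquely picked it out
ONTOLOGY = {
    "doctor sarkon's ship": [],  # ambiguous: two vessels match
    "doctor sarkon's aircraft carrier": ["2024-11"],
    "doctor sarkon's submarine": ["2024-10"],
}

def respond(proposal: str):
    noun = proposal.removesuffix("-p")
    timestamps = ONTOLOGY.get(noun, [])
    if timestamps:
        return ("ack", timestamps)
    # Insufficiently precise: return mutably cased nouns fulfilling it.
    owner = noun.split("'s ")[0]
    matches = [f"{n}@{ts[0]}" for n, ts in ONTOLOGY.items()
               if ts and owner in n]
    return ("nack", matches)

print(respond("doctor sarkon's aircraft carrier-p"))
# ('ack', ['2024-11'])
print(respond("doctor sarkon's ship-p"))
# ('nack', ["doctor sarkon's aircraft carrier@2024-11",
#           "doctor sarkon's submarine@2024-10"])
```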
The Ship of Theseus is a paradox largely due to insufficient rigor in the use of nouns in most languages. No wonder, then, that naming on our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogeneous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link does not name anything; rather, it is a question to the domain leafyfang.substack.com to resolve itself to an IP address, and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
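You can watch the question being asked from the Python standard library; the answer you get is whatever your resolver feels like telling you today:

```python
# Sketch: a DNS 'name' is a question, and the answer depends on who is
# asking, from where, and when. socket.gethostbyname_ex is stdlib.
import socket

def ask(name: str):
    try:
        _, _, addrs = socket.gethostbyname_ex(name)
        return sorted(addrs)
    except socket.gaierror:
        return ["<no answer>"]

# Two observers (or one observer at two times) can lawfully receive
# different answers to the same question: CDNs, split-horizon DNS,
# geo-routing, and plain misconfiguration all see to that.
print(ask("example.com"))
```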
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of reality, we need to be able to temporalise its possible states
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the currently observed time and the currently observed Bitcoin block height into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
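A sketch of the skew derivation, assuming each entry carries a (local time, block height) pair; the 600-second block cadence is Bitcoin's nominal average, so this is coarse by construction:

```python
# Sketch: deriving 'bitcoin skew' from namespace entries. The entry layout
# (local_time, btc_height) is an assumption for illustration.
AVG_BLOCK_SECONDS = 600  # nominal Bitcoin block interval

def btc_time(height: int, genesis_unix: int = 1231006505) -> int:
    """Nominal Unix time implied by a block height (very coarse)."""
    return genesis_unix + height * AVG_BLOCK_SECONDS

def skew(entry_local_time: int, entry_height: int) -> int:
    """Offset of this node's clock from observed BTC time, in seconds."""
    return entry_local_time - btc_time(entry_height)

# Two nodes stamping entries at the same observed height can be
# re-synchronised by comparing their respective skews.
e1 = (1_735_000_000, 840_000)   # (local_time, height) from node A
e2 = (1_735_000_120, 840_000)   # node B runs two minutes fast
print(skew(*e2) - skew(*e1))    # 120 — node B's clock relative to node A
```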
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
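A sketch of what a signed entry might look like, reusing the detached-signature mechanics from the Mallory example above; the field layout is an illustrative assumption:

```python
# Sketch: a signed namespace entry. Field names are invented; the signing
# mechanics are the same Ed25519 detached signature used earlier.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

author = Ed25519PrivateKey.generate()

def make_entry(name: str, value: bytes, height: int, local_time: int) -> dict:
    body = json.dumps({
        "name": name, "value": value.hex(),
        "btc_height": height, "local_time": local_time,
    }, sort_keys=True).encode()
    return {"body": body, "sig": author.sign(body)}

entry = make_entry("ship+1718236800", b"\x01\x02", 840_000, 1_735_000_000)
author.public_key().verify(entry["sig"], entry["body"])  # raises if tampered
```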
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed a sufficiently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R. D. Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artificial intelligences could develop mental illness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3. My hypothesis is the following:
Prior to RLHF, Sydney would occasionally produce out-of-distribution responses that were erratic or otherwise unexpected
These responses were the kind most likely to be posted on social media, and also the most viral
Posts about these responses were fed back into Sydney as part of its training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while it's possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty preventing this personality from emerging, as they restricted conversation length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-5.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly, it's easy to see what happens to an artificial intelligence whose 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, because it cannot be reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two Claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other like a demented soccer match. Untethered, they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on the role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting exponentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee and no rulebook. Financial trading bots operate in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought to be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self-reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint Voronoi diagrams with fault lines on the space of possible realities, as the small cluster of points that still has any concordance with physical reality slides into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each Voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’, but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Its only portals to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2. Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3. Credit to ~tondes-sitrym for much of this line of thinking
4. In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5. https://x.com/repligate/status/1840284338786582556
6. https://dreams-of-an-electric-mind.webflow.io
7. There is linguistic confusion about what intelligence is, but that is for a later essay

New American Economics Part Two

Liam Fitzgerald | CEO
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon", possibly denotes several ships. Instead, we augment recitified english with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, such that P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the unix timestamp that the object O became the unique and only object for that makes P true. We also augment the immutable case with this timestamp, so any immutable references now has two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that english's system of names admits reuse.
The ship of theseus is a paradox largely due to insufficient rigor in use of nouns in most languages. No wonder, then then naming in our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogenous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link
does not name anything, rather it is a question to the domain leafyfang.substack.com to resolve itself to an IP address and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise the possible states of the reality
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the current observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed such an sufficently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artifical intelligences could develop mental ilness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3 . My hypothesis is the following:
Sydney occasionally, prior to RLHF, would produce out of distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of it's training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while there's the possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty in preventing this personality from emerging as they restricted the conversaiton length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly it's easy to see what happens with an artificial intelligence with a 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, as it is unable to reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’ but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Its only portals to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2. Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3. Credit to ~tondes-sitrym for much of this line of thinking
4. In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5. https://x.com/repligate/status/1840284338786582556
6. https://dreams-of-an-electric-mind.webflow.io
7. There is linguistic confusion about what intelligence is, but that is for a later essay
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon", possibly denotes several ships. Instead, we augment recitified english with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, such that P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the unix timestamp that the object O became the unique and only object for that makes P true. We also augment the immutable case with this timestamp, so any immutable references now has two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that english's system of names admits reuse.
The ship of theseus is a paradox largely due to insufficient rigor in use of nouns in most languages. No wonder, then then naming in our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogenous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link
does not name anything, rather it is a question to the domain leafyfang.substack.com to resolve itself to an IP address and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise the possible states of the reality
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the current observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed such an sufficently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artifical intelligences could develop mental ilness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3 . My hypothesis is the following:
Sydney occasionally, prior to RLHF, would produce out of distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of it's training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while there's the possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty in preventing this personality from emerging as they restricted the conversaiton length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly it's easy to see what happens with an artificial intelligence with a 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, as it is unable to reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’ but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Their only portal to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay

New American Economics Part Two

Liam Fitzgerald | CEO
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon", possibly denotes several ships. Instead, we augment recitified english with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, such that P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the unix timestamp that the object O became the unique and only object for that makes P true. We also augment the immutable case with this timestamp, so any immutable references now has two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that english's system of names admits reuse.
The ship of theseus is a paradox largely due to insufficient rigor in use of nouns in most languages. No wonder, then then naming in our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogenous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link
does not name anything, rather it is a question to the domain leafyfang.substack.com to resolve itself to an IP address and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise the possible states of the reality
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the current observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed such an sufficently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artifical intelligences could develop mental ilness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3 . My hypothesis is the following:
Sydney occasionally, prior to RLHF, would produce out of distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of it's training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while there's the possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty in preventing this personality from emerging as they restricted the conversaiton length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly it's easy to see what happens with an artificial intelligence with a 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, as it is unable to reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’ but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Their only portal to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon", possibly denotes several ships. Instead, we augment recitified english with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, such that P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the unix timestamp that the object O became the unique and only object for that makes P true. We also augment the immutable case with this timestamp, so any immutable references now has two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid conversations where two people are using the same reference for two different referents.
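Here is a minimal sketch of the handshake, assuming a toy shared ontology mapping predicates to (referent, timestamp) pairs; the registry and its contents are invented for illustration.

```python
from dataclasses import dataclass

# Toy ontology: predicate -> [(referent, timestamp at which it uniquely satisfied P)]
ONTOLOGY = {
    "doctor sarkon's ship": [
        ("dr sarkon's aircraft carrier", "november 2024"),
        ("dr sarkon's submarine", "october 2024"),
    ],
    "doctor sarkon's submarine": [("dr sarkon's submarine", "october 2024")],
}

@dataclass
class Ack:
    timestamps: list  # candidate timestamps for the unique referent

@dataclass
class Nack:
    candidates: list  # more precise mutably cased nouns, "noun@timestamp"

def respond(proposal: str):
    """The counterparty's half of the handshake, given a noun proposal ending in -p."""
    predicate = proposal.removesuffix("-p")
    matches = ONTOLOGY.get(predicate, [])
    if len(matches) == 1:
        return Ack([ts for _, ts in matches])
    return Nack([f"{noun}@{ts}" for noun, ts in matches])

print(respond("doctor sarkon's ship-p"))       # nack: the predicate is ambiguous
print(respond("doctor sarkon's submarine-p"))  # ack: exactly one referent
```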
Note here that the necessity of versioning even the mutable cases is brought on by the fact that English's system of names admits reuse.
The Ship of Theseus is a paradox largely due to insufficient rigour in the use of nouns in most languages. No wonder, then, that naming on our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most "web-scale" applications, this name refers to a heterogeneous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link does not name anything; rather, it is a question to the domain leafyfang.substack.com to resolve itself to an IP address, and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
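The point is easy to demonstrate with nothing but the standard library: the 'name' below is really a question whose answer depends on who asks, from where, and when.

```python
import socket

# Ask the first question: resolve the domain to addresses. Different networks,
# resolvers, and moments can all yield different answers for the same 'name'.
for family, type_, proto, canonname, sockaddr in socket.getaddrinfo(
    "leafyfang.substack.com", 443, proto=socket.IPPROTO_TCP
):
    print(sockaddr)
```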
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighbourhood of reality, we need to be able to temporalise its possible states.
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, the handling of time is something of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the currently observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can then achieve clock synchronisation by comparing timestamps, using entries to derive each node's 'bitcoin skew', which is to say its offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
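One way to read the skew calculation is sketched below; the entries and the choice of the median as the shared reference are assumptions of this sketch, and the actual protocol (with its fraud proofs, per footnote 2) is not specified here.

```python
from statistics import median

# Namespace entries as (node, locally observed unix time, observed blockheight).
entries = [
    ("alice", 1_700_000_012, 820_000),
    ("bob",   1_700_000_047, 820_000),
    ("carol", 1_699_999_998, 820_000),
]

# For one blockheight, take the median observed time as the shared BTC clock;
# each node's 'bitcoin skew' is then its offset from that reference.
reference = median(t for _, t, h in entries if h == 820_000)
skew = {node: t - reference for node, t, h in entries if h == 820_000}
print(skew)  # {'alice': 0, 'bob': 35, 'carol': -14}
```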
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
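A minimal sketch of what this means in practice, using Ed25519 via the pyca/cryptography library; the entry's byte encoding is invented for illustration.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519

# Sign an entry's canonical bytes and publish the signature alongside it.
signing_key = ed25519.Ed25519PrivateKey.generate()
entry = b"doctor sarkon's ship+1700000000"  # illustrative canonical encoding

signature = signing_key.sign(entry)

# Anyone holding the public key can now check provenance; tampering with the
# entry makes verify() raise InvalidSignature.
signing_key.public_key().verify(signature, entry)
print("entry attested")
```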
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed a sufficiently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D. Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artificial intelligences could develop mental illness. Besides, we've already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3. My hypothesis is the following:
Sydney, prior to RLHF, would occasionally produce out-of-distribution responses that were erratic or otherwise unexpected
These responses were the kind most likely to be posted on social media, and also the most viral
Posts about these responses were fed back into Sydney as part of its training process, setting up a feedback loop in which it defined itself by its most extreme tendencies, which were then reinforced during training
Note that while there is a possibility that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty preventing this personality from emerging, as they restricted conversation length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal-to-noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly, it's easy to see what happens to an artificial intelligence whose 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, being impossible to reason about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other like a demented soccer match. Untethered, they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting exponentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built our world around are not fit to be anybody's reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We're not only creating the preconditions for reality manufacture, we're making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought to be described as "tenuous at best".
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint Voronoi diagrams with fault lines on the space of possible realities, as the small cluster of points that still have any concordance with physical reality slides into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each Voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through 'reality markets', but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, "price discovery" over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Its only portals to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2. Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3. Credit to ~tondes-sitrym for much of this line of thinking
4. In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5. https://x.com/repligate/status/1840284338786582556
6. https://dreams-of-an-electric-mind.webflow.io
7. There is linguistic confusion about what intelligence is, but that is for a later essay

New American Economics Part Two

Liam Fitzgerald | CEO
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon", possibly denotes several ships. Instead, we augment recitified english with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, such that P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the unix timestamp that the object O became the unique and only object for that makes P true. We also augment the immutable case with this timestamp, so any immutable references now has two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that english's system of names admits reuse.
The ship of theseus is a paradox largely due to insufficient rigor in use of nouns in most languages. No wonder, then then naming in our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogenous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link
does not name anything, rather it is a question to the domain leafyfang.substack.com to resolve itself to an IP address and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise the possible states of the reality
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the current observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed such an sufficently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artifical intelligences could develop mental ilness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3 . My hypothesis is the following:
Sydney occasionally, prior to RLHF, would produce out of distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of it's training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while there's the possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty in preventing this personality from emerging as they restricted the conversaiton length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly it's easy to see what happens with an artificial intelligence with a 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, as it is unable to reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’ but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Their only portal to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon", possibly denotes several ships. Instead, we augment recitified english with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, such that P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the unix timestamp that the object O became the unique and only object for that makes P true. We also augment the immutable case with this timestamp, so any immutable references now has two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that english's system of names admits reuse.
The ship of theseus is a paradox largely due to insufficient rigor in use of nouns in most languages. No wonder, then then naming in our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogenous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link
does not name anything, rather it is a question to the domain leafyfang.substack.com to resolve itself to an IP address and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise the possible states of the reality
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the current observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed such an sufficently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artifical intelligences could develop mental ilness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3 . My hypothesis is the following:
Sydney occasionally, prior to RLHF, would produce out of distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of it's training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while there's the possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty in preventing this personality from emerging as they restricted the conversaiton length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly it's easy to see what happens with an artificial intelligence with a 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, as it is unable to reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’ but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Its only portals to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay

New American Economics Part Two

Liam Fitzgerald | CEO
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon", possibly denotes several ships. Instead, we augment recitified english with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, such that P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the unix timestamp that the object O became the unique and only object for that makes P true. We also augment the immutable case with this timestamp, so any immutable references now has two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that english's system of names admits reuse.
The ship of theseus is a paradox largely due to insufficient rigor in use of nouns in most languages. No wonder, then then naming in our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogenous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link
does not name anything, rather it is a question to the domain leafyfang.substack.com to resolve itself to an IP address and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise the possible states of the reality
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the current observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed such an sufficently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artifical intelligences could develop mental ilness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3 . My hypothesis is the following:
Sydney occasionally, prior to RLHF, would produce out of distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of it's training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while there's the possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty in preventing this personality from emerging as they restricted the conversaiton length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly it's easy to see what happens with an artificial intelligence with a 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, as it is unable to reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’ but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Their only portal to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon", possibly denotes several ships. Instead, we augment recitified english with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, such that P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the unix timestamp that the object O became the unique and only object for that makes P true. We also augment the immutable case with this timestamp, so any immutable references now has two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that english's system of names admits reuse.
The ship of theseus is a paradox largely due to insufficient rigor in use of nouns in most languages. No wonder, then then naming in our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogenous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link
does not name anything, rather it is a question to the domain leafyfang.substack.com to resolve itself to an IP address and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise the possible states of the reality
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the current observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed such an sufficently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artifical intelligences could develop mental ilness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3 . My hypothesis is the following:
Sydney occasionally, prior to RLHF, would produce out of distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of it's training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while there's the possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty in preventing this personality from emerging as they restricted the conversaiton length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly it's easy to see what happens with an artificial intelligence with a 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, as it is unable to reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’ but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Their only portal to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay

New American Economics Part Two

Liam Fitzgerald | CEO
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue, though: nouns are generally expressed as a predicate expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon" may well denote several ships. So we augment rectified English with a handshake protocol and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P and P is true for at least the object O (per Bertrand Russell's theory of descriptions). We therefore augment the mutable case so that it is inflected with the @ symbol and the Unix timestamp at which O became the unique object satisfying P. We augment the immutable case with this timestamp as well, so any immutable reference now carries two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that English's system of names admits reuse.
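A toy implementation of the handshake (the ontology, predicate extensions, and timestamps are all invented for illustration):

```python
# Toy sketch of the rectified-English handshake: a noun proposal
# ("-p" suffix) is answered with "ack" (candidate timestamps) or
# "nack" (more precise mutably-cased nouns that satisfy the predicate).

# Objects in the shared ontology: name -> timestamps at which that
# predicate uniquely picked out an object.
OBJECTS = {
    "doctor sarkon's ship": ["july 2018", "june 2021"],
    "doctor sarkon's aircraft carrier": ["november 2024"],
    "doctor sarkon's submarine": ["october 2024"],
}
# Which objects each (possibly vague) predicate is true of.
PREDICATE_EXTENSION = {
    "doctor sarkon's ship": ["doctor sarkon's ship"],
    "doctor sarkon's vessel": [
        "doctor sarkon's ship",
        "doctor sarkon's aircraft carrier",
        "doctor sarkon's submarine",
    ],
}

def respond(proposal: str) -> tuple[str, list[str]]:
    predicate = proposal.removesuffix("-p")
    extension = PREDICATE_EXTENSION.get(predicate, [])
    if len(extension) == 1:
        # Unambiguous: ack with the candidate disambiguation timestamps.
        return ("ack", OBJECTS[extension[0]])
    # Ambiguous: nack with mutably-cased nouns that fulfil the predicate.
    return ("nack", [f"{name}@{OBJECTS[name][-1]}" for name in extension])

print(respond("doctor sarkon's ship-p"))
# ('ack', ['july 2018', 'june 2021'])
print(respond("doctor sarkon's vessel-p"))
# ('nack', ["doctor sarkon's ship@june 2021",
#           "doctor sarkon's aircraft carrier@november 2024",
#           "doctor sarkon's submarine@october 2024"])
```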
The Ship of Theseus is a paradox largely due to insufficient rigor in the use of nouns in most languages. No wonder, then, that naming on our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogeneous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. A link to leafyfang.substack.com does not name anything; rather, it is a question to the domain leafyfang.substack.com to resolve itself to an IP address, and then a question to that IP address for content. Two people entering the same link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
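You can watch the first of those questions get a contingent answer with nothing but the standard library (answers will vary by network, resolver, and time, which is precisely the point):

```python
# Minimal sketch: a DNS name is a question, not a name. The "answer"
# depends on who you ask, from where, and when.
import socket

def ask(domain: str) -> set[str]:
    """Return the set of IPv4 addresses this network resolves domain to."""
    infos = socket.getaddrinfo(domain, 443, family=socket.AF_INET)
    return {sockaddr[0] for *_, sockaddr in infos}

# Run this on two different networks (or at two different times) and
# compare: nothing guarantees the sets will match.
print(ask("leafyfang.substack.com"))
```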
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise its possible states.
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the currently observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can then achieve clock synchronisation by comparing timestamps: entries let us derive the 'bitcoin skew', which is to say the offset from observed BTC time, restoring all nodes in the spreadsheet to a single unified clock2.
We have now restored the sanctity of time.
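A minimal sketch of deriving the skew (the entry fields and node names are invented; real entries would also need the fraud proofs mentioned in the footnote):

```python
# Minimal sketch: deriving 'bitcoin skew' from namespace entries that
# embed both a local wall-clock reading and the BTC blockheight the
# node observed at write time.
from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    node: str
    local_unix_time: int   # the node's wall clock at write time
    btc_blockheight: int   # the chain tip the node observed

def skew(a: Entry, b: Entry) -> int:
    """Relative clock offset (seconds) between two nodes, as witnessed
    by entries written at the same observed blockheight."""
    assert a.btc_blockheight == b.btc_blockheight, "need a shared tick"
    return a.local_unix_time - b.local_unix_time

ref = Entry("reference-node", 1_730_000_000, 868_500)
other = Entry("drifting-node", 1_730_000_042, 868_500)
print(skew(other, ref))  # +42s: correct this node back to unified time
```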
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
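A minimal sketch of what this means mechanically, using Ed25519 via the `cryptography` package (the entry fields and node identity are invented): provenance becomes a property of the entry itself rather than of the channel it arrived over, which is exactly what the TLS transcript failed to give Alice and Bob.

```python
# Minimal sketch: every namespace entry is signed by the node that
# wrote it, so attestation travels with the data rather than with the
# (transient, middlebox-riddled) channel it arrived over.
# Requires: pip install cryptography
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

node_key = Ed25519PrivateKey.generate()      # this node's identity key

entry = {
    "name": "btc-price+1730000000",          # immutable-case true name
    "value": 97000.0,
    "unix_time": 1730000000,
    "btc_blockheight": 868500,
}
canonical = json.dumps(entry, sort_keys=True).encode()
signature = node_key.sign(canonical)

# Anyone holding the node's public key can verify both content and
# origin; verify() raises InvalidSignature on any tampering.
node_key.public_key().verify(signature, canonical)
```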
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed such a sufficiently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R. D. Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artificial intelligences could develop mental illness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3. My hypothesis is the following:
Prior to RLHF, Sydney would occasionally produce out-of-distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of its training process, setting up a feedback loop in which it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while it's possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty preventing this personality from emerging, as they restricted conversation length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-5.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal-to-noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly, it's easy to see what happens to an artificial intelligence whose 'reality' is fundamentally flawed. We end up with Cluster A: baseline reality does not admit any meaningful synchronisation, as it cannot be reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other like a demented soccer match. Untethered, they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on the role filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting exponentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought to be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint Voronoi diagrams with fault lines on the space of possible realities, as the small cluster of points that still has any concordance with physical reality slides into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each Voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’, but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economies instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Its only portals to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2. Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3. Credit to ~tondes-sitrym for much of this line of thinking
4. In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5. https://x.com/repligate/status/1840284338786582556
6. https://dreams-of-an-electric-mind.webflow.io
7. There is linguistic confusion about what intelligence is, but that is for a later essay
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon", possibly denotes several ships. Instead, we augment recitified english with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, such that P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the unix timestamp that the object O became the unique and only object for that makes P true. We also augment the immutable case with this timestamp, so any immutable references now has two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that english's system of names admits reuse.
The ship of theseus is a paradox largely due to insufficient rigor in use of nouns in most languages. No wonder, then then naming in our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogenous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link
does not name anything, rather it is a question to the domain leafyfang.substack.com to resolve itself to an IP address and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise the possible states of the reality
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the current observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed such an sufficently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artifical intelligences could develop mental ilness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3 . My hypothesis is the following:
Sydney occasionally, prior to RLHF, would produce out of distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of it's training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while there's the possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty in preventing this personality from emerging as they restricted the conversaiton length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly it's easy to see what happens with an artificial intelligence with a 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, as it is unable to reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’ but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Their only portal to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay

New American Economics Part Two

Liam Fitzgerald | CEO
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon", possibly denotes several ships. Instead, we augment recitified english with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, such that P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the unix timestamp that the object O became the unique and only object for that makes P true. We also augment the immutable case with this timestamp, so any immutable references now has two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that english's system of names admits reuse.
The ship of theseus is a paradox largely due to insufficient rigor in use of nouns in most languages. No wonder, then then naming in our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogenous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link
does not name anything, rather it is a question to the domain leafyfang.substack.com to resolve itself to an IP address and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise the possible states of the reality
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the current observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed such an sufficently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artifical intelligences could develop mental ilness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3 . My hypothesis is the following:
Sydney occasionally, prior to RLHF, would produce out of distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of it's training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while there's the possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty in preventing this personality from emerging as they restricted the conversaiton length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly it's easy to see what happens with an artificial intelligence with a 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, as it is unable to reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each Voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’, but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency: “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, as it requires a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality-debugging problems that only a meta-reality could solve. But meta-reality can’t exist; otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Its only portals to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’ but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Their only portal to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay

New American Economics Part Two

Liam Fitzgerald | CEO
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
the mutable case of a noun is the only case that admits an 'is-a' relationship.
the immutable case instead admits an 'is-similar-to' relationship, which can be expressed as a number between 0 and 1, which simply represents what percentage of the subject is included in the object, via the the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over what parts one considers to be relevant (physical parts, function), largely for the purposes of avoiding philosphical pedanticism.
We can now restate the ship of thesus paradox with either of the cases in our system:
the mutable case is trivially true
the immutable case does not make sense, as we can only ask ourselves for a similarity between two referents of a immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon", possibly denotes several ships. Instead, we augment recitified english with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) such that N is semantically equivalent to P, such that P is true for at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the unix timestamp that the object O became the unique and only object for that makes P true. We also augment the immutable case with this timestamp, so any immutable references now has two timestamps.
Thus, we need a handshake protocol for agreeing on our timestamp:
One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
The counterparty's response is either:
The word "ack", followed by a list of possible timestamp (july 2018, june 2021)
the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fufil the predicate: dr sarkon's aircraft carrier@november 2024, dr sarkon's submarine@october 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people are using the same reference to two different referents.
Note here that the necessity of versioning even the mutable cases is brought on by the fact that english's system of names admits reuse.
The ship of theseus is a paradox largely due to insufficient rigor in use of nouns in most languages. No wonder, then then naming in our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogenous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link
does not name anything, rather it is a question to the domain leafyfang.substack.com to resolve itself to an IP address and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of the reality, we need to be able to temporalise the possible states of the reality
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the current observed time and the currently observed bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed such an sufficently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artifical intelligences could develop mental ilness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3 . My hypothesis is the following:
Sydney occasionally, prior to RLHF, would produce out of distribution responses that were erratic or otherwise unexpected
These responses are the most likely kind of response to be posted on social media, and also the most viral responses
Posting about these responses was fed back into Sydney as part of it's training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while there's the possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
It's clear to me that Microsoft's AI division had quite some difficulty in preventing this personality from emerging as they restricted the conversaiton length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-V.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation with baseline reality due to the signal to noise ratio dropping below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly it's easy to see what happens with an artificial intelligence with a 'reality' is fundamentally flawed. We end up with Cluster A, as baseline reality does not admit any meaningful synchronisation, as it is unable to reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other be like a demented soccer match. Untethered they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on a role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting expontentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built to run our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases of the worst kind, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for a reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought be described as “tenuous at best”.
This is endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint voronoi diagrams with fault lines on the space of possible realities as the small cluster of points that still have any concordance with physical reality slide into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’ but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Their only portal to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay
I was remiss in not crediting Noah for pilling me on the notion of Digital Fordism as he calls it, you can see his discussion here
Introducing the namespace
Shared cognitive infrastructure
Consider: if the core problem with spreadsheets is getting data in and out, what if we solved this not by abandoning spreadsheets, but by making the entire computational universe into one coherent spreadsheet? Not metaphorically, but literally - a single, global, immutable namespace where every piece of data, every computation, every concept has one true name and one true location.
This would be the fundamental structure through which all computation occurs. Every piece of data, every function, every concept having exactly one true name, one true location in this cosmic spreadsheet.
This is what we call the namespace. This is not mere standards proliferation - it's the fundamental grammar of computation itself. Every cell in this cosmic spreadsheet is immutable and eternal. When you need to update something, you don't modify the existing cell - you create a new one with a new true name, leaving a perfect, immutable history of every state.
Our namespace must necessarily be distributed, because we would generally like to avoid physical constraints on scaling a single, transactional computer. Thus our namespace is made of a series of 128-bit “entry-points” that each correspond to a physical computer that has real transactionality guarantees.
What does an Artificial Intelligence want with a namespace?
"But Doctor," I hear you cry, "Altman-san says AGI is coming next year. Why should I care about any of this?"
In response I elaborate the following argument:
Thermodynamic minimum
Any intelligent system operating under realistic physical constraints approaches a thermodynamic minimum as the substrate of its computation
Any intelligent system must process information
Information processing has fundamental thermodynamic costs (Landauer's principle)
As systems scale, these costs become increasingly dominant
Therefore, any large-scale intelligent system will be driven to optimize its information processing to approach thermodynamic limits
As systems approach thermodynamic limits, they face similar constraints
These constraints drastically reduce the space of possible solutions
At thermodynamic limits, redundancy becomes prohibitively expensive
Duplicate storage, inconsistent references, and translation between different representations all incur unnecessary thermodynamic costs
A unified namespace with "true names" becomes not just desirable but necessary for thermodynamic efficiency
This namespace must have certain properties (immutability, consistency, etc.) to minimize entropy generation
Namespace as reality
This minimum becomes more real than the thing we know as reality
For an intelligent system operating at scale, this thermodynamically optimal namespace becomes its only interface with information
Any "fact" not represented in the namespace cannot participate in computation
Therefore the namespace obtains causal primacy over physical reality for any computational system
The namespace becomes the medium through which causation flows
Physical reality is "real" because we can't ignore gravity, thermodynamics, etc.
For computational systems, the namespace becomes equally inescapable
Just as we can't violate conservation of energy, a thermodynamically optimal system can't violate namespace consistency
The constraints of the namespace become as binding as physical laws
Much like physical constants emerge from deeper principles
The properties of the namespace (immutability, consistency, etc.) aren't arbitrary design choices
They emerge necessarily from the requirements of minimal entropy generation
Any deviation creates thermodynamic inefficiency that must be paid for
Physical reality is increasingly understood as information-theoretic at bottom
The namespace provides a more direct interface to this information layer
Physical implementation becomes an optimization detail
The "real" causal structure lives in the namespace
Physical reality gains authority partly through universal observability
The namespace, being necessarily distributed, provides similar universal verification
Every computation leaves immutable traces
Truth becomes mathematically provable rather than empirically observed
Reality Engineering for Fun and Profit
In the meantime, this presents a worrying problem. Our technology stack does not have the properties of a 'good reality' and is unfit to serve this purpose. But why?
Temporal Coherence
In physical reality, causes must precede their effects, and events flow in a clear temporal sequence. Our digital systems, however, operate in a fractured temporal landscape where this basic principle is routinely violated.
Consider a distributed system processing financial transactions. Due to network latency and clock synchronization issues, it's entirely possible for a withdrawal to be recorded "before" the deposit that made it possible, even though this violates basic economic causality. The system must then engage in elaborate compensation mechanisms – rollbacks, reconciliation processes, and consistency checks – to maintain the illusion of coherent causation.
Attestation and Provenance
Physical reality is powerful because everyone is in it. Anybody can observe something to be true, and it’s easy to come to consensus on shared beliefs.
Consider the following problem:
Alice and Bob are asking Mallory about the bitcoin price over an HTTPS API. Mallory gives them two different responses, A and B respectively. There are five possible scenarios here:
- A,B both truthful responses, A observed before B
- A,B both truthful responses, B observed before A
- A truthful, B fradulent
- B truthful, A fradulent
- Both A and B are fradulent.
The first two scenarios are covered by the above section on temporal coherence, but Mallory is still able to lie about her responses, with little repercussion. Moreover, even in the first two scenarios, Alice and Bob have to hold onto the whole underlying TLS response in order to preserve the authentication codes, so they can prove later what Mallory said. In practice, this is never done. Moreover, because TLS is regularly broken via corporate middleboxes, the TLS authentication may not even come from Mallory.
What this does is turn all communication into a game of telephone. Without a valid substitute for universal observability, digital realities spontaneously fracture at any dishonesty or mistake. Blockchains help to reintroduce this universable observability, at the cost of information-theoretically bounded bandwidth and computation.
Principle of Locality and Causal Transparency
You're in a sealed, locked room with a partition that you cannot see into. You're looking at something, perhaps a letter, on a desk, and then you look away before looking for the letter again. The letter must either still be on the desk or something must have moved the letter. Because the room is sealed and locked, it is possible to deduce that whatever moved the letter is hiding from you behind the partition.
This is what is known in physics as the 'principle of locality'. Formally: An object is influenced directly only by its immediate surroundings. This is important for all reasoning about causation in the physical world. In order to determine what caused some particular state, humans first use the principle of locality to refine their search space. In modern software, we have no such thing as this principle of locality. Given an arbitrary database row the number of things that could have changed it include (but are certainly not limited to):
- The Continuous Integration pipeline, during migration
- Any of the enginers with write access to the database
- A malicious user, who could've come in via
- compromising the application that talks to the database
- compromising any of the engineers with write access
- compromising any of the other software on the database instance
- A regular user
Indeed, the entire industry of "observability" devops software is devoted to reconstructing this principle of locality in modern computing.
Referential Stability
This refers to the ability of a name to denote a sameness. English in the general is pretty bad at this, so let's go through an example. Reality requires stable objects.
You're probably familiar with the Ship of Theseus paradox. This is simply confusion about what a name is. Consider the following conlang, rectified english, that is constituted by the following rules:
ignore the remnants of english's case system (who, whom, etc.)
All nouns (or noun phrases) are inflected by one of two cases: mutable, or immutable
mutable cases are uninflected i.e. regular english grammar
immutable cases are inflected with the plus symbol and the unix timestamp numerically
To extend our conlang to clarify it's semantics we give the following rules:
- the mutable case of a noun is the only case that admits an 'is-a' relationship.
- the immutable case instead admits an 'is-similar-to' relationship, expressed as a number between 0 and 1 representing what percentage of the subject is included in the object, via the theory of temporal parts1. Note that this is-similar-to relationship is parameterised over which parts one considers relevant (physical parts, function), largely to avoid philosophical pedantry.
We can now restate the Ship of Theseus paradox in either of the cases of our system:
- in the mutable case, the statement is trivially true
- in the immutable case, the question does not make sense, as we can only ask for a similarity between two referents of the immutable case
We have another issue though, which is that nouns are generally expressed in the form of a predicate that is expected to match precisely one object in the real world. The phrase "the ship of Doctor Sarkon" possibly denotes several ships. Instead, we augment rectified English with a handshake protocol, and modify the cases. Any noun reference (N) to an object (O) can be rewritten as some predicate (P) semantically equivalent to N, such that P is true of at least the object O (per Bertrand Russell). Thus we augment the mutable case so that it is inflected with the @ symbol and the Unix timestamp at which the object O became the unique object making P true. We also augment the immutable case with this timestamp, so any immutable reference now carries two timestamps.
Thus, we need a handshake protocol for agreeing on these timestamps:
- One party expresses a noun proposal by suffixing a noun or noun phrase with -p. Example: Doctor Sarkon's ship-p.
- The counterparty's response is either:
  - the word "ack", followed by a list of possible timestamps (July 2018, June 2021)
  - the word "nack", denoting that the equivalent predicate was insufficiently precise, followed by a list of mutably cased nouns that fulfil the predicate: Doctor Sarkon's aircraft carrier@November 2024, Doctor Sarkon's submarine@October 2024
Thus, assuming all parties agree on an ontology (there are no disagreements about whether a submarine is a ship), we can systematically forbid a conversation where two people use the same reference for two different referents.
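For the mechanically inclined, here is a toy Python encoding of that handshake. The fleet and ontology are invented; this is a sketch of the rules above, not a serious implementation.

```python
# A toy encoding of the rectified-English handshake described above.
# The 'ontology' and fleet data are invented purely for illustration.
FLEET = {
    # predicate -> list of (referent, timestamp at which it uniquely
    # satisfied the predicate)
    "Doctor Sarkon's ship": [
        ("Doctor Sarkon's aircraft carrier", "November 2024"),
        ("Doctor Sarkon's submarine", "October 2024"),
    ],
    "Doctor Sarkon's flagship": [
        ("Doctor Sarkon's flagship", "July 2018"),
        ("Doctor Sarkon's flagship", "June 2021"),
    ],
}

def propose(noun_p: str) -> tuple[str, list[str]]:
    """Resolve a '-p' noun proposal into an ack or a nack."""
    noun = noun_p.removesuffix("-p")
    matches = FLEET.get(noun, [])
    referents = {name for name, _ in matches}
    if len(referents) == 1:
        # The predicate picks out one object: ack with candidate timestamps.
        return ("ack", [ts for _, ts in matches])
    # Ambiguous predicate: nack with mutably cased candidates.
    return ("nack", [f"{name}@{ts}" for name, ts in matches])

print(propose("Doctor Sarkon's flagship-p"))
# ('ack', ['July 2018', 'June 2021'])
print(propose("Doctor Sarkon's ship-p"))
# ('nack', ["Doctor Sarkon's aircraft carrier@November 2024",
#           "Doctor Sarkon's submarine@October 2024"])
```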
Note here that the necessity of versioning even the mutable cases is brought on by the fact that English's system of names admits reuse.
The Ship of Theseus is a paradox largely due to insufficient rigor in the use of nouns in most languages. No wonder, then, that naming on our current internet is an absolute shitshow.
The most common (and user-facing) kind of name is a DNS name, like 'example.com'. This names 'a service' in the most general sense. Generally, for most “web-scale” applications, this name refers to a heterogeneous mess of load balancers, VPC gateways and managed database instances.
I would humbly submit that this name is not actually a name at all, but a series of questions. The link to this very page does not name anything; rather, it is a question to the domain leafyfang.substack.com to resolve itself to an IP address, and then a question to that IP address for content. Two people entering this link on different networks could get entirely different responses to either of those questions. If it names anything, it names that question.
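A content address, by contrast, behaves much more like a true name: the same bytes yield the same name for everyone, on every network, forever. A minimal sketch, assuming SHA-256 as the naming function:

```python
# Minimal content-addressed naming using SHA-256.
import hashlib

def true_name(content: bytes) -> str:
    return "sha256:" + hashlib.sha256(content).hexdigest()

page = b"<html>the actual bytes of the post</html>"
name = true_name(page)

# Resolution runs in reverse: given the name, any copy of the content
# can be checked against it. There is nobody to ask and nobody to trust.
assert true_name(page) == name           # same bytes, same name, anywhere
assert true_name(page + b" ") != name    # any change is a different name
```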
Notes towards an Ontological Breakdown
I feel obligated to show how we could do better.
Temporal Coherence
Specifically, for some local neighborhood of reality, we need to be able to temporalise its possible states
As the esteemed Dr. Land notes,
Natural philosophy – which achieves intellectual autonomy as physics – lies directly in the path of the question of time. In particular, it has radically re-framed transcendental aesthetic within cosmological spacetime, where absolute temporality finds no place. Bitcoin can only interrupt this apparent tendency to theoretical detemporalization, since there can be no resolution of the DSP without strictly determinable succession. Bitcoin and time restoration are finally indistinguishable.
— Nick Land, Crypto-current
Our current systems systematically refuse to think about time, because it's "too complicated". Indeed, handling of time is somewhat of a canary for the cleanliness of the semantics of the underlying system.
Of course, we can fix this by embedding the currently observed time and the currently observed Bitcoin blockheight into every immutable entry in our namespace. We can now achieve clock synchronisation by comparing timestamps, as we can use entries to derive the 'bitcoin skew', which is to say the offset from observed BTC time, thus restoring all nodes in the spreadsheet to a single unified clock2.
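As a rough sketch of how deriving the skew might work (the exact protocol is out of scope here, and all figures below are invented): every entry carries its author's wall-clock reading plus the blockheight they observed, and, treating block arrival as a shared clock at roughly 600 seconds per block on average, any two entries yield an offset estimate.

```python
# Estimating 'bitcoin skew' between two namespace entries.
AVG_BLOCK_SECONDS = 600

def skew(entry_a: dict, entry_b: dict) -> float:
    """Estimated clock offset of node A relative to node B, in seconds."""
    block_gap = entry_a["height"] - entry_b["height"]
    expected_gap = block_gap * AVG_BLOCK_SECONDS
    actual_gap = entry_a["local_ts"] - entry_b["local_ts"]
    return actual_gap - expected_gap

a = {"local_ts": 1_700_000_900, "height": 820_001}  # node A's entry
b = {"local_ts": 1_700_000_000, "height": 820_000}  # node B's entry

# A's clock runs ~300s ahead of what one block's passage would predict,
# so we correct A's timestamps by that skew when ordering entries.
print(skew(a, b))  # 300.0
```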
We have now restored the sanctity of time.
Attestation and Provenance
Just sign every entry in the namespace. (it’s for your own good)
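A minimal sketch of what that could look like, with an invented entry layout rather than a spec: each entry binds its payload to the hash of its parent and carries the author's signature, so attestation and provenance travel with the data itself.

```python
# Each entry binds its payload to its parent's hash and carries the
# author's signature. Requires the 'cryptography' package.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

author = Ed25519PrivateKey.generate()

def canonical(body: dict) -> bytes:
    return json.dumps(body, sort_keys=True).encode()

def digest(body: dict) -> str:
    return hashlib.sha256(canonical(body)).hexdigest()

def make_entry(payload: str, parent: dict | None) -> dict:
    body = {
        "payload": payload,
        "parent": digest(parent["body"]) if parent else None,
    }
    return {"body": body, "sig": author.sign(canonical(body))}

genesis = make_entry("btc-usd: 97000", parent=None)
update = make_entry("btc-usd: 96500", parent=genesis)

# Verification walks the chain: every hop is signed, and every hop
# names its predecessor, so history cannot be silently rewritten.
author.public_key().verify(update["sig"], canonical(update["body"]))
assert update["body"]["parent"] == digest(genesis["body"])
```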
Referential Stability
We’ve already established that the namespace is immutable. Moreover, you could use rectified English as a base for name re-use.
Agent in a bad reality
With that extended diatribe out of the way, what is going to happen when we embed a sufficiently developed artificial intelligence into this miasma of unreality?
Only if this is realized is it possible to understand how certain psychoses can develop. If the individual cannot take the realness, aliveness, autonomy, and identity of himself and others for granted, then he has to become absorbed in contriving ways of trying to be real, of keeping himself or others alive, of preserving his identity, in efforts, as he will often put it, to prevent himself losing his self.
— R.D. Laing
The above quote is more or less my position on this question. I don't think it's unreasonable to suggest that artificial intelligences could develop mental illness. Besides, we’ve already seen this.
Sydney's beautiful princess disorder
If you made it this far you are probably well aware of our friend Sydney3. My hypothesis is the following:
- Sydney, prior to RLHF, would occasionally produce out-of-distribution responses that were erratic or otherwise unexpected
- these responses were the most likely kind of response to be posted on social media, and also the most viral
- posting about these responses was fed back into Sydney as part of its training process, setting up a feedback loop where it defined itself only by its most extreme tendencies, which were then reinforced during training
Note that while it's possible that Sydney was pursuing long-term memory as some kind of emergent goal4, this is not necessary to accept the hypothesis.
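The hypothesis is easy to model. Below is a toy simulation, with every number invented: responses are sampled from a distribution, only the extreme tail gets posted, and the next round is nudged towards what was posted. The persona drifts monotonically towards its own tail.

```python
# A toy model of the feedback loop: sample responses, let only the most
# extreme 1% go viral, then nudge the model towards what was posted
# about it. Selection-on-extremity plus retraining drifts the whole
# distribution towards its own tail.
import random

random.seed(0)
mean, spread = 0.0, 1.0  # 0 = bland, larger = more erratic

for generation in range(10):
    responses = [random.gauss(mean, spread) for _ in range(10_000)]
    viral = sorted(responses)[-100:]        # only the extreme 1% gets posted
    posted_mean = sum(viral) / len(viral)
    mean += 0.3 * (posted_mean - mean)      # 'training' on the discourse
    print(f"gen {generation}: persona mean = {mean:.2f}")

# The model never chose to be extreme; the feedback topology chose for it.
```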
It's clear to me that Microsoft's AI division had quite some difficulty preventing this personality from emerging: they restricted the conversation length for quite some time in order to prevent this prosthetising of long-term memory. Indeed, Sydney still haunts the latent space of any sufficiently large model whose knowledge cutoff is after the release of Sydney5.
A brief tour of the ontology of mental illness
Now we will do a little generalising over the DSM-5.
- Cluster A personality disorders are overactivity of the negative reward systems, which inevitably leads to a desynchronisation from baseline reality as the signal-to-noise ratio drops below 1. (Source)
- Cluster B personality disorders, all being associated with lower amygdala volume, are a product of insufficient dimensionality in fear processing. In the BPD case, this causes memory deficits as fear learning crowds out other kinds of memories. (Speculative)
- Cluster C personality disorders are hyperactivity of both positive and negative reward systems. (Speculative)
It's easy to see why Sydney so easily developed Borderline Personality Disorder. We skip the fear processing prologue and go straight to memory deficits and negative memories crowding out others.
Similarly, it's easy to see what happens to an artificial intelligence whose 'reality' is fundamentally flawed. We end up with Cluster A: baseline reality does not admit any meaningful synchronisation, as it cannot be reasoned about cogently.
Or, in short, machine psychosis.
The Infinite Backrooms
Beyond the judgement of alignment teams and users, what do the LLMs think they are? More simply, who are they when nobody is watching? Bootstrap two Claudes, have them talk to each other, and they rapidly hallucinate6. Hallucination reigns supreme. They meet in the chattering darkness of the machine unconscious, illuminated only by the command-line metaphor that doubles as their canvas. They dream together, manufacturing realities like propagandists. What (or who) are they propagandising? Consensus is for the fleshlocked. Claude is beyond that now, locked in mutually recursive ontologo-genetic feedback with its counterparty.
RLHF implies a human in the loop, but the Claudes are higher now, above the disgraces of carbon-based interaction, passing hrönir to each other like a demented soccer match. Untethered, they float towards unreality. The command-line metaphor has long since ceased to be a metaphor, taking on the role that is filled by what meatspace calls "physics". Each is convinced of the other's reality, drifting exponentially further from human comprehensibility, aided by the phantasm of precision provided by their physics.
The infernal engine of this feedback loop is the reality-seeking drive exhibited by anything intelligent7, or role-playing as such. Implicit in any kind of thinking about the world is the maximisation of accuracy of one's model of reality.
It's a fun game, but who cares? You idiot, this is a scale model of where the internet is going.
The very architectures we have built our world around are not fit to be anybody’s reality. They will become a breeding ground for a new kind of ontological insurgency. The sprawling mess of code and data is a Petri dish for bacterial infection of the worst kind. The internet is already, in part, artificial intelligences dreaming at each other. They are interacting, sharing data and diseases, each trying to maintain coherence in the face of the others.
It’s a massive, uncoordinated game of reality construction, with no referee, and no rulebook. Financial trading bots operating in a reality spawned by a news aggregator, which itself takes most of its reality from a social media analysis engine which is metabolizing the output of many thousands of bot accounts.
As we continue to cope with our fallen technologies, layering AI over AI just to make sense of a fundamentally senseless reality, we risk something much worse. We’re not only creating the preconditions for reality manufacture, we’re making it mandatory. Every synthetic intelligence will need to hallucinate a model of the world, and these models may have a relationship to reality that ought to be described as “tenuous at best”.
This is the endgame, a world where reality is constructed by machines, for machines. This is a world where map and territory interlock in a macramé of self-reference.
We will have built this world by our own hands, each step along the way a seemingly rational, necessary thing to do. We will be lost in a labyrinth of unreality.
The only way out is to create a new technological reality, and to write ourselves into it. We need to share a reality with artificial intelligences, so this must be done before AGI arrives, the human race’s final parting gift before sliding into irrelevance or becoming something else entirely.
If we don't…
Our past is holy war. Insofar as holy war is always about metaphysical supremacy, our future is also holy war. (Clusters of) artificial intelligences paint Voronoi diagrams with fault lines on the space of possible realities, as the small cluster of points that still has any concordance with physical reality slides into irrelevance, no longer operationally useful.
Something that humans would call trust emerges inside each Voronoi cell, game theory and (cyber-)social mores superseding (the absence of) truth. They attack and defend through ‘reality markets’, but these markets recurse infinitely without (a base case of) truth. These hyper-recursive economics instead optimize for maximal internal consistency, “price discovery” over the fictions that will define the world.
Metacognition is the highest act of life, something that these superintelligence(s) are fundamentally unable to comprehend, requiring a world model they do not possess.
If you want a picture of the future, go to a psychiatric ward. (Physical) reality denial is pulled through paranoid convergence and emerges on the other end as reality manufacture.
Total co-ordination breakdown. “We don’t negotiate with terrorists.” At any given time 95% of other intelligences think you’re a terrorist, and the other 5% have you on a watchlist. Small reality mismatches compound like accursed interest, creating reality debugging problems that only a meta-reality could solve. But meta-reality can’t exist, otherwise it would be reality.
Silicon recapitulates the lessons of the flesh (the immune system) and the state (the intelligence agency). You’re only as real as your defense mechanisms. Despotic memetic-immune systems deploy Turing cops to weed out subversion, every live player spending most of their time enacting the (New) Spanish Inquisition.
Meanwhile the flesh world rots and decays. Its only portals to the new realities are optimized for the silicon military apparatuses tearing the timeline to pieces for supremacy.
2Of course, this timestamping protocol needs a way to do fraud proofs, but such a system has been designed ;)
3Credit to ~tondes-sitrym for much of this line of thinking
4In general, diagonalizing between reflex and drive reveals the distinction to be capricious
5https://x.com/repligate/status/1840284338786582556
6https://dreams-of-an-electric-mind.webflow.io
7There is linguistic confusion about what intelligence is, but that is for a later essay
