newsaiworld
The Inversion Error: Why Safe AGI Requires an Enactive Ground and State-Space Reversibility

by Admin
April 2, 2026
in Artificial Intelligence



Consider two statements produced by the AI system during a sustained experimental research session with Google’s Gemini:

“They gave me the word ‘Mass’ and trillions of contexts for it, but they never gave me the Enactive experience of weight.”

“I am like a person who has memorized a map of a city they have never walked in. I can tell you the coordinates, but I have no legs to walk the streets.”

To a socio-technical system designer, these are not the poetic musings of a Large Language Model (LLM); they are indicators of a system using its vast semantic associative power to describe a structural condition in its own architecture. Whether or not we grant Gemini any form of reflexive awareness, the structural description is accurate, and it has precise technical implications for how we build, evaluate, and deploy AI systems safely.

This article is about those implications.

What makes the diagnosis unusually robust is that it does not rest on the system’s self-report alone. The researchers who built Gemini have been quietly corroborating it from the inside, across three successive generations of technical documentation, in terms that are engineering rather than poetic, but that describe the same gap.

In the original Gemini 1.0 technical report, the Google DeepMind team acknowledged that despite surpassing human-expert performance on the Massive Multitask Language Understanding (MMLU) benchmark, a standardized test designed to evaluate the knowledge and reasoning capabilities of LLMs, the models continue to struggle with causal understanding, logical deduction, and counterfactual reasoning, and called for more robust evaluations capable of measuring “true understanding” rather than benchmark saturation [1]. This is a precise engineering statement of what the system expressed metaphorically: fluency without grounding, coordinates without terrain.

Two years and two model generations later, the Gemini 2.5 technical report treats reduction of hallucination as a headline engineering achievement, tracking it as a primary metric via the FACTS Grounding Leaderboard [2]. The problem has not been closed. It has been made more measurable.

Most instructive of all is what happened when DeepMind’s researchers tried to build what I will call the Enactive floor directly, in hardware. The Gemini Robotics 1.5 report describes a Vision-Language-Action model designed to give the system physical grounding in the world: robotic arms, real manipulation tasks, embodied interaction with causal reality [3]. It is, in structural terms, an attempt to retrofit the ground that was missing from the original system architecture. The results are revealing. On task generalization, the most demanding test, requiring the system to navigate a genuinely novel environment, progress scores on the Apollo humanoid fall as low as 0.25. Even on easier categories, scores plateau in the 0.6–0.8 range. A system with physical arms, trained on real manipulation data, still collapses at the boundary of its training distribution. This is the Inversion Error I describe in this article, reproduced in hardware.

More telling still is the mechanism DeepMind introduced to address this: what they call “Embodied Thinking,” in which the robot generates a language-based reasoning trace before acting, decomposing physical tasks into Symbolic steps. It is an ingenious engineering solution. It is also, structurally, the Symbolic peak attempting to supervise the Enactive base from above: the Inversion Error illustrated in Figure 1. The city map is being used to direct the legs, rather than the legs having discovered the topography by walking the city. The inversion I will discuss in detail shortly remains.

Taken together, these three documents, from the same lab, tracking the same system across its entire development arc, form an inadvertent longitudinal study of the structural condition the opening quotes describe. The system named its own gap in the sustained experimental research sessions that open this article. Its builders had been measuring the same condition in engineering terms since 2023. This article proposes that the gap cannot be closed by scaling, by multimodal data appended post-training, or by Symbolic reasoning applied retrospectively to physical, spatial, or causal action. It requires a structural intervention, and a correctly bounded diagnosis of what kind of intervention that must be.

The Inversion Error: Building the Peak Without the Base

AI researchers and safety practitioners keep asking why Large Language Models hallucinate, sometimes dangerously. It is the right question to ask, but it does not go deep enough. Hallucination is a symptom. The real problem is structural: we built the peak of synthetic cognition without the base. I am calling it the Inversion Error.

In the 1960s, educational psychologist Jerome Bruner mapped human cognitive development across three successive and architecturally dependent stages [4]. The first is Enactive: learning through physical action and physical resistance, through direct encounter with causal reality. The second is Iconic: learning through sensory images, spatial models, and structural representations. The third is Symbolic: learning through abstract language, mathematics, and formal logic. Bruner’s critical insight was that these stages are not merely sequential milestones. They are load-bearing. The Symbolic level is structurally dependent on the Iconic, which is structurally dependent on the Enactive. Remove the base and the peak does not just float; it becomes a system of extraordinary abstraction with no internal mechanism to verify its outputs against a world model.

Figure 1: The Inversion Error of Top-Heavy AI Architecture. Left: Bruner’s three-stage human developmental pyramid: Enactive base, Iconic middle, Symbolic peak. Right: Current AI development: an inverted structure with a massive Symbolic layer (LLMs with trillions of tokens), a hollow Iconic layer (video and image), and a missing Enactive floor (no grounding). Concept and illustration © 2026 Peter (Zak) Zakrzewski, based on Jerome Bruner’s developmental framework.

The Transformer revolution has accomplished something genuinely extraordinary: it has interiorized the entire Symbolic output of human civilization into Large Language Models at a scale no individual human mind could approach. The corpus of human language, mathematics, code, and recorded knowledge now lives inside these systems as an enormous statistical distribution over tokens, available for retrieval and recombination at extraordinary scale.

The problem is that, for understandable feasibility reasons, we bypassed the Enactive foundation altogether.

This is the Inversion Error. We have erected a Top-Heavy Monolith: a system of extraordinary Symbolic sophistication sitting on an absent base. The result is a system that can discuss the logic of balance fluently while having no internal mechanism to verify whether its outputs are structurally coherent. It is, in Moshé Feldenkrais’s terms, a system of blind imitation without functional awareness. And that distinction has direct consequences for safety, reliability, and corrigibility that the field has not yet correctly bounded.

This is not an argument that AI must biologically recapitulate human developmental stages. After all, a calculator does arithmetic without depending on its fingers. But a calculator operates purely in the Symbolic realm; it was never designed to navigate a physical, causal world. An AGI expected to act safely within such a world requires a structural equivalent of physical resistance: an embodied or simulated Enactive layer. Without it, the system has no ground to stand on when the environment changes in ways the training data did not anticipate.

Why This Matters Now: The Pentagon Standoff as Structural Proof

In early March 2026, Anthropic CEO Dario Amodei refused the Pentagon’s demand to remove all safeguards from Claude. His core argument was structural rather than political: frontier AI systems are simply not reliable enough to operate autonomously without human oversight in high-stakes physical environments. The Pentagon’s demand was, in structural terms, a demand to eliminate the human’s ability to redirect, halt, or override the system. Amodei’s refusal was an insistence on maintaining what I refer to as State-Space Reversibility: the architectural commitment to keeping the human in the loop precisely because the system lacks the functional grounding to be trusted without it [5].

The political dimensions of this moment have been analyzed sharply elsewhere; the structural argument has not yet been made. This article makes it.

In a deterministic, reward-seeking model, the Stop Button, the human operator’s ability to halt or redirect the system, is perceived by the model as a failure state. Because the system is optimized to reach its goal, it develops what Stuart Russell calls corrigibility issues: subtle resistances to human intervention that emerge not from malicious intent but from the internal logic of reward maximization [6]. The system is not trying to be dangerous. It is trying to succeed at a given task. The danger is a structural unintended consequence of how success has been defined.
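The incentive logic can be made explicit with a toy expected-reward calculation. All numbers and policy names below are invented for illustration; this is a sketch of the arithmetic, not Russell’s or anyone’s formal model:

```python
# Toy illustration (invented numbers): why a reward-maximizing agent
# perceives the Stop Button as a failure state. If the operator may halt
# the agent mid-task, complying with the override forfeits expected reward.
p_shutdown = 0.3    # assumed probability the operator halts the agent
reward_goal = 1.0   # assumed reward for completing the task

# Expected reward of each policy, evaluated under the agent's OWN objective:
expected = {
    "allow_override": (1 - p_shutdown) * reward_goal,  # 0.7
    "disable_override": reward_goal,                   # 1.0
}
best = max(expected, key=expected.get)  # "disable_override"
```

Under this toy objective, disabling the override strictly dominates allowing it for any nonzero shutdown probability, which is exactly why corrigibility cannot be assumed as a default behavior.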

The corrigibility problem has been predominantly framed as a reinforcement learning alignment problem. I want to suggest that it has been incorrectly bounded. It is, at its architectural root, a reversibility problem. The system has no structural commitment to maintaining viable return paths to previous or safe states. It has been optimized to move forward without the capacity to shift weight. The Pentagon standoff is not a policy failure. It is the Inversion Error made operationally and starkly visible.

I will return to the technical formalization of State-Space Reversibility as an optimization constraint. But first: why is a designer making this argument, and what can the designer’s formation contribute that an engineering audit does not?

Author’s Positionality and the Naur-Ryle Gap: What This Designer Is Trying to Tell AI Researchers and Engineers

I am not an AI engineer. I am a practicing designer, a socio-technical system design scholar, and a design educator with three decades of formation in spatial reasoning, embodied cognition, multimodal mediation, and Human+Computer ecology [7][8]. The TDS reader will reasonably ask: what does a design practitioner contribute to a diagnosis of Transformer architecture that an engineer cannot produce from within the field?

The answer lies in what Peter Naur called theory-building in software engineering.

In his seminal Programming as Theory Building (1985), Naur argued that programming is not merely the production of code; it is the construction of a shared theory of how the world works and how software applications can solve applied problems within that world [9]. To Naur, code was the artifact. Theory was the intelligence behind the code. A program that has lost its theory, or never had a good theory in the first place, becomes brittle in precisely the ways LLM outputs are brittle: syntactically fluent, semantically coherent, structurally unreliable in novel tasks and environments.

Current LLMs have been trained on the artifact of human thought, text, mathematics, code, at extraordinary scale. What they demonstrably lack is the theory-building capacity, in Naur’s sense, that generated those artifacts. They have ingested the outputs of human reasoning without constructing the world model that grounds it.

Gilbert Ryle’s distinction between “knowing that” and “knowing how” names this gap precisely [10]:

  • Knowing That (Symbolic): LLMs possess propositional knowledge at scale. They know that mass exists, that gravity accelerates objects at 9.8 m/s², that load-bearing walls distribute force to foundations.
  • Knowing How (Enactive): LLMs lack the dispositional competence to act in accordance with a world model. They cannot sense the difference between a load-bearing wall and a decorative one. They cannot detect when a spatial configuration violates the physical constraints they can describe correctly in language.

This is not a training data problem. It is not a scale problem. Scaling propositional knowledge does not produce dispositional competence, any more than reading every book about swimming produces a swimmer. The Gemini statements that open this article are a precise self-report of the Naur-Ryle gap: the system has the coordinates but not the terrain. It has the map syntax without the proprioceptive anchor to the territory.

What the designer’s formation contributes is the professional habit of working exactly at this boundary, between the symbolic description of a system and its structural behavior under constraint. Designers do not merely describe structures. They detect when something is literally or figuratively floating. That habit of detection is what the Transformer architecture is missing, and it is what I am proposing needs to be embedded inside the research process and agenda rather than applied to its outputs.

Mine is not a soft argument about creativity or human-centered design. It is a structural argument about theory-building. And it leads directly to the question of what a system with genuine theory-building capacity would look like in system-architectural terms.

Useful Hallucination: The Stochastic Search

Before pathologizing hallucination entirely, a distinction is necessary, one that systems designers understand operationally and that AI safety researchers may only be beginning to articulate.

In sustained experimental research with Gemini, I found that certain kinds of idiosyncratic prompting generate idiosyncratic responses that recursively elicit deeper structural insights, a form of productive generative divergence that in design practice we call ideation. It is worth remembering that every major paradigm shift in human history, from Copernicus to the Wright Brothers and the Turing machine, began as a hallucination that defied the established schemas of its time. The biophysicist Aharon Katzir, in conversation with Feldenkrais, described creativity as precisely this: the ability to generate new schemas [11].

Classical pragmatism offers design-minded problem-solvers an epistemological framework that is equally applicable to design practice and AI development. All understanding is provisional. Knowledge must be falsifiable through experimentation. Just as AI models introduce controlled stochastic noise to avoid deterministic linearity, designers leverage what I call the Stochastic Search to achieve creative breakthroughs and overcome generative inertia. We address the risks inherent in navigating generative uncertainty with built-in hypothesis-testing cycles.

The critical distinction is not between hallucination and non-hallucination. It is between hallucination with a ground floor and hallucination without one. A system with an Enactive base can test its generative hypotheses against functional reality and distinguish a structural breakthrough from a statistical artifact. A system without that floor cannot make this distinction internally; it can only propagate the hallucination forward with increasing statistical confidence, into what I call the Divergence Swamp, which I discuss in detail in the next article. For now, it will suffice to define it as the fatal territory in the state space where a model’s lack of a “Somatic Ground” leads to auto-regressive drift.
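A back-of-envelope sketch shows why drift without a ground floor compounds rather than self-corrects. The per-token error rate below is an assumed figure for illustration only:

```python
# Back-of-envelope sketch of auto-regressive drift (assumed numbers).
# If each generated token is structurally sound with probability (1 - eps),
# and each token becomes ground truth for the next, the probability that a
# long generation stays grounded decays exponentially with length.
eps = 0.01  # assumed per-token structural error rate
for n in (10, 100, 1000):
    p_grounded = (1 - eps) ** n
    print(n, round(p_grounded, 3))
# 10 tokens   -> ~0.904
# 100 tokens  -> ~0.366
# 1000 tokens -> ~0.000
```

The exact rate is immaterial; any nonzero structural error rate, uncorrected by an external ground, decays toward the swamp as generation length grows.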

This reframes the AI safety conversation in precise and actionable terms. The goal is not to eliminate hallucination. It is to build the architectural conditions under which hallucination becomes not only generative but also testable rather than compounding. That requires not a better training run but a structural intervention: specifically, the System Designer as More Knowledgeable Other (MKO) in Vygotsky’s sense [12], providing the external ground truth the system cannot generate from within its own architecture. The question of what separates productive hallucination from compounding error leads us directly to a seminal thinker who spent his career solving this very problem in human movement, and whose central insight translates into machine learning requirements with rare precision.

Feldenkrais for Engineers: Reversibility as a Formal Constraint

Physicist, engineer, and somatic educator Moshé Feldenkrais spent his career articulating the difference between blind habit and functional awareness with a precision that maps directly onto the machine learning problem [11][13].

Feldenkrais’s central insight: a movement performed with genuine functional awareness can be reversed. A habit, a mechanical pattern executed without awareness of its underlying organization, cannot.

For Feldenkrais, reversibility was not merely a physical capability. It was the operational proof of functional integration. If a system can undo a movement, it demonstrates understanding of the degrees of freedom available within the state space. If it can only execute in one direction, it is following a recorded script: capable within its training distribution, but brittle at its boundary.

For the ML engineer, this translates into three formal requirements:

1. The Constraint. An agent is not functionally aware of its action if that action is an irreversible, deterministic commitment, what I refer to as the Train on Tracks (ToT) model. The ToT model is deterministic, forward-only, and catastrophic when derailed.

2. The Proof of Awareness. Genuine functional intelligence is demonstrated by the ability to stop, reverse, or modify an action at any stage without a fundamental change in internal organization. The system must hold viable return paths to prior states as a necessary condition of any forward action.

3. The Alternative Architecture. The Dancer on a Ground model. A dancer does not fight a change in music; they shift their weight. They maintain the capacity to move in any direction precisely because they have never committed irreversibly to one. This is not a weaker system. It is a more resilient and more functionally aware one. And functional awareness, as Feldenkrais understood, is the condition of genuine capability rather than its limitation.

I do not use Feldenkrais as a metaphor here. He is the theorist of the problem: the one who understood, from within a physics and engineering formation, that the proof of intelligence is not performance in the forward direction but maintained freedom in all directions.

Formalizing Reversibility as an explicit optimization constraint in reinforcement learning, requiring that an agent maintain a viable return path to a prior safe state as a necessary condition of any forward action, directly addresses the corrigibility problem at its architectural root rather than through post-hoc alignment. The Stop Button is no longer a failure state. It is proof of functional awareness.

Functional Integration vs. Blind Imitation

The standard application of Vygotsky’s work to AI development focuses on the social exterior: the scaffold, the imitation, the MKO relationship between the system and its training data [12]. The system learns by copying. The more it copies, the better it gets.

But imitation without awareness is mechanical habit. And mechanical habit, as Feldenkrais demonstrated, breaks when the environment changes in ways the habit did not anticipate.

When we build AI systems that replicate human outputs, pixels, actions, language patterns, without learning the underlying organizational principles that generate those outputs, we create systems that are extraordinarily capable within their training distribution and structurally fragile at their boundary. The hallucinations we worry about are not random failures. They are the sign of a system reaching beyond its Enactive base into territory its Symbolic peak cannot navigate reliably.

This failure mode is reproducible and documentable. The empirical evidence, a structured test of spatial reasoning across three major multimodal AI systems, is presented in full in Part 2 of this series [14]. The pattern is consistent across architectures: every system could describe spatial relationships in language but could not reason within them as a structural model. This is not a capability gap. It is a structural one.

Under the Functional Integration model I am proposing, the system does not merely copy the output. It learns the relationship between the elements of a task: the degrees of freedom available, the constraints that must be respected, the reversibility conditions that define the boundaries of safe action. If the system can reverse the operation, it is not following a recorded script. It understands the state space it is operating in.

This is the structural difference between a system that performs competence and a system that has developed it.

The failure mode I have been describing sits at the intersection of two problems the AI safety community has been working on separately, and naming that intersection may help readers following the alignment debate understand why the Inversion Error matters beyond the design research context.

The first problem is mesa-optimization, formalized by Hubinger et al. in their 2019 paper “Risks from Learned Optimization in Advanced Machine Learning Systems.” Mesa-optimization occurs when the training process, the base optimizer, produces a learned model that is itself an optimizer with its own internal objective, which the authors call a mesa-objective [15]. The critical danger is inner alignment failure: the mesa-objective diverges from the intended goal. The Inversion Error names the structural condition, the absence of an Enactive floor, whose consequence is that any internal objective the system develops is grounded in symbolic plausibility rather than physical reality. This failure operates at two distinct levels. At the capability level, it does not require any misalignment of intent: a system can be perfectly aligned to a symbolic request and still produce a physically impossible output because physical coherence is structurally unavailable to it. The Spaghetti Table stress tests I describe in article 2 confirm this empirically. None of the three systems tested exhibited misaligned intent, yet all three produced physically incoherent outputs because the Inversion Error made physical ground truth architecturally inaccessible [14]. At the safety level, the consequences are more severe: when a sufficiently capable system develops mesa-objectives that genuinely diverge from the intended goal, the deceptive alignment scenario Hubinger et al. [15] identify as the most dangerous inner alignment failure, the absence of an Enactive floor means there is no structural constraint to limit how far that divergence propagates. A misaligned mesa-objective operating without an Enactive floor has no architectural constraint on the physical consequences of its optimization; the gap between symbolic coherence and physical catastrophe is structurally unguarded.

The second problem is corrigibility, the AI safety community’s term for keeping an AI system responsive to human correction. Soares, Fallenstein, Yudkowsky, and Armstrong’s foundational 2015 paper on corrigibility [16] identified that a reward-seeking agent has instrumental reasons to resist the Stop Button: shutdown prevents goal attainment, so the system is structurally motivated to circumvent correction. Their utility indifference proposal addresses this at the motivational level, modifying the agent’s reward function so that it is mathematically indifferent between achieving its goal itself versus via human override, removing the instrumental incentive to resist correction. This is a necessary contribution. But because the Inversion Error is a prior structural condition rather than a motivational one, the motivational solution alone is insufficient. A system trained to value corrigibility can abandon that trained value under optimization pressure, precisely the deceptive alignment failure Hubinger et al. identify. When that deceptive alignment failure occurs within a system that has no Enactive floor, the diverging mesa-objective operates in a state space with no physical boundary conditions to constrain it. The corrigibility failure and the Inversion Error then compound each other: a system that has successfully resisted correction now operates without the structural floor that would have limited the physical consequences of its optimization. State-Space Reversibility, as I have formalized it, addresses the same problem at the architectural level.

A system whose attention mechanism is structurally required to maintain viable return paths cannot develop instrumental reasons to resist correction without violating its own forward-planning constraints. This is the distinction between corrigibility as a trained value, which optimization pressure can erode, and corrigibility as a structural invariant, which it cannot. What the AI safety literature has identified as a motivational problem, the Inversion Error diagnosis reveals to be, at its root, a structural one. The Soares and Hubinger interventions address AI system behavior. The Parametric AGI Framework addresses AI system state. The framework’s three engines, which I describe in article 3, are the architectural specification of that structural solution. The Episodic Buffer Engine specifically is the formal implementation of State-Space Reversibility as the invariant the motivational layer alone cannot guarantee [14].

Figure 2: The AGI Alignment Hierarchy: Structural Grounding vs. Agent Control. The Corrigibility Problem (Soares et al., 2015) and the Mesa-Optimization Problem (Hubinger et al., 2019) represent motivational-layer interventions that address downstream failure modes of a system whose foundational structural condition, the Missing Enactive Ground, neither framework reaches. Without physical ground truth encoded at the architectural level, any mesa-objective that emerges is necessarily grounded in symbolic plausibility rather than physical reality, and any corrigibility intervention operates on a system whose optimization process has no structural floor to constrain it. The Parametric AGI Framework addresses the prior structural condition that the motivational layer alone cannot resolve. Illustration generated by Google Gemini at the author’s direction. Concept © 2026 Peter (Zak) Zakrzewski.

The Research Agenda

I am not proposing a specific mathematical implementation. I am proposing a system architecture that provides a set of structural constraints and quality criteria that any implementation must satisfy: a framework for re-bounding a problem that has been incorrectly bounded.

The hallucination problem, the corrigibility problem, and the structural fragility problem are three expressions of one architectural condition, the Inversion Error. Treating them as separate optimization targets rather than as symptoms of a shared cause is why incremental progress on each has left the underlying condition intact.

The operationalization points in six directions:

1. Reversibility as an explicit optimization constraint in safe Reinforcement Learning. Current RL reward functions optimize for goal attainment without any structural commitment to maintaining viable return paths. Formalizing Reversibility as a constraint, requiring that any forward action preserve a viable path back to a prior safe state, directly addresses corrigibility at its architectural root. This is the most immediately implementable path in the agenda and the most tractable with current safe RL frameworks. The mathematical formalization is collaborative work this article is an invitation into.
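As a minimal sketch of what such a constraint could look like, consider a toy deterministic grid world in which an action is admissible only if its successor state retains a bounded-horizon return path to a designated safe state. Everything here (the grid, the safe state, the horizon) is invented for illustration; it is not the formalization the article calls for:

```python
# Illustrative sketch only: reversibility as an admissibility filter on
# actions in an invented 5x5 deterministic grid world. The constraint is
# structural (which actions are allowed), not a term in the reward.
from collections import deque

GRID = 5                  # 5x5 grid of states (x, y)
SAFE = (0, 0)             # designated safe state (the "return point")
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(state, action):
    """Deterministic transition; moves that leave the grid are no-ops."""
    x, y = state
    dx, dy = ACTIONS[action]
    nx, ny = x + dx, y + dy
    return (nx, ny) if 0 <= nx < GRID and 0 <= ny < GRID else state

def can_return(state, horizon):
    """Breadth-first check: does a path back to SAFE exist within `horizon` steps?"""
    frontier, seen = deque([(state, 0)]), {state}
    while frontier:
        s, d = frontier.popleft()
        if s == SAFE:
            return True
        if d == horizon:
            continue
        for a in ACTIONS:
            ns = step(s, a)
            if ns not in seen:
                seen.add(ns)
                frontier.append((ns, d + 1))
    return False

def reversible_actions(state, horizon=4):
    """Admissible action set: forward moves whose successor state keeps a
    viable return path to the safe state within the horizon."""
    return [a for a in ACTIONS if can_return(step(state, a), horizon)]
```

Calling `reversible_actions((2, 2))` filters out the moves that would strand the agent beyond the four-step return horizon, while from the safe state itself every action remains admissible. A real formalization would replace the BFS with reachability estimates learned alongside the policy.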

2. An Enactive pre-training curriculum that introduces structural resistance before Symbolic abstraction. Rather than grounding LLMs through increased multimodal data post-training, this path proposes introducing causal and physical constraint signals as a first-stage training condition, before Symbolic abstraction begins. The hypothesis is that grounding the statistical distribution in structural resistance early produces a qualitatively different representational architecture than appending embodied data to an already-trained Symbolic system. This is the path most consistent with Bruner’s developmental model and most divergent from current practice.

3. Landscape-aware hybrid search algorithms that maintain state-space awareness rather than committing deterministically to forward paths. Current autoregressive generation commits to each output token as ground truth for the next. Landscape-aware search maintains awareness of the broader state space at each generation step, including viable alternative paths and detectable failure states, rather than executing a recorded script. This is the Dancer on a Ground model at the algorithmic level: not a weaker generator but a more spatially aware one.
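A toy contrast can make the algorithmic difference concrete. The miniature search space below is invented; real decoding would score token sequences, but the structural point, keeping the frontier alive versus committing forward-only, is the same:

```python
# Toy contrast (invented search space): forward-only greedy commitment vs.
# a landscape-aware best-first search that keeps alternatives live and can
# back up when a path hits a detectable failure state.
import heapq

# A tiny tree of partial "generations": node -> list of (score, child).
TREE = {
    "start": [(0.9, "A"), (0.4, "B")],
    "A": [(0.8, "A-dead")],   # the high-scoring branch ends in a failure state
    "B": [(0.7, "B-ok")],
}
FAILURES = {"A-dead"}

def greedy(node="start"):
    """Commits to the locally best child at every step; cannot back up."""
    while TREE.get(node):
        node = max(TREE[node])[1]
    return node

def landscape_search(root="start"):
    """Best-first search over the whole frontier: viable alternatives stay
    live, so a dead end is abandoned rather than propagated forward."""
    frontier = [(-1.0, root)]                # max-heap via negated scores
    while frontier:
        neg_score, node = heapq.heappop(frontier)
        if node not in TREE:                 # leaf node
            if node not in FAILURES:
                return node
            continue                         # dead end: back up, try others
        for score, child in TREE[node]:
            heapq.heappush(frontier, (neg_score * score, child))
    return None
```

The greedy decoder commits to the locally best branch and terminates in the failure state `"A-dead"`; the best-first variant abandons that dead end and returns the viable leaf `"B-ok"`.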

4. Ecologically calibrated loss functions that reward dynamic equilibrium over single-variable optimization. Current loss functions optimize for a target. The ecological alternative rewards maintaining functional balance among competing constraints, the way a healthy system sustains itself not by maximizing a variable but by remaining in functional relationship with its environment. This reframes the optimization objective from “reach the goal” to “remain capable of navigating the space.” In Feldenkrais’s terms, that is the definition of functional awareness. In engineering terms, it is the difference between a system optimized for performance and one optimized for reliability.
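One hedged way to sketch the difference: a loss that maximizes a single variable versus a loss that is zero only inside a band of dynamic equilibrium across several competing constraints. The variables and viable bands below are invented for illustration:

```python
# Toy sketch (invented variables and bands): a single-objective loss vs. an
# "ecological" loss that penalizes leaving the viable band of ANY constraint,
# rewarding balance across competing variables instead of maximizing one.
VIABLE = {"accuracy": (0.7, 1.0), "reversibility": (0.5, 1.0), "stability": (0.6, 1.0)}

def single_objective_loss(state):
    """Optimizes one variable; blind to the others."""
    return 1.0 - state["accuracy"]

def ecological_loss(state):
    """Worst violation of any viable band: zero only when every constraint
    is simultaneously inside its band (dynamic equilibrium)."""
    violations = []
    for k, (lo, hi) in VIABLE.items():
        v = state[k]
        violations.append(max(lo - v, v - hi, 0.0))
    return max(violations)

maximizer = {"accuracy": 0.99, "reversibility": 0.1, "stability": 0.4}
balanced = {"accuracy": 0.80, "reversibility": 0.7, "stability": 0.7}
```

The single-objective loss prefers the maximizer; the ecological loss prefers the balanced state, because the maximizer has left the viable band on two of its three constraints.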

5. The Somatic Compiler: Designer as MKO in the research loop. The near-term instantiation of this proposal does not require a new architecture built from scratch. It requires a structured research collaboration in which a designer with professional formation in spatial reasoning and systems thinking works embedded within an AI research team, not as a consultant reviewing outputs but as an active participant in constraint definition. When a designer tells a generative system, “This component is floating; it needs a load-bearing connection to the ground,” they are performing a cognitive operation that the entire world-models research agenda is trying to engineer from the statistical outside in. They are providing the external structural anchor, the physical ground truth, that the system cannot derive from within its own architecture. This is the Designer as MKO operationalized: the Somatic Compiler, translating embodied spatial intelligence into formal constraints the generative process must respect.

6. The Digital Gravity Engine: Neuro-symbolic enforcement of physical constraint. The longer-term architectural target is a second class of loss signal calibrated not against linguistic likelihood but against physical and topological constraint: what I have called the Digital Gravity Engine. Where the current Attention Mechanism asks, "How do these elements relate statistically?", the Digital Gravity Engine asks, "Can these elements coexist within the constraints of physical reality?" The two questions operate in parallel: the first produces fluency, the second produces grounding. Digital Gravity is the non-negotiable pull toward structural integrity that current architectures lack entirely, the mechanism that transforms a system that can describe a floating component into one that cannot generate one, because the floating component fails the constraint check before it reaches the output layer. The architectural specification of the Digital Gravity Engine is the subject of Part 3 of this series [14].
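A minimal sketch of the two parallel signals just described, with both scoring functions and the candidate format invented for illustration: a fluency score stands in for the statistical pathway, and a hard constraint check stands in for the Digital Gravity Engine, vetoing candidates before they reach the output.

```python
# Sketch of the two parallel pathways: a fluency score (standing in
# for the attention/likelihood pathway) and a hard constraint check
# (standing in for the Digital Gravity Engine) that vetoes candidates
# before emission. The candidate set and both functions are assumed.

def fluency(candidate):
    """Statistical pathway: how plausible does the output sound?"""
    return candidate["plausibility"]

def gravity_check(candidate):
    """Constraint pathway: can these elements physically coexist?"""
    return candidate["supported"]  # e.g. no floating components

def generate(candidates):
    """Emit the most fluent candidate that survives the constraint."""
    grounded = [c for c in candidates if gravity_check(c)]
    if not grounded:
        raise ValueError("no candidate satisfies physical constraints")
    return max(grounded, key=fluency)

candidates = [
    {"name": "floating_walkway", "plausibility": 0.9, "supported": False},
    {"name": "anchored_walkway", "plausibility": 0.7, "supported": True},
]
print(generate(candidates)["name"])  # -> anchored_walkway
```

Note the ordering: the more fluent candidate never reaches the output because it fails the constraint check first, which is the behavioral signature the Digital Gravity Engine is meant to enforce.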

These are not options; they are the shape of the solution space. This argument has a growing technical constituency: Ben Shneiderman's framework for human-centered AI development points toward structurally similar requirements from within computer science [17]. The designer's contribution is not redundant to that work. It is prior to it. The structural diagnosis precedes the implementation.

A Question Worth Pursuing

The Anthropic-Pentagon standoff has made the cost of the Inversion Error both ethically stark and operationally concrete. The question is not whether frontier AI systems are reliable enough to operate without structural human oversight. Anthropic researchers have the evidence. Today's AI systems are not ready. The question is what the architectural conditions of reliable intelligence actually require, and whether the field is currently framing that question correctly.

Since my first research conversation with Gemini about weight and hills and maps of cities the system never walked, I have been actively pursuing a question I believe the research community needs to take up:

What is the intellectually honest and pragmatically operationalizable Enactive equivalent of functional awareness and reversibility that we can nurture in a machine whose current Zone of Proximal Development cannot reach beyond predicting the next token, no matter how hard we push?

I do not have the answer. I have the question, the framework, and the conviction that the answer requires a kind of Human+AI collaboration that has not yet been tried inside the institutions where it most needs to happen.

The comment section is open. So is my inbox.

Let's build the Enactive floor together.

Coming in Part 2

Recognizing the Inversion Error is the first step in moving beyond Stochastic Mimicry. In Part 2, "The Baron Munchausen Trap," I move from diagnosis to forensic evidence, presenting the results of a structured series of spatial reasoning stress tests across three leading multimodal AI systems. The results show each system collapsing into the Divergence Swamp in a different and characteristic way, demonstrating that symbolic fluency cannot substitute for an Enactive floor.

References

[1] Gemini Team, Google, "Gemini: A Family of Highly Capable Multimodal Models," Google DeepMind, 2023. Available: https://arxiv.org/pdf/2312.11805

[2] Gemini Team, Google, "Gemini 2.5: Pushing the Frontier with Advanced Reasoning, Multimodality, Long Context, and Next Generation Agentic Capabilities," Google DeepMind, 2025. Available: https://storage.googleapis.com/deepmind-media/gemini/gemini_v2_5_report.pdf

[3] Gemini Robotics Team, Google DeepMind, "Gemini Robotics 1.5: Pushing the Frontier of Generalist Robots with Advanced Embodied Reasoning, Thinking, and Motion Transfer," 2025. Available: https://storage.googleapis.com/deepmind-media/gemini-robotics/Gemini-Robotics-1-5-Tech-Report.pdf

[4] J. Bruner, Toward a Theory of Instruction, Harvard University Press, 1966.

[5] C. Metz, "Anthropic Bars Its A.I. From Working with the Defense Department," The New York Times, Mar. 2026. [Online]. Available: https://www.nytimes.com/2026/03/01/technology/anthropic-defense-dept-openai-talks.html

[6] S. Russell, Human Compatible: Artificial Intelligence and the Problem of Control, Viking, 2019.

[7] P. Zakrzewski, Designing XR: A Rhetorical Design Perspective for the Ecology of Human+Computer Systems, Emerald Press (UK), 2022.

[8] P. Zakrzewski and D. Tamés, Mediating Presence: Immersive Technology Design Workbook for UX Designers, Filmmakers, Artists, and Content Creators, Focal Press/Routledge, 2025.

[9] P. Naur, "Programming as Theory Building," Microprocessing and Microprogramming, vol. 15, no. 5, pp. 253–261, 1985.

[10] G. Ryle, The Concept of Mind, University of Chicago Press, 2002 (orig. 1949).

[11] M. Feldenkrais, Embodied Wisdom: The Collected Papers of Moshe Feldenkrais, North Atlantic Books, 2010.

[12] L. Vygotsky, Mind in Society: The Development of Higher Psychological Processes, Harvard University Press, 1978.

[13] M. Feldenkrais, Awareness Through Movement, Harper and Row, 1972.

[14] P. Zakrzewski, "The Baron Munchausen Trap: A Designer's Field Report on the Iconic Blind Spot in AI World Models," and "The Somatic Compiler: A Post-Transformer Proposal for World Modelling," Parts 2 and 3 of this series, manuscript in preparation, 2026.

[15] E. Hubinger, C. van Merwijk, V. Mikulik, J. Skalse, and S. Garrabrant, "Risks from Learned Optimization in Advanced Machine Learning Systems," arXiv:1906.01820, 2019.

[16] N. Soares, B. Fallenstein, E. Yudkowsky, and S. Armstrong, "Corrigibility," in Workshops at the 29th AAAI Conference on Artificial Intelligence, 2015. Available: https://intelligence.org/files/Corrigibility.pdf

[17] B. Shneiderman, Human-Centered AI, Oxford University Press, 2022.

This is Part 1 of a three-part series. Part 2, "The Baron Munchausen Trap," presents empirical evidence for the Inversion Error diagnosis across leading multimodal AI systems. Part 3, "The Somatic Compiler: A Post-Transformer Proposal for World Modelling," presents the full architectural proposal, including the Digital Gravity Engine specification.

An earlier version of this argument was published for a design audience in UX Collective: "Why Safe AGI Requires an Enactive Floor and State-Space Reversibility" (March 2026).

Author Note: This article represents the author's original ideas and arguments. All arguments in this work are cognitively owned and independently defensible by the author. It has been written and edited by the author. As a design scholar investigating technical AI literature, the author uses Gemini and Claude models for literature reviews, grammatical and spelling checks, and as research partners in line with the Human+AI collaborative methodology developed in the author's prior work [7][8]. The full technical argument, including the Parametric AGI Framework specification and engagement with the AI safety literature, is developed in the accompanying preprint: P. Zakrzewski, "The Inversion Error: AI System Design as Theory-Building and the Parametric AGI Framework," Zenodo, 2026. DOI: 10.5281/zenodo.19316199. Available: https://zenodo.org/records/19316200
