Deterministic Over Judgment

We have spent the last two years trying to talk computers into being reliable. We call it prompt engineering, but it’s actually closer to hostage negotiation. We plead with the model to remember its constraints. We threaten it with imaginary penalties. We bribe it with tips it can’t spend.

This is the era of “prompt-first reasoning,” and it is a design dead end.

When we build agents today, we usually start with a massive block of text. This text is supposed to serve as the agent’s soul, its operating system, and its boundary layer all at once. We tell the model: “You are a research assistant. You must check your sources. You must be concise. You must not hallucinate.” Then we throw it into a loop and pray that the “reasoning” holds up when the context window gets crowded or the instructions get complex.

It doesn’t. The moment the model encounters a novel edge case or a conflicting instruction, the “reasoning” dissolves into vibes. The agent stops being a tool and starts being a liability. We find ourselves babysitting the output, which defeats the entire purpose of automation.

Vivesca is built on a different premise. To restore agentic trust, we have to liquidate prompt-first reasoning. We have to stop asking the model to “decide” how to handle the boring, operational primitives of its existence. We need to move those functions into a deterministic metabolic core.

In biology, your body doesn’t “reason” about whether to increase your heart rate when you start running. It doesn’t “think” about whether to release insulin after a meal. These are metabolic reflexes—hard-coded logic gates that respond to signals from the environment. Your conscious mind, the “reasoning” layer, is free to focus on high-dimensional problems like navigating the terrain or choosing a restaurant because the low-level operations are guaranteed by the system.

Most modern AI agents are the equivalent of a human who has to manually remind their heart to beat every second. They are so consumed by the overhead of maintaining their own state through prompts that they have no capacity for genuine agency.

Vivesca replaces this fragile prompt layer with a set of deterministic gates. In the Vivesca codebase, you'll find files like gates.py and substrate.py. These aren't just more prompts. They are hard-coded Python logic. They define the "metabolism" of the agent—the way it consumes information, filters it, and transforms it into action.

When a signal enters the system, it doesn’t go straight to the LLM for “judgment.” It hits a gate. Does this signal align with the current Constitution? Does the agent have the budget—both in tokens and in time—to process this? Is the substrate (the data the agent is working on) verified and clean?

If the answer to any of these is no, the gate stays closed. The LLM is never even called. We don’t ask the model to “be responsible” with its budget; we build a system that makes it physically impossible for the model to overspend. We don’t ask it to “stay on mission”; we build a metabolic loop that only feeds it mission-critical data.
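A minimal sketch of what such a gate chain could look like, assuming hypothetical names (Signal, admit, and the individual gate functions are illustrative, not the actual Vivesca API). The point is that every check is plain Python that either opens or stays closed before any model call:

```python
from dataclasses import dataclass

# Hypothetical signal entering the system; field names are illustrative.
@dataclass
class Signal:
    topic: str
    source_verified: bool
    token_cost: int

# Hard-coded gates: each returns True (open) or False (closed).
# No LLM is involved at this layer.
def constitution_gate(signal: Signal, allowed_topics: set) -> bool:
    return signal.topic in allowed_topics

def budget_gate(signal: Signal, tokens_remaining: int) -> bool:
    return signal.token_cost <= tokens_remaining

def substrate_gate(signal: Signal) -> bool:
    return signal.source_verified

def admit(signal: Signal, allowed_topics: set, tokens_remaining: int) -> bool:
    """Every gate must open before the model is ever called."""
    return (
        constitution_gate(signal, allowed_topics)
        and budget_gate(signal, tokens_remaining)
        and substrate_gate(signal)
    )

# An over-budget signal never reaches the model: the gate, not the prompt, says no.
s = Signal(topic="research", source_verified=True, token_cost=500)
print(admit(s, allowed_topics={"research"}, tokens_remaining=1000))  # True
print(admit(s, allowed_topics={"research"}, tokens_remaining=100))   # False
```

Note that `admit` short-circuits: if the constitution gate is closed, the budget and substrate checks never even run, let alone the model.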

This shift isolates LLM judgment to what it is actually good at: high-dimensional reasoning on verified substrates.

By the time the model sees a piece of information in Vivesca, that information has already been through the “stomach.” It has been broken down, filtered for noise, checked against the agent’s internal state, and formatted for maximum signal. The model isn’t being asked to manage its own existence; it’s being asked to synthesize a specific result from a clean, deterministic input.
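The "stomach" can be sketched as an ordinary deterministic function. Everything here is an assumption for illustration (the function name, state shape, and formatting are not taken from the Vivesca codebase), but it shows the shape of the idea: noise filtering, a check against internal state, and a fixed output format, all before the model sees a single token:

```python
# Hypothetical "metabolic" pre-processing. Deterministic Python: the model
# only ever sees the cleaned, deduplicated, formatted result.
def digest(raw_items: list, seen: set, max_items: int = 5) -> str:
    kept = []
    for item in (i.strip() for i in raw_items):
        # Filter noise (blanks) and anything already in the agent's state.
        if item and item not in seen and len(kept) < max_items:
            seen.add(item)   # update internal state as a side effect of digestion
            kept.append(item)
    # Format for maximum signal: one fixed, predictable layout.
    return "\n".join(f"- {item}" for item in kept)

state = {"old fact"}
prompt_block = digest(["old fact", "new finding", "", "new finding"], state)
print(prompt_block)  # - new finding
```

Duplicates and stale facts are stripped by code, not by asking the model to please ignore them.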

This is how we get back to trust. Trust isn’t born from making the model “smarter” or the prompt “better.” Trust is born from narrowing the model’s surface area for failure. When you know that the operational primitives—the state transitions, the resource management, the boundary checks—are handled by hard-coded logic, you can finally trust the agent to go off and do its job.

The industry is currently obsessed with “Chain of Thought” and “Reflection” patterns, where the model talks to itself to check its own work. This is just more prompt-first reasoning. It’s a hall of mirrors where one fallible reasoning process is used to “verify” another fallible reasoning process. It’s slop on top of slop.

Metabolic design is the opposite. It’s about stripping away the “thought” and replacing it with “process.” It’s about recognizing that “judgment” is an expensive, non-deterministic resource that should be used sparingly.

When we talk about the “Constitution eating itself,” we’re talking about this metabolic process in action. The agent’s core principles aren’t just a static text file; they are a living substrate that is constantly being refined and enforced by the system’s logic gates. The Constitution isn’t a suggestion to the model; it is the environment in which the model lives.

If we want agents that can actually function in the real world—agents that can manage budgets, handle sensitive data, and execute multi-step workflows without supervision—we have to stop treating them like magic black boxes that we can talk into submission.

We have to start treating them like organisms. We have to give them a metabolism.

We are moving from the era of the “Prompt Engineer” to the era of the “Metabolic Architect.” The goal isn’t to write the perfect sentence; it’s to build the perfect gate. It’s to ensure that by the time the model is asked to make a judgment, the “reasoning” is a foregone conclusion because the substrate is so pure.

It is a slightly wry realization: the way to make AI more “human-like” in its agency is to make its core functions as “machine-like” and deterministic as possible. We don’t need agents that can think about everything. We need agents that don’t have to think about the basics.

Agentic trust is liquidated in the prompt but restored in the gate.

The Takeaway

If you are building an agentic system, identify one “judgment” the model is currently making about its own state—like checking if it has enough information to proceed or verifying its own output—and replace that prompt-based check with a hard-coded, deterministic logic gate. Stop talking to the model and start building the plumbing.
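As one concrete (and hypothetical) example of this swap: instead of prompting the model with "do you have enough information to proceed?", a readiness check can be a few lines of hard-coded logic. The field names below are illustrative assumptions, not a real schema:

```python
# Deterministic replacement for a prompt-based "am I ready?" judgment.
# REQUIRED_FIELDS is a hypothetical schema for a research task.
REQUIRED_FIELDS = {"query", "sources", "deadline"}

def ready_to_proceed(task: dict) -> bool:
    """Hard-coded gate: proceed only if every required field is present and non-empty."""
    return all(task.get(field) for field in REQUIRED_FIELDS)

task = {"query": "summarize Q3", "sources": ["report.pdf"], "deadline": "Friday"}
print(ready_to_proceed(task))            # True
print(ready_to_proceed({"query": "x"}))  # False
```

The model never renders a verdict on its own readiness; an incomplete task simply never reaches it.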