Personal Manifesto


2026

AI is not just changing tools. It is changing how intelligence itself is organised.

The next transformation is not a better model or a faster API. It is the Network of Agents.

To understand it, you don't start with prompts. You start with systems.


How I Arrived Here

I didn't begin by studying agents. I began by watching coordination fail.

Long before LLMs, I learned that useful behaviour rarely depends on a single brilliant individual. It depends on how memory is preserved, how decisions propagate, how uncertainty is handled, and how failure is surfaced and corrected. I saw this first while organising people—building teams, running societies, founding early ventures—where outcomes were shaped less by talent than by information flow.

Who remembers what? What gets lost between handovers? How does a system recover when something silently goes wrong?

Those questions stayed with me.

When I later moved into AI systems, I recognised the patterns immediately. Pipelines broke not because models were weak, but because context fragmented, errors compounded, and feedback loops were poorly designed. I started to see intelligence as something fundamentally systemic: behaviour emerges when components share state, negotiate uncertainty, and remain interpretable over time.

A system, to me, began to feel less like machinery—and more like an artwork to craft.


From Pipelines to Agentic Systems

As agentic systems became feasible, the limits of naïve autonomy became obvious.

  • Individual agents could sound confident while being deeply wrong.
  • Multi-agent systems could amplify mistakes instead of correcting them.
  • Intelligence existed locally, but reliability didn't scale.

Across projects (startup forecasting, automated research, text-to-model systems), the same lesson kept repeating: classical components, structured workflows, and deliberate interaction rules could stabilise agents, but only locally. Every new domain required redesigning constraints and safety mechanisms from scratch.

This brittleness isn't an implementation bug. It's a missing theory.

We still lack general principles for how agent societies stay aligned as they grow: how memory should be shared, how disagreements should be resolved, how trust and verification should propagate, and how systems should remain corrigible under real-world uncertainty.

That gap is what pulls me forward.


Two Directions That Matter

To move beyond fragile pipelines, my work has converged on two complementary directions.

1. Aligning optimisation with human-perceived correctness

Many failures are not binary. Humans distinguish near-misses from severe errors, but our evaluations do not. By treating misalignment as a graded surface rather than a switch, we can diagnose, measure, and correct behaviour instead of merely penalising it.
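
To make "graded" concrete, here is a minimal sketch in Python (the names and scoring rule are hypothetical, not an existing evaluation harness) of scoring severity on a continuous scale instead of flipping a pass/fail switch:

    # Toy sketch: grade misalignment on a continuous severity scale
    # instead of a pass/fail switch. All names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class GradedResult:
        severity: float  # 0.0 = fully aligned, 1.0 = severe failure
        reason: str

    def grade_output(required: set, produced: set) -> GradedResult:
        # Penalise by how much required content is missing, so a
        # near-miss and a total failure receive different scores.
        if not required:
            return GradedResult(0.0, "nothing required")
        missing = required - produced
        return GradedResult(len(missing) / len(required),
                            f"missing: {sorted(missing)}")

    print(grade_output({"a", "b", "c"}, {"a", "b"}))  # severity ~0.33: near-miss
    print(grade_output({"a", "b", "c"}, set()))       # severity 1.0: severe error

Once errors live on a surface like this, behaviour can be measured and corrected over time rather than merely rejected.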

2. Scalable coordination in multi-agent systems

Inspired by biological collectives, I explored how simple local rules—shared memory, lightweight communication, evolutionary selection—can produce global robustness without collapsing into a few brittle super-agents. What matters is not central control, but interaction design.
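
As a toy illustration (rules and numbers entirely hypothetical), here is the flavour of those local rules: a shared population of noisy guesses, pairwise averaging as lightweight communication, and selection replacing the worst guess with a mutation of the best, with no central controller anywhere:

    # Toy sketch of local rules producing global robustness.
    # Agents hold noisy guesses of a hidden target in a shared
    # population (a crude shared memory). Each round, one random
    # pair averages its guesses (lightweight communication), then
    # the worst guess is replaced by a mutated copy of the best
    # (evolutionary selection). No central controller exists.
    import random

    random.seed(0)
    TARGET = 42.0
    guesses = [random.uniform(0.0, 100.0) for _ in range(20)]

    def error(g):
        return abs(g - TARGET)

    for _ in range(200):
        i, j = random.sample(range(len(guesses)), 2)
        guesses[i] = (guesses[i] + guesses[j]) / 2.0
        best = min(guesses, key=error)
        worst = max(range(len(guesses)), key=lambda k: error(guesses[k]))
        guesses[worst] = best + random.gauss(0.0, 1.0)

    print(f"mean error: {sum(map(error, guesses)) / len(guesses):.2f}")

No single guess is reliable, and nothing coordinates from above; the robustness lives entirely in the interaction rules.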

Together, these directions point to a deeper truth: reliability does not come from larger models, but from systems design.


Why Pulse Exists

By 2026, something became clear.

There is growing disillusionment around AI, not because it lacks capability, but because it lacks integration. Foundation models stall technically, while commercial products bolt "agents" onto workflows that were never designed for them.

Our ways of living and working are not agent-native.

Pulse is my attempt to build toward that missing layer—not as a productivity hack, but as infrastructure for an agentic way of living.

The path is deliberate

Phase 1: The Agent Unit. Persistent intention understanding, long-horizon memory, and multi-turn tool usage that actually works: not demos, but dependable execution.

Phase 1.1: The Real-Time Layer. Intelligence that operates under latency, interruptions, and changing context, where correctness matters in the moment.

Phase 1.2: The Agent-in-IoT Unit. Agents as intermediaries between people and environments: devices that understand intent, personalise behaviour, and act safely in the physical world.

The value here is not saving time. It is enabling agentic living—where intelligence is persistent, contextual, and trustworthy.

From inboxes to devices, from individual agents to interconnected ones, the question is always the same:

How do we make complex behaviour observable, correctable, and aligned?


Why the Network of Agents Is Inevitable

A single agent is useful. A network is transformative.

Once agents can communicate, share context, and specialise, new structures emerge: agent-to-agent markets, virtual organisations, autonomous coordination at scale. This naturally connects to IoT, user-to-user collaboration, and even autonomous companies.

But a network only works if its information geometry is sound: who holds what context, who trusts whom, and how signals propagate.

I'm increasingly fascinated by the idea of an Agent Graph:

  • nodes with roles
  • edges encoding trust and interaction
  • centrality shaping influence

Different subgraphs activate for different tasks. Some paths evolve; others remain stable. Adaptation happens locally, alignment globally.

This is not about vibes. It's about making intelligence structural.
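
To ground the idea, here is a minimal sketch of how an Agent Graph might be represented (the structures, names, and trust threshold are all hypothetical): roles on nodes, trust weights on edges, and task-specific subgraph activation.

    # Hypothetical sketch of an Agent Graph: roles on nodes, trust
    # weights on edges, task-specific subgraph activation.
    from dataclasses import dataclass, field

    @dataclass
    class AgentGraph:
        roles: dict = field(default_factory=dict)  # agent name -> role
        trust: dict = field(default_factory=dict)  # (a, b) edge -> trust weight

        def add_agent(self, name, role):
            self.roles[name] = role

        def connect(self, a, b, weight):
            self.trust[(a, b)] = weight

        def activate(self, needed_roles, min_trust=0.5):
            # A task activates only agents whose roles it needs,
            # joined by edges above the trust threshold.
            nodes = {n for n, r in self.roles.items() if r in needed_roles}
            edges = {e: w for e, w in self.trust.items()
                     if w >= min_trust and e[0] in nodes and e[1] in nodes}
            return nodes, edges

    g = AgentGraph()
    g.add_agent("planner", "planning")
    g.add_agent("coder", "execution")
    g.add_agent("critic", "verification")
    g.connect("planner", "coder", 0.9)
    g.connect("coder", "critic", 0.4)  # low-trust edge: excluded on activation
    print(g.activate({"planning", "execution", "verification"}))

Centrality then becomes measurable rather than metaphorical: an agent's influence is a property of the graph, not of a prompt.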


The Choice I Keep Making

In 2026, I turned down easy paths—security, speed, immediate validation—not because they weren't good, but because they would have pulled me away from thinking the problem through.

If I look back five years from now, the regret I can't accept is simple:

"I didn't go all in on building the system when I could."

Not that I moved slowly. Not that I chose depth over comfort. But that I saw the shape of the future—and didn't commit.


What I'm Committed To

My long-term goal is to develop general principles for building reliable, scalable systems whose behaviour remains aligned as they reason over long horizons, share memory, and interact within dynamic multi-agent environments.

Intelligence will not be won by a single model. It will be shaped by how systems are designed.

And the Network of Agents is not a feature. It is the next substrate.