The fundamental direction of intelligence

Eigentau orchestrates Bittensor's 129+ specialized subnets into unified general intelligence. Every query decomposed, routed, synthesized, learned.

129+
Subnets
10
Faculties
74.2%
Accuracy

"AGI won't come from one model. It will emerge from the orchestration of many specialized intelligences — each excellent at one thing, coordinated by a system that learns which combinations work best."

THE EIGENTAU THESIS

129+
Bittensor subnets mapped to cognitive faculties
10
DeepMind AGI faculties covered by the routing engine
74.2%
Routing accuracy after 847 self-learning cycles
80/20
EMA weight blend — stable improvement without catastrophic forgetting

AGI is orchestration, not one model

DeepMind defines general intelligence as 10 cognitive faculties. Bittensor has 129 specialized subnets. Eigentau is the routing layer that ties them together — the gating network in a decentralized Mixture of Experts.

Perception · Generation · Reasoning · Memory · Attention · Learning · Metacognition · Executive · Problem Solving · Social Cognition

Why Subnets Matter

Bittensor's 129+ subnets are the world's largest decentralized AI workforce

Each subnet specializes — inference, training, data scraping, prediction, storage, annotation. 12,000+ miners compete to deliver the best results. $2.3M in TAO distributed daily. But until now, each subnet operated in isolation. No cross-subnet coordination. No intelligent routing.

The Missing Layer

Eigentau is the gating network that turns isolated experts into general intelligence

Like a Mixture of Experts model, Eigentau learns which subnet combinations produce the best results for each task type. The router decomposes complex queries, fans out to multiple subnets in parallel, synthesizes results, and improves its routing weights after every interaction.
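The gating step can be sketched as a top-k pick over learned routing weights. This is a toy illustration, not Eigentau's implementation; the category names and numbers are assumptions borrowed loosely from the weights shown later on this page.

```python
# Illustrative MoE-style gating: pick the top-k subnet categories
# by learned routing weight. Names and values are assumptions.

def route(weights: dict, k: int = 2) -> list:
    """Return the k highest-weighted subnet categories for a sub-task."""
    return sorted(weights, key=weights.get, reverse=True)[:k]

learned = {"inference": 0.359, "prediction": 0.233, "scraping": 0.209}
fan_out = route(learned)  # fan the sub-task out to these subnets in parallel
```

In a real gating network the weights would be conditioned on the sub-task itself; here they are static to keep the sketch minimal.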

How Eigentau thinks

01
Decompose
AI breaks complex queries into atomic sub-tasks, each mapped to a cognitive faculty.
02
Route
Learned weights determine which Bittensor subnets handle each sub-task.
03
Synthesize
Results from parallel subnets merged into one coherent answer.
04
Learn
Every outcome scored. Routing weights adjusted via EMA blending.
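The four steps can be compressed into one loop. Everything in this sketch is a stand-in (the splitting rule, the subnet calls, the scorer); only the 80/20 EMA blend comes from the text above.

```python
# Toy sketch of one Eigentau cycle: decompose -> route -> synthesize -> learn.
# Helper names and the split-on-";" rule are invented for illustration.

def run_cycle(query, weights, subnets, score):
    tasks = [t.strip() for t in query.split(";")]    # 01 Decompose (toy rule)
    answers = []
    for task in tasks:
        best = max(weights, key=weights.get)         # 02 Route to top subnet
        answers.append(subnets[best](task))
        # 04 Learn: keep 80% of the old weight, blend in 20% of the new signal
        weights[best] = 0.8 * weights[best] + 0.2 * score(answers[-1])
    return " | ".join(answers), weights              # 03 Synthesize (toy merge)
```

A real router would decompose with a model, fan sub-tasks out concurrently, and score outcomes against ground truth; the control flow is the same.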

Built With

Bittensor · Anthropic Claude · DeepMind AGI Framework · Base L2

The router gets smarter with every query

Signal-to-performance correlations computed after every cycle. Weights shift toward subnets that deliver. No human tuning — pure data-driven evolution.

Routing Weights — Cycle 847

Inference   0.359 (+0.041)
Prediction  0.233 (+0.018)
Scraping    0.209 (-0.012)
Storage     0.144 (+0.007)
Annotation  0.049 (-0.003)
Training    0.006 (+0.002)

EMA blend: 80/20 · Accuracy: 74.2%
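The 80/20 blend is what prevents catastrophic forgetting: one cycle can pull a weight at most 20% of the way toward the new signal. A minimal numeric illustration, where the starting weight is taken from the panel above and the collapsed signal is invented:

```python
def ema(old, signal, keep=0.8):
    """80/20 blend: retain 80% of the learned weight each cycle."""
    return keep * old + (1 - keep) * signal

w = 0.359           # e.g. the Inference weight at cycle 847
w = ema(w, 0.0)     # one catastrophic cycle: signal collapses to zero
# w is now ~0.2872, not 0 — prior learning survives the bad cycle
```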

$TAU

The token that powers Eigentau's routing economy on Base. Query fees fund subnet interactions via TAO.

Route

Pay for multi-subnet queries

Stake

Priority routing access

Learn

Earn from routing recipes

Coming Soon on Base