Documentation

Everything you need to understand how Eigentau works, why it matters, and where it's going.

Contents
What is Eigentau?
Why does this matter?
How the router works
The cognitive map
The learning engine
$TAU token
Roadmap

What is Eigentau?

Eigentau is a routing layer for Bittensor. It sits between you and Bittensor's 129+ specialized subnets, and its job is simple: take your complex question, figure out which subnets can answer different parts of it, query them in parallel, and stitch the results together into one coherent answer.

Think of it like this. Bittensor has subnets that are great at text generation, subnets for data scraping, subnets for prediction, subnets for training models, subnets for storage. Each one is excellent at its specific job. But nobody has built the layer that coordinates them — the thing that says "this question needs a scraper, a predictor, and an inference engine working together."

Eigentau is that coordination layer.

The name: "Eigentau" comes from eigenvector (the fundamental direction in linear algebra) + tau (TAO, Bittensor's native token). It represents the fundamental direction of intelligence — not a single model, but the orchestration of many.

Why does this matter?

In March 2026, Google DeepMind published a cognitive framework for measuring AGI. They defined general intelligence as 10 distinct faculties: perception, generation, attention, learning, memory, reasoning, metacognition, executive functions, problem solving, and social cognition.

Here's the interesting part: Bittensor's subnets already cover most of these faculties. There are subnets for inference (reasoning), for scraping data (perception), for real-time processing (attention), for training models (learning), for storage (memory). The pieces exist.

What's missing is the gating network — the system that ties them together. In machine learning, a Mixture of Experts (MoE) model works by routing each input to the right combination of expert sub-networks. Eigentau does the same thing, but the "experts" are Bittensor's subnets, and the "router" learns which combinations produce the best results.
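
In code, a gating step of this kind can be sketched as a softmax over learned per-subnet scores. This is a minimal illustration of the MoE analogy, not Eigentau's implementation; the subnet names and scores are made up.

```python
import math

def gate(scores: dict[str, float], temperature: float = 1.0) -> dict[str, float]:
    """Turn learned per-subnet scores into routing probabilities (softmax)."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {name: math.exp((s - m) / temperature) for name, s in scores.items()}
    total = sum(exps.values())
    return {name: e / total for name, e in exps.items()}

# Hypothetical scores -- higher means the router trusts that subnet more
probs = gate({"inference-subnet": 2.0, "scraper-subnet": 1.0, "storage-subnet": 0.5})
```

A lower temperature concentrates traffic on the top-scoring subnet; a higher one spreads queries across more of them.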

How the router works

Every query goes through four stages:

1. Decompose

The router takes your complex task and breaks it into smaller, atomic sub-tasks. Each sub-task is tagged with the cognitive faculty it requires. For example, "Research the top Bittensor subnets and predict which will grow fastest" becomes:
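
One way to picture the output of this stage is a list of sub-task records, each tagged with a faculty. The wording and tags below are an illustrative guess at how that example query might decompose, not the router's actual output.

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    description: str
    faculty: str  # one of the 10 cognitive faculties

# Hypothetical decomposition of the example query above
subtasks = [
    SubTask("Gather current data on the top Bittensor subnets", "perception"),
    SubTask("Analyze each subnet's recent growth and performance", "reasoning"),
    SubTask("Forecast which subnets will grow fastest", "problem solving"),
]
```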

2. Route

Each sub-task is matched to the best-performing Bittensor subnet(s) for that faculty. The router uses learned weights — it knows from past experience which subnets deliver the best results for which task types. Multiple subnets can be queried in parallel.
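
A minimal sketch of this matching step, assuming the learned weights are stored per faculty. The subnet names and weight values are invented for illustration.

```python
def route(faculty: str, weights: dict[str, dict[str, float]], k: int = 2) -> list[str]:
    """Pick the top-k subnets for a faculty by learned routing weight."""
    ranked = sorted(weights.get(faculty, {}).items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical weight table: faculty -> {subnet: learned weight}
weights = {"reasoning": {"subnet-1": 0.9, "subnet-4": 0.7, "subnet-11": 0.4}}
top = route("reasoning", weights)  # the two highest-weighted reasoning subnets
```

The selected subnets can then be queried concurrently, since each sub-task is independent.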

3. Synthesize

Results from all the subnets come back and are merged into a single, coherent answer. Conflicts are resolved. Redundancies are removed. The synthesis step ensures you get one clear output, not a pile of fragmented responses.
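
At its simplest, the merge step can be sketched as order-preserving deduplication. Real conflict resolution is more involved than this; the sketch only shows the shape of the step.

```python
def synthesize(results: list[str]) -> str:
    """Merge subnet results: drop exact duplicates, keep first-seen order."""
    seen: set[str] = set()
    merged: list[str] = []
    for r in results:
        key = r.strip().lower()  # normalize so trivial duplicates collapse
        if key not in seen:
            seen.add(key)
            merged.append(r.strip())
    return " ".join(merged)
```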

4. Learn

After every query, the router scores the outcome. It tracks which subnet combinations produced the best results and adjusts its routing weights using exponential moving average (EMA) blending — 80% current knowledge, 20% new signal. This prevents wild swings while ensuring continuous improvement.
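
The blend described above is a one-line update: with a 20% learning rate, each new signal moves a weight by at most 20% of the gap between the old value and the new evidence.

```python
def ema_update(old: float, signal: float, alpha: float = 0.2) -> float:
    """Blend 80% current knowledge with 20% new signal (EMA)."""
    return (1 - alpha) * old + alpha * signal

w = 0.50
w = ema_update(w, 1.0)  # good outcome:  ~0.60
w = ema_update(w, 0.0)  # bad outcome:   ~0.48
```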

The cognitive map

Eigentau maps every Bittensor subnet to one or more of DeepMind's 10 cognitive faculties. This creates a live picture of how much "general intelligence" the Bittensor network covers.

| Faculty | What it means | Subnet examples |
|---|---|---|
| Perception | Processing information from the environment | Image recognition, data scraping, web crawling |
| Generation | Producing text, images, audio, code | LLM text generation, image synthesis, audio |
| Attention | Real-time focus on relevant information | Low-latency inference, streaming endpoints |
| Learning | Acquiring knowledge through experience | Distributed training, fine-tuning networks |
| Memory | Storing and retrieving information | Decentralized storage, data curation |
| Reasoning | Drawing conclusions through logic | LLM inference, chain-of-thought subnets |
| Metacognition | Evaluating your own thinking | Annotation, quality evaluation, tagging |
| Executive Functions | Planning, coordination, flexibility | Compute orchestration, prediction markets |
| Problem Solving | Multi-step solutions to complex problems | AI agent subnets, autonomous workflows |
| Social Cognition | Understanding social context | Dialogue, conversation, persona subnets |

The cognitive map isn't static. As new subnets launch and existing ones improve, the map updates. Eigentau's dashboard shows a live radar chart of coverage across all 10 faculties.
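
Computing the radar chart's input from a subnet-to-faculty mapping is straightforward. The example mapping here is invented; the real classifications come from the cognitive-map research in the roadmap.

```python
FACULTIES = ["perception", "generation", "attention", "learning", "memory",
             "reasoning", "metacognition", "executive functions",
             "problem solving", "social cognition"]

def coverage(subnet_map: dict[str, list[str]]) -> dict[str, int]:
    """Count how many subnets cover each faculty (the radar chart's axes)."""
    counts = {f: 0 for f in FACULTIES}
    for faculties in subnet_map.values():
        for f in faculties:
            counts[f] += 1
    return counts

# Hypothetical classifications for two subnets
counts = coverage({"subnet-a": ["perception", "memory"], "subnet-b": ["reasoning"]})
```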

The learning engine

The learning engine is ported from TensorQ's proven self-learning architecture, adapted for routing instead of trading.

After every routing cycle, the engine:

- Scores the outcome of the completed query
- Records which subnet combinations contributed to the result
- Updates its routing weights with EMA blending (80% current knowledge, 20% new signal)

No human tuning. The learning engine runs autonomously. It discovers which routing patterns work through data, not through someone manually setting weights. After 847 cycles, routing accuracy reached 74.2% — and it keeps improving.

$TAU token

The $TAU token powers Eigentau's routing economy on Base (Ethereum L2).

How it works

Fee pipeline

Trading fees from $TAU are converted to TAO via automated swaps. The TAO funds actual subnet queries on Bittensor. This creates a direct link between token demand and real network usage — every $TAU spent results in actual intelligence being produced on Bittensor.
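
As a rough sketch of the pipeline's accounting, assuming a simple swap with a flat fee. The rate, fee percentage, and function name are assumptions for illustration, not the actual mechanism.

```python
def fees_to_tao(fee_usd: float, tao_price_usd: float, swap_fee_pct: float = 0.3) -> float:
    """Convert collected $TAU trading fees (in USD terms) into a TAO query budget.

    Hypothetical accounting: deduct a flat swap fee, then buy TAO at the quoted price.
    """
    net = fee_usd * (1 - swap_fee_pct / 100)
    return net / tao_price_usd
```

Every unit of the resulting TAO budget is spent on real subnet queries, which is the link between token demand and network usage described above.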

Roadmap

| Phase | What | Status |
|---|---|---|
| 1 | Website + brand + domain (eigentau.ai) | Complete |
| 2 | Dashboard app (app.eigentau.ai) | Complete |
| 3 | Cognitive map — research all 129 subnets, classify by faculty | Next |
| 4 | Router agent — task decomposition, multi-subnet routing, synthesis | Planned |
| 5 | Learning engine — self-improving routing weights | Planned |
| 6 | Live backend — connect app to real routing data | Planned |
| 7 | $TAU token launch on Base | Planned |
| 8 | MCP server — expose routing as tools for other AI agents | Planned |

Built on Bittensor. Powered by Anthropic Claude. Inspired by DeepMind's AGI framework.