What to know:
- Bittensor trained a 72B LLM (Covenant-72B) on a decentralized network, the largest of its kind.
- Scores 67.1 on MMLU, comparable to Meta's LLaMA-2-70B, while using global, permissionless compute.
- Signals a shift toward decentralized AI with less reliance on big tech.

Bittensor has marked a significant milestone in artificial intelligence by successfully training a 72B-parameter large language model (LLM) on a decentralized network. The achievement demonstrates the convergence of blockchain and AI and offers an alternative to traditional, centralized model training.
Largest Decentralized LLM Training to Date
The model, named Covenant-72B, was trained on Bittensor's Subnet 3, known as Templar, in the largest decentralized LLM pre-training run to date. Unlike conventionally trained models, Covenant-72B was built with distributed computing power contributed by individual internet users.
More than 70 nodes worldwide took part in the training, with no central infrastructure or data center involved, and participation was permissionless. The run demonstrates that large-scale AI models can be built collaboratively without depending on tech giants or their compute clusters.
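To make the permissionless aspect concrete, here is a minimal, hypothetical Python sketch of a single training round. It is not Templar's actual protocol: it merely illustrates independent nodes computing local gradient updates that are averaged into a shared model, while the set of participating nodes changes freely between rounds.

```python
import random

# Hypothetical sketch of one decentralized training round. This is NOT
# Templar's actual protocol; it only illustrates the idea that independent
# nodes compute local updates that get combined, and that the node set can
# change between rounds without halting the run.

def local_gradient(node_id: int, weights: list[float]) -> list[float]:
    """Stand-in for a node computing a gradient on its local data shard."""
    random.seed(node_id)  # deterministic per node, for illustration only
    return [random.uniform(-0.01, 0.01) for _ in weights]

def training_round(weights: list[float], active_nodes: set[int], lr: float = 1.0) -> list[float]:
    """Average the gradients from whichever nodes are online this round."""
    grads = [local_gradient(n, weights) for n in active_nodes]
    avg = [sum(g[i] for g in grads) / len(grads) for i in range(len(weights))]
    return [w - lr * g for w, g in zip(weights, avg)]

weights = [0.0] * 8          # toy "model" with 8 parameters
nodes = {1, 2, 3}            # permissionless: membership changes freely
for step in range(3):
    weights = training_round(weights, nodes)
    nodes.add(step + 10)     # a new node joins...
    nodes.discard(2)         # ...and another drops out, training continues
```

Because each round simply averages whatever gradients arrive, a node dropping out changes the average but never stalls the run.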
How the Decentralized Training Worked
Bittensor operates as a blockchain-based network in which participants provide compute power in exchange for its native token, TAO.
For this training run, contributors offered GPU compute over the internet, validators assessed the quality of outputs, and rewards were distributed in proportion to the value of each contribution. Participation was dynamic: nodes could join and leave the training process without disrupting it.
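As an illustration of the incentive side, the following sketch shows one plausible way to split an epoch's emission in proportion to validator scores. The score values, the function name distribute_rewards, and the proportional payout rule are all assumptions made for illustration, not Bittensor's actual emission mechanism.

```python
# Hypothetical sketch of contribution-weighted reward distribution.
# The scores and the proportional payout rule are illustrative
# assumptions, not Bittensor's actual emission mechanism.

def distribute_rewards(scores: dict[str, float], epoch_emission: float) -> dict[str, float]:
    """Split an epoch's TAO emission in proportion to validator scores."""
    total = sum(scores.values())
    if total == 0:
        return {node: 0.0 for node in scores}
    return {node: epoch_emission * s / total for node, s in scores.items()}

# Example: three contributors scored by validators on output quality.
scores = {"node-a": 0.92, "node-b": 0.55, "node-c": 0.13}
print(distribute_rewards(scores, epoch_emission=1.0))
# {'node-a': 0.575, 'node-b': 0.34375, 'node-c': 0.08125}
```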
Model Performance and Benchmarks
Covenant-72B performed competitively with centrally trained models: it scored 67.1 on the MMLU benchmark, comparable to Meta's LLaMA-2-70B, after pre-training on approximately 1.1 trillion tokens. These results suggest that decentralized infrastructure can produce models whose performance approaches that of models developed by large AI labs.
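Benchmark numbers like these can, in principle, be reproduced independently. The sketch below shows how one might run MMLU with EleutherAI's open-source lm-evaluation-harness; the model repository id is a placeholder, not a confirmed location for the released checkpoint.

```python
# Hypothetical sketch: reproducing an MMLU score with the
# lm-evaluation-harness (pip install lm-eval). "<org>/Covenant-72B"
# is a placeholder repo id, not the confirmed checkpoint location.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                  # Hugging Face backend
    model_args="pretrained=<org>/Covenant-72B",  # placeholder model id
    tasks=["mmlu"],
    batch_size=1,
)
print(results["results"])  # per-task and aggregate accuracy figures
```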
Why This Matters for AI Development
This milestone addresses a pressing question in the AI industry: can decentralized systems compete with centralized labs?
The implications of this achievement include less dependence on big tech companies, lower barriers to entry for AI development, and more transparency with open-source releases. The weights and checkpoints of the model have been released to the public, allowing for further research and innovation.
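Because the checkpoint is public, it can be loaded for local inference. The sketch below uses the Hugging Face transformers library; the model id is a placeholder for wherever the weights are actually hosted, and a 72B model requires several high-memory GPUs.

```python
# Hypothetical sketch of loading the released checkpoint for inference.
# The model id is a placeholder; device_map="auto" shards the 72B model
# across all available GPUs (requires the accelerate package).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<org>/Covenant-72B"  # placeholder: substitute the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

inputs = tokenizer("Decentralized training means", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```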
Market and Ecosystem Impact
The achievement has also had a noticeable impact on the crypto AI sector. Bittensor's native token, TAO, saw substantial price gains following the announcement, and subnet-related tokens rallied alongside it. Investor and institutional interest in decentralized AI has surged.
The milestone has positioned Bittensor as a leading project at the intersection of blockchain and artificial intelligence.
Challenges and Limitations
However, challenges remain: scaling distributed training beyond 72B parameters without sacrificing efficiency, ensuring model quality in a decentralized environment, and keeping pace with the rapid progress of centralized models.