Yanez and Bitmind have signed a strategic partnership to build a face deepfake detection model on Bittensor.

April 8, 2026

Today we are formalizing a partnership with Bitmind that has been in development for several months. Together, we are building a fine-tuned face deepfake detection model on the Bittensor network.

This article is for the Bittensor community. We want you to understand what the partnership is, why it matters, and where it is headed.

The problem we are solving


Deepfake fraud is no longer a theoretical risk. In 2024, a finance employee at a global engineering firm was tricked into wiring $25 million after a video call with what appeared to be the company’s CFO and senior executives. All of them were AI-generated.


That incident is one data point in a much larger shift. Deepfake fraud attempts in the financial sector have grown 2,137% over the last three years. Identity fraud attempts using deepfakes surged 3,000% in 2023 alone. In 2024, the average cost of a deepfake-related incident for a business was just under $500,000. Generative AI fraud losses in the US are projected to reach $40 billion by 2027.


The attack vector driving most of this is face-based. Fraudsters use face-swap deepfakes and virtual cameras to bypass biometric liveness checks during onboarding, KYC, and account access. The cryptocurrency sector is the most targeted industry, accounting for 88% of all detected deepfake fraud cases. Financial services follow closely.

The detection tools that exist today are built for general AI-generated content. They were not trained on biometric-grade face data, and their accuracy drops significantly when confronted with real-world face deepfake attacks. That is the gap this partnership is built to close.


Deepfake attacks bypassing biometric authentication increased 704% in 2023. The detection tools available today were not built for this.


How this partnership came together


Several months ago, the Yanez team began integrating Bitmind’s AI-generated content detection model into our pipeline. We saw a specific opportunity: their model was built for general AI content detection and had never been trained on biometric-grade face data. That is where Yanez’s dataset and two decades of identity security expertise come in.


We ran the integration, validated the technical fit, and spent the following months working out what a real collaboration could look like. The partnership agreement formalizes what we had already been building.


What each side brings


Bitmind has built an AI-generated content detection model that is now serving enterprise clients. They hold a Top 20 subnet ranking on Bittensor and have established themselves as one of the more proven networks in the ecosystem.


Yanez brings 20+ years of experience in biometrics and identity security, a portfolio of patents, and a proprietary dataset of face images that is the foundation for fine-tuning the joint model. Our existing clients include identity verification providers who already use our synthetic identity images to test and train their systems.


The joint model is trained specifically on face deepfake detection. That is a narrower and harder problem than general AI content detection, and it is the problem that identity verification, onboarding, and access control systems are running into today.


What this means for the Bittensor ecosystem


Two Bittensor subnets with complementary capabilities are building a joint product. Yanez has grown over 50% in the last three months. Bitmind is an established Top 20 network. The joint model is one that neither subnet could build as well alone, and it is aimed at an enterprise market with measurable and growing demand.


This is the kind of collaboration that moves the ecosystem forward. Not two subnets competing for the same ground, but two teams combining what they are each best at to reach a market that decentralized AI has not yet served at this level.


The bigger picture: a Trust Layer for the Internet


Deepfake detection is one piece of something larger that Yanez is building.


The core problem is not just fraud. It is that the internet has lost a reliable way to verify that the entity on the other side of a transaction, a vote, or a conversation is a real, unique human being. That problem is getting harder to solve as generative AI becomes more accessible.


Yanez is building a Trust Layer for the Internet: a proof of humanhood and uniqueness system, decentralized and open source, built on Bittensor. The face deepfake detection model we are building with Bitmind is a foundational component of that system.


Two use cases matter most to this community specifically. The first is Web3 infrastructure: DAO voting, airdrops, and reward systems all have a sybil problem. One person, many wallets. Proof of uniqueness, done at the protocol level, is the clean solution. The second is agentic payments. As AI agents take on more of the financial layer of the internet, the infrastructure that handles payments will need to verify that a human authorized a transaction and cannot repudiate it. That requires proof of humanhood that is cryptographically sound.
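To make the sybil case concrete, here is a minimal sketch of what "one proof of uniqueness, one wallet" enforcement could look like. Everything here is hypothetical for illustration: the `HumanhoodRegistry` class and the proof identifiers are not part of the Yanez or Bitmind design, and a real system would bind proofs cryptographically at the protocol level rather than in an in-memory table.

```python
# Illustrative sketch only: a toy registry that enforces "one person, one
# wallet" by keying on a humanhood proof. All names here are hypothetical
# and do not describe the actual Yanez/Bitmind implementation.

class HumanhoodRegistry:
    """Maps each unique-human proof to at most one wallet address."""

    def __init__(self):
        self._claims = {}  # proof_id -> wallet address

    def register(self, proof_id: str, wallet: str) -> bool:
        """Bind a wallet to a humanhood proof.

        Returns False if this proof already backs a different wallet,
        which is the sybil case: one person, many wallets.
        """
        existing = self._claims.get(proof_id)
        if existing is not None and existing != wallet:
            return False  # second wallet for the same human: rejected
        self._claims[proof_id] = wallet
        return True

registry = HumanhoodRegistry()
assert registry.register("proof-abc", "wallet-1")      # first claim succeeds
assert not registry.register("proof-abc", "wallet-2")  # sybil attempt rejected
assert registry.register("proof-xyz", "wallet-2")      # distinct human succeeds
```

The point of the sketch is the invariant, not the mechanics: whatever produces the proof (a deepfake-resistant face check, in this partnership's case), downstream systems like DAO voting or airdrops only need the guarantee that two wallets cannot share one proof.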


Both of these are problems the Bittensor ecosystem will run into directly. We are building the infrastructure to solve them, and the partnership with Bitmind accelerates that work.


The internet needs a way to verify that the entity on the other side is a real, unique human. We are building that infrastructure on Bittensor.


What comes next


Model development is underway. We will publish technical updates as the work progresses, including benchmark results. More on the proof of humanhood product will follow as it moves toward release.


If you are building in identity verification, fraud prevention, DAO governance, or payments infrastructure and want to understand how this work fits into what you are doing, reach out to the Yanez team directly.


Follow the work

  • Follow @yanez__ai and @BitMindAI for updates as the model develops

  • If you are a validator or staker, both subnets are worth your attention

— The Yanez team
