Denver, Colorado – The convergence of artificial intelligence and blockchain technology took center stage at ETHDenver 2026, with AI agents emerging as a dominant theme spanning everything from autonomous finance to on-chain robotics. As enthusiasm grows for what’s being called the “agentic economy,” a critical question is gaining prominence: how can institutions verify the data used to train their AI systems?
Startup Perle Labs is tackling this challenge head-on, arguing that AI systems require a verifiable chain of custody for training data, particularly in regulated and high-risk environments. The company has raised $17.5 million in funding to date, with its latest round led by Framework Ventures. Additional investors include CoinFund, Protagonist, HashKey, and Peer VC. Perle Labs reports that over one million annotators have contributed more than one billion data points to its platform.
At ETHDenver 2026, BeInCrypto spoke with Ahmed Rashad, CEO of Perle Labs, who previously held operational leadership roles at Scale AI during the company’s period of rapid growth. Rashad discussed data provenance, model collapse, adversarial risks, and why he believes sovereign intelligence will be a prerequisite for deploying AI in critical systems.
BeInCrypto: You describe Perle Labs as a “layer of sovereign intelligence for AI.” For readers unfamiliar with the data infrastructure debate, what does that mean in practice?
Ahmed Rashad: “The term ‘sovereign’ was chosen deliberately, and it has several layers of meaning.
The most literal meaning is control. If you’re a government, a hospital, a defense contractor, or a large enterprise deploying AI in a high-risk setting, you need to own the intelligence behind that system, not cede it to a black box you can’t inspect or audit. Sovereign means you know what data was used to train your AI, who verified it, and you can prove it. Most industries today can’t say that.
The second meaning is independence: acting without external interference. This is also critical for institutions like the Department of Defense, and for companies implementing AI in sensitive environments. You can’t allow critical AI infrastructure to depend on data pipelines you can’t control, verify, or protect from manipulation. This isn’t a theoretical risk. The NSA and CISA have already issued operational guidance treating data supply chain vulnerabilities as a national security issue.
The third meaning is accountability. As AI shifts from generating content to making decisions—whether medical, financial, or military—there needs to be someone who can answer: where did that intelligence come from? Who validated it? Is the record immutable? At Perle, our goal is to have every contribution from every expert annotator recorded on-chain. That data is unchangeable. That immutability is what makes the term ‘sovereign’ apt, not just aspirational.
Practically, we’re building a layer of verification and credentialing. If a hospital deploys an AI diagnostic system, it needs to be able to trace every data point in the training set back to the credentialed professional who validated it. That’s sovereign intelligence. That’s what we mean.”
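To make the idea of a verifiable chain of custody concrete, the sketch below shows one way such provenance could be represented: each annotation is an append-only, hash-linked record tying a data point to the credentialed professional who validated it. The field names, classes, and structure are illustrative assumptions for this article, not Perle Labs’ published schema.

```python
# Illustrative sketch only: a hash-chained provenance log for annotation
# contributions. Field names and structure are hypothetical, not Perle's
# actual on-chain schema.
import hashlib
import json
from dataclasses import dataclass, asdict


@dataclass(frozen=True)
class AnnotationRecord:
    data_point_id: str      # identifier of the training example
    annotator_id: str       # credentialed professional who validated it
    credential: str         # e.g. "board-certified radiologist"
    label: str              # the annotation itself
    prev_hash: str          # hash of the previous record (chain of custody)

    def digest(self) -> str:
        """Deterministic hash of this record's contents."""
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()


class ProvenanceLog:
    """Append-only log; editing any past record breaks the hash chain."""

    def __init__(self) -> None:
        self.records: list[AnnotationRecord] = []

    def append(self, data_point_id: str, annotator_id: str,
               credential: str, label: str) -> AnnotationRecord:
        prev = self.records[-1].digest() if self.records else "GENESIS"
        rec = AnnotationRecord(data_point_id, annotator_id, credential, label, prev)
        self.records.append(rec)
        return rec

    def verify(self) -> bool:
        """Recompute the chain and confirm no record was altered."""
        prev = "GENESIS"
        for rec in self.records:
            if rec.prev_hash != prev:
                return False
            prev = rec.digest()
        return True

    def trace(self, data_point_id: str) -> list[AnnotationRecord]:
        """The hospital's question: who validated this data point?"""
        return [r for r in self.records if r.data_point_id == data_point_id]


if __name__ == "__main__":
    log = ProvenanceLog()
    log.append("scan-0042", "ann-117", "board-certified radiologist", "benign nodule")
    log.append("scan-0043", "ann-205", "board-certified radiologist", "malignant lesion")
    print(log.trace("scan-0042"))   # chain of custody for one data point
    print(log.verify())             # True while the log is untampered
```

Because each record embeds the hash of the one before it, altering any past entry invalidates everything that follows, which is the property that makes an audit trail of this kind trustworthy.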
BeInCrypto: You were part of Scale AI during its period of rapid growth, including major defense contracts and investment from Meta. What lessons from that experience informed your understanding of the gaps in traditional AI data pipelines?
Ahmed Rashad: “Scale is a phenomenal company. I was there as it went from $90 million to $29 billion in valuation, and I directly witnessed where the problems lie.
The fundamental issue is that data quality and scale are at odds. When you’re scaling 100x, the pressure is always to move fast: more data, faster annotation, lower cost per label. What gets sacrificed is precision and accountability. You end up with an opaque pipeline: you roughly know what goes in, you have quality metrics on what comes out, but the process in the middle is a black box. Who verified the data? Were they actually qualified? Was the annotation consistent? Those questions become almost impossible to answer at scale with traditional models.
One other thing I learned is that the human element is almost always treated as a cost to be minimized, rather than a capability to be developed. A transactional model—pay per task and optimize throughput—actually degrades quality over time. It also burns out the best contributors. People capable of providing high-quality annotation with specialized expertise aren’t going to stick around in a gamified micro-task system with extremely low pay. If you want that quality, you have to do it differently.
That understanding is what underpins Perle. The data problem can’t be solved by simply throwing more bodies at it. The solution is to treat contributors as professionals, build a verifiable credentialing system, and make the entire process auditable from start to finish.”
BeInCrypto: You’ve reached one million annotators and over one billion data points assessed. Most data labeling platforms rely on anonymous labor. What’s fundamentally different about your reputation model?
Ahmed Rashad: “The key difference is that at Perle, your work history is owned by you, and it’s permanent. Every time you complete a task, a record of your contribution, the quality level achieved, and how it compares to expert consensus is written on-chain. That data can’t be edited, deleted, or transferred to someone else. Over time, this builds a professional credential that appreciates in value.
Compare that to anonymous work, where one contributor can be replaced by any other. They have no stake in quality because their reputation doesn’t travel; each task is disconnected from the previous one. That incentive structure produces predictable results: the minimum effort needed to clear the task.
Our model is the inverse. Contributors build a verifiable track record. The platform recognizes expertise in specific domains. For example, a radiologist who consistently delivers high-quality medical image annotations will have a profile that reflects that expertise. That reputation unlocks access to higher-value tasks, better compensation, and more meaningful work. The cycle reinforces itself: quality increases because the incentives support quality.
We’ve surpassed one billion data points assessed through our network of annotators. That’s not just a volume number; it’s one billion data contributions that are traceable and attributable to verified humans. That’s the foundation of trustworthy AI training data, and it’s structurally impossible to achieve with anonymous labor.”
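The reputation model Rashad describes, where a track record travels with the contributor and unlocks higher-value work, can be illustrated with a minimal sketch. The scoring thresholds, tier names, and class structure below are hypothetical, invented only to show the feedback loop, not Perle Labs’ actual system.

```python
# Illustrative sketch, not Perle's implementation: per-task quality scores
# (agreement with expert consensus) accumulate into a domain-level
# reputation that gates access to higher-value task tiers.
from collections import defaultdict


class ReputationLedger:
    def __init__(self) -> None:
        # (annotator_id, domain) -> list of agreement scores in [0, 1]
        self.history: dict[tuple[str, str], list[float]] = defaultdict(list)

    def record_task(self, annotator_id: str, domain: str,
                    agreement_with_consensus: float) -> None:
        """Append a record of one completed task (append-only by convention)."""
        self.history[(annotator_id, domain)].append(agreement_with_consensus)

    def reputation(self, annotator_id: str, domain: str) -> float:
        """Simple running average; a real system would weight recency, volume, etc."""
        scores = self.history[(annotator_id, domain)]
        return sum(scores) / len(scores) if scores else 0.0

    def task_tier(self, annotator_id: str, domain: str) -> str:
        """Higher reputation in a domain unlocks higher-value work there."""
        rep = self.reputation(annotator_id, domain)
        if rep >= 0.95:
            return "expert-review"
        if rep >= 0.80:
            return "standard"
        return "entry"


ledger = ReputationLedger()
for score in (0.97, 0.94, 0.98):
    ledger.record_task("ann-117", "medical-imaging", score)
print(ledger.task_tier("ann-117", "medical-imaging"))  # "expert-review"
```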
BeInCrypto: Model collapse is often discussed among researchers but rarely surfaces in public AI conversations. Why do you believe that is, and should the public be more concerned?
Ahmed Rashad: “The topic doesn’t get widespread public attention because it’s a slow-moving crisis, not a dramatic event. Model collapse, where AI systems increasingly trained on AI-generated data degrade in quality, lose detail, and become more homogenized, doesn’t create a single, headline-grabbing incident. It’s a gradual erosion of quality that only becomes noticeable once it’s severe.
The mechanism is simple: the internet is now flooded with AI-generated content. Models trained on that content learn from AI output, not from original human knowledge and experience. Each generation of training further reinforces the distortions of the previous one. This feedback loop runs without natural correction.
Should the public be more concerned? Yes, especially in high-stakes domains. When model collapse affects a content recommendation algorithm, the recommendations just get worse. But when it affects AI models for medical diagnosis, legal systems, or defense intelligence, the consequences are of an entirely different order. There’s no room for quality degradation.
That’s why a layer of human-verified data isn’t optional as AI moves into critical infrastructure. We need a continuous source of genuinely original and diverse human intelligence as training data, not AI output endlessly recycled through other models. We have over a million annotators with real expertise in dozens of fields. That diversity is the antidote to model collapse. You can’t solve it with synthetic data or more compute alone.”
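The feedback loop Rashad describes can be reproduced in a toy simulation: fit a simple model to data, resample from it while slightly under-representing rare cases (a simplified stand-in for how generative models tend to favor typical outputs), and repeat. Diversity erodes generation by generation. The numbers are illustrative only; real model collapse dynamics are more complex.

```python
# Toy illustration of the feedback loop described above. Each "generation"
# fits a distribution to the previous generation's output and resamples from
# it, mildly trimming rare (tail) samples. Diversity, measured here as the
# standard deviation, shrinks steadily.
import random
import statistics

random.seed(42)

# Generation 0: "human" data with genuine diversity.
data = [random.gauss(0.0, 1.0) for _ in range(2000)]

for generation in range(8):
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    print(f"generation {generation}: stdev = {sigma:.3f}")
    # The next generation is trained only on the current model's output,
    # with rare samples under-represented.
    samples = [random.gauss(mu, sigma) for _ in range(4000)]
    data = [x for x in samples if abs(x - mu) < 1.5 * sigma][:2000]
```

Run as written, the printed standard deviation falls from roughly 1.0 toward near zero within a handful of generations, which is the homogenization the interview warns about, compressed into a deliberately simple model.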
BeInCrypto: As AI moves from digital environments into physical systems, what fundamentally changes from a risk, liability, and standards perspective?
Ahmed Rashad: “What changes is the inability to undo. That’s the core of it. When a language model hallucinates, it generates a wrong answer; you can correct it, flag it, and move on. But a robotic surgery system acting on a flawed inference, an autonomous vehicle misclassifying an object, a drone striking the wrong target: those mistakes don’t have an undo button. The cost of failure shifts from embarrassing to catastrophic.
This also changes everything about what standards must apply. In the digital world, AI development can be iterative and self-correcting. In physical systems, that model doesn’t work. The training data behind those systems must be verified *before* deployment, not audited after an incident.
Accountability also shifts. In the digital world, responsibility can be easily diffused: was it the model? The data? The way it was deployed? But in physical systems, especially when humans are harmed, regulators and courts will demand clear answers. Who trained this system? With what data? Who verified that data, and to what standard? Companies and governments that can answer those questions are the ones who will be allowed to operate. Those who can’t will face legal risks they never anticipated.
We created Perle specifically for this transition: data that is human-verified, sourced from experts, and auditable on-chain. As AI starts operating in warehouses, operating rooms, and battlefields, the intelligence layer underneath it *must* meet a different standard. That’s the standard we’re building.”
BeInCrypto: How real is the threat of data poisoning or adversarial manipulation in AI systems today, especially at a national level?
Ahmed Rashad: “The threat is real and documented, and those with access to classified information are treating it as a national security priority.
DARPA’s GARD program (Guaranteeing AI Robustness Against Deception) has spent years developing defenses against adversarial attacks on AI systems, including data poisoning. In 2025, the NSA and CISA jointly issued guidance explicitly warning that data supply chain vulnerabilities and maliciously modified training data are credible threats to the integrity of AI systems. This isn’t a theoretical paper. It’s operational guidance from agencies that don’t issue warnings for hypothetical risks.
The attack surface is vast. If you can corrupt the training data of an AI system used for threat detection, medical diagnosis, or logistics optimization, you don’t need to hack the system directly: you alter how the system perceives the world. This is a far more sophisticated and harder-to-detect attack vector than a traditional cyberattack.
The $300 million contract Scale AI holds with the DoD’s CDAO to leverage AI in classified networks exists largely because the government understands it can’t use AI trained on unverified public data in sensitive environments. At this level, the question of data provenance isn’t academic. It’s an operational necessity.
What’s often missing from public discussion is that this isn’t just a government problem. Any company operating AI in a competitive environment—financial services, pharmaceuticals, critical infrastructure—has potential exposure to adversarial data they may not even be aware of. The threat is real. The defenses are still under development.”
BeInCrypto: Why can’t governments or large corporations build this verification layer themselves? What’s the real answer when that question is asked?
Ahmed Rashad: “Some are trying. And those who try quickly discover where the real problem lies.
Building the technology is the easy part. The hard part is the network. Credentialed, verified experts (radiologists, linguists, lawyers, engineers, scientists) don’t magically appear just because you build a platform. You have to recruit them, credential them, build incentives to keep them engaged, and develop consensus mechanisms that make their contributions meaningful at scale. That takes years, and it takes expertise that most government agencies and corporations don’t have internally.
The second issue is diversity. A government agency building its own verification layer will draw from a limited, homogeneous pool of people. The value of a global network of experts isn’t just credentials; it’s perspective, language, cultural context, and specialization that can only come from operating at scale across many regions. We have over a million annotators. You can’t replicate that inside a single organization.
The third issue is incentive design. Keeping high-quality contributors engaged requires compensation that is transparent, fair, and programmable. Blockchain infrastructure enables this in a way internal systems typically can’t: immutable contribution records, direct attribution, and verifiable payments. Government procurement systems aren’t designed for that kind of efficiency.
The honest answer, when the question is asked, is this: you’re not just buying a tool. You’re getting access to a network and a credentialing system that took years to build. The alternative isn’t ‘build it yourself’; it’s ‘accept the data-quality risk of going without it.’”
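As a rough illustration of the programmable compensation Rashad alludes to, the sketch below computes a payout from an attributable, recorded quality score rather than a flat per-task fee. The rates, thresholds, and field names are invented for the example and are not Perle Labs’ payment logic.

```python
# Illustrative only: a programmable payout rule keyed to a recorded quality
# score, with direct attribution to the contributor. All figures are
# hypothetical, not Perle's actual compensation scheme.
from dataclasses import dataclass


@dataclass(frozen=True)
class Contribution:
    annotator_id: str
    task_id: str
    base_rate: float                 # payment for an accepted task, in USD
    agreement_with_consensus: float  # quality score in [0, 1]


def payout(c: Contribution) -> float:
    """Quality-weighted compensation; below-threshold work earns nothing."""
    if c.agreement_with_consensus < 0.6:
        return 0.0                   # rejected: too far from expert consensus
    # Bonus of up to 50% for agreement above 0.9 (example thresholds).
    bonus = 0.5 * max(0.0, c.agreement_with_consensus - 0.9) / 0.1
    return round(c.base_rate * (1.0 + bonus), 2)


print(payout(Contribution("ann-117", "task-9001", 4.00, 0.97)))  # 5.4
print(payout(Contribution("ann-300", "task-9002", 4.00, 0.55)))  # 0.0
```

Because the rule is a pure function of recorded, attributable data, any party holding the contribution record can recompute and verify what was owed, which is the kind of transparency the interview contrasts with internal procurement systems.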
BeInCrypto: If AI becomes core national infrastructure, where will the layer of sovereign intelligence sit in that chain five years from now?
Ahmed Rashad: “In five years, I believe the system will mirror the function of financial audits today—a non-negotiable verification layer sitting between data and deployment, backed by regulation and clear professional standards.
Right now, AI development operates without an equivalent of the financial audit. Companies self-report on their training data. There’s no independent verification, no professional certification of the process, no third-party attestation that the intelligence in the model meets defined standards. We’re in the self-certification era, the financial world before Sarbanes-Oxley.
As AI becomes critical infrastructure—for electricity, healthcare, financial markets, defense networks—that model will become untenable. Governments will mandate auditability. Procurement processes will require verified data as a condition of contract. Liability frameworks will demand accountability for failures that could have been prevented with verification.
Perle will be the verification and credentialing layer in that chain: an entity capable of producing an immutable, auditable record of model training data with clear provenance. Five years from now, that won’t be an add-on to AI development; it will be a prerequisite.
The broader point is that sovereign intelligence isn’t just about defense contractors. It’s the foundation that enables AI to be deployed anywhere the stakes are real. And as AI permeates more contexts like that, that foundation becomes the most valuable part of the entire chain.”