
Here we report on the progress of the leading builders in the ZK/AI ecosystem, documenting recent significant releases, technical breakthroughs, and general updates.
Featuring: @billions_ntwk, @gizatechxyz, @0xHolonym, @lagrangedev, @NexusLabs, @PolyhedraZK, @spaceandtime, & @zk_agi.

Publications
@billions_ntwk published an article explaining how @HSBC partnered with Billions Network to launch reusable KYC, issuing verifiable credentials via the HSBC Trusted ID app that customers store in their own wallet: https://billions.network/blog/how-hsbc-turned-onboarding-friction-into-growth-with-reusable-kyc
Instead of re-uploading documents for PayMe or loans, users share ZKPs attesting that they are already verified, which cuts onboarding time, improves conversion, protects privacy, and shows regulators that compliance and convenience can align.
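The core idea is that a one-time attestation can be checked by any later relying party without re-collecting documents. The sketch below illustrates that flow in miniature; it uses an ordinary MAC as a stand-in for a real ZK proof over a verifiable credential, and all names (issuer key, claim fields) are hypothetical, not the HSBC Trusted ID or Billions API.

```python
import hmac, hashlib, json

# Hypothetical sketch: an issuer attests "KYC passed" once, and any relying
# party holding the issuer's verification key can check the credential
# without seeing the underlying documents. A real deployment would use a
# ZK proof over a verifiable credential, not a bare MAC.

ISSUER_KEY = b"issuer-demo-key"  # stand-in for the issuer's signing key

def issue_credential(user_id: str) -> dict:
    """Issuer signs a claim that the user passed KYC."""
    claim = {"sub": user_id, "kyc": "passed"}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": tag}

def verify_credential(cred: dict) -> bool:
    """Relying party checks the issuer's attestation on the claim only."""
    payload = json.dumps(cred["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])

cred = issue_credential("alice")
assert verify_credential(cred)      # reusable across apps
cred["claim"]["kyc"] = "failed"
assert not verify_credential(cred)  # tampering is detected
```

The privacy gain in the real system comes from the ZK layer: the relying party learns only the predicate ("already verified"), never the documents behind it.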
European Blockchain Sandbox
Billions has been selected for the EU’s third European Blockchain Sandbox cohort, using @PrivadoID to pilot a privacy-preserving age-verification system: https://billions.network/blog/building-trust-in-europe-billions-network-selected-for-european-blockchain-sandbox
The project combines on-device AI and ZKPs to let users confirm age without revealing personal data, aligning with GDPR, eIDAS 2.0, and EU AI Act rules. The pilot will be developed with regulators and tested across selected EU platforms.
Partnerships
Billions and @Agglayer announced a partnership that makes Billions the official identity layer for Agglayer’s cross-chain network: https://billions.network/blog/billions-becomes-agglayers-official-identity-layer
The integration lets users verify their identity once through Billions, using ZKPs, and reuse those credentials across any app or chain connected to Agglayer.
Builders gain unified KYC, Sybil-resistant tools, and cross-chain identity features, while the collaboration aims to reduce fragmentation and improve trust across modular Web3.

Autonomous Treasury Standard
@gizatechxyz introduced its Autonomous Treasury Standard, a system that lets crypto treasuries manage and optimize stablecoin holdings without human intervention: https://x.com/gizatechxyz/status/1991556211733569895
The update addresses the issue of more than $45B in underutilized treasury capital, which often remains idle due to governance delays and operational overhead. @lootproject became the first adopter, depositing $2M to let Giza’s agents monitor markets, rebalance positions, and pursue higher yields around the clock.
Publications
Giza shared a detailed post covering how the project has spent three years developing zkML to address the core issue of unverifiable AI computation: https://x.com/gizatechxyz/status/1988302180202512688
The thread explains that today’s models operate as black boxes, leaving users unable to confirm whether predictions were produced using the stated architecture or weights.
Giza presents ZKPs as a way to audit AI logic without re-running models, highlighting its open-source LuminAIR framework and early production use cases.
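The "black box" problem the thread describes reduces to binding an inference result to a committed model. The toy below shows the commitment half; the auditor's naive recomputation stands in for the succinct proof that a zkML system such as LuminAIR would produce, which is exactly the step that lets verification happen without re-running the model. The linear model and all names are illustrative.

```python
import hashlib, struct

# Minimal sketch of the problem zkML addresses: binding an inference result
# to a committed set of model weights. Here the "proof" is naive
# recomputation by an auditor who holds the weights; a zkML system would
# replace that with a succinct proof checked without access to the model.

def commit(weights: list[float]) -> str:
    """Publish a binding commitment to the model weights."""
    data = b"".join(struct.pack(">d", w) for w in weights)
    return hashlib.sha256(data).hexdigest()

def infer(weights: list[float], x: list[float]) -> float:
    """Toy linear model standing in for a real network."""
    return sum(w * xi for w, xi in zip(weights, x))

# Provider publishes a commitment to its weights up front.
weights = [0.5, -1.25, 2.0]
published = commit(weights)

# Later it claims: "output y came from the committed model on input x".
x = [1.0, 2.0, 3.0]
y = infer(weights, x)

# Auditor checks the commitment and recomputes the output.
assert commit(weights) == published
assert infer(weights, x) == y
```

Without the commitment, a provider could silently swap weights between calls; without the proof (here, recomputation), a user cannot tell whether the stated architecture produced the answer.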

@0xHolonym published two articles outlining its vision for human-aligned digital infrastructure in an era shaped by AI and increasingly autonomous systems:
• The first article presents the Covenant as an interactive platform for collective stewardship of human-aligned technology. It describes how users join, verify their humanity, and contribute Artifacts that explore futures shaped by AI, while highlighting the residency launch at Edge City and the role of curated cultural work in guiding responsible technological development: https://human.tech/blog/a-covenant-for-collective-stewardship-of-human-aligned-technology
• The second article introduces Wallet-as-a-Protocol, a model designed for a world where AI agents interact across apps and chains. It replaces rented WaaS wallets with protocol-level, portable, recoverable self-custody that resists common attack vectors. The article outlines key capabilities - privacy-preserving identity, constrained delegation to AI agents, universal portability - and details the initial WaaP rollout unveiled at WalletCon Argentina: https://human.tech/blog/unveiling-wallet-as-a-protocol
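"Constrained delegation to AI agents" is the most concrete of the capabilities listed above: the wallet owner grants an agent a scoped session rather than raw key access. The sketch below is a hypothetical illustration of that envelope (allowed actions, spend cap, expiry), not the WaaP API; all field names are invented.

```python
import time
from dataclasses import dataclass

# Hypothetical sketch of constrained delegation: an AI agent acts through a
# scoped session the owner granted, never through the owner's full key.
# Names and fields are illustrative, not the Wallet-as-a-Protocol API.

@dataclass
class Delegation:
    allowed_actions: set   # e.g. {"swap", "transfer"}
    spend_cap: int         # max cumulative spend, in smallest units
    expires_at: float      # unix timestamp
    spent: int = 0

    def authorize(self, action: str, amount: int) -> bool:
        """Approve the agent's action only inside the granted envelope."""
        if time.time() > self.expires_at:
            return False
        if action not in self.allowed_actions:
            return False
        if self.spent + amount > self.spend_cap:
            return False
        self.spent += amount
        return True

d = Delegation({"swap"}, spend_cap=100, expires_at=time.time() + 3600)
assert d.authorize("swap", 60)         # within scope and cap
assert not d.authorize("transfer", 1)  # action not delegated
assert not d.authorize("swap", 50)     # would exceed the cap
```

The design point is that a compromised or misbehaving agent can lose at most the delegated envelope, which is what makes agent access compatible with self-custody.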

Publications
@lagrangedev wrote a detailed post highlighting how AI now drives decisions across the defense stack and why cryptographic proof is needed to ensure these systems behave as intended: https://x.com/lagrangedev/status/1988039509666394494
The project describes how DeepProve verifies that models run the right logic on the right data, protects sensor inputs, and makes decisions traceable without exposing classified information.
A recent demo in @anduriltech’s Lattice environment showed DeepProve applied to autonomous defense operations.
Partnerships
Lagrange announced three new partnerships, including:
• @SDCCOE: joined the center and began collaborating with its members to advance cybersecurity, AI modernization, and Zero Trust initiatives. Through this partnership, the company expanded its work on deploying DeepProve to verify autonomous systems and secure mission-critical data pipelines: https://x.com/lagrangedev/status/1988662563731865620
• @Oracle: joined Oracle’s partner ecosystem and began integrating DeepProve into OCI’s sovereign and mission-cloud environments. The collaboration focuses on bringing cryptographically verifiable AI to defense and government infrastructures, covering model verification, sensor provenance, and mission-workflow integrity: https://lagrange.dev/blog/lagrange-joins-oracle-partner-network
• @LockheedMartin: joined Lockheed Martin’s supplier network to support AI-assurance and Zero Trust programs across aerospace and defense. DeepProve will be used to ensure cryptographically provable correctness of AI systems in avionics, ISR, mission planning, and multi-domain operations: https://lagrange.dev/blog/lockheed-martin-supplier-ecosystem

@JensGroth16, Chief Scientist at @NexusLabs, participated in a fireside chat at @HouseofZK’s Verifying Intelligence event, where he outlined how ZKPs have progressed from theory to practical infrastructure and now shape emerging approaches to verifiable AI: https://blog.nexus.xyz/verifying-intelligence-with-jens-groth/
Jens went on to describe Nexus’s focus on applying ZK to financial systems that require both technical correctness and economic security - noting that fully verifying AI training remains unrealistic today, but targeted verification is becoming viable.
He also emphasized that broader adoption depends on standards, education, and public trust, not mathematics alone.
The full interview recording will be released soon across House of ZK platforms.

@PolyhedraZK is continuing its “Why zkML?” series, using real-world industry developments to show how increasingly autonomous AI systems create new risks when their reasoning and actions cannot be independently verified:
• Financial automation: @Intuit’s move to agentic AI in Credit Karma and TurboTax raises the issue of verifying credit, tax, and financial decisions, with zkML ensuring strategies and logic remain auditable without exposing user data: https://x.com/PolyhedraZK/status/1988230407629004810
• Local system actions: @Microsoft’s Copilot Local Actions turns AI into an on-device operator, prompting the need for proofs that executed actions matched user intent and adhered to defined policies: https://x.com/PolyhedraZK/status/1988955100061307205
• Industrial autonomy: @nvidia and @ROKAutomation’s edge AI for factories highlights how physical systems require verifiable execution traces to ensure models stay within operational constraints: https://x.com/PolyhedraZK/status/1990957693843222822
• AI-driven cyberattacks: @AnthropicAI’s disclosure of an autonomous Claude-led intrusion shows how invisible reasoning and unverified tool use create new attack surfaces, making zkML a foundation for provable agent behavior: https://x.com/PolyhedraZK/status/1991491759344668764
• Enterprise knowledge systems: @OpenText’s next-generation AI platform demonstrates that when organizational data is interpreted by models, zkML is needed to prove which inputs, policies, and model versions shaped the conclusions: https://x.com/PolyhedraZK/status/1993273548560036042

@spaceandtime introduced a developer toolkit on @ethereum that lets smart contracts run fast, low-cost ZK-proven queries: https://x.com/spaceandtime/status/1989386363859542044
The kit supports aggregation across onchain data and external datasets within block time, using sub-second proofs powered by Proof of SQL.
The ZK coprocessor is designed specifically for verified SQL queries: it enables analytics over large datasets in under a second, with onchain verification costing about 200k gas, and is aimed at DeFi protocols, institutions, and app developers.
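The value of the coprocessor pattern is that the contract never re-executes the query; it only checks a proof against the claimed result. The simulation below sketches that flow end to end, using an HMAC by a trusted prover key as a stand-in for the actual Proof of SQL verification; function names and the toy query are illustrative, not the Space and Time API.

```python
import hmac, hashlib, json

# Sketch of the flow a consuming contract sees with a ZK coprocessor:
# submit a query, get back (result, proof), accept the result only if the
# proof verifies. The HMAC is a trusted-signer stand-in for the real
# Proof of SQL check, which needs no trusted prover.

PROVER_KEY = b"demo-prover-key"

def prove_query(sql: str, table: list[dict]) -> tuple:
    """Off-chain prover: run the query and attest to the result."""
    result = sum(row["amount"] for row in table)  # toy SUM(amount) query
    msg = json.dumps({"sql": sql, "result": result}).encode()
    proof = hmac.new(PROVER_KEY, msg, hashlib.sha256).hexdigest()
    return result, proof

def verify_query(sql: str, result: int, proof: str) -> bool:
    """Onchain-style verifier: cheap check, no access to the table."""
    msg = json.dumps({"sql": sql, "result": result}).encode()
    good = hmac.new(PROVER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, proof)

table = [{"amount": 40}, {"amount": 2}]
result, proof = prove_query("SELECT SUM(amount) FROM t", table)
assert verify_query("SELECT SUM(amount) FROM t", result, proof)
assert not verify_query("SELECT SUM(amount) FROM t", result + 1, proof)
```

The cheap-verification side of this asymmetry is what the quoted ~200k gas figure refers to: verification cost stays flat regardless of how large the queried dataset is.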

Zynapse SDK
@zk_agi highlighted its early work with x402, showing how the Zynapse SDK lets developers add blockchain-based micropayments to any API with minimal code: https://x.com/zk_agi/status/1988373615059017921
The project noted that the SDK is built to simplify payment flows for API-driven and usage-based products. Upcoming additions include multi-party payment splitting, privacy-preserving transactions, and support for multiple assets.
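The mechanics an x402-style SDK wraps are a two-step HTTP handshake: the API responds 402 with payment details, the client pays and retries with a payment reference, and the server verifies settlement before serving. The simulation below is a simplified placeholder for that handshake; the header name, settlement model, and price fields are illustrative, not the Zynapse SDK or the x402 wire format.

```python
# Sketch of the HTTP 402 flow behind x402-style micropayments. Settlement
# is simulated in-process; a real deployment settles onchain via a
# facilitator before the retry succeeds.

PRICE = 10        # smallest units per call (illustrative)
settled = set()   # payment ids confirmed by the simulated payment rail

def settle(payment_id: str) -> None:
    """Simulated payment rail: mark a payment as settled."""
    settled.add(payment_id)

def api_call(headers: dict) -> tuple[int, dict]:
    """Metered endpoint: demand payment first, then serve the data."""
    pid = headers.get("X-PAYMENT")
    if pid not in settled:
        # Quote the price instead of serving the response.
        return 402, {"price": PRICE, "asset": "USDC", "pay_to": "0xdemo"}
    return 200, {"data": "premium result"}

# First request: no payment attached, so the server quotes a price.
status, body = api_call({})
assert status == 402

# Client pays the quote, then retries with the payment reference.
settle("pay-001")
status, body = api_call({"X-PAYMENT": "pay-001"})
assert status == 200
```

Because the whole exchange lives in request headers and status codes, this is the "minimal code" claim in practice: existing APIs add metering without changing their endpoints' payloads.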
Monthly Report
ZkAGI published a monthly report for October: https://medium.com/zkagi/zkagi-monthly-progress-report-october-2025-a24175a8208b
Key takeaways:
• They introduced ZKProof Create and Verify API endpoints, now fully documented and testable in ZkTerminal, with early validation across healthcare and privacy-preserving analytics use cases.
• The Star.fun community sale raised $8.9k but was refunded in full to maintain transparency and preserve long-term alignment.
• Livestream sessions increased technical engagement, and upcoming work focuses on a prover network, broader API-level utility enhancements, and deeper TEE-based proof generation and verification.
Integrations
Finally, they integrated @Chain_GPT’s Web3 AI chatbot into ZkTerminal to bring real-time, source-aware crypto intelligence directly into the trading flow: https://chaingpt.org/case-studies/how-zkagi-turbocharged-user-engagement-and-decision-speed-with-chaingpts-web3-chatbot?utm_source=twitter&utm_medium=social&utm_campaign=zkagi_case_study
According to ChainGPT’s case study, the integration required two hours of engineering work and helped cut decision time by 60%, reduce external bounce-outs, and increase in-app trade starts. The partnership also coincided with higher retention as users relied less on outside data sources.