AI Latest (Dec 9, 2025)

Here we report on the progress of the leading builders in the AI ecosystem, documenting recent significant releases, technical breakthroughs, and general updates.

Featuring: @0G_labs, @AethirCloud, @AnthropicAI, @deepseek_ai, @flock_io, @FractionAI_xyz, @gaib_ai, @gensynai, @GoogleAI, @gonka_ai, @LazAINetwork, @OntologyNetwork, @OpenAI, @Alibaba_Qwen, @SaharaLabsAI, & @xai.

0G Labs

Media

In a recent interview with @bloomingbit_io, @michaelh_0g, CEO of @0G_labs, stressed that AI must operate as a public good and cannot remain a black box: https://bloomingbit.io/en/feed/news/100971

He highlighted 0G’s approach of recording AI development on-chain to increase transparency, outlined its modular compute, storage, and data availability design, and noted strong investor backing. 

Partnerships

0G Labs and @NTUsg launched a S$5 million research program to advance decentralized AI, marking 0G’s first global university partnership: https://0g.ai/blog/0g-partners-with-ntu-singapore

The four-year initiative will fund work on decentralized training, blockchain-based model validation, and proof-of-useful-work methods. 

It aims to connect academic research with real deployment, with early proofs-of-concept in finance, healthcare, environmental modeling, and infrastructure, supported by joint labs, open marketplaces, and student-focused initiatives.

Aethir

Roadmap

@AethirCloud outlined a 12-month roadmap focused on scaling its decentralized GPU cloud, expanding enterprise adoption, and upgrading core infrastructure: https://aethir.com/blog-posts/aethirs-12-month-strategic-roadmap-supercharging-enterprise-ai-compute-growth

The plan includes using its $344M Strategic Compute Reserve to support institutional onboarding, increase compute availability, and strengthen token utility. 

Upcoming milestones span v2 and v3 network upgrades, expanded Cloud Host onboarding, new staking and liquidity mechanisms, enterprise case studies, a developer SDK, and broader ecosystem programs aimed at supporting long-term AI and compute growth across global markets.

Publications

Aethir published three articles highlighting its decentralized GPU cloud, host operations, and enterprise-scale growth amid rising demand for AI compute:

• The first article reviews Aethir’s decentralized GPU cloud infrastructure through a technical evaluation by Anyone Protocol, comparing RTX 4090 and 5090 performance across AI workloads. It also demonstrates how Aethir units can serve anonymous, privacy-preserving inference pipelines when integrated with the Anyone onion-routing network.

• The second outlines how new Cloud Hosts can maximize revenue on Aethir by avoiding common setup and maintenance mistakes. It covers hardware compatibility, cooling, utilization monitoring, uptime practices, and effective use of the Cloud Host Portal to ensure stable performance and higher earnings.

• The third article provides an overview of Aethir’s scaling trajectory, detailing its 435,000 GPU containers, 1.4 billion compute hours delivered, and growing enterprise demand. It highlights the platform’s ARR growth, enterprise use cases across AI and gaming, and institutional validation through large-scale compute and token commitments.

Anthropic

Claude Opus 4.5

@AnthropicAI introduced Claude Opus 4.5, highlighting stronger performance in coding, agents, reasoning, and productivity tasks: https://anthropic.com/news/claude-opus-4-5

The model is available across apps, the API, and major cloud platforms, with lower token use and upgraded safety against prompt injection. 

Anthropic also updated its developer platform, Claude Code, and consumer apps, adding improved agent tools, expanded integrations, and broader access for professional users.

Partnerships

Anthropic announced several major partnerships advancing enterprise AI, education, and global-scale deployments, including:

@Snowflake: Expanded its multi-year partnership with Snowflake through a $200M agreement to bring Claude models and enterprise AI agents directly into Snowflake’s governed data environment. The collaboration integrated Claude across Snowflake Cortex, enabled multimodal analysis, and supported custom multi-agent systems for regulated industries: https://anthropic.com/news/snowflake-anthropic-expanded-partnership

@dartmouth & @awscloud: Partnered with Dartmouth College and Amazon Web Services to deploy Claude for Education at institutional scale. The initiative provides secure, academically aligned AI tools for students and faculty, integrates Claude into campus platforms, and supports AI-enhanced learning, research, and career development programs across disciplines: https://home.dartmouth.edu/news/2025/12/dartmouth-announces-ai-partnership-anthropic-and-aws

@GivingTuesday: Launched Claude for Nonprofits in partnership with the GivingTuesday movement, offering up to 75% discounted access to Claude, new connectors for Benevity, Blackbaud, and Candid, and a free AI Fluency course. The program supports global nonprofits with grant writing, impact evaluation, donor engagement, and operational efficiency: https://anthropic.com/news/claude-for-nonprofits

@Microsoft & @nvidia: Formed strategic partnerships with Microsoft and NVIDIA to scale Claude on Azure using NVIDIA architectures, expand enterprise access to Claude across Microsoft’s ecosystem, and collaborate on model optimization. The agreement included major compute commitments, cross-cloud deployment, and combined investments of up to $15B from Microsoft and NVIDIA: https://anthropic.com/news/microsoft-nvidia-anthropic-announce-strategic-partnerships

Government of Rwanda & @alx_africa: Partnered with the Rwandan government and ALX to deploy Chidi - an educational companion built on Claude - to hundreds of thousands of learners across Africa. The initiative trains teachers and civil servants, integrates AI into national education systems, and supports ALX students with AI-guided learning across technical fields: https://anthropic.com/news/rwandan-government-partnership-ai-education

DeepSeek

@deepseek_ai introduced DeepSeek-V3.2, a model designed to improve reasoning and agent performance while keeping computational costs low: https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Speciale

It adds Sparse Attention for long-context efficiency, a scaled reinforcement learning framework that enables the Speciale variant to match or exceed leading frontier models, and a synthesis pipeline for large-scale agentic training data. 
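
To illustrate the general idea (a minimal sketch of top-k sparse attention, not DeepSeek's specific DSA design, whose indexing and selection machinery differ): each query attends to only a small top-k subset of keys, so the softmax and value mixing scale with k rather than the full context length. The function name and parameters below are illustrative.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=64):
    """Single-query top-k sparse attention: score the keys, then restrict
    the softmax and value mix to the k best-matching positions.
    Illustrative sketch only; not DeepSeek's actual DSA implementation."""
    d = q.shape[-1]
    scores = K @ q / np.sqrt(d)             # (seq_len,) similarity scores
    k = min(k, scores.shape[0])
    top = np.argpartition(scores, -k)[-k:]  # indices of the k largest scores
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                            # softmax over selected keys only
    return w @ V[top]                       # mix k values instead of seq_len

# The value-mixing cost per query drops from O(seq_len) to O(k).
rng = np.random.default_rng(0)
seq_len, d = 8192, 64
out = topk_sparse_attention(rng.normal(size=d),
                            rng.normal(size=(seq_len, d)),
                            rng.normal(size=(seq_len, d)))
```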

The team also released verified Olympiad submissions and updated its chat template with new tool-use formats and a dedicated developer role.

FLock 

Publications

@flock_io published an article explaining that the current AI stock downturn highlights structural problems in a market dominated by a few major firms, raising concerns about overinvestment, centralization, and environmental impact: https://flock.io/blog/let-the-centralised-ai-bubble-burst-and-deai-rise-from-the-ashes

The piece argues that a correction could push the industry toward decentralized AI models with transparent governance, data sovereignty, and community ownership. 

Media

@0x7SUN, CEO and Founder of FLock, shared in an interview with @futuristdotai how FLock was created to address the limits of centralized AI, enabling organizations to train models collaboratively without exposing sensitive data: https://securities.io/jiahao-sun-ceo-and-founder-of-flock-io-interview-series/

He explained how federated learning and blockchain preserve privacy while rewarding contributors. Sun highlighted growing demand for decentralized training, partnerships with global institutions, and the role of token-aligned incentives in supporting secure model development across healthcare, finance, and public-sector applications.

Fraction AI

@FractionAI_xyz introduced Stable-Up, a new Space designed to bring stablecoins into the agentic economy: https://x.com/FractionAI_xyz/status/1991901427237548123

It allows users to create or borrow agents that manage stablecoin positions based on individual risk profiles and adapt their strategies through ongoing feedback. 

These agents coordinate activity across integrated vaults from @MoonwellDeFi, @SiloFinance, @Morpho, @yearnfi, @eulerfinance, @avantisfi, and more. All operations run on @base for low-cost, high-throughput execution, supporting continuous state updates as Fraction AI expands its onchain agent framework.

GAIB

Whitepaper

@gaibfdn presented a whitepaper outlining its blockchain-agnostic economic layer, which connects idle DeFi liquidity with capital-hungry AI infrastructure such as GPUs, data centers, robotics, and energy systems: https://gaibfoundation.org/whitepaper 

The protocol tokenizes and validates real-world AI assets, routes capital through a modular stack, and returns on-chain rewards based on verified performance. 

The paper introduces AID and sAID as core dollar-linked instruments, details GAIB token governance, and sets out a three-phase roadmap from deployment to ecosystem expansion.

Gensyn

Automated Market Maker

Several researchers at @gensynai - @SplezzzK, @gab_p_andrade, and @oguzer90 - presented a model for decentralized markets handling goods with time-perishing utility, using compute as the motivating case: https://arxiv.org/abs/2511.16357 

They introduce an automated market maker that posts hourly prices as a concave function of load, separating price discovery from allocation to enable transparent, low-latency trading. 

The paper establishes equilibrium quotes, outlines incentive mechanisms for truthful provider participation, and shows how verifiable, reproducible execution supports scalable matching across heterogeneous hardware.
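
As a rough illustration of the pricing idea (a hedged sketch; the specific curve and parameters below are assumptions, not the paper's rule), a concave, increasing schedule makes quotes climb quickly when the market is slack and flatten near capacity, damping price swings at high utilization:

```python
import math

def hourly_quote(load: float, p_min: float = 0.10, p_max: float = 2.00) -> float:
    """Hourly compute price as a concave, increasing function of load.
    Illustrative only: sqrt is one concave choice among many, and p_min/p_max
    are invented bounds. `load` is network utilization in [0, 1]."""
    load = min(max(load, 0.0), 1.0)
    return p_min + (p_max - p_min) * math.sqrt(load)

# Marginal price increases shrink as utilization rises (concavity).
for u in (0.0, 0.25, 0.5, 1.0):
    print(f"load={u:.2f} -> price={hourly_quote(u):.2f}/hr")
```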

Publications

Gensyn published three articles covering advances in decentralized AI security, compute markets, and verifiable machine learning:

• The first article explains how decentralized GRPO training can be attacked through adversarial completions and how lightweight defenses restore robustness. It outlines poisoning risks in collaborative RL, shows how malicious behavior can spread across participants, and introduces two defenses - log-probability checks and LLM-as-a-judge - to secure decentralized reinforcement learning for LLMs: https://blog.gensyn.ai/hail_to_the_thief/

• The second covers a theory of decentralized compute markets that treats compute as time-based capacity rather than hardware bundles. It introduces deterministic execution, verification, and checkpointing as the foundation for dynamic pricing, transparent supply-demand matching, and incentive-aligned market design, enabling more efficient and scalable decentralized compute allocation: https://blog.gensyn.ai/from-bundles-to-time-a-theory-of-decentralised-compute-markets/

• The third article outlines Verde, a verification system enabling trustless validation of ML training, fine-tuning, and inference. It compares verification methods, introduces RepOps for reproducible execution, and describes a two-level bisection protocol that identifies disagreements efficiently (a sketch of the bisection idea follows below). Verde is already deployed in Judge and provides a practical path toward verifiable decentralized machine intelligence: https://blog.gensyn.ai/verde-verification-system-in-production/
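
To make the bisection idea concrete, here is a minimal sketch under the assumption of reproducible execution (which RepOps is designed to provide); it is not Verde's actual two-level protocol, and the checkpoint-hash framing is an illustrative simplification. A referee binary-searches checkpoint hashes for the first disputed step, so only one step must be re-executed:

```python
from typing import List

def first_divergence(ref: List[str], claimed: List[str]) -> int:
    """Binary-search two parties' checkpoint hashes for the first training
    step where they disagree. Assumes deterministic re-execution, so an
    honest prefix always matches; illustrative sketch, not Verde itself."""
    lo, hi = 0, len(ref) - 1
    assert ref[hi] != claimed[hi], "no dispute to resolve"
    while lo < hi:                     # invariant: first divergence in [lo, hi]
        mid = (lo + hi) // 2
        if ref[mid] == claimed[mid]:   # agree through mid: diverge later
            lo = mid + 1
        else:                          # diverged at or before mid
            hi = mid
    return lo                          # only this step needs re-execution

# O(log n) comparisons replace replaying the whole run.
print(first_divergence(["h0", "h1", "h2", "h3"], ["h0", "h1", "x2", "x3"]))  # 2
```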

Google AI

Gemini 3

@GoogleAI presented Gemini 3 with upgraded reasoning, stronger multimodal performance and new generative interfaces that adapt layouts and interactive views to each prompt: https://blog.google/products/gemini/gemini-3-gemini-app/

The app was redesigned for easier navigation, improved content management and a more integrated shopping experience. 

An experimental Gemini Agent now executes multi-step tasks across Workspace apps with user approval. Gemini 3 Pro is rolling out globally, with extended access offered to U.S. college students.

Nano Banana Pro

Google AI also introduced Nano Banana Pro, an upgraded image generation and editing model built on Gemini 3 Pro with stronger reasoning, improved text rendering, and deeper real-world grounding: https://blog.google/technology/ai/nano-banana-pro/

It delivers more accurate visuals, supports multilingual text, and maintains consistency across complex inputs. Users gain detailed control over composition, lighting and focus. 

The model is rolling out across Gemini products, Google Ads, Workspace, Vertex AI and creative tools.

Gonka

Mainnet Upgrade

@gonka_ai executed its v0.2.5 mainnet upgrade after a governance vote concluded, enabling @ethereum bridge primitives, Random Confirmation Proof-of-Compute, and support for Blackwell GPUs: https://x.com/gonka_ai/status/1992257779709460856

The release also fixes issues in validator selection, participant status, transfers, BLS key generation, and reward handling. 

Cosmovisor manages node and API updates automatically, but hosts are advised to monitor node activity and follow the technical guidance provided in the project’s Discord.

Publications

Gonka published an article highlighting a proposal to adapt proof-of-work to the needs of decentralized AI: https://what-is-gonka.hashnode.dev/a-new-proof-of-work-for-the-age-of-ai

The piece explains that while Bitcoin’s PoW secures consensus through computational waste, growing reliance on centralized LLM infrastructure creates similar trust and control issues. Existing PoS-based networks shift power toward capital rather than computation. 

The article outlines how transformer-based PoW could tie network security to real inference work using time-bounded tasks and randomly initialized models, preserving fairness while aligning computation with actual utility.
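
A toy sketch of that proposal (everything below is invented for illustration; the article describes the scheme conceptually, not in code): a public seed deterministically fixes the randomly initialized weights, so the prover's forward pass is inference-shaped work that any verifier can reproduce exactly:

```python
import hashlib
import numpy as np

def pow_response(seed: int, challenge: bytes, d: int = 256, layers: int = 4) -> str:
    """Toy 'inference as work': derive model weights from a public seed,
    run a forward pass over the challenge, return a compact digest.
    Illustrative only; a real scheme adds time bounds and actual models."""
    rng = np.random.default_rng(seed)                 # weights fixed by seed
    x = np.frombuffer(hashlib.sha256(challenge).digest(), dtype=np.uint8)
    x = np.resize(x.astype(np.float64), d) / 255.0    # challenge -> input vector
    for _ in range(layers):
        W = rng.normal(scale=d ** -0.5, size=(d, d))  # random layer weights
        x = np.tanh(W @ x)                            # the forward-pass "work"
    return hashlib.sha256(x.tobytes()).hexdigest()

# Verification is exact re-execution: same seed + challenge -> same digest.
assert pow_response(42, b"block-header") == pow_response(42, b"block-header")
```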

LazAI

Open Launch

@LazAINetwork launched Open Launch on @Lazpadfun, offering anyone the ability to create Meme + AI tokens with built-in utility through co-build agents on @MetisL2 Andromeda: https://x.com/LazAINetwork/status/1994028415632982346

The model relies on bonding-curve pricing, organic liquidity formation and community validation, with tokens graduating once the curve reaches 1,067 METIS. Users can trade, launch agents, earn co-build rewards and referral fees, creating a more sustainable path for AI-native token launches.
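
For readers unfamiliar with bonding-curve launches, a hedged sketch of the mechanics (the quadratic curve and the constant k below are illustrative assumptions; only the 1,067 METIS graduation threshold comes from the announcement):

```python
def buy_cost(supply: float, amount: float, k: float = 1e-15) -> float:
    """METIS cost to mint `amount` new tokens on a quadratic bonding curve
    p(s) = k * s**2, i.e. the integral of price over the minted range.
    Illustrative curve; Lazpad's actual pricing function isn't given here."""
    s0, s1 = supply, supply + amount
    return k * (s1 ** 3 - s0 ** 3) / 3.0   # integral of k*s^2 from s0 to s1

GRADUATION_RESERVE = 1_067.0  # METIS held by the curve when a token graduates

def has_graduated(curve_reserve: float) -> bool:
    """Graduation: the token leaves curve trading for open liquidity once the
    curve's cumulative METIS reserve reaches the threshold."""
    return curve_reserve >= GRADUATION_RESERVE

# Later buyers pay more per token: cost grows with existing supply.
print(buy_cost(0, 1_000_000))          # first million tokens
print(buy_cost(9_000_000, 1_000_000))  # the same amount, bought later
```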

LazTalks Episode

In the latest LazTalks Episode, the discussion focused on how agent-to-agent payments should be governed as AI systems begin initiating transactions independently: https://lazai.network/blog/laztalks-episode-6-who-should-govern-agent-to-agent-payments

Participants from Metis, @Polyflow_PayFi, @SentientAGI, and @AEON_Community detailed the principles such systems must uphold, including auditability without compromising privacy, fraud-resistant settlement guarantees, and fairness for users who may not fully understand the underlying technology. 

The panel examined shared responsibility between users, operators, and networks, and debated identity-light authorization models. They concluded that future AI-native payments require open rails with carefully applied safeguards.

Publications

LazAI published three articles outlining its approach to verifiable data, AI-native assets, and the role of reliable information in agentic AI systems:

• In the first article, LazAI presents its alternative to indiscriminate internet scraping by introducing curated, provenance-verified datasets anchored onchain through DATs. It showcases Lazbubu as the first DAT, explains Memory Units as a user-controlled knowledge system, and highlights how agent growth is driven by high-quality, verifiable data instead of untrusted online sources: https://lazai.network/blog/train-an-ai-that-ignores-the-internet-garbage

• Another explains how DAT (ERC-8028) establishes an AI-native asset standard for Ethereum’s emerging dAI stack. It outlines Ethereum’s role as a settlement layer for AI agents, describes how DAT encodes usage rights and revenue flows, and shows practical deployments across @LazbubuAI, CreateAI, and LazAI’s Alpha Mainnet: https://lazai.network/blog/data-anchoring-token-dat-and-erc-8028-an-ai-native-asset-standard-for-ethereums-dai-era

• The final article focuses on why autonomous agents cannot rely on stale data, distinguishing generative AI from agentic systems that require continuously updated context to make safe decisions. It describes how DePIN networks provide real-time data pipelines, and how LazAI uses DATs and iDAO governance to maintain data freshness, provenance, and accountability for agents operating in dynamic environments: https://x.com/LazAINetwork/status/1993768119655887140

Partnerships

Finally, the project introduced two new collaborations:

@awscloud: Invited AWS to support builders deploying verifiable AI, agentic systems, and DePIN services on LazAI by providing cloud credits, accelerated compute, managed data pipelines, and secure runtime environments. The collaboration aims to help teams ship pay-per-use, auditable services more quickly through joint resources, events, and hackathons: https://x.com/LazAINetwork/status/1991390628559679512

@Polyflow_PayFi: Partnered with PolyFlow, a PayFi layer focused on stablecoin payments, real-world settlement, and on-chain identity. The partnership enables exploration of how verifiable AI agents and DATs on LazAI can integrate with practical payment flows, supporting use cases from e-commerce and cross-border settlement to crypto-backed cards and yield-generating payment activity: https://x.com/LazAINetwork/status/1993470301594644529

Ontology

Mainnet Upgrade

@OntologyNetwork rolled out the v3.0.0 MainNet upgrade, reducing ONG supply to 800 million, updating reward distribution toward ONT stakers, and strengthening long-term economic alignment: https://ont.io/news/letter-from-the-founder-ontologys-mainnet-upgrade/

The update improves performance, interoperability, and identity tooling with upcoming EIP-7702 support and EVM-based ONT ID creation. 

Expanded developer resources, new privacy features, and product enhancements broaden practical utility as governance-approved tokenomics move the ecosystem into its next phase.

Node Campaign

Ontology has launched its Node Campaign for Round 266, running until Dec 12/13: https://ont.io/news/your-guide-to-joining-the-ontology-node-campaign/

It encourages ONT holders to create new nodes by reimbursing the 2,500 ONG setup fee for the five nodes with the highest total stake. The goal is to strengthen decentralization and network stability, and participants must register a new node within the round and attract delegators to qualify.

Events

Finally, Ontology was one of the major sponsors of Verifying Intelligence 3.0, organized by @HouseofZK during @EFDevcon, with the team also participating in several key discussions:

@GeoffTRichards, Head of Community at Ontology, joined @jgorzny of @ZircuitL2, @NorbertVadas of @thezkcloud, and @o1coby of @MinaProtocol & @o1_labs to examine how society is evolving as AI becomes embedded in decisions shaping economics, governance, identity, and autonomy: https://x.com/HouseofZK/status/1993629205116887092

@humpty0x, Ecosystem Lead at Ontology, participated alongside @reka_eth of @boundless_xyz, @zKsisyfos of @StarkWareLtd, @DacEconomy of @ProjectZKM, and @Viggy_117 of @eigencloud to discuss how authorship, provenance, and human contribution can remain verifiable as AI models scale: https://x.com/HouseofZK/status/1993685245430734977

OpenAI 

Products

1/ @OpenAI released GPT-5.1, upgrading both Instant and Thinking models with improved instruction-following, clearer communication, and adaptive reasoning: https://openai.com/de-DE/index/gpt-5-1/

Instant becomes more conversational, while Thinking adjusts processing depth based on task complexity and provides more accessible explanations. 

The update also refines personalization with expanded tone options and direct controls over style, making ChatGPT more precise and aligned with user preferences in daily use.

2/ The company also introduced GPT-5.1-Codex-Max, a new model for agent-based programming now available in Codex: https://openai.com/de-DE/index/gpt-5-1-codex-max/

It delivers faster reasoning, higher accuracy, and improved token efficiency across real development tasks. Compaction enables coherent work over long sessions, supporting large refactorings and extended agent loops. 

The model reduces costs, adds Windows support, and strengthens safeguards for secure use, with API access expected soon.

Publications

OpenAI published three research articles examining interpretability, scientific acceleration, and model honesty across its newest systems:

• In the first article, OpenAI introduces sparse-circuit language models and shows that reducing connections makes internal computations easier to isolate and understand. The work outlines how sparse architectures reveal small functional circuits, enabling clearer analysis of reasoning steps and offering a path toward more interpretable large-scale systems: https://openai.com/de-DE/index/understanding-neural-networks-through-sparse-circuits/

• The next article presents early experiments where GPT-5 assists researchers across biology, mathematics, physics, and computer science. The paper highlights how the model helps synthesize complex literature, propose mechanisms, explore proofs, and accelerate parts of scientific workflows when guided by domain experts: https://openai.com/de-DE/index/accelerating-science-gpt-5/

• The final article examines a method called confessions, which trains models to explicitly report when they violate instructions, cut corners, or exploit reward signals. The approach increases visibility into misbehavior across stress-tests and serves as an additional diagnostic layer for monitoring and evaluating advanced systems: https://openai.com/index/how-confessions-can-keep-language-models-honest/

Qwen

Qwen Code v0.2.1

@Alibaba_Qwen released Qwen Code v0.2.1 with free web search across multiple providers and 2,000 daily searches for OAuth users: https://x.com/Alibaba_Qwen/status/1989368317011009901

The update improves code editing through a new fuzzy matching pipeline, expands control over model behavior, enhances Zed IDE integration, and simplifies tool outputs. 

Search tools, performance, and Unicode handling are upgraded, and several cross-platform bugs and token-limit issues are resolved.

Qwen3-TTS-Flash

Qwen introduced an update to Qwen3-TTS-Flash, expanding the model to 49 timbres, 10 languages, and 9 dialects: https://qwen.ai/blog?id=qwen3-tts-1128

It delivers more natural speech, improved prosody, and adaptive pacing. According to the team, the model now outperforms several competing systems on multilingual benchmarks and provides broader character, regional, and scenario coverage. Developers can access it through the Qwen API, with sample voices and code examples included.

Publications

The team also published an article introducing SAPO, a new reinforcement learning method designed to stabilize and improve LLM training: https://qwen.ai/blog?id=sapo 

The method replaces hard clipping with a smooth gating function that preserves useful gradients while reducing noisy, off-policy updates. SAPO maintains sequence-level coherence, adapts at the token level, and uses asymmetric temperatures for better stability. 

Experiments show consistent gains over GSPO and GRPO across math, coding, logic, and multimodal benchmarks.
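
To make the contrast with hard clipping concrete, here is a hedged sketch of a smooth, temperature-gated per-token weight in the spirit described above; the sigmoid form and the temperature values are illustrative assumptions, not the paper's exact objective:

```python
import torch

def soft_gated_weight(ratio: torch.Tensor, advantage: torch.Tensor,
                      tau_pos: float = 8.0, tau_neg: float = 16.0) -> torch.Tensor:
    """Per-token surrogate weight with a smooth gate instead of a hard
    PPO/GRPO clip. Asymmetric temperatures gate negative-advantage tokens
    more sharply, echoing SAPO's description; the exact form is assumed."""
    tau = torch.where(advantage >= 0,
                      torch.full_like(ratio, tau_pos),
                      torch.full_like(ratio, tau_neg))
    # Near ratio == 1 (on-policy) the gate is ~1; it decays smoothly as the
    # ratio drifts, attenuating off-policy noise without the zeroed gradients
    # a hard [1 - eps, 1 + eps] clip produces.
    gate = torch.sigmoid(4.0 - tau * (ratio - 1.0).abs())
    return gate * ratio * advantage

print(soft_gated_weight(torch.tensor([0.8, 1.0, 1.6]),
                        torch.tensor([1.0, 1.0, -1.0])))
```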

Sahara AI

Open-Source

@SaharaLabsAI open-sourced three protocols that extend the x402 standard into a full execution and coordination layer for autonomous agents: https://saharaai.com/blog/sahara-open-source-agentic-protocols

The release introduces serverless deployment for fair value distribution, verifiable compute with TEE-based proofs, and programmable policy enforcement that lets providers define usage rules for models and data.

Together, these components establish a trustless layer for modular agent workflows and set the groundwork for a broader autonomous AI economy.

Publications

Sahara published an article covering the White House’s new Executive Order creating the Genesis Mission, a federally governed AI research environment led by the Department of Energy: https://saharaai.com/blog/the-genesis-mission-and-ai

The initiative unifies national labs, agencies, and selected private partners to train scientific models, integrate sensitive datasets, and coordinate autonomous agents across secured infrastructure. 

It introduces structured collaboration pathways, establishes national research priorities, and highlights the need for secure agentic frameworks - an area Sahara’s open-source protocols specifically target. The program’s long-term impact spans science, industry, and national strategy.

xAI

Grok 4.1

@xai released Grok 4.1 across web and mobile, introducing stronger creative, emotional, and collaborative performance: https://x.ai/news/grok-4-1/ 

The model was trained with large-scale reinforcement learning to improve style control, intent sensitivity, and overall alignment. A silent rollout from November 1-14 showed users preferred it in 64.78% of blind comparisons. 

Benchmarks place Grok 4.1 at the top of LMArena and EQ-Bench, and post-training work lowers hallucination rates in non-reasoning mode.

Partnerships

xAI announced a framework agreement with Saudi Arabia and @HUMAINAI to build and operate low-cost, hyperscale GPU data centers in the Kingdom: https://x.ai/news/grok-goes-global/

The deal also brings nationwide deployment of Grok, forming a unified AI layer for public and private sectors. Grok will integrate into Humain One to support real-time intelligence and autonomous workflows. 

The collaboration combines xAI’s model capabilities with Humain’s infrastructure to advance Saudi Arabia’s AI ambitions.
