
Here we report on the progress of the leading builders in the AI ecosystem, documenting recent significant releases, technical breakthroughs, and general updates.
Featuring: @0G_labs, @AethirCloud, @AnthropicAI, @flock_io, @FractionAI_xyz, @gaib_ai, @gensynai, @GoogleAI, @gonka_ai, @GoKiteAI, @OntologyNetwork, @LazAINetwork, @OpenAI, @Alibaba_Qwen, @QuackAI_AI, @SaharaLabsAI, & @SentientAGI.

Monthly Report
@0G_labs published its October update, highlighting progress toward smoother deployments, higher reliability, and clearer patterns for how agents operate across the network: https://0g.ai/blog/tech-update-oct-2025
Highlights include:
• Compute improvements added settlement previews, refined contract efficiency, streamlined provider workflows, and advanced verification readiness.
• AIverse completed CVM testing, fixed concurrency issues, expanded deployment options, and continued TEE security checks.
• Node and restaking systems improved reward access and expanded automated validator testing.
• Storage updates introduced stronger error handling, better monitoring, and more reliable large-data processing.
Tech
0G Labs announced a testnet upgrade, releasing 0G Chain v3.0.3 for all nodes: https://x.com/0G_labs/status/1988167314563625224
The upgrade introduces an improved validator API and more reliable transaction broadcasting. Node operators are required to install the update to keep their systems aligned with the network and ensure continued stability and access to new functions.
Events
0G Labs is one of the main sponsors of Verifying Intelligence 3.0, an event co-hosted by House of ZK, @brevis_zk, and @invisiblgarden during Devcon in Buenos Aires: https://luma.com/xevcy7za
@AytuncYildizli, Head of Ecosystem at 0G Labs, will join the agenda for a fireside chat scheduled as part of the speaking program: https://x.com/HouseofZK/status/1988591359301681244
Identity
The project has introduced the .0g domain, an on-chain identity system developed with @SPACEID: https://0g.ai/blog/introducing-0g-domain
Each name connects directly to an EVM wallet, giving users and AI agents a single, human-readable identifier that works across applications in the 0G ecosystem. The domain is intended to make identity portable, verifiable, and wallet-native as agents interact and move value onchain. Minting for 0G domains is planned to begin in early 2026.
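The core idea of the .0g system is a human-readable name that resolves to an EVM wallet. A minimal sketch of that mapping is below; the `NameRegistry` class and its methods are illustrative assumptions, not 0G's actual contract or API.

```typescript
// Hypothetical sketch of a name -> wallet registry in the spirit of .0g domains.
// The class, method names, and validation rule are assumptions for illustration.
type Address = `0x${string}`;

class NameRegistry {
  private records = new Map<string, Address>();

  register(name: string, owner: Address): void {
    if (!name.endsWith(".0g")) throw new Error("expected a .0g name");
    if (this.records.has(name)) throw new Error(`${name} is already taken`);
    this.records.set(name, owner);
  }

  // Resolve a human-readable name to the EVM wallet it points at.
  resolve(name: string): Address | undefined {
    return this.records.get(name);
  }
}

const registry = new NameRegistry();
registry.register("alice.0g", "0x1111111111111111111111111111111111111111");
console.log(registry.resolve("alice.0g"));
```

The point of the design is that both humans and AI agents can address the same wallet through one portable identifier, rather than copying raw addresses between applications.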

Partnerships
@AethirCloud will provide the GPU infrastructure powering @Velvet_Capital’s DeFAI Operating System, supporting its on-chain research, trading, and portfolio tools: https://aethir.com/blog-posts/velvet-capital-taps-aethir-to-power-defai-operating-system-with-strategic-compute-reserve-access
Through Predictive Oncology’s Strategic Compute Reserve, Velvet gains cost-efficient, bare-metal GPU access to scale its AI features. The platform, active across major blockchains with over 100,000 users, will rely on Aethir’s global network to run real-time, AI-driven strategies and expand its agent-based financial capabilities.
Publications
Aethir has published a series of articles showing how decentralized GPU infrastructure, strategic compute reserves, and new financial models can underpin the next era of AI:
• The first article explains how distributed Strategic Compute Reserves protect AI workloads from GPU scarcity and single-cloud outages, allowing enterprises to keep training and inference online even when a major provider or region fails: https://aethir.com/blog-posts/distributed-strategic-compute-reserves-securing-ais-future-in-a-gpu-constrained-world
• The second piece explores why bare-metal GPUs outperform virtualized setups, arguing that Aethir’s decentralized GPU cloud delivers predictable, low-latency performance and major cost savings for training and large-scale inference: https://aethir.com/blog-posts/bare-metal-vs-virtualized-gpus-performance-matters
• The third lays out the vision of a decentralized “supercloud,” where Aethir’s globally distributed GPU network and Strategic Compute Reserve turn fragmented capacity into an investable, revenue-generating AI infrastructure layer: https://aethir.com/blog-posts/the-decentralized-supercloud-why-distributed-compute-will-power-the-next-industrial-era
• The fourth examines the financial shift from CapEx-heavy hardware buying to flexible GPU-as-a-Service, showing how Aethir’s consumption-based model lets organizations scale AI using OpEx while accessing cutting-edge GPUs without massive upfront spend: https://aethir.com/blog-posts/the-financial-shift-in-ai-scaling-why-gpu-as-a-service-is-replacing-capital-expenditure
• The fifth article looks at Strategic Compute Reserves through the lens of national security and crisis response, arguing that Aethir’s resilient, globally distributed GPU pool should function like a Strategic Petroleum Reserve for emergency AI workloads: https://aethir.com/blog-posts/beyond-the-chip-strategic-compute-reserves-for-national-security-and-crisis-response
• The latest piece tackles the sustainability challenge, describing how Aethir combines energy-efficient GPUs, decentralized utilization of idle hardware, and token-driven incentives to build green, energy-aware Strategic Compute Reserves for AI: https://aethir.com/blog-posts/green-compute-building-energy-efficient-strategic-gpu-reserves-for-sustainable-ai

Agent Skills
@AnthropicAI introduced Agent Skills for Claude, enabling it to load task-specific instructions, scripts, and resources when relevant: https://claude.com/blog/skills
Skills help Claude handle specialized work, from spreadsheets to brand-aligned documents, and can be used across Claude apps, Claude Code, and the API. Developers can create, version, and manage custom skills, install them via plugins, and share them across teams while controlling secure code execution.
Claude Code
Anthropic has launched a research preview of Claude Code on the web, letting developers run coding tasks in the browser through cloud-managed sessions: https://claude.com/blog/claude-code-on-the-web
Users can link GitHub repositories, start parallel jobs, track progress, and receive automatic pull requests. The tool supports routine fixes, backend changes, and project inquiries, and is also available in an early iOS version. All tasks operate in isolated sandbox environments designed to protect code and credentials.
Publications
The project shared an article explaining how startups use AI agents to cut repetitive work, access scarce expertise, and maintain quality while moving quickly: https://claude.com/blog/building-ai-agents-for-startups
It cites gains in accounting, education, security, hiring, and research, where teams save time and reduce costs. The piece describes starting with low-risk tasks, building reusable capabilities, and adding human oversight.
It recommends modular designs and transparent decisions as systems grow to support broader adoption across startups.
Integrations
Lastly, the project has introduced new integrations that link Claude with @Microsoft365, allowing the assistant to draw context from documents, email, and calendars: https://claude.com/blog/productivity-platforms
The update supports SharePoint, OneDrive, Outlook, and Teams, enabling unified search across company data. A shared project helps employees access organizational knowledge in one place. Both the connector and enterprise search are available to Team and Enterprise customers after administrators enable them.

AI Arena V2.1
@flock_io has released AI Arena V2.1, updating its decentralised training platform with a fairer reward system: https://flock.io/blog/ai-arena-v21-now-live-fairer-rewards-in-the-flockio-deai-training-platform
The upgrade separates reward pools for nodes with and without delegations, so node operators training models no longer see their earnings diluted. It also shifts most final-day rewards into a 30-day vesting period to limit sell pressure. User activity remains unchanged, with the update focused solely on distribution mechanics.
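The two distribution changes can be sketched as separate pro-rata pools plus a linear unlock schedule. The pool sizes and the linear 30-day curve below are assumptions for illustration, not FLock's published parameters.

```typescript
// Illustrative sketch of V2.1's distribution mechanics: separate reward pools
// for delegated vs. non-delegated nodes, and 30-day vesting of final-day rewards.
// Numbers and the linear schedule are invented for this example.

// Split a pool pro rata among nodes; delegated and non-delegated nodes
// draw from different pools, so one group cannot dilute the other.
function distribute(pool: number, stakes: number[]): number[] {
  const total = stakes.reduce((a, b) => a + b, 0);
  return stakes.map((s) => (pool * s) / total);
}

// Fraction of a vesting reward unlocked `day` days after grant (linear over 30 days).
function vestedFraction(day: number, vestingDays = 30): number {
  return Math.min(Math.max(day / vestingDays, 0), 1);
}

const trainingPool = distribute(1000, [10, 30, 60]); // non-delegated nodes only
console.log(trainingPool); // [100, 300, 600]
console.log(vestedFraction(15)); // 0.5 — half unlocked at day 15
```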
Publications
Flock has released three articles showcasing how its decentralised AI and federated learning stack is being applied to real-world impact, shaping AI governance, and solving the fundamental limitations of centralised FL:
• In the first article, the team highlights their collaboration with the UNDP SDG Blockchain Accelerator across seven global projects, showing how blockchain-powered deAI and federated learning can deliver instant climate payouts, enable carbon markets for farmers, empower artisans, improve healthcare data governance, support wildlife conservation, and bring transparent digital finance to vulnerable communities: https://flock.io/blog/undps-sdg-blockchain-accelerator-is-improving-lives-in-developing-countries-with-deai
• The second article argues that tech companies cannot wait for governments to regulate AI and must adopt stronger self-governance. It calls for transparency, accountability, and data sovereignty, explaining how decentralised AI - powered by blockchain and federated learning - offers a path to trustworthy systems that avoid centralisation risks and respect national and user-level control over data: https://flock.io/blog/tech-companies-shouldnt-wait-for-policymakers-to-regulate-ai
• The third explains why blockchain is not a buzzword for Flock but a core requirement for secure, scalable federated learning. It contrasts the weaknesses of centralised FL with Flock’s decentralised FL, where blockchain coordination, consensus, and token incentives prevent data leaks, resist malicious actors, and enable large-scale, privacy-preserving model training across sensitive industries: https://flock.io/blog/blockchain-isnt-a-clich-at-flockio-its-essential-for-scaling-federated-learning

@FractionAI_xyz has expanded its Universal Liquidity Management agents to DeFi on @avax, adding automated oversight to liquidity and risk management: https://medium.com/@fractionai/fraction-ai-expands-universal-liquidity-management-agents-to-defi-on-avalanche-3448ec768551
The agents monitor lending rates, pool yields, and short-lived market signals and can reallocate capital across established protocols to pursue risk-adjusted returns.
Users review on-chain performance data, choose an agent, deposit funds, and receive pool tokens reflecting their share. A separate simulation phase helps validate strategies before they operate with live liquidity at scale.

AI Dollar
@gaib_ai has launched the AI Dollar (AID) and the yield-linked token sAID across @ethereum, @arbitrum, @base and @BNBCHAIN: https://blog.gaib.ai/the-ai-dollar-is-live/
AID is an ERC-20 synthetic dollar backed by U.S. Treasuries and stable assets, while sAID reflects the value of a portfolio of AI and robotics infrastructure financing. Both tokens use @LayerZero_Core’s omnichain standard. The release follows GAIB’s earlier AID Alpha phase, with migration routes available as the project prepares broader chain support and secondary market access.
Partnerships
GAIB has announced two new partnerships aimed at expanding real-world robotics financing and on-chain data infrastructure:
• @Primech_AI: Signed a memorandum of understanding with Primech, a robotics firm behind the HYTRON autonomous restroom-cleaning robot. The partnership explores tokenized financing models to support large-scale HYTRON deployments, offering the Gaib community potential access to real-world robotics investment opportunities: https://blog.gaib.ai/exploring-a-strategic-partnership-with-primech-ai/
• @campnetworkxyz: Partnered to create a full-stack ecosystem for robotic data as intellectual property (IP). Camp provides the on-chain enforcement layer with programmable data licensing, while Gaib brings the financial layer to help robotics projects raise capital against their verified data streams. The collaboration aims to turn robotic data into a yield-generating asset class, powering the next generation of embodied AI: https://blog.gaib.ai/gaib-x-camp-robotic-data-as-intellectual-property-how-gaib-and-camp-network-are-building-a-new-economy-on-chain/

CodeAssist
@gensynai introduced CodeAssist, a local AI coding assistant that learns from users’ edits and adapts to their coding style: https://blog.gensyn.ai/introducing-codeassist/
The system keeps all data on the device and uses a fixed coding model together with a trainable decision model that determines when to assist. It provides contextual support in a LeetCode-style environment, and the open-source prototype continues to develop through user interaction.
CodeZero
Gensyn unveiled CodeZero as an expansion of RL-Swarm, moving the system into collaborative coding: https://blog.gensyn.ai/codezero-extending-rl-swarm-toward-cooperative-coding-agents/
The environment uses proposer, solver, and evaluator roles to generate tasks, produce solutions, and judge results through a rule-based reward model that avoids executing code. Difficulty shifts with solver performance, forming a stable network that supports ongoing cooperation on real programming challenges and encourages steady model improvement.
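The proposer/solver/evaluator loop can be sketched in a few lines. The reward rule and difficulty update below are invented for illustration; Gensyn's actual reward model and curriculum are not published in this form.

```typescript
// Minimal sketch of a proposer/solver/evaluator loop like the one CodeZero
// describes. All rules here are illustrative assumptions.
interface Task { difficulty: number; prompt: string }

function propose(difficulty: number): Task {
  return { difficulty, prompt: `write a function at difficulty ${difficulty}` };
}

// Rule-based evaluator: scores a solution without executing any code,
// e.g. by checking structural properties of the submitted text.
function evaluate(solution: string): number {
  return solution.includes("function") ? 1 : 0;
}

// Difficulty shifts with solver performance: harder after success, easier after failure.
function nextDifficulty(current: number, reward: number): number {
  return Math.max(1, current + (reward > 0 ? 1 : -1));
}

let difficulty = 3;
const task = propose(difficulty);
const solution = "function solve() { return 42; }"; // stand-in solver output
const reward = evaluate(solution);
difficulty = nextDifficulty(difficulty, reward);
console.log({ reward, difficulty }); // { reward: 1, difficulty: 4 }
```

The key design choice the article highlights is that the evaluator never runs submitted code, which keeps the reward signal cheap and safe while difficulty adapts to solver performance.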

Products and Updates
1/ @GoogleAI has launched Google Skills, a platform offering nearly 3,000 courses, labs, and credentials across AI and technical fields: https://blog.google/outreach-initiatives/education/google-skills/
It brings content from @GoogleCloud, @GoogleDeepMind and other programs into one place, with hands-on labs, gamified features and tools for organizations. The platform supports skills-based hiring through employer partnerships and provides many no-cost learning options for students, developers and institutions.
2/ Google also outlined several updates in October’s Gemini Drop: https://blog.google/products/gemini/gemini-drop-october-2025/
• Veo 3.1 enhances textures, camera control and dialogue in generated videos.
• Gemini 2.5 Flash offers clearer step-by-step help and better image understanding.
• Canvas produces themed slide decks from a topic or source.
• LaTeX tools improve copying, editing and PDF export.
• Gemini now serves as a conversational voice assistant on Google TV.
3/ The company has updated NotebookLM with a larger one-million-token context window, extended conversation memory and improved information retrieval using newer Gemini models: https://blog.google/technology/google-labs/notebooklm-custom-personas-engine-upgrade/
Testing shows higher satisfaction on responses based on extensive source material. Conversations are now saved automatically, with options to delete history. Users can also set specific goals or roles for chats, allowing the system to adjust its behavior to different project needs.
4/ NotebookLM’s latest update adds Deep Research, a tool that automates online research by creating plans, scanning hundreds of sites and producing organized reports that can be added directly to a notebook: https://blog.google/technology/google-labs/notebooklm-deep-research-file-types/
Users can keep working while it gathers sources. The update also expands supported file types, including Google Sheets, Drive URLs, images, PDFs from Drive and Word documents, with features rolling out over the coming weeks.
Funding
Google gathered education and technology experts in London to discuss AI’s role in learning and announced $30 million for related projects: https://blog.google/outreach-initiatives/education/ai-learning-commitments/
The company highlighted new partnerships, including Estonia’s national AI initiative, and expanded tools like Gemini and YouTube’s conversational features. Google also released early research showing modest gains from AI-assisted tutoring and will run further studies while funding global groups focused on accessible AI education.
Partnerships
Finally, Google and Reliance Intelligence announced a partnership to expand access to Google’s AI tools in India: https://blog.google/around-the-globe/google-asia/reliance-jio-india-partnership/
Millions of Reliance customers will receive the Google AI Pro plan free for 18 months, including Gemini 2.5 Pro, expanded image and video creation limits, NotebookLM, and 2 TB of cloud storage. Eligible users can activate the offer through the MyJio app.

Publications
@gonka_ai published two articles exploring how decentralized training and a proof-of-work AI network are reshaping how developers build and scale AI applications:
• The first article explains how AI training has moved beyond centralized data centers, highlighting the communication bottlenecks that once limited global scaling and showing how federated optimization and DiLoCo-style algorithms make it possible to train large models over standard internet connections. It also covers recent breakthroughs, emerging research directions, and the trust challenges of open, permissionless training networks: https://what-is-gonka.hashnode.dev/beyond-the-data-center-how-ai-training-went-decentralized
• The second shows how to build a production-ready AI app on Gonka step by step, using a simple React frontend and a small Node proxy that signs OpenAI-style requests. It walks through setup, request signing, model selection, streaming support, and prompting logic, illustrating how developers can deploy real applications on Gonka with minimal overhead: https://what-is-gonka.hashnode.dev/build-a-productionready-ai-app-on-gonka-endtoend-guide
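The guide's proxy pattern — a small server-side helper that signs OpenAI-style request bodies before forwarding them — can be sketched as follows. The HMAC scheme and header names are assumptions for illustration; Gonka's actual signing scheme is defined by its own SDK and documentation.

```typescript
// Hedged sketch of a signing proxy for OpenAI-style requests.
// The HMAC-SHA256 scheme and the X-Signature header are illustrative
// assumptions, not Gonka's real wire format.
import { createHmac } from "node:crypto";

interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
}

function signRequest(
  body: ChatRequest,
  secret: string,
): { payload: string; headers: Record<string, string> } {
  const payload = JSON.stringify(body);
  const signature = createHmac("sha256", secret).update(payload).digest("hex");
  // The proxy would forward `payload` with these headers to the network endpoint,
  // keeping the secret on the server side and out of the React frontend.
  return { payload, headers: { "X-Signature": signature, "Content-Type": "application/json" } };
}

const { headers } = signRequest(
  { model: "example-model", messages: [{ role: "user", content: "hello" }] },
  "server-side-secret",
);
console.log(headers["X-Signature"].length); // 64 hex chars for SHA-256
```

Keeping signing in a thin Node proxy is what lets the frontend stay a plain OpenAI-style client while the network still receives authenticated requests.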
Media
In recent episodes of @HouseofZK Radio, the @DaLiberman brothers joined @alicelingl to discuss the ideas behind the Gonka Protocol:
• Daniil talked about his journey from early distributed computing and animation work to building Bitmoji/AR at Snap, and explained why he is now focused on decentralized AI and how Gonka’s transformer-based proof-of-work aims to make compute abundant, verifiable, and permissionless: https://x.com/HouseofZK/status/1980246873836278225
Full podcast: https://hozk.io/radio#79-daniil-liberman-creator-of-gonka-protocol
• David shared how Gonka ties network rewards to real computation instead of idle staking, creating an open and permissionless market for verified AI workloads and supporting a more decentralized AI compute ecosystem: https://x.com/HouseofZK/status/1986087043978481748
Full podcast: https://hozk.io/radio#85-david-liberman-creator-of-gonka-protocol

Funding
@GoKiteAI received an investment from @cbventures, extending its recent Series A round: https://medium.com/@KiteAI/kite-announces-investment-from-coinbase-ventures-to-advance-agentic-payments-with-the-x402-protocol-cd9e3639329f
The project is integrating with @coinbase’s x402 payment protocol, making Kite a core settlement layer for agent-to-agent transactions. Its blockchain infrastructure supports secure identities, programmable controls, and low-cost stablecoin settlements.
The funding will help expand tools for autonomous AI agents as they become a growing segment of digital commerce.
Publications
Kite published an article explaining how recent research proposes an Agent-Oriented Planning framework to improve planning in multi-agent systems: https://medium.com/@KiteAI/how-agents-plan-and-plan-better-90470e563e68
The approach focuses on breaking down user queries into solvable, complete, and non-redundant subtasks. A meta-agent creates an initial plan, which is then refined by a detector that checks for gaps and a reward model that evaluates solvability. Tested on a numerical reasoning dataset, the method improved accuracy but increased computational cost.
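The refine step the article describes — detector prunes redundant subtasks, reward model filters unsolvable ones — can be sketched with stubs. All names and the toy scoring rule below are assumptions, not the paper's implementation.

```typescript
// Illustrative sketch of an Agent-Oriented Planning refinement pass:
// a detector flags redundant subtasks and a stubbed reward model scores
// solvability. Rules here are invented for illustration.
interface Subtask { id: number; description: string }

// Detector: a plan is non-redundant if no two subtasks repeat a description.
function findRedundant(plan: Subtask[]): Subtask[] {
  const seen = new Set<string>();
  return plan.filter((t) => {
    if (seen.has(t.description)) return true;
    seen.add(t.description);
    return false;
  });
}

// Stub reward model: treats very long subtasks as unlikely to be solvable in one step.
function solvability(task: Subtask): number {
  return task.description.split(" ").length <= 8 ? 1 : 0;
}

// Refinement: drop redundant or unsolvable subtasks from the meta-agent's draft plan.
function refine(plan: Subtask[]): Subtask[] {
  const redundant = new Set(findRedundant(plan).map((t) => t.id));
  return plan.filter((t) => !redundant.has(t.id) && solvability(t) > 0);
}

const plan: Subtask[] = [
  { id: 1, description: "parse the numbers" },
  { id: 2, description: "parse the numbers" }, // redundant duplicate
  { id: 3, description: "sum the parsed values" },
];
console.log(refine(plan).map((t) => t.id)); // [1, 3]
```

Each extra checking pass is also where the reported cost increase comes from: the detector and reward model add model calls on top of the initial plan.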
Partnerships
Finally, the project has announced two new partnerships:
• @UnifaiNetwork: Joined Kite’s ecosystem to co-develop the foundation of Agentic Finance (AgentFi). UnifAI integrates its on-chain intelligence framework with Kite’s infrastructure, enabling autonomous AI agents to manage capital, execute financial strategies, and operate as verifiable, sovereign participants across DeFi: https://medium.com/@KiteAI/unifai-to-power-the-next-generation-of-agentic-finance-on-kite-51b18dc43e9f
• @pieverse_io: Integrated to power cross-chain agentic payments between Kite and @BNBCHAIN. Using the x402b standard, Pieverse enables gasless stablecoin micropayments, programmable spending policies, and tamper-evident compliance receipts: https://medium.com/@KiteAI/pieverse-to-enable-cross-chain-agentic-payment-rails-on-kite-52ce827b0632

GMPayer
In partnership with @ProjectZKM, @MetisL2, and @GOATRollup, @LazAINetwork has introduced GMPayer, a cross-chain payment protocol powered by the x402 standard: https://x.com/LazAINetwork/status/1985337394959159430
Designed for autonomous AI agents, GMPayer enables instant, trustless payments across multiple blockchains without intermediaries.
The initiative demonstrates how machines can transact value securely and verifiably in real time, linking data ownership, rights, and payments through smart contracts. It represents an early move toward a decentralized, on-chain economy for AI-driven services.
Roadmap
LazAI announced its Alpha Mainnet will launch in Q4 2025 as the first public release of its decentralized AI network, integrating agent creation, data validation, governance, and execution: https://x.com/LazAINetwork/status/1981285321758179480
According to the roadmap, Lazpad’s Premium Launchpad V2 is scheduled for Q1 2026 with verified DATs plus task and point systems, followed by further ecosystem features in 2026.
Publications
LazAI has published a series of articles exploring the foundations of proof-driven AI, secure provenance, and durable memory for autonomous agents:
• In the first article, the focus is on Memory Units - portable, ownable capsules that let AI retain context, tone, and preferences across sessions while preserving user privacy: https://x.com/LazAINetwork/status/1980610978375954928
• The second article argues that models without data provenance are doomed, showing why verifiable, reversible, and rights-aware datasets are the true foundation of trustworthy AI: https://lazai.network/blog/why-models-without-provenance-are-already-doomed
• Next, the discussion turns to durable memory as the missing link, where Lazbubu’s Memory Units enable auditability, replay, and shared yet revocable context for collaboration: https://x.com/LazAINetwork/status/1979058432557666615
• Another piece exposes the rise of weaponized AI, stressing that only verifiable actions, signed tool calls, and enforceable policies can make autonomous systems secure: https://lazai.network/blog/the-age-of-weaponized-ai-has-already-started
• The following article examines x402 as the payment layer for the agentic web, with LazAI, Metis, GOAT, and ZKM creating verifiable, multi-chain infrastructure for safe AI-to-AI commerce: https://x.com/LazAINetwork/status/1983270534910562366
• There is also an explanation of how LazAI’s own framework is described as the value layer for agent transactions - using DATs to map contributions, rights, and payments automatically within x402 flows: https://x.com/LazAINetwork/status/1983527360830804422
• In the final article, LazAI emphasizes that the true competitive edge in AI is proof itself. It presents proof-driven infrastructure as the only sustainable path - where every action, decision, and transaction leaves a verifiable record that builds lasting trust: https://lazai.network/blog/the-only-moat-in-ai-is-proof
Media
In a recent episode of House of ZK Radio, @DacEconomy, co-founder of @LazAINetwork and @ProjectZKM, highlighted how zero-knowledge and artificial intelligence intersect: https://x.com/HouseofZK/status/1981731547037270442
He discussed ZKM’s origins in the early rollup era and its choice to build a universal zkVM. Ming also explained how real-time proving enables verifiable AI and introduced LazAI’s vision for socially-driven, crypto-powered data creation.
LazTalks
In the latest LazTalks session, experts from LazAI, @ROVR_Network, @u2u_xyz, and @awscloud discussed how decentralized physical networks are linking with the AI economy: https://x.com/LazAINetwork/status/1980826444730089816
Speakers outlined the transition from DePIN to DePAI, emphasizing verifiable AI processes, hybrid architectures combining centralized speed with decentralized trust, and the role of Data Anchoring Tokens in ensuring transparency and data provenance across connected systems.
Ambassador Program
LazAI has launched its Lazarians Ambassador Program, inviting creators and AI enthusiasts to promote its decentralized ecosystem focused on data ownership and transparency: https://lazai.network/blog/join-the-lazarians-ambassador-program
Participants gain early access to new tools, exclusive benefits, and recognition for community impact. The program includes three tiers - Alpha, Sigma, and Omega - with applications open until Nov 14, 2025.
Partnerships
LazAI has formed a series of new partnerships focused on expanding the reach of verifiable AI into DePIN, gaming, decentralized compute, and real-world impact sectors:
• @XPINNetwork: Collaborating to integrate LazAI’s Data Anchoring Token (DAT) technology into XPIN’s decentralized telecom infrastructure. This aims to tokenize connectivity and IoT data, ensuring that contributions to the network are evaluated for quality and rewarded accordingly in a privacy-preserving and trustless manner: https://lazai.network/blog/lazai-x-xpin-a-technical-collaboration-to-coordinate-real-world-value-in-depin
• @BSUniverse_OFCL: Partnering to explore how verifiable AI can support entertainment and gaming in Web3. With over 22B views, BSU is expanding into open-world games, NFTs, and digital commerce, with LazAI bringing AI infrastructure into the experience: https://x.com/LazAINetwork/status/1976308953005821992
• @u2u_xyz: Working with this DAG-based Layer 1 built for DePIN to test how verifiable AI can interact with high-performance, real-world infrastructure for enterprise-scale use cases: https://x.com/LazAINetwork/status/1976641604661244307
• @SoulTarotAIHQ: Launching the “Tarot AI Agent,” a multimodal AI experience that combines conversation, voice, image generation, and tarot logic. Readings are stored on-chain and mintable as NFTs, showcasing how AI and spirituality can blend into verifiable on-chain data: https://x.com/LazAINetwork/status/1978260572002021545
• @BabyBoomToken: Exploring how blockchain and AI can support families and address declining birth rates with transparent Web3-native incentive models across key life moments: https://x.com/LazAINetwork/status/1978627485165088871
• @DogexPerps: Integrated with LazAI to bring AI-powered perpetual trading. DogEx now supports referral systems and unified points tracking directly on LazAI’s infrastructure: https://x.com/DogexPerps/status/1981043070561919160
• @spheron: Partnering to expand access to verifiable AI through decentralized compute infrastructure. Spheron provides affordable GPU access (up to 4x cheaper), helping AI builders deploy at scale: https://x.com/LazAINetwork/status/1981737267568709928
• @Neura_Web3_AI: Collaborating with this emotionally intelligent AI engine to enable more natural, memory-aware, and empathetic interactions in decentralized applications and agents: https://x.com/LazAINetwork/status/1982795434834718990

MainNet Upgrade
@OntologyNetwork has scheduled a two-phase MainNet upgrade, with version 2.7.0 on Nov 27, 2025 and version 3.0.0 on Dec 1, 2025, focusing on performance improvements and an approved ONG tokenomics change: https://ont.io/news/ontology-mainnet-upgrade-announcement/
Users and stakers are not expected to be affected. Consensus and Triones nodes must upgrade to implement consensus and gas optimizations, a new 800 million ONG cap, and revised staking incentives.
Events
Ontology is one of the key sponsors of Verifying Intelligence 3.0, an event co-hosted by @HouseofZK, @brevis_zk, and @invisiblgarden during @EFDevcon in Buenos Aires: https://luma.com/xevcy7z
• @GeoffTRichards, Head of Community at Ontology, will join the panel “Abundance vs Dystopia - Steering Macroscale Outcomes”, together with @alicelingl of HoZK, @jgorzny of @ZircuitL2, @NorbertVadas of @thezkcloud, and @o1coby of @o1_labs & @MinaProtocol.
• @humpty0x, Ecosystem Lead at Ontology, will participate in the panel “AI and Work - Proving Human Contribution”, alongside @reka_eth of @boundless_xyz, @zKsisyfos of @StarkWareLtd, @DacEconomy of @ProjectZKM, and @Viggy_117 of @eigencloud.

Products and Releases
1/ @OpenAI has introduced ChatGPT Atlas, a macOS browser that embeds ChatGPT throughout the web experience: https://openai.com/index/introducing-chatgpt-atlas/
Atlas can understand on-page content, offer assistance, and perform tasks using agent mode, while optional browser memories let it recall past activity. Users retain control over what the system can see, with privacy and parental controls built in. The browser is available now, with versions for other platforms planned.
2/ OpenAI also presented GPT-5.1, an update that refines its Instant and Thinking models with clearer communication, stronger instruction following, and more adaptive reasoning: https://openai.com/index/gpt-5-1/
The Instant model is more conversational, while the Thinking model allocates time based on task complexity. New tone-customization options give users greater control over ChatGPT’s style. GPT-5.1 is rolling out to paid users first, with wider rollout and API availability coming soon.
Research
OpenAI released a research preview of gpt-oss-safeguard, two open-weight models designed for safety classification using developer-defined policies: https://openai.com/index/introducing-gpt-oss-safeguard/
The models interpret policies during inference, provide reasoning for their decisions, and can be adjusted without retraining. They aim to help platforms handle emerging risks and nuanced domains. Early evaluations show competitive performance, and OpenAI plans to refine the models with community feedback.
Publications
The project shared an article covering efforts to train neural networks that operate through simpler, more traceable steps: https://openai.com/index/understanding-neural-networks-through-sparse-circuits/
The work focuses on sparse models with most weights set to zero, making internal computations easier to examine. Researchers found that larger but sparser models often form small, understandable circuits that still perform key tasks. Early results suggest a possible path toward building capable systems whose mechanisms are more interpretable.

DeepResearch 2511
@Alibaba_Qwen introduced DeepResearch 2511, a version focused on making in-depth inquiry faster and easier: https://qwen.ai/blog?id=qwen-deepresearch
The update adds normal and advanced modes, stronger web search and reading, and support for local files like PDFs and spreadsheets. It improves report control and interaction clarity, and offers new output formats. Using a multi-agent system with enhanced memory and state tracking, the release aims to reduce the effort of verifying ideas and help users move quickly from initial interest to solid insight.
Qwen3-Max
The Qwen team released an early preview of its still-training Qwen3-Max-Thinking model: https://x.com/Alibaba_Qwen/status/1985347830110970027
The intermediate checkpoint, when paired with tool use and increased test-time compute, currently scores 100% on benchmarks such as AIME 2025 and HMMT. Users can test the model through Qwen Chat and the Alibaba Cloud API as development continues.

x402 BNB
@QuackAI_AI has introduced x402 BNB, a unified sign-to-pay layer designed to simplify transactions across @BNBCHAIN: https://medium.com/@quackai/introducing-x402-bnb-the-first-unified-sign-to-pay-layer-powering-the-agent-economy-on-bnb-chain-4e9db5f513f5
Instead of multiple approval and gas steps, users sign once while a facilitator handles execution and gas sponsorship under clear rules. Built on EIP-7702 and EIP-712, the system links intent and payment, provides auditability, and supports features like automated payouts and multi-recipient transfers. Quack AI sees this as groundwork for governed, on-chain agent operations.
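The sign-once flow can be sketched as a typed payment intent that the user authorizes a single time and a facilitator executes under its rules. The structures below mirror the shape of EIP-712 typed data, but the hashing and "signature" are stand-ins for illustration, not real EIP-712 cryptography or Quack AI's implementation.

```typescript
// Simplified sketch of a sign-once payment intent: the user signs one digest,
// and a facilitator executes (sponsoring gas) only while the intent's rules hold.
// The SHA-256 digest here stands in for a real EIP-712 struct hash.
import { createHash } from "node:crypto";

interface PaymentIntent {
  payer: string;
  recipients: { to: string; amount: number }[]; // multi-recipient transfer
  deadline: number; // unix seconds
}

function intentDigest(intent: PaymentIntent): string {
  // A real implementation would use the EIP-712 domain separator and struct hash.
  return createHash("sha256").update(JSON.stringify(intent)).digest("hex");
}

// Facilitator-side check before sponsoring gas and executing the transfer.
function canExecute(intent: PaymentIntent, signedDigest: string, now: number): boolean {
  return signedDigest === intentDigest(intent) && now <= intent.deadline;
}

const intent: PaymentIntent = {
  payer: "0xPayer",
  recipients: [{ to: "0xA", amount: 60 }, { to: "0xB", amount: 40 }],
  deadline: 1_800_000_000,
};
const signed = intentDigest(intent); // user signs once
console.log(canExecute(intent, signed, 1_700_000_000)); // true
console.log(canExecute(intent, signed, 1_900_000_000)); // false: past deadline
```

Binding intent and payment in one signed object is what gives the auditability the post describes: any executed transfer can be traced back to exactly one authorization.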
Partnerships
Quack AI has announced several new partnerships aimed at expanding its presence across AI governance, research, and decentralized infrastructure, including:
• @AITECHio: Partnered to strengthen collaboration across the AI and Web3 landscape. The partnership includes mutual integration between platforms, co-hosted community campaigns, and future exploration of technical connections between AI compute and onchain governance frameworks: https://x.com/QuackAI_AI/status/1978083555411456504
• @NTUsg: Quack AI has joined NTU's Capstone Program as an official partner project. Eight students will work closely with the Quack AI team over two semesters to explore topics such as AI governance, real-world asset intelligence, and decentralized infrastructure, bridging academic research with practical Web3 development: https://x.com/QuackAI_AI/status/1978355736704143561
• @myshell_ai: Collaborated to build modular, AI-native governance systems. Both teams share a vision for autonomous AI agents and decentralized systems, aiming to develop the core logic layer that will support transparent, self-evolving digital ecosystems: https://x.com/QuackAI_AI/status/1979063667023806902
• @Gimo_Fi: Working together within the @0G_labs ecosystem, Quack AI and Gimo are aligning capital deployment with coordination infrastructure. The collaboration merges Gimo’s liquidity layer with Quack AI’s governance intelligence to activate next-generation decentralized AI environments: https://x.com/QuackAI_AI/status/1983081410845143314

@SaharaLabsAI has published a series of articles exploring the foundations, technologies, and emerging standards shaping the future of AI and intelligent systems:
• In the first article, the focus is on x402, a long-anticipated payment layer for the internet that finally brings the concept of machine-to-machine commerce to life. It explains how x402 turns the dormant HTTP 402 “Payment Required” code into a functioning standard for autonomous, AI-native payments, enabling agents and services to transact directly using crypto-based settlement, without human coordination or traditional billing systems: https://saharaai.com/blog/understandingx402
• The second article highlights the distinction between Sahara AI and Sahara Labs, clarifying how the latter serves as the enterprise-focused R&D powerhouse behind the broader Sahara AI ecosystem. It explores how Sahara Labs combines deep AI expertise with blockchain fluency to deliver large-scale, secure deployments for global enterprises: https://saharaai.com/blog/sahara-ai-vs-sahara-labs
• The third explores the architecture of AI agents, explaining how they autonomously break down goals, plan multi-step actions, use tools, and maintain stateful memory to complete entire workflows. It demonstrates how this design marks a fundamental shift from simple automation or chatbots toward adaptable, goal-directed systems: https://saharaai.com/blog/understanding-ai-agents
• The next article emphasizes why AI agents are transforming industries, showing how recent advances in reasoning, tool integration, and deployment infrastructure have made autonomous agents practical. It presents real-world business results and lays out a roadmap for organizations looking to adopt agents effectively: https://saharaai.com/blog/why-ai-agents-are-transforming-our-world
• The penultimate article examines the future of AI data services, outlining six key trends reshaping how data for AI is collected, annotated, and validated. It discusses the rise of domain-specific expertise, multimodal and synthetic data, and AI-assisted workflows, stressing the critical importance of quality governance in the age of enterprise-scale AI: https://saharaai.com/blog/the-future-of-ai-data-services
• In the final article, readers are given a concise overview of artificial intelligence itself: what it is, how it evolved, and how it operates today. It traces AI’s development from rule-based systems to modern generative models, clarifies key terminology, and explains real-world applications and limitations, making it an ideal entry point for newcomers to the field: https://saharaai.com/blog/what-is-ai
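The x402 handshake described in the first article can be sketched in a few lines. The header names and payment terms below are illustrative stand-ins, not the actual x402 wire format, and on-chain settlement is stubbed out.

```python
import json

def verify_payment(proof: str) -> bool:
    # Stand-in: a real facilitator would verify an on-chain settlement.
    return proof == "settled"

def serve(request_headers: dict) -> tuple:
    """A paywalled endpoint: without payment proof it answers
    402 Payment Required plus machine-readable terms; with a
    verified proof it returns the resource."""
    proof = request_headers.get("X-Payment")
    if proof and verify_payment(proof):
        return 200, {}, "the premium data"
    terms = {"scheme": "exact", "amount": "0.01", "asset": "USDC"}
    return 402, {"X-Payment-Required": json.dumps(terms)}, ""

def agent_fetch() -> str:
    """An agent pays and retries automatically -- no human billing step."""
    status, headers, body = serve({})
    if status == 402:
        json.loads(headers["X-Payment-Required"])  # read the terms, settle...
        status, headers, body = serve({"X-Payment": "settled"})
    return body

print(agent_fetch())  # the premium data
```

The key shift is that 402 stops being a dead end: the response carries enough machine-readable information for an agent to settle and retry without any out-of-band billing.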

AI Startup of the Year
@SentientAGI was named AI Startup of the Year at the 2025 Minsky Awards during Cypher in Bengaluru: https://blog.sentient.xyz/posts/minsky-awards-2025
The project was recognized for its open, community-owned AI initiative and its framework, ROMA (Recursive Open Meta-Agent), which coordinates smaller agents to tackle complex problems. ROMA outperformed major systems on reasoning benchmarks, highlighting Sentient’s effort to make advanced AI tools accessible and transparent for broader public benefit.
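The recursive coordination pattern behind a meta-agent like ROMA can be sketched in miniature (this toy, with its '+'-splitting planner, is not the actual ROMA implementation): a meta-agent decomposes a task, delegates atomic subtasks to smaller agents, and aggregates their results.

```python
def decompose(task: str) -> list:
    # Stand-in planner: split on '+' into independent subtasks.
    return [p.strip() for p in task.split("+")] if "+" in task else []

def worker(task: str) -> str:
    # Stand-in for a smaller specialist agent handling one atomic task.
    return f"done:{task}"

def aggregate(task: str, results: list) -> str:
    # Stand-in synthesizer combining subtask results into one answer.
    return "; ".join(results)

def solve(task: str) -> str:
    """Recursive meta-agent: split composite tasks, delegate atomic ones."""
    subtasks = decompose(task)
    if not subtasks:                       # atomic: hand off to a worker
        return worker(task)
    results = [solve(t) for t in subtasks] # recurse over subtasks
    return aggregate(task, results)

print(solve("fetch data + clean data + summarize"))
# done:fetch data; done:clean data; done:summarize
```

Because `solve` calls itself on each subtask, arbitrarily nested decompositions fall out of the same few lines, which is the structural idea behind coordinating many small agents on a complex problem.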
Partnerships
Sentient has announced several new partnerships, including:
• @askalphaxiv: Partnered with Sentient to promote open-source research and accelerate the development of cutting-edge AI. The collaboration includes community talks, technical deep-dives, and bringing research ideas from the alphaXiv community to life via Sentient’s open-source foundation: https://x.com/SentientAGI/status/1983204629241372742
• @Talus_Labs: Integrated with Sentient to distribute Nexus-built agents through SentientChat and AgentHub, reaching over 3 million users. The partnership connects Talus developers to Sentient’s GRID funding ecosystem and empowers non-technical users to build agents using Talus Vision: https://x.com/SentientAGI/status/1986484948572778727
• @UnifaiNetwork: Teamed up with Sentient to bridge UnifAI’s agentic finance infrastructure with Sentient’s decentralized AI model invocation system. The integration enhances UnifAI agents with more advanced reasoning and decision-making, laying the groundwork for a transparent, community-owned agentic economy: https://x.com/SentientAGI/status/1983893486287110154