

Introducing Coral NPU: A full-stack platform for Edge AI

Coral NPU is a full-stack platform for Edge AI that addresses the performance, fragmentation, and user-trust challenges of on-device ML. Its AI-first architecture prioritizes ML matrix engines and offers a unified developer experience. Designed for ultra-low-power, always-on AI in wearables and IoT devices, it enables contextual awareness, audio and image processing, and user interaction with hardware-enforced privacy. Synaptics is the first partner to implement Coral NPU.

Introducing Tunix: A JAX-Native Library for LLM Post-Training

Tunix is a new JAX-native, open-source library for LLM post-training. It offers comprehensive tools for aligning models at scale, including SFT, preference tuning (DPO), advanced RL methods (PPO, GRPO, GSPO), and knowledge distillation. Designed for TPUs and seamless JAX integration, Tunix emphasizes developer control and shows a 12% relative improvement in pass@1 accuracy on GSM8K.
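
To illustrate the preference-tuning step, here is a minimal JAX sketch of the standard DPO loss. This is not the Tunix API; the function and argument names are ours, and the inputs are the per-sequence log-probabilities you would compute from the policy and frozen reference models.

```python
import jax
import jax.numpy as jnp

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over per-sequence log-probabilities (shape [batch])."""
    chosen_margin = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_margin = beta * (policy_rejected_logps - ref_rejected_logps)
    # -log(sigmoid(x)) == softplus(-x), which is numerically stable.
    return jnp.mean(jax.nn.softplus(-(chosen_margin - rejected_margin)))

# Toy batch: random stand-ins for the four log-probability vectors.
logps = jax.random.normal(jax.random.PRNGKey(0), (4, 8))
print(dpo_loss(*logps))
```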

Introducing the Data Commons Model Context Protocol (MCP) Server: Streamlining Public Data Access for AI Developers

Data Commons has released its Model Context Protocol (MCP) Server, a milestone in making its public datasets directly accessible and actionable for AI developers and their agents.
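
A rough sketch of how an application might talk to the server using the official MCP Python SDK. The launch command, tool name, and arguments below are placeholders (assumptions), not the server's documented interface; only the `mcp` client calls (`stdio_client`, `ClientSession`, `list_tools`, `call_tool`) are standard SDK usage.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command for the Data Commons MCP server (assumption).
server = StdioServerParameters(command="uvx", args=["datacommons-mcp"])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # Tool name and arguments are illustrative placeholders.
            result = await session.call_tool(
                "get_statistics",
                arguments={"place": "country/USA", "variable": "Count_Person"},
            )
            print(result)

asyncio.run(main())
```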

Building the Next Generation of Physical Agents with Gemini Robotics-ER 1.5

Gemini Robotics-ER 1.5, now available to developers, is a state-of-the-art embodied reasoning model for robots. It excels at visual and spatial understanding, task planning, and progress estimation, allowing robots to perform complex, multi-step tasks.
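
A minimal sketch of calling the model through the Gemini API with the google-genai Python SDK. The model ID and prompt format are our reading of the announcement and may change; treat them as assumptions and check the current model list.

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

with open("workbench.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-robotics-er-1.5-preview",  # model ID assumed from the announcement
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Point to the red screwdriver. Return JSON with [y, x] normalized to 0-1000.",
    ],
)
print(response.text)
```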

Continuing to bring you our latest models, with an improved Gemini 2.5 Flash and Flash-Lite release

Google is releasing updated Gemini 2.5 Flash and Flash-Lite preview models with improved quality, speed, and efficiency. These releases introduce a "-latest" alias for easy access to the newest versions, allowing developers to test and provide feedback to shape future stable releases.
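
A quick sketch of what the alias looks like in practice with the google-genai Python SDK; the exact alias strings are our reading of the release and worth confirming against the published model list.

```python
from google import genai

client = genai.Client()

# "-latest" aliases resolve to the newest preview of each model family
# (alias strings assumed from the release notes; pin a stable ID for production).
for model in ("gemini-flash-latest", "gemini-flash-lite-latest"):
    resp = client.models.generate_content(
        model=model,
        contents="In one sentence, what changed in this model release?",
    )
    print(model, "->", resp.text)
```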

Apigee Operator for Kubernetes and GKE Inference Gateway integration for Auth and AI/LLM policies

The GKE Inference Gateway now integrates with Apigee, allowing enterprises to unify AI serving and API governance. This enables GKE users to leverage Apigee's API management, security, and monetization features for their AI workloads, including API keys, quotas, rate limiting, and Model Armor security.

Your AI is now a local expert: Grounding with Google Maps is now GA

Grounding with Google Maps in Vertex AI is now generally available, helping developers build factual, reliable generative AI applications connected to up-to-date, real-world information from Google Maps. This unlocks more relevant, personalized results across industries such as travel, real estate, consumer devices, and social media.
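
A hedged sketch of enabling the tool in Vertex AI with the google-genai Python SDK; the `google_maps` tool field shown here is our assumption about the SDK surface, so verify the field name against the current Vertex AI documentation.

```python
from google import genai
from google.genai import types

# Vertex AI client; project and location are placeholders.
client = genai.Client(vertexai=True, project="my-project", location="us-central1")

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Recommend three coffee shops within walking distance of the Ferry Building in San Francisco.",
    config=types.GenerateContentConfig(
        # Tool field name assumed; check the SDK reference for Maps grounding.
        tools=[types.Tool(google_maps=types.GoogleMaps())],
    ),
)
print(response.text)
```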

Delight users by combining ADK Agents with Fancy Frontends using AG-UI

The ADK and AG-UI integration enables developers to build interactive AI applications by combining a powerful backend (ADK) with a flexible frontend protocol (AG-UI). This unlocks features like Generative UI, Shared State, Human-in-the-Loop, and Frontend Tools, allowing for seamless collaboration between AI and human users.
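
As a rough sketch, the backend half might be an ADK agent like the one below (the tool, instruction, and model choice are ours); the AG-UI wiring that streams its state and tool results to the frontend is omitted, since that part depends on the AG-UI integration's own setup.

```python
from google.adk.agents import Agent

def get_order_status(order_id: str) -> dict:
    """Example tool; an AG-UI frontend could render this structured result
    as a card instead of plain text (illustrative only)."""
    return {"order_id": order_id, "status": "shipped", "eta_days": 2}

# Minimal ADK agent definition; names and instruction are placeholders.
support_agent = Agent(
    name="support_agent",
    model="gemini-2.5-flash",
    description="Order-tracking assistant.",
    instruction="Help users track orders. Call get_order_status for lookups.",
    tools=[get_order_status],
)
```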

Gemma explained: EmbeddingGemma Architecture and Recipe

EmbeddingGemma, built from Gemma 3, transforms text into numerical embeddings for tasks like search and retrieval. It is trained with a Noise-Contrastive Estimation loss, a Global Orthogonal Regularizer, and Geometric Embedding Distillation, while Matryoshka Representation Learning lets a single model serve multiple embedding dimensions. The development recipe includes encoder-decoder training, pre-fine-tuning, fine-tuning, model souping, and quantization-aware training.
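
Matryoshka Representation Learning is what makes dimension truncation safe: the leading coordinates of the full 768-dimensional embedding are trained to stand on their own. A small NumPy sketch of that truncate-and-renormalize step (toy vectors, not real EmbeddingGemma outputs):

```python
import numpy as np

def truncate_embedding(emb: np.ndarray, dim: int) -> np.ndarray:
    """Matryoshka-style truncation: keep the first `dim` coordinates,
    then re-normalize so cosine similarity remains meaningful."""
    sub = emb[..., :dim]
    return sub / np.linalg.norm(sub, axis=-1, keepdims=True)

# Toy 768-d unit vectors standing in for query/document embeddings.
rng = np.random.default_rng(0)
query, doc = rng.normal(size=(2, 768))
query, doc = query / np.linalg.norm(query), doc / np.linalg.norm(doc)

for d in (768, 512, 256, 128):
    print(d, float(truncate_embedding(query, d) @ truncate_embedding(doc, d)))
```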

Gemini for Home: Expanding the Platform for a New Era of Smart Home AI

Google Home is enabling new Gemini-powered features for partners' devices and launching a new program to help partners build the next generation of AI cameras.