Amazon SageMaker now supports multi-region replication from IAM Identity Center
21 April 2026 @ 9:00 pm
Amazon SageMaker now supports multi-region replication from IAM Identity Center (IdC), enabling you to deploy SageMaker Unified Studio domains in different regions from your IdC instance. This new capability empowers enterprise customers, particularly those in regulated industries like financial services and healthcare, to maintain compliance while leveraging centralized workforce identity management. As an Amazon SageMaker Unified Studio administrator, you can deploy SageMaker domains closer to your workforce based on data residency needs while maintaining seamless single sign-on (SSO) access. Organizations can address use cases such as maintaining IdC in one region while processing sensitive data in compliance-required regions, supporting global operations with centralized identity management, and meeting data sovereignty requirements without compromising SSO capabilities.
To get started, see the Amazon SageMaker Unified Studio documentation.
Introducing the Amazon EKS Hybrid Nodes gateway for hybrid Kubernetes networking
21 April 2026 @ 7:51 pm
Amazon Elastic Kubernetes Service (EKS) now offers the Amazon EKS Hybrid Nodes gateway, a feature that automates networking between your Amazon EKS cluster VPC and Kubernetes Pods running on Amazon EKS Hybrid Nodes. The Amazon EKS Hybrid Nodes gateway eliminates the need to make on-premises pod networks routable or coordinate network infrastructure changes when running in hybrid Kubernetes environments. Networking in hybrid Kubernetes environments can be complex, often requiring changes to on-premises routing configurations, coordination with network teams, and ongoing maintenance as workloads scale. The Amazon EKS Hybrid Nodes gateway addresses these challenges by automatically enabling Kubernetes control plane-to-webhook communication, pod-to-pod traffic across cloud and on-premises environments, and connectivity for AWS services such as Application Load Balancers, Network Load Balancers, and Amazon Managed Service for Prometheus.
AWS Marketplace streamlines VAT payment for deemed supply transactions
21 April 2026 @ 7:03 pm
AWS Marketplace now offers sellers a streamlined self-service process to submit Value Added Tax (VAT) invoices and receive automated VAT disbursements for deemed supply of digital services in the European Union, Norway, and the United Kingdom. Under European Union, United Kingdom, and Norwegian VAT law, when AWS Marketplace facilitates digital service sales, a deemed supply arrangement arises between sellers and the marketplace. To receive VAT payment, sellers are required to invoice the relevant AWS Europe, Middle East, and Africa (EMEA) SARL branch facilitating their transaction. This new capability provides sellers with a unified experience within AWS Marketplace to submit VAT invoices and receive VAT payments, simplifying tax compliance under deemed supply arrangements. Sellers can now access the new experience through the AWS Marketplace Management Portal or AWS Partner Central, submit VAT invoices, track invoice status in real time, and receive automated VAT payments.
Amazon Athena Spark adds support for AWS PrivateLink
21 April 2026 @ 6:09 pm
Amazon Athena Spark now supports AWS PrivateLink so that you can access APIs and endpoints from your Amazon Virtual Private Cloud (VPC) without traversing the public internet. This feature can help you meet compliance requirements by allowing you to access and use Athena Spark APIs and endpoints entirely within the AWS network. You can now create AWS PrivateLink interface endpoints to connect from clients in your VPC. The Athena VPC endpoint supports all Athena Spark APIs and endpoints, including the Spark Connect, Spark Live UI, and Spark History Server endpoints. Communication between your VPC and Athena Spark APIs and endpoints is then conducted entirely within the AWS network, providing a secure pathway for your data. To get started, create an interface VPC endpoint to connect to Amazon Athena Spark using the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS CloudFormation.
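As a rough sketch of the CLI/SDK path, the parameters below show what an interface VPC endpoint request for Athena might look like via boto3's `create_vpc_endpoint`. The service-name Region and all resource IDs are illustrative placeholders, not values from this announcement; substitute your own.

```python
# Sketch: parameters for an interface VPC endpoint for Athena.
# The Region in the service name and the IDs below are placeholder
# assumptions -- replace them with your VPC, subnet, and security group.
endpoint_params = {
    "VpcEndpointType": "Interface",
    "ServiceName": "com.amazonaws.us-east-1.athena",  # assumed Region
    "VpcId": "vpc-0123456789abcdef0",                 # placeholder
    "SubnetIds": ["subnet-0123456789abcdef0"],        # placeholder
    "SecurityGroupIds": ["sg-0123456789abcdef0"],     # placeholder
    "PrivateDnsEnabled": True,  # resolve the default Athena DNS name privately
}

# With AWS credentials configured, the endpoint would be created with:
# import boto3
# ec2 = boto3.client("ec2")
# response = ec2.create_vpc_endpoint(**endpoint_params)
print(endpoint_params["ServiceName"])
```

Enabling private DNS lets existing clients keep using the standard Athena endpoint name while traffic stays on the AWS network.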
AWS Lambda functions can now mount Amazon S3 buckets as file systems with S3 Files
21 April 2026 @ 5:00 pm
AWS Lambda now supports Amazon S3 Files, enabling your Lambda functions to mount Amazon S3 buckets as file systems and perform standard file operations without downloading data for processing. Built using Amazon EFS, S3 Files gives you the performance and simplicity of a file system with the scalability, durability, and cost-effectiveness of S3. Multiple Lambda functions can connect to the same S3 Files file system simultaneously, sharing data through a common workspace without building custom synchronization logic.
The S3 Files integration simplifies stateful workloads in Lambda by eliminating the overhead of downloading objects, uploading results, and managing ephemeral storage limits. This is particularly valuable for AI and machine learning workloads where agents need to persist state across invocations.
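The "standard file operations" model above can be sketched as a plain Lambda handler. The mount path is an assumption for illustration (the real path is configured when the file system is attached to the function), and a fresh local directory stands in so the sketch runs outside Lambda.

```python
import os
import tempfile

# Assumed mount path: in Lambda this would be the S3 Files mount point
# configured on the function. A temporary directory stands in here.
MOUNT_PATH = tempfile.mkdtemp()

def handler(event, context=None):
    """Append a record to a shared file and read it back with standard
    file operations -- no GetObject/PutObject calls, no local staging."""
    path = os.path.join(MOUNT_PATH, "shared.log")
    with open(path, "a", encoding="utf-8") as f:
        f.write(event["record"] + "\n")
    with open(path, encoding="utf-8") as f:
        return f.read().splitlines()

# Two invocations sharing the same workspace:
handler({"record": "first"})
print(handler({"record": "second"}))
```

Because multiple functions can attach the same file system, several handlers could append to `shared.log` concurrently without custom synchronization logic in your code.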
Amazon CloudWatch pipelines now supports configuration of processors via AI
21 April 2026 @ 4:30 pm
Amazon CloudWatch pipelines now lets you configure log processors using natural language descriptions powered by generative AI. CloudWatch pipelines is a fully managed service that ingests, transforms, and routes log data to CloudWatch without requiring you to manage infrastructure. Setting up the right combination of processors to parse and enrich logs can be time-consuming, especially when working with complex log formats. With AI-assisted configuration, you can simply describe the processing you need in plain language and have the pipeline configuration generated for you automatically. When creating a pipeline in the CloudWatch console, toggle the AI-assisted option during the processing step and enter a natural language description of your desired transformations. The system generates the processor configuration along with a sample log event, so you can immediately verify the output before deploying. This reduces setup time and makes it easier to get your pipelines running.
AWS Glue now supports OAuth 2.0 for Snowflake connectivity
21 April 2026 @ 4:00 pm
Starting today, AWS Glue supports OAuth 2.0 authorization and authentication for native Snowflake connectivity, enabling customers to read from and write to Snowflake without sharing user credentials. This makes it easier for enterprises to maintain security compliance while building data integration pipelines. With OAuth support, you can now securely access Snowflake data within AWS Glue using temporary token-based authorization. AWS Glue provides a built-in connector to Snowflake, which helps you integrate Snowflake data with other sources on a single platform while leveraging the scalability and performance of the AWS Glue Spark engine, all without installing or managing connector libraries. Previously, connecting to Snowflake required persistent credentials or private keys. With OAuth 2.0 support, you can now eliminate credential management entirely, relying instead on secure, temporary tokens that enhance security and simplify access control. This approach enables stronger security with less credential-management overhead.
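To make the token-based flow concrete, here is a minimal sketch of Snowflake Spark connector options using OAuth in place of a stored password. The option names follow the Snowflake Spark connector's conventions; the account URL, database, and token values are illustrative assumptions, and how you obtain the short-lived token depends on your identity provider.

```python
# Sketch: Snowflake Spark connector options with OAuth instead of a
# password. All concrete values below are placeholder assumptions.
def snowflake_oauth_options(account_url, database, schema, token):
    """Build connector options that carry a short-lived OAuth token
    rather than a persistent credential."""
    return {
        "sfURL": account_url,
        "sfDatabase": database,
        "sfSchema": schema,
        "sfAuthenticator": "oauth",  # token-based authentication
        "sfToken": token,            # short-lived access token
    }

opts = snowflake_oauth_options(
    "myaccount.snowflakecomputing.com",  # placeholder account URL
    "ANALYTICS",
    "PUBLIC",
    "short-lived-token",                 # placeholder token value
)
print(sorted(opts))
```

The key property is that no `sfPassword`-style persistent secret appears anywhere in the job configuration; only an expiring token is passed at run time.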
AWS Transform custom is now available in six additional AWS Regions
21 April 2026 @ 3:00 pm
AWS Transform custom is now available in six additional AWS Regions: Asia Pacific (Mumbai, Tokyo, Seoul, Sydney), Canada (Central), and Europe (London). AWS Transform custom enables organizations to modernize and transform code at scale using AWS-managed and custom transformations. You can upgrade language versions, migrate frameworks, optimize performance, and analyze code bases using transformations that are ready to use or can be customized to meet your organization's specific requirements. These transformations benefit from continuous improvement, learning from each engagement to deliver increasingly accurate and efficient results.
With this expansion, AWS Transform custom is now available in a total of eight AWS Regions: US East (N. Virginia), Asia Pacific (Mumbai, Tokyo, Seoul, Sydney), Canada (Central), and Europe (Frankfurt, London). To learn more, visit the AWS Transform documentation.
Amazon Location Service now offers bulk address validation for the United States, Canada, Australia, and the United Kingdom
21 April 2026 @ 2:00 pm
Amazon Location Service now offers bulk address validation for the United States, Canada, Australia, and the United Kingdom. Customers can now validate, correct, and standardize large volumes of addresses at scale, whether cleaning customer databases before a CRM migration, verifying shipping addresses to reduce failed deliveries, screening addresses for identity verification and fraud prevention, or improving direct mail targeting and insurance underwriting accuracy. This capability supports use cases across healthcare, financial services, transportation and logistics, retail, and more. Address validation checks addresses against authoritative postal data, corrects common errors like misspellings, missing postal codes, and non-standard abbreviations, and standardizes formatting to match regional postal rules. Each result includes a confidence score and deliverability indicators so applications know exactly what to trust and act on. Using the new Amazon Location Service jobs capability, you can submit large batches of addresses and retrieve validated results asynchronously.
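The confidence-score-and-indicators pattern above suggests a simple downstream triage step. The field names in this sketch (`confidence`, `deliverable`) are illustrative assumptions, not the actual Amazon Location Service response shape; adapt them to the real result schema.

```python
# Hypothetical validation results -- field names are illustrative only,
# not the actual Amazon Location Service response format.
results = [
    {"address": "410 Terry Ave N, Seattle, WA 98109",
     "confidence": 0.98, "deliverable": True},
    {"address": "1 Fake Stret, Nowhere, ZZ 00000",
     "confidence": 0.31, "deliverable": False},
]

def accept(record, threshold=0.9):
    """Act only on results the validator marked deliverable with
    confidence at or above the threshold; route the rest for review."""
    return record["deliverable"] and record["confidence"] >= threshold

accepted = [r for r in results if accept(r)]
needs_review = [r for r in results if not accept(r)]
print(len(accepted), len(needs_review))
```

Splitting results this way lets a shipping or CRM pipeline auto-apply high-confidence corrections while queueing ambiguous addresses for a human.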
Amazon EC2 G7e instances now available in AWS Local Zones in Los Angeles
21 April 2026 @ 2:00 pm
Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) G7e instances in AWS Local Zones in Los Angeles, California. G7e instances feature NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and 5th generation Intel Xeon Scalable (Emerald Rapids) processors, bringing high-performance GPU compute closer to end users in Los Angeles.
For creative workloads, you can use G7e instances to run studio workstation workloads with low-latency access to local storage, and post-production workloads including visual effects (VFX) editorial, color correction, and VFX finishing. G7e instances support enhanced real-time rendering on graphics engines and 2D/3D VFX composition software. For AI workloads, you can also use G7e instances to run large language model (LLM) inference and agentic AI at the edge.
To get started, opt in to the Los Angeles Local Zone.
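Local Zones are opt-in per zone group, which via the SDK maps to EC2's `modify_availability_zone_group` call. The zone group name below is an assumption for the Los Angeles Local Zone; confirm it with `aws ec2 describe-availability-zones --all-availability-zones` before use.

```python
# Sketch: opting in to the Los Angeles Local Zone group before
# launching G7e instances there. The group name is an assumption --
# verify it against describe-availability-zones output.
opt_in_params = {
    "GroupName": "us-west-2-lax-1",  # assumed LA Local Zone group
    "OptInStatus": "opted-in",
}

# With AWS credentials configured:
# import boto3
# ec2 = boto3.client("ec2", region_name="us-west-2")
# ec2.modify_availability_zone_group(**opt_in_params)
print(opt_in_params["GroupName"])
```

Once the zone group is opted in, the zone's subnets and instance types (including G7e, per this announcement) become available to the account.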