Amazon Timestream for InfluxDB Now Supports Customer-Defined Maintenance Windows
9 April 2026 @ 9:57 pm
Amazon Timestream for InfluxDB now supports customer-defined maintenance windows, giving you control over when routine maintenance is performed on your InfluxDB databases. This feature is available for both InfluxDB 2 instances and InfluxDB 3 clusters across all supported editions. With this launch, you can specify a weekly maintenance window using a day-and-time format in your preferred timezone. Timestream for InfluxDB supports IANA timezone identifiers such as America/New_York, Europe/London, and Asia/Tokyo, and automatically handles Daylight Saving Time transitions so you don't need to manually adjust your schedule. If you don't specify a maintenance window, the service continues to manage maintenance timing automatically. You can set or update your preferred maintenance window when creating or modifying a resource using the Amazon Timestream for InfluxDB console, AWS CLI, or AWS SDKs. You can use Amazon Timestream for InfluxDB Customer-Defined Maintenance
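The DST handling described above can be illustrated with Python's standard `zoneinfo` module: a weekly window pinned to a local day and time (the function name and day-and-time format here are illustrative, not the service's actual API) lands on different UTC hours before and after a Daylight Saving Time transition, which is exactly the adjustment the service makes for you.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def window_start_utc(anchor, day_name, hhmm, tz_name):
    """Return the UTC instant when a weekly 'Day HH:MM' window next opens,
    starting the search from `anchor` (a date in the given timezone)."""
    days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
    tz = ZoneInfo(tz_name)
    hour, minute = map(int, hhmm.split(":"))
    local = datetime(anchor.year, anchor.month, anchor.day, hour, minute, tzinfo=tz)
    # Walk forward day by day until we reach the requested weekday.
    while days[local.weekday()] != day_name:
        local += timedelta(days=1)
    return local.astimezone(ZoneInfo("UTC"))

# The same 03:00 local window maps to different UTC hours across a DST change:
winter = window_start_utc(datetime(2026, 1, 5), "Sun", "03:00", "America/New_York")
summer = window_start_utc(datetime(2026, 7, 6), "Sun", "03:00", "America/New_York")
print(winter.hour, summer.hour)  # 8 (EST, UTC-5) vs 7 (EDT, UTC-4)
```

Because the service accepts the window in your local IANA timezone, the UTC shift across DST boundaries is absorbed for you rather than requiring a schedule change.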
Amazon Bedrock now supports cost allocation by IAM user and role
9 April 2026 @ 9:50 pm
Amazon Bedrock now supports cost allocation by IAM principal, such as IAM users and IAM roles, in AWS Cost and Usage Report 2.0 (CUR 2.0) and Cost Explorer. This enables customers to understand and attribute Bedrock model inference costs across users, teams, projects, and applications. With this launch, customers can tag their IAM users and roles with attributes like team, project, or cost center, activate them as cost allocation tags, and analyze Bedrock model inference costs by those tags in Cost Explorer or at the line-item level in CUR 2.0. To get started, tag your IAM users and roles and activate them as cost allocation tags in the Billing and Cost Management console. Then create a CUR 2.0 data export and select "Include caller identity (IAM principal) allocation data" or filter by tags in Cost Explorer. This feature is available in all AWS commercial Regions where Amazon Bedrock is available. To learn more, see
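The line-item analysis this enables can be sketched as a simple group-by over CUR-style records. The field names and values below are illustrative placeholders, not the real CUR 2.0 column names (actual exports expose tags in `resource_tags` columns and costs as unblended-cost fields):

```python
from collections import defaultdict

# Illustrative CUR-style line items: Bedrock inference costs, each carrying
# the IAM principal that made the call and its activated cost allocation tags.
line_items = [
    {"principal": "role/app-a", "tags": {"team": "search"}, "cost": 12.40},
    {"principal": "user/alice", "tags": {"team": "search"}, "cost": 3.10},
    {"principal": "role/app-b", "tags": {"team": "ads"},    "cost": 20.00},
]

def cost_by_tag(items, tag_key):
    """Sum costs grouped by the value of one cost allocation tag key."""
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get(tag_key, "(untagged)")] += item["cost"]
    return dict(totals)

print(cost_by_tag(line_items, "team"))
```

Untagged principals fall into a catch-all bucket, which mirrors how unallocated spend surfaces when a tag key is activated but not applied everywhere.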
Amazon OpenSearch Service supports Managed Prometheus and agent tracing
9 April 2026 @ 8:00 pm
Amazon OpenSearch Service now provides a unified observability experience that brings together metrics, logs, traces, and AI agent tracing in a single interface. This release introduces native integration with Amazon Managed Service for Prometheus and comprehensive agent tracing capabilities, addressing the dual challenges of prohibitive costs from premium observability platforms and operational complexity from fragmented tooling. Site Reliability Engineers, DevOps Engineers, and Platform Engineering teams can now consolidate their observability stack without costly data duplication or constant context switching between multiple tools.
You can now query Prometheus metrics directly using native PromQL syntax alongside logs and traces in OpenSearch UI's observability workspace—without duplicating data. Combined with new application monitoring workflows powered by RED metrics (Rate, Errors, Duration) and AI agent tracing using OpenTelemetry GenAI semantic conventions, operati
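The RED metrics mentioned above (Rate, Errors, Duration) reduce to three aggregates over request records. This is a minimal sketch over hypothetical records, not OpenSearch or Prometheus code:

```python
# Illustrative request records as a telemetry pipeline might emit them.
requests = [
    {"status": 200, "duration_ms": 12.0},
    {"status": 200, "duration_ms": 18.0},
    {"status": 500, "duration_ms": 45.0},
    {"status": 200, "duration_ms": 22.0},
]
window_seconds = 60

rate = len(requests) / window_seconds                              # Rate: req/s
error_ratio = sum(r["status"] >= 500 for r in requests) / len(requests)  # Errors
durations = sorted(r["duration_ms"] for r in requests)
p50 = durations[len(durations) // 2]                               # Duration: crude median

print(rate, error_ratio, p50)
```

In practice these aggregates are computed by the monitoring backend (e.g., as PromQL queries over counters and histograms) rather than in application code.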
Amazon S3 Lifecycle pauses actions on objects that are unable to replicate
9 April 2026 @ 5:33 pm
Amazon S3 Lifecycle now prevents expiration and transition actions on objects that failed replication, helping you to coordinate replication configuration or permissions changes with actions defined in your lifecycle rules.
Incorrect permissions or replication configuration can prevent objects from being replicated. With this change, S3 Lifecycle no longer expires or transitions objects that have failed replication, even if they match one of the lifecycle rules that you have defined. Once you have corrected your replication configuration or permissions, you can use S3 Batch Replication to replicate objects that previously failed. After successful replication, S3 Lifecycle will automatically process these objects according to your configured rules.
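The behavior change amounts to an extra predicate in lifecycle evaluation: an object that matches an expiration rule is now skipped while its replication status is failed. A toy model, with illustrative field names rather than the actual S3 internals:

```python
# Simplified object inventory; replication_status values mirror the
# COMPLETED / FAILED statuses S3 reports, but this model is illustrative.
objects = [
    {"key": "logs/a.gz", "age_days": 40, "replication_status": "COMPLETED"},
    {"key": "logs/b.gz", "age_days": 40, "replication_status": "FAILED"},
    {"key": "logs/c.gz", "age_days": 10, "replication_status": "COMPLETED"},
]

def expirable(objs, expire_after_days):
    """Keys eligible for lifecycle expiration under the new behavior."""
    return [
        o["key"] for o in objs
        if o["age_days"] >= expire_after_days
        and o["replication_status"] != "FAILED"  # new: pause actions on failures
    ]

print(expirable(objects, 30))  # logs/b.gz is old enough but held back
```

Once S3 Batch Replication succeeds for the held-back object, its status changes and it becomes eligible on the next lifecycle evaluation, matching the flow described above.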
This change applies automatically to all existing and new S3 Lifecycle configurations, across 37 AWS Regions, including the AWS China and AWS GovCloud (US) Regions. We are in the process of deploying this change and plan to
Amazon RDS Blue/Green Deployments now supports Amazon RDS Proxy
9 April 2026 @ 5:00 pm
Amazon RDS Blue/Green Deployments now supports Amazon RDS Proxy, enabling faster application recovery during switchover by eliminating DNS propagation delays. Blue/Green Deployments create a fully managed staging environment (Green) that allows you to deploy and test production changes, keeping your current production database (Blue) safe. When ready, you can switch over to the new production environment and your applications begin accessing it immediately without any configuration changes. During a Blue/Green Deployment switchover for single-Region configurations, RDS Proxy actively monitors database instances and detects when the Green environment becomes the new production environment. This allows RDS Proxy to quickly redirect connections to the Green environment, enabling faster application recovery. You don't need to modify your drivers or change your existing application setup. Amazon RDS Blue/Green Deployments with Amazon RDS Proxy is available for Amazon Aur
AWS Agent Registry for centralized agent discovery and governance is now available in Preview
9 April 2026 @ 4:00 pm
AWS Agent Registry, available through Amazon Bedrock AgentCore, is now in preview — a private, governed catalog and discovery layer for agents, tools, skills, MCP servers, and custom resources within your organization. It gives teams complete visibility into their AI landscape, enabling them to discover existing agents and tools instead of rebuilding capabilities that already exist. The registry can be accessed via the AgentCore Console UI, APIs (AWS CLI, AWS SDK), or as an MCP server that builders can query and invoke directly from their IDEs. The registry supports both IAM-based and OAuth (custom JWT)-based access.
Teams can register resources manually through the console or API, or use URL-based discovery, which automatically retrieves metadata such as tool schemas and capability descriptions from a live MCP server or agent endpoint. Records go through an approval workflow where administrators can approve records before they become discoverable, and they can plug the registry into
AWS Marketplace announces the Discovery API for programmatic access to catalog data
9 April 2026 @ 4:00 pm
Today, AWS Marketplace announces the Discovery API, giving you programmatic access to product and pricing information across the AWS Marketplace catalog — including SaaS, AI agents and tools, AMIs, containers, and machine learning models. With the Discovery API, buyers can embed catalog data into internal portals, enrich procurement tools with current pricing and offer terms, and streamline vendor evaluation workflows. Sellers and channel partners can surface product listings, public pricing, and private offer details directly within their own websites and storefronts — helping customers browse, compare, and move to purchase without leaving the partner experience. The API provides access to product descriptions, categories, pricing across public and private offers, and offer terms, so you can build experiences tailored to how your organization discovers and procures software through AWS Marketplace. The AWS Marketplace Discovery API is available in US E
Amazon OpenSearch Serverless now supports Zstandard (zstd) codec for index compression
9 April 2026 @ 3:00 pm
Amazon OpenSearch Serverless now supports Zstandard codecs for index storage, giving customers greater control over the trade-off between storage costs and query performance. With this launch, customers can configure Zstandard compression to achieve up to 32% reduction in index size compared to the default LZ4 codec, helping lower managed storage costs for data-intensive workloads.
Customers running large-scale log analytics, observability pipelines, and time-series workloads on Amazon OpenSearch Serverless can benefit most from Zstandard compression where high data volumes make storage efficiency a significant cost driver. The Zstandard compression algorithm is available in two different modes in Amazon OpenSearch Serverless: zstd and zstd_no_dict. Customers can tune the compression level to balance their specific needs: lower levels (e.g., level 1) deliver meaningful storage savings with minimal impact on indexing throughput and query latency, whil
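The level-versus-size tradeoff described above can be demonstrated with Python's standard `zlib` module as a stand-in (Zstandard itself requires the third-party `zstandard` package, so this is an analogy for the tradeoff, not the zstd codec): higher compression levels shrink repetitive, log-like data further at the cost of more CPU per write.

```python
import zlib

# Repetitive log-like payload, the kind of data where codec choice matters most.
payload = b"ts=2026-04-09 level=INFO msg=request served route=/api/v1/items\n" * 500

# Compare compressed sizes at a fast level, the default, and the max level.
sizes = {level: len(zlib.compress(payload, level)) for level in (1, 6, 9)}
ratios = {level: round(len(payload) / size, 1) for level, size in sizes.items()}
print(ratios)  # higher levels compress harder but spend more CPU
```

The same shape of tradeoff is what the zstd and zstd_no_dict modes expose: pick a low level when indexing throughput dominates, a higher one when storage cost does.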
Amazon EC2 Capacity Manager now supports tag-based dimensions
9 April 2026 @ 7:00 am
Starting today, Amazon EC2 Capacity Manager supports tag-based dimensions, enabling you to use tags from your EC2 resources to group and filter capacity metrics. EC2 Capacity Manager helps you monitor and optimize capacity usage across On-Demand Instances, Spot Instances, and Capacity Reservations. This launch also introduces Account Name as a new built-in dimension.
You can activate up to five custom tag keys — such as environment, team, or cost-center — and use them alongside built-in dimensions like Region, Instance Type, and Availability Zone to group and filter capacity metrics by tag values in the console and APIs, and include tag data as additional columns in newly created S3 data exports. Capacity Manager also includes four Capacity Manager-provided tags by default: EC2 Auto Scaling group name, EKS cluster name, EKS Kubernetes node pool, and Karpenter node pool. The new Account Name dimension makes it easier to identify accounts when analyzing cross-account capac
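Grouping capacity metrics by an activated tag key works like any group-by over tagged resources. A minimal sketch with hypothetical instance records (the field names here are illustrative, not the Capacity Manager API):

```python
from collections import defaultdict

# Illustrative inventory; `tags` mirrors activated custom tag keys.
instances = [
    {"type": "m5.large",  "az": "us-east-1a", "vcpus": 2,
     "tags": {"team": "search", "environment": "prod"}},
    {"type": "c5.xlarge", "az": "us-east-1b", "vcpus": 4,
     "tags": {"team": "search", "environment": "dev"}},
    {"type": "m5.large",  "az": "us-east-1a", "vcpus": 2,
     "tags": {"team": "ads", "environment": "prod"}},
]

def vcpus_by_dimension(items, tag_key):
    """Aggregate a capacity metric (vCPUs) by one tag-based dimension."""
    totals = defaultdict(int)
    for item in items:
        totals[item["tags"].get(tag_key, "(untagged)")] += item["vcpus"]
    return dict(totals)

print(vcpus_by_dimension(instances, "team"))
print(vcpus_by_dimension(instances, "environment"))
```

The same records can be sliced by either activated key, which is the point of tag-based dimensions: one dataset, many groupings, without separate exports per team or environment.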
SageMaker HyperPod now supports gang scheduling for distributed training workloads
8 April 2026 @ 7:26 pm
Amazon SageMaker HyperPod task governance now supports gang scheduling, which ensures all pods required for a distributed training job are ready before training begins. Administrators can configure gang scheduling to prevent wasted compute from partial job runs and avoid deadlocks from jobs waiting for resources. Data scientists running distributed AI/ML training jobs on Amazon SageMaker HyperPod clusters using the EKS orchestrator require multiple pods to work together across nodes with pod-to-pod communication. When some pods start but others do not, jobs can hold onto resources without making progress, block other workloads, and increase costs. Gang scheduling resolves this by monitoring all pods in a workload and pulling the workload back if not all pods are ready within a set time. Pulled-back workloads are automatically requeued to prevent stalling. Administrators can adjust settings on the HyperPod Console, such as how long to wait for pods to be ready, how to handle
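The all-or-nothing decision gang scheduling makes can be modeled in a few lines. This is a toy model of the policy described above — the function and return values are illustrative, not the HyperPod API:

```python
def gang_schedule(pod_ready_times, timeout):
    """Decide the fate of a gang of pods.

    pod_ready_times: seconds after scheduling at which each pod becomes
    Ready (None = never became Ready). The job starts only if every pod
    is Ready within `timeout`; otherwise the whole workload is pulled
    back and requeued so partial gangs never hold resources.
    """
    if all(t is not None and t <= timeout for t in pod_ready_times):
        return "started"
    return "requeued"

print(gang_schedule([1.0, 2.5, 3.0], timeout=10))   # all Ready in time
print(gang_schedule([1.0, None, 3.0], timeout=10))  # one pod never Ready
```

The key property is atomicity: a gang either runs with every pod it needs or releases everything, which is what prevents the deadlocks and stranded capacity described above.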