AWS is announcing the release of Amazon S3 Files, a new file system that seamlessly connects any AWS compute resource with Amazon Simple Storage Service (Amazon S3).
With S3 Files, Amazon S3 is the first and only cloud object store that offers fully featured, high-performance file system access to your data, the company said.
It makes buckets accessible as file systems, meaning changes to data on the file system are automatically reflected in the S3 bucket and users have fine-grained control over synchronization. S3 Files can be attached to multiple compute resources, enabling data sharing across clusters without duplication.
Until now, you had to choose between Amazon S3's cost, durability, and native service integrations on one hand, and a file system's interactive capabilities on the other. S3 Files eliminates that tradeoff.
S3 now becomes the central hub for all of an organization's data. It's accessible directly from any AWS compute instance, container, or function, whether you're running production applications, training ML models, or building agentic AI systems, according to AWS.
Users can access any general purpose bucket as a native file system on Amazon Elastic Compute Cloud (Amazon EC2) instances, containers running on Amazon Elastic Container Service (Amazon ECS) or Amazon Elastic Kubernetes Service (Amazon EKS), or AWS Lambda functions.
The file system presents S3 objects as files and directories, supporting all Network File System (NFS) v4.1+ operations like creating, reading, updating, and deleting files.
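Once mounted, the bucket behaves like an ordinary POSIX directory, so standard file APIs cover the create/read/update/delete cycle. A minimal sketch, assuming a hypothetical mount point (the `S3FILES_MOUNT` variable name and `/mnt` path are illustrative, not documented; the code falls back to a temporary directory so it runs anywhere):

```python
import os
import tempfile
from pathlib import Path

# Hypothetical mount point for the S3-backed file system; the env var
# name is an assumption. Falls back to a temp dir for illustration.
base = Path(os.environ.get("S3FILES_MOUNT", tempfile.mkdtemp()))

# Create: writing a file would create the corresponding object in the bucket.
report = base / "reports" / "daily.txt"
report.parent.mkdir(parents=True, exist_ok=True)
report.write_text("rows=120\n")

# Read it back through the same path.
content = report.read_text()
print(content)

# Update: the change would be reflected back in the S3 bucket.
report.write_text("rows=121\n")

# Delete: removes the file (and, on S3 Files, the backing object).
report.unlink()
print(report.exists())
```

Nothing here is specific to S3 Files; that is the point of the NFS interface, since existing file-based tools work unchanged.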
By default, files that benefit from low-latency access are stored on and served from high-performance storage. For files not kept there, such as those needing large sequential reads, S3 Files serves the data directly from Amazon S3 to maximize throughput. For byte-range reads, only the requested bytes are transferred, minimizing data movement and costs.
The system also supports intelligent pre-fetching to anticipate data access needs. You have fine-grained control over what gets stored on the file system’s high-performance storage.
Under the hood, S3 Files uses Amazon Elastic File System (Amazon EFS). The file system supports concurrent access from multiple compute resources with NFS close-to-open consistency, making it ideal for interactive, shared workloads that mutate data, from AI agents collaborating through file-based tools to ML training pipelines processing datasets.
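Close-to-open consistency means that a client opening a file after another client has closed it is guaranteed to see that writer's data. A sketch with two handles on one machine standing in for two NFS clients (the shared file name is illustrative):

```python
import os
import tempfile

# Illustrative shared path; on S3 Files this would live on the mount.
path = os.path.join(tempfile.mkdtemp(), "state.json")

# Writer "client": under close-to-open semantics, its changes are
# guaranteed visible to other clients only once it closes the file.
with open(path, "w") as writer:
    writer.write('{"step": 1}')
# File closed here; the write is now committed for other clients.

# Reader "client": opening after the close observes the full write.
with open(path) as reader:
    data = reader.read()

print(data)  # {"step": 1}
```

Writes made while a file is still open carry no such guarantee, which is why collaborating agents or pipeline stages should hand off data at file-close boundaries.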
S3 Files works best when you need interactive, shared access to data that lives in Amazon S3 through a high-performance file system interface. It’s ideal for workloads where multiple compute resources—whether production applications, agentic AI agents using Python libraries and CLI tools, or machine learning (ML) training pipelines—need to read, write, and mutate data collaboratively. Users get shared access across compute clusters without data duplication, sub-millisecond latency, and automatic synchronization with your S3 bucket, the company said.
S3 Files is available today in all commercial AWS Regions.
For more information about this news, visit https://aws.amazon.com.