S3 File System

Provides an Amazon S3-based remote file system to Drupal, allowing files to be stored in and served from S3 or any S3-compatible storage service.

s3fs
12,360 sites
drupal.org

Install

Drupal 8, 10, and 11 (8.x-3.9):
composer require 'drupal/s3fs:8.x-3.9'
Drupal 9 (8.x-3.7):
composer require 'drupal/s3fs:8.x-3.7'

Overview

S3 File System (s3fs) provides an additional file system to your Drupal site, alongside the public and private file systems, which stores files in Amazon's Simple Storage Service (S3) or any S3-compatible storage service. You can set your site to use S3 File System as the default, or use it only for individual fields.

This functionality is designed for sites which are load-balanced across multiple servers, as the mechanism used by Drupal's default file systems is not viable under such a configuration. The module uses a metadata cache that keeps track of every file stored in S3, which speeds up file system operations significantly.

S3fs uses a modular architecture with separate submodules for S3 client creation, bucket configuration, stream wrapper provision, and CSS/JS optimization. This allows for flexible configuration of multiple S3 buckets with different settings and credentials.

Features

  • Store and serve files from Amazon S3 or any S3-compatible storage (MinIO, DigitalOcean Spaces, Backblaze B2, etc.)
  • Multiple bucket configuration support with separate credentials and settings per bucket
  • Take over public:// and/or private:// file systems to store all files in S3
  • Custom stream wrapper creation for multiple S3-backed file schemes
  • File metadata caching in Drupal's database for improved performance
  • Presigned URL support for time-limited access to private files
  • CNAME/CDN support for custom domain file serving
  • Server-side encryption support (AES256 and AWS KMS)
  • CSS/JS link rewriting for assets stored in S3
  • Image style derivative generation with S3 storage
  • Integration with Key module for secure credential management
  • AWS credential caching support using Doctrine Cache
  • Drush commands for cache refresh and bucket management
  • Migration support from Drupal 7 S3FS installations
  • Support for versioned buckets and version-specific file access

Use Cases

Load-balanced Multi-server Deployment

For Drupal sites running on multiple web servers behind a load balancer, s3fs solves the file synchronization problem by storing all files in a central S3 bucket. All servers access the same files without needing shared NFS mounts or rsync solutions.

CDN Integration

Configure s3fs with a CNAME pointing to a CloudFront distribution or other CDN. Files are served from edge locations worldwide, reducing latency and server load. Use the custom domain settings and Cache-Control headers for optimal CDN caching.

Private File Access with Presigned URLs

Store sensitive files in S3 with private ACL and use presigned URLs to grant time-limited access. Configure presigned URL patterns to automatically generate temporary access links for specific file paths.
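The hook_s3fs_url_settings_alter hook (documented under Hooks below) is the usual place to enable presigned URLs per path. A minimal sketch, assuming the 8.x-3.x hook signature from s3fs.api.php and the 'presigned_url'/'timeout' keys; the prefix 'private-docs/' and the module name 'mymodule' are illustrative:

```php
<?php

/**
 * Implements hook_s3fs_url_settings_alter().
 *
 * Sketch: serve anything under the hypothetical private-docs/ prefix
 * via a short-lived presigned URL. Verify the available keys against
 * s3fs.api.php in your installed release.
 */
function mymodule_s3fs_url_settings_alter(array &$url_settings, $s3_file_path) {
  if (strpos($s3_file_path, 'private-docs/') === 0) {
    $url_settings['presigned_url'] = TRUE;
    // Link expires after five minutes.
    $url_settings['timeout'] = 300;
  }
}
```

Because the hook receives the file path, you can scope presigning to exactly the directories that hold sensitive content while leaving public assets on plain URLs.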

S3-Compatible Storage Services

Use s3fs with any S3-compatible storage service like MinIO, DigitalOcean Spaces, Backblaze B2, or Wasabi. Configure custom endpoints and path-style URLs as needed for your provider.
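A settings.php sketch for a custom endpoint such as MinIO. The key names assume the 8.x-3.x release's config schema ($settings for credentials, $config['s3fs.settings'] for connection options); the hostname, bucket name, and environment variable names are placeholders, so confirm the keys against your release before relying on them:

```php
<?php

// settings.php sketch for a self-hosted MinIO endpoint.
$settings['s3fs.access_key'] = getenv('MINIO_ACCESS_KEY');
$settings['s3fs.secret_key'] = getenv('MINIO_SECRET_KEY');
$config['s3fs.settings']['bucket'] = 'drupal-files';
$config['s3fs.settings']['use_customhost'] = TRUE;
$config['s3fs.settings']['hostname'] = 'minio.example.com:9000';
// Many S3-compatible services only support path-style URLs.
$config['s3fs.settings']['use_path_style_endpoint'] = TRUE;
```

Keeping these overrides in settings.php (or an environment-specific include) also keeps credentials out of exported configuration.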

Multiple Bucket Configuration

Configure multiple S3 buckets with different credentials and settings. For example, use one bucket for public media files with CDN access and another private bucket for sensitive documents with encryption.

Encrypted Storage Compliance

Enable server-side encryption (AES256 or AWS KMS) for compliance requirements. Use the encryption hooks to implement customer-managed encryption keys (SSE-C) for specific file paths.
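As a sketch of the encryption hooks, hook_s3fs_upload_params_alter can inject server-side encryption parameters per path. The parameter array mirrors the AWS SDK PutObject arguments; the hook signature assumes the 8.x-3.x s3fs.api.php, and the 'hr-records/' prefix, module name, and KMS key ARN are illustrative:

```php
<?php

/**
 * Implements hook_s3fs_upload_params_alter().
 *
 * Sketch: request SSE-KMS for uploads under a hypothetical
 * hr-records/ prefix, leaving other uploads at the bucket default.
 */
function mymodule_s3fs_upload_params_alter(array &$upload_params) {
  if (isset($upload_params['Key']) && strpos($upload_params['Key'], 'hr-records/') === 0) {
    $upload_params['ServerSideEncryption'] = 'aws:kms';
    // Replace with your customer-managed KMS key ARN.
    $upload_params['SSEKMSKeyId'] = 'arn:aws:kms:us-east-1:111122223333:key/EXAMPLE';
  }
}
```

The companion hooks (hook_s3fs_stream_open_params_alter, hook_s3fs_copy_params_alter) take the same approach on the read and copy paths, which is what SSE-C setups need so that reads supply the same customer key used at upload.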

Tips

  • Always store your php_storage/twig directory locally when using public:// takeover. Twig files in S3 pose security and performance risks.
  • Use the Key module for credential management rather than storing credentials in settings.php for better security.
  • Clear Drupal's cache after enabling or disabling public:// or private:// stream wrapper takeover.
  • For custom endpoints (MinIO, etc.), enable 'Use path-style endpoint' if your provider doesn't support virtual-hosted-style URLs.
  • Set appropriate Cache-Control headers (e.g., 'public, max-age=31536000') for static assets to maximize CDN and browser caching.
  • Use presigned URLs for private files that need temporary public access rather than making the entire bucket public.
  • When migrating from local storage, use 'drush s3fs:copy-local' to copy files while maintaining the metadata cache.
  • Test your configuration using the validation button on the bucket actions page before deploying to production.
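The Cache-Control tip above can be applied globally via a settings.php override. This assumes the module exposes a 'cache_control_header' setting, as recent 8.x-3.x releases do; confirm the key name in the s3fs admin form or config schema for your release:

```php
<?php

// settings.php sketch: send a long-lived Cache-Control header with
// every object s3fs uploads, so CDNs and browsers cache aggressively.
$config['s3fs.settings']['cache_control_header'] = 'public, max-age=31536000';
```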

Technical Details

Admin Pages 7
S3 /admin/config/s3

Main S3 configuration section containing links to S3 Buckets, StreamWrappers, and CSS/JS settings.

S3 Buckets /admin/config/s3/s3-bucket

List of configured S3 bucket connections. Manage bucket configurations including credentials, endpoints, and storage settings.

Add S3 Bucket /admin/config/s3/s3-bucket/add

Create a new S3 bucket configuration with credentials and connection settings.

S3 Bucket Actions /admin/config/s3/s3-bucket/{s3fs_bucket}/actions

Perform administrative actions on the S3 bucket such as refreshing the metadata cache.

S3 StreamWrapper Config /admin/config/s3/streamwrapper

List of configured S3 stream wrappers. Each stream wrapper defines a custom URI scheme for accessing files in S3.

Add S3 StreamWrapper /admin/config/s3/streamwrapper/add

Create a new stream wrapper configuration that maps a URI scheme to an S3 bucket.

S3 CSS/JS rewrite settings /admin/config/s3/cssjs

Configure how URLs inside CSS and JavaScript files are rewritten when stored in S3.

Permissions 3
Administer S3 File System

Administer S3 File System settings. This is a restricted permission.

Administer s3 buckets

Manage S3 bucket configuration entities.

Administer s3 streamWrappers

Manage S3 stream wrapper configuration entities.

Hooks 6
hook_s3fs_url_settings_alter

Alters the format and options used when creating an external URL. Allows modification of presigned URL settings, timeouts, and custom GET arguments.

hook_s3fs_stream_open_params_alter

Alters the S3 file parameters when a stream is opened. Useful for adding server-side encryption parameters.

hook_s3fs_upload_params_alter

Alters the S3 file parameters when uploading an object. Allows modification of ACL, encryption, and other upload parameters.

hook_s3fs_copy_params_alter

Alters the S3 parameters when copying or renaming files. Useful for handling encrypted source and destination files.

hook_s3fs_command_params_alter

Alters the S3 parameters returned by getCommandParams(). Impacts calls such as obtaining metadata (HeadObject).

hook_s3fs_bucket_command_params_alter

Alters the S3 parameters at the bucket plugin level. Similar to hook_s3fs_command_params_alter but with additional context including bucket_id.

Drush Commands 2
drush s3fs:list-buckets

List all configured S3 bucket entities with their status.

drush s3fs:refresh-cache

Refresh the S3 File System metadata cache for a specific bucket.

Troubleshooting 7
Files uploaded to S3 manually are not visible to Drupal

Run the metadata cache refresh via Drush (drush s3fs:refresh-cache --bucket=BUCKET_NAME) or through the admin UI at /admin/config/s3/s3-bucket/BUCKET_NAME/actions. S3fs requires its metadata cache to know about files.

Folder Integrity Constraint Violation during Metadata Refresh

This occurs when an object exists at both '/path/to/object' and '/path/to/object/another_object' in the bucket. Either remove/rename the root object or remove/rename all objects with the same path prefix.

Access Denied (403) errors when accessing files

If your bucket enforces BlockPublicAcls, enable the 'Upload all files as private in S3' option so s3fs stops sending public ACLs on upload. Also verify that the IAM policy grants all required actions and that the bucket CORS policy allows GET requests.

Image styles not generating or displaying correctly

For Nginx servers, add this location block to your server configuration so image style requests fall through to Drupal:

    location ~ ^/s3/styles/ {
        try_files $uri @rewrite;
    }

CSS/JS files have broken asset references

Ensure the s3fs_cssjs module is enabled when using public:// takeover. Configure CORS on your S3 bucket to allow GET requests from your site's domain.
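A minimal bucket CORS policy in the AWS S3 JSON format, restricting cross-origin GET/HEAD to a single site origin (replace the example domain with your own):

```json
[
  {
    "AllowedOrigins": ["https://www.example.com"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3600
  }
]
```

Fonts and JavaScript source maps are the assets most commonly blocked by a missing CORS rule, so test those first after applying the policy.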

AWS credential rate limiting errors

Enable credential caching by installing doctrine/cache (composer require 'doctrine/cache:~1.4') and configuring the 'Cached Credentials Directory' setting.

Maximum URI length exceeded errors

S3fs limits file URIs to 255 characters because of MySQL index key-length constraints on the metadata cache table. Shorten file names or reduce directory nesting depth.

Security Notes 6
  • S3 credentials should be stored outside the web root. Use the Key module, environment variables, or IAM roles rather than settings.php when possible.
  • When credential caching is enabled, credentials are stored in plain text on the filesystem. Ensure the cache directory is outside the docroot and properly secured.
  • Disabling SSL/TLS verification is a security risk and should only be used for development with self-signed certificates.
  • Files stored with public ACL are accessible to anyone with the URL. Use 'upload_as_private' for sensitive content and presigned URLs for controlled access.
  • The 'Ignore file metadata cache' option causes all requests to hit S3 directly, which could expose your bucket to excessive API calls and potential DDoS.
  • When using public:// takeover, ensure your bucket CORS policy only allows requests from your domain, not wildcard origins.