Building a Ransomware-Resistant S3 Storage with MinIO and Versioning

Hello everyone!
Today we’re diving into something unusual — setting up a "protected" S3 storage with versioning. Let's first break down what this means and why it matters.
Not long ago, I had a conversation with friends about protecting backups from ransomware attacks. The main requirement was that even if a ransomware virus managed to infiltrate the system and encrypt some or all files, it wouldn't be able to encrypt the backups.
Obviously, if your backups are stored on a mounted network drive (SMB share), the virus will encrypt them just like any local files.
Even if you set up an FTP server with delete permissions restricted, the virus could still overwrite existing backups with encrypted versions — because it would likely have write permissions, allowing it to overwrite old files.
During our discussion, we concluded that we need a backup system where:
- Every new version is stored separately (!)
- Files cannot be overwritten or deleted.
There are a few ways to set this up, but we chose to build an S3-compatible object storage.
What is S3-compatible object storage?
It’s a service or software solution that implements the same HTTP API as Amazon S3.
- Basic model: store "objects" (files with metadata) inside "buckets" (containers) accessible via REST API (GET, PUT, DELETE, etc.).
- Main advantage: you can use existing S3 clients, SDKs, and tools (AWS CLI, boto3, s3cmd) without modifications.
- Examples: MinIO, Ceph RADOS Gateway, Wasabi, and many cloud providers offering S3-compatible interfaces.
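To illustrate the "existing tools just work" point: the AWS CLI can talk to any S3-compatible server simply by overriding the endpoint URL. The address and profile name below are placeholders for my setup — adjust them to yours:

```shell
# Placeholder endpoint and profile; adjust to your own setup.
# Store credentials once (interactive prompt):
aws configure --profile minio

# List buckets on a MinIO server instead of AWS:
aws --profile minio --endpoint-url http://192.168.178.200:19000 s3 ls

# Upload a file exactly as you would with real S3:
aws --profile minio --endpoint-url http://192.168.178.200:19000 s3 cp ./backup.tar.gz s3://backup/
```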
For implementation, we chose MinIO because it’s open source and can be deployed in Docker on x86_64 or ARM architectures. Initially, I deployed it on my Raspberry Pi via Portainer under Umbrel OS, but later I migrated it to a Synology NAS using Container Manager. Here’s the Docker Compose (YAML) configuration I used:
```yaml
version: "3.9"
services:
  minio:
    image: minio/minio:RELEASE.2025-04-08T15-41-24Z
    container_name: minio
    restart: unless-stopped
    environment:
      MINIO_ROOT_USER: decker
      MINIO_ROOT_PASSWORD: "myverysecretrootpassword"
      MINIO_SERVER_URL: "http://192.168.178.200:19000"
      MINIO_BROWSER_REDIRECT_URL: "http://192.168.178.200:19001"
      MINIO_BROWSER_REDIRECT: "true"
      # MINIO_SERVER_URL: "https://s3.example.org"
      # MINIO_BROWSER_REDIRECT_URL: "https://s3.example.org/console/"
    volumes:
      - minio-data:/data
    ports:
      - "19000:9000"   # S3 API
      - "19001:9001"   # Web console
    command: server /data --console-address ":9001"
volumes:
  minio-data:
```
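A side note: the compose file above ships a placeholder root password. One simple way to generate a stronger one before deploying is:

```shell
# Generate a random 24-byte password, base64-encoded (32 characters)
openssl rand -base64 24
```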
If you follow this setup, your MinIO instance will be accessible at 192.168.178.200 (replace with your IP), with:
- Port 19000 for the S3 endpoint,
- Port 19001 for the web console (both over HTTP).
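Once the container is up, you can check that the S3 endpoint responds using MinIO's liveness probe (the IP is my example address; adjust to yours):

```shell
# Returns HTTP 200 when the server is healthy
curl -i http://192.168.178.200:19000/minio/health/live
```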
After launching, log into the console with your MINIO_ROOT_USER credentials. You’ll need to create your first bucket, set up an IAM policy, and add a user.
When creating the bucket, enable Object Locking — this automatically enables Versioning. Set the Mode to Governance so that only users holding the s3:BypassGovernanceRetention permission (such as root) can delete object versions. (If you want an even stricter policy, use Compliance mode.)
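If you prefer the command line over the web console, the same bucket setup can be scripted with MinIO's mc client. The alias name, credentials, and 30-day retention period below are my examples, not required values:

```shell
# Register the server under the alias "myminio"
mc alias set myminio http://192.168.178.200:19000 decker myverysecretrootpassword

# Create the bucket with Object Locking enabled (it cannot be enabled later!)
mc mb --with-lock myminio/backup

# Set a default Governance-mode retention of 30 days for new objects
mc retention set --default GOVERNANCE 30d myminio/backup
```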
Next, create a new policy (in my case called AppendOnly_v2) that looks like this:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:ListBucketVersions",
        "s3:ListMultipartUploadParts",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::backup",
        "arn:aws:s3:::backup/*",
        "arn:aws:s3:::photos",
        "arn:aws:s3:::photos/*"
      ]
    },
    {
      "Sid": "DeleteLocks",
      "Effect": "Allow",
      "Action": ["s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::backup/locks/*"]
    },
    {
      "Effect": "Deny",
      "Action": ["s3:DeleteObject"],
      "NotResource": ["arn:aws:s3:::backup/locks/*"]
    }
  ]
}
```
This policy:
- Allows access to two buckets: backup and photos,
- Allows deleting objects only inside backup/locks,
- Denies delete actions everywhere else.
This setup is great for using your S3 bucket with rclone and restic.
In recent rclone versions, add this to your remote’s section in $HOME/.config/rclone/rclone.conf:

```ini
# The --s3-no-check-bucket option is only required with rclone > v1.50 and gateway < 7.1 to avoid 409 errors (CLOUD-3213).
no_check_bucket = true
```
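For reference, a complete minio remote section in rclone.conf might look like this — the endpoint and keys are placeholders for my setup:

```ini
[minio]
type = s3
provider = Minio
access_key_id = resticuser
secret_access_key = yoursecretkey
endpoint = http://192.168.178.200:19000
no_check_bucket = true
```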
Now add a user (for example, resticuser) and assign the created policy.
(We’re not covering restic usage here; feel free to Google it if interested.)
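The user-and-policy step can also be scripted with mc instead of the web console; the user name and password below are examples:

```shell
# Create the user
mc admin user add myminio resticuser averysecretpassword

# Attach the AppendOnly_v2 policy to that user
mc admin policy attach myminio AppendOnly_v2 --user resticuser
```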
Using rclone, if you named your remote minio, you can upload a file like this:

```shell
rclone -v copyto ./upload-test.txt minio:backup/upload-test.txt
# or
# rclone copy ./upload-test.txt minio:backup
```
Running this command five times with slight changes to the file will store five different versions in the bucket!
In the admin console, you’ll see the version history for each object.
For regular users (with AppendOnly_v2 applied):
- They cannot view file versions,
- They cannot delete files or versions,
- They can only upload or download the latest version — exactly what we wanted!
Thus, even if a virus steals a regular user’s credentials, it cannot damage your existing backups.
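You can verify these guarantees yourself with the restricted user's remote. Assuming the minio remote is configured with resticuser's keys, the delete attempt should be rejected by the policy:

```shell
# Upload and overwrite both work (each overwrite becomes a new version)
rclone copyto ./upload-test.txt minio:backup/upload-test.txt

# Deleting should fail with an AccessDenied error
rclone deletefile minio:backup/upload-test.txt
```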
Securing with HTTPS
Currently, the storage runs over HTTP.
You can go further and expose it over HTTPS using Nginx Proxy Manager within the same network.
Set it up so your storage is available at:
- https://s3.example.org for the S3 endpoint,
- https://s3.example.org/console for the web console.
Important: in the Forward Hostname/IP field, make sure you include the trailing /. Without it, images and subpaths like /console/subpath may not load properly.
If you encounter problems, try creating the config without the trailing slash first, then edit it to add it afterward.
Final Notes
This setup was used only in a local network — it wasn’t exposed to the internet.
If you want to make it public, additional security configurations are highly recommended (or better yet, keep it local and access it via VPN if needed).
This post is not meant to be a comprehensive guide or free of mistakes — it’s simply a starting point for your own exploration.
For any missing details, documentation, or clarifications, feel free to search Google or ask ChatGPT!