⚠️ Object Storage is currently in Beta test. For more information see this FAQ: Object Storage » Beta test
How can I access my Buckets?
For essential tasks, such as creating a Bucket, you can use the Cloud Console. To efficiently manage the files in your Buckets and fully leverage all the features Object Storage offers, you need to use the Hetzner S3 API via an Amazon S3 compatible tool. It is possible to use the API directly, but this is not recommended because you would have to generate the request signature yourself, which is quite complicated. In summary, your options are:
- Amazon S3 REST API with the Hetzner S3 endpoint
- Tools that support an Amazon S3 compatible API (e.g. S3cmd or MinIO)
- Cloud Console (only essential tasks)
You need an access key and a secret key to use the S3 API. You can create those via the Cloud Console. By default, each access key & secret key pair grants access to every Bucket within the same project.
For detailed instructions on how to create and manage Buckets via an Amazon S3 compatible API tool or the Cloud Console, navigate to Object Storage » Getting Started in the left menu bar and select your preferred option.
If you set the Bucket visibility to "public", you can access your files via the URL in a web browser.
Who can make changes to Buckets?
Via the Cloud Console, all members and above have permission to make changes to Buckets (add / delete Buckets).
Via the S3 API, anyone with a valid access key and secret key can upload and delete files, even someone without a Hetzner account.
- If you set your Bucket visibility to "private", you just need to give anyone who needs access an access key and a secret key. Note that a private Bucket can include public objects if the access permissions are customized accordingly.
- If you set your Bucket visibility to "public", you don't even need to provide access keys and secret keys for read access. Anyone who knows the Bucket URL and the file name can view and download those files (file listing remains denied). Write access (e.g. adding files) still requires an access key and a secret key.
How do I upload an entire directory at once?
This depends on the S3-compatible tool you're using, so we recommend reading their documentation. With the MinIO Client, for example, you could use this command:
mc mirror example_directory <alias_name>/<bucket_name>
example_directory
├── file1
├── file2
└── file3
This will automatically upload file1, file2, and file3 to your Bucket.
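Other S3-compatible tools offer similar recursive commands. As a sketch (the bucket name and directory are placeholders; adjust endpoint and credentials to your own configuration):

```shell
# AWS CLI: copy a local directory recursively into the Bucket
aws s3 cp example_directory s3://<bucket_name>/ --recursive

# s3cmd: recursive upload of a directory
s3cmd put --recursive example_directory s3://<bucket_name>/
```

Both tools preserve the relative paths of the files as part of the object names.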
Can I edit a file after uploading it to a Bucket?
No, with Object Storage you cannot edit files because objects are immutable. To "update" a file, you need to upload the new version as a new object. If you use the same object name, it will automatically overwrite the existing object.
What Bucket names are allowed?
The name has to be:
- Valid as per RFC 1123 (see "2.1 Host Names and Numbers")
- Not formatted as an IP address (e.g. 203.0.113.1)
- Unique among all Hetzner Object Storage users and across all locations
Uniqueness is enforced Hetzner-wide: two Buckets can never share the same name, even if they are in different locations. If another customer already has a Bucket with your preferred name, you will have to choose a different one.
The Bucket name will be part of the Bucket URL, which is why it has to adhere to the host name requirements. Some of the rules include:
- You can use lowercase letters (a-z), digits (0-9), and the minus sign (-)
- No periods (.)
- No blank or space characters
- No upper case characters
- The first character must be a letter or a digit
- The last character must not be a minus sign
- Between 3 and 63 characters are allowed
Note that it is NOT possible to change the name once the Bucket is created.
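As a quick sanity check before creating a Bucket, the rules above can be approximated with a small shell function. This is only a sketch; the official validation may differ in edge cases:

```shell
# Sketch: approximate check of the Bucket name rules above.
# Returns success (0) if the name looks valid, failure (1) otherwise.
valid_bucket_name() {
  # 3-63 chars, lowercase letters/digits/minus, first char a letter or digit,
  # last char not a minus sign
  echo "$1" | grep -Eq '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$' || return 1
  # must not be formatted as an IP address (e.g. 203.0.113.1); technically
  # redundant given the no-period rule, kept here to mirror the stated rules
  echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' && return 1
  return 0
}

valid_bucket_name "my-bucket-01" && echo "valid"   # prints "valid"
valid_bucket_name "Bad.Name" || echo "invalid"     # prints "invalid"
```

Uniqueness, of course, can only be checked by the server when you actually try to create the Bucket.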
What object names are allowed?
When you name an object, you should note the following rules:
- Up to 1024 bytes (equivalent to 1024 US-ASCII characters)
- You can use the alphabet (a-z) and digits (0-9)
- You can use special characters, e.g. "!", "-", ".", "*", "'", "(" and ")"
- You can use UTF-8 characters. Note that those characters could cause issues.
In Buckets, it is not possible to add directories or subdirectories. To get a hierarchical structure, you would need to add "/" to the object name.
Examples:
website/images/example1.jpg
website/images/example2.jpg
backup/snapshot.bak
backup/mysqldump.dmp
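With the MinIO Client, for example, such a hierarchy could be created simply by uploading to the full object name (the alias, Bucket, and file names below are placeholders):

```shell
# Upload a local file under a "directory-like" object name; the prefix
# website/images/ is part of the object name, not a real directory.
mc cp example1.jpg <alias_name>/<bucket_name>/website/images/example1.jpg

# List objects under a prefix as if it were a directory
mc ls <alias_name>/<bucket_name>/website/images/
```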
How long are Bucket names blocked from being re-used after deletion?
When you delete a Bucket, the name will become available again after 14 days.
Are Buckets movable?
Yes, it is possible to move Buckets from one project to another project. You will have to do this via Cloud Console. You cannot use the S3 API to change the project. In order to move a Bucket, you need the following permissions:
- Source project: Owner
- Target project: Member or above
As soon as the Bucket is moved to the target project, the owner of the target project will be billed for the Bucket.
How do I protect my Bucket from being deleted by accident?
You can protect your Buckets with the "protected" property in the Cloud Console. The "protected" property disables deletion. Before you can delete a protected Bucket, you first have to deactivate this property.
In Cloud Console, protected resources are indicated by a lock icon on the Bucket list view.
How do I protect my objects from getting deleted by accident?
Manually deleting objects by accident is not the only risk. When you upload a new object with the same name as an existing one in the Bucket, the existing object is automatically deleted and replaced by the new object. If you're not careful with your naming scheme, you might end up losing important data by accident.
To protect objects from getting deleted automatically, you can use versioning.
To protect objects from getting deleted manually, you can use object locking. Note that you must enable object locking when you create the Bucket, otherwise you won't be able to use it. To enable object locking during Bucket creation, you have to use an S3-compatible tool as explained in this how-to guide.
This gives you the following options to choose from:

- Versioning: manual deletion allowed.
- Object locking: manual deletion disabled.
  - Legal Hold
  - Retention
    - Governance Mode
    - Compliance Mode

For more information on each option, see these Amazon S3 articles: Versioning, Retention, Legal Hold.
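With the AWS CLI, for example, enabling object locking at creation time could look like the following sketch. The Bucket name and endpoint URL are placeholders for your own values:

```shell
# Create a Bucket with object locking enabled (this cannot be enabled later)
aws s3api create-bucket \
  --bucket <bucket_name> \
  --object-lock-enabled-for-bucket \
  --endpoint-url https://<location>.your-objectstorage.com
```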
What is the difference between versioning and object locking?
Versioning allows you to disable automatic deletion of objects. Each object is automatically assigned a version ID, which allows you to keep several versions of the same object in a single Bucket. If you upload an object with a name that already exists in the Bucket (e.g. file_name.txt), the existing object is not deleted; instead, the objects are distinguished via their version IDs. Manual deletion of objects is still possible.
Object Locking allows you to disable manual deletion of selected objects. With object locking, you can choose between the options "legal hold" and "retention". Legal hold protects an object from getting deleted until the legal hold is manually removed again. Retention protects an object from getting deleted until a specified time period has elapsed. Retention has two different modes: "Governance" and "Compliance".
All options in direct comparison:
| | Automatic deletion | Manual deletion | Objects with the same name |
|---|---|---|---|
| Versioning | disabled | allowed | Objects are distinguished via their version ID. |
| Legal Hold | Versioning is automatically enabled and you cannot disable it. | To delete an object, you first have to remove the legal hold. This does not require any special permissions, but it adds an extra step that can help prevent accidental deletion. | Because versioning is automatically enabled, a new object with a different version ID is added. You will need to enable the legal hold again for the new object. |
| Retention (Governance Mode) | Versioning is automatically enabled and you cannot disable it. | Only users with special permissions can end the retention period early and delete the object before the original retention period has ended. | Because versioning is automatically enabled, a new object with a different version ID is added. You will need to set retention again for the new object. |
| Retention (Compliance Mode) | Versioning is automatically enabled and you cannot disable it. | No one can end the retention period early, and it is not possible to delete the object before the retention period has ended. | Because versioning is automatically enabled, a new object with a different version ID is added. You will need to set retention again for the new object. |
Does the visibility setting apply to all objects within a Bucket?
When you set the visibility to "public" during Bucket creation, we will automatically apply access policies that allow read access to all objects within the Bucket.
When you set up your own access policies, you have the option to exclusively allow read access to objects with a certain prefix (bucket_name/prefix/*) instead of allowing read access to all objects within the Bucket (bucket_name/*).
When you use a client like WinSCP, for example, you might have the option to set access permissions for individual objects. In this case, you could end up with public objects in a Bucket that is marked as "private". To avoid surprises, carefully consider any changes you make to the visibility of individual objects, and document these changes accordingly or track them in another way.
Note, however, that it is not recommended to change the visibility to "public" after you have already added data to your Bucket. If you have to do it, double-check the contents of your Bucket and remove any sensitive data before setting the visibility to "public".
The best practice is to only set empty Buckets to "public" and add data afterwards.
Is it possible to change the visibility of existing Buckets?
Yes, this is possible, but only via an S3 compatible tool. The default visibility of every Bucket is "private". To grant access permissions, Object Storage uses access policies. The policies are defined in a JSON file which is applied to the Bucket.
In other words, the access policies automatically overwrite the default visibility, which is always "private".
- If you set the Bucket visibility to "public" during Bucket creation, we create and apply the access policies for you.
- If you set the Bucket visibility to "private" during Bucket creation, no access policies are added.
How to change the visibility after the Bucket was already created:

- "private" to "public": Add your own access policies
- "public" to "private": Delete existing access policies
Instead of going fully private or fully public, you can apply access policies that simply restrict access (for example to certain IPs).
For more information about policies and available access restrictions, check out the Amazon articles "Policies and permissions in Amazon S3" and "Examples of Amazon S3 bucket policies".
The command to apply the policies depends on the S3-compatible tool you're using, so we recommend reading their documentation (e.g. MinIO Client).
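As an illustration, a minimal policy that allows anonymous read access to all objects in a Bucket could look like this. This is a sketch: replace <bucket_name> with your Bucket name and check the Amazon articles mentioned above for the exact fields your use case needs.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::<bucket_name>/*"]
    }
  ]
}
```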
Can I access a private Bucket via a web browser?
If you want to share individual files from a Bucket with someone who doesn't have their own S3 credentials, you can presign the URL to the file with your own S3 credentials and share the resulting URL. With the presigned URL, anyone can download the file via a web browser or tools like curl or wget without having to provide their own S3 credentials. You can set a time for how long this presigned URL should be valid. After that time, you can no longer use the presigned URL to access the file.
The command to sign a URL depends on the S3-compatible tool you're using (e.g. MinIO Client, AWS CLI, rclone). With the MinIO Client, AWS CLI and S3cmd, for example, you could use these commands:
mc share download <alias-name>/<bucket-name>/<file-name> --expire 12h34m56s # hours, minutes, seconds (default 168h = 7 days)
s3cmd signurl s3://<bucket-name>/<file-name> 1765541532 # use https://epochconverter.com/ to convert the time
aws s3 presign s3://<bucket-name>/<file-name> --expires-in 60480 # number of seconds (default 3600 seconds = 1 hour)
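For the s3cmd variant, you can also compute the expiry timestamp on the command line instead of using an online converter. This assumes GNU date (Linux); macOS date uses different flags:

```shell
# Unix timestamp for "12 hours from now" (GNU date)
date -d '+12 hours' +%s
```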
With the MinIO Client, you can automatically create several presigned URLs at once by not specifying a file name, which means you don't have to create those URLs one by one.
Example:
<bucket-name>
├─ example-file-1
└─ example-file-2
mc share download <alias-name>/<bucket-name>
With the example above, you will get two signed URLs: one for example-file-1 and one for example-file-2.
What are lifecycle policies and how do I use them?
With lifecycle policies, you can:

- Define a timestamp or a time period after which objects expire (e.g. 30 days after creation). Expired objects are automatically deleted. When you apply lifecycle policies to a Bucket, the rules apply to all objects, existing and new.
- Define a time period after which "leftover" parts from an aborted multipart upload are automatically deleted. Without this lifecycle rule, these fragments will remain in your Bucket and use storage until you remove them.
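For the second case, a lifecycle rule that cleans up incomplete multipart uploads could look like the following sketch; the 7-day window is an arbitrary example value:

```json
{
  "Rules": [
    {
      "ID": "abort-incomplete-mpu",
      "Status": "Enabled",
      "Prefix": "",
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
```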
For a step-by-step guide on how to apply lifecycle policies to a Bucket, you can follow this how-to: "Applying lifecycle policies".
Note that expiry of fully uploaded objects behaves differently, depending on the versioning status:
A delete marker is an empty object (no data associated with it) that has a key (object name) and a version ID. A delete marker is seen as the latest version of an object and it is treated as if the object is deleted.
| | Expiration (learn more) | Noncurrent Version Expiration (learn more) | Expired Object Delete Marker (learn more) |
|---|---|---|---|
| No versioning | After an object has expired, it is permanently deleted. | n/a | n/a |
| Versioning enabled | Only applies to the latest version of an object. After the latest version of an object has expired, it becomes a "noncurrent version". A delete marker with a unique version ID is added as the new latest version and it is treated as if the object was actually deleted. | X days after an object became a "noncurrent version" (replaced by a newer version), it is permanently deleted, unless object lock is applied. | If a delete marker is the only remaining version of an object and all noncurrent versions have been permanently deleted, the delete marker is also deleted. |
| Versioning suspended | Only applies to the latest version of an object. After the latest version of an object has expired, it becomes a "noncurrent version". A delete marker with version ID "null" is added as the new latest version and it is treated as if the object was actually deleted. If any of the existing versions already has the version ID "null", this existing object is automatically and permanently deleted (see this FAQ). | | |
Example:

Versioning disabled
- Object A (created: 31 days ago)
- Object B (created: 18 days ago)

Versioning enabled
- Object C, latest (created: 44 days ago)
- Object C, noncurrent (created: 94 days ago; noncurrent since: 44 days ago)

Versioning suspended
- Object D, delete marker (created: 91 days ago)
- Object E, latest (created: 31 days ago; version ID: null)
Let's apply the following example lifecycle configuration:
{
  "Rules": [
    {
      "ID": "expiry",
      "Status": "Enabled",
      "Prefix": "",
      "Expiration": { "Days": 30 },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 15 }
    },
    {
      "ID": "deletemarker",
      "Status": "Enabled",
      "Prefix": "",
      "Expiration": { "ExpiredObjectDeleteMarker": true }
    }
  ]
}
After this lifecycle configuration is applied, the Buckets will look like this:
Versioning disabled
- Object B (created: 18 days ago)

Versioning enabled
- Object C, delete marker (created: 1 day ago)
- Object C, noncurrent (created: 44 days ago; noncurrent since: 1 day ago)

Versioning suspended