Buckets & objects

Last change on 2024-09-23 • Created on 2024-09-23 • ID: ST-41A1B

⚠️ Object Storage is currently in Beta test. For more information see this FAQ: Object Storage » Beta test

How can I access my Buckets?

For essential tasks, such as creating a Bucket, you can use the Cloud Console. To efficiently manage the files in your Buckets and to fully leverage all the features Object Storage offers, you need to use the Hetzner S3 API via an Amazon S3 compatible tool. It is possible to use the API directly, but this is not recommended because you would have to generate the request signature yourself, which is quite complicated. In summary, these are your options:

  • Amazon S3 REST API with Hetzner S3 endpoint
  • Tools that support an Amazon S3 compatible API (e.g. S3cmd or MinIO)
  • Cloud Console (essential tasks only)

Note that you need an access key and a secret key to use the S3 API. You can create those via the Cloud Console. By default, each access key and secret key pair grants access to every Bucket within the same project.
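
With the MinIO Client, for example, you could register your credentials as an alias. This is a minimal sketch; the endpoint URL format is an assumption and depends on your Bucket's location, so check the Cloud Console for the exact value:

mc alias set <alias_name> https://<location>.your-objectstorage.com <access_key> <secret_key>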

For detailed instructions on how to create and manage Buckets via an Amazon S3 compatible API tool or the Cloud Console, navigate to Object Storage » Getting Started in the left menu bar and select your preferred option.

If you set the Bucket visibility to public, you can access your files via their URL in a web browser.
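
For example, assuming a virtual-hosted style URL (an assumption; the exact URL of your Bucket is displayed in the Cloud Console), anyone could download a file from a public Bucket like this:

curl -O https://<bucket_name>.<location>.your-objectstorage.com/<object_name>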

Who can make changes to Buckets?

Via the Cloud Console, all members and above have permission to make changes to Buckets (add / delete Buckets).

Via the S3 API, essentially anyone can upload or delete files, even someone without a Hetzner account.

  • If you set your Bucket visibility to private, you just need to give anyone who needs access an access key and a secret key.

  • If you set your Bucket visibility to public, you don't even need to provide access keys and secret keys for read access. Anyone who knows the Bucket URL and the file name can view and download those files at will (file listing remains denied). Write access (e.g. adding files) still requires an access key and a secret key.

How do I upload an entire directory at once?

This depends on the S3-compatible tool you're using, so we recommend reading their documentation. With the MinIO Client, for example, you could use this command:

example_directory
├── file1
├── file2
└── file3

mc mirror example_directory <alias_name>/<bucket_name>

This will automatically upload file1, file2, and file3 to your Bucket.
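
If you use the AWS CLI instead, a roughly equivalent command would be the following sketch; the endpoint URL format is an assumption and depends on your Bucket's location:

aws s3 cp example_directory s3://<bucket_name>/ --recursive --endpoint-url https://<location>.your-objectstorage.com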

Can I edit a file after uploading it to a Bucket?

No, with Object Storage you cannot edit files because objects are immutable. To "update" a file, you need to upload the new version as a new object. If you use the same object name, it will automatically overwrite the existing object.
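
With the MinIO Client, for example, such an overwrite could look like this minimal sketch (all names are placeholders); uploading under the existing object name replaces the old version:

mc cp updated-report.pdf <alias_name>/<bucket_name>/report.pdf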

What Bucket names are allowed?

The name has to be:

  • Valid as per RFC 1123 (see "2.1 Host Names and Numbers")
  • Not formatted as an IP address (e.g. 203.0.113.1)
  • Unique amongst all Hetzner Object Storage users and across all locations

Bucket names are unique Hetzner-wide: two different Buckets cannot share the same name, regardless of their location. If another customer already has a Bucket with the name you would prefer, you will have to come up with another name.

The Bucket name will be part of the Bucket URL, which is why it has to adhere to the host name requirements. Some of the rules include:

  • You can use lowercase letters (a-z), digits (0-9), and the minus sign (-)
  • No periods (.)
  • No blank or space characters
  • No uppercase characters
  • The first character must be a letter or a digit
  • The last character must not be a minus sign
  • The name must be between 3 and 63 characters long

Note that it is NOT possible to change the name once the Bucket is created.
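
As a rough sketch of our own (not an official validation), most of the rules above can be checked with a regular expression; the command prints the name only if it matches:

echo "<bucket_name>" | grep -E '^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$'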

What object names are allowed?

When you name an object, you should note the following rules:

  • Up to 1024 bytes (equivalent to 1024 US-ASCII characters)
  • You can use the alphabet (a-z) and digits (0-9)
  • You can use special characters, e.g. ! - . * ' ( and ).
  • You can use UTF-8 characters. Note that some of these characters could cause issues with certain tools or applications.

In Buckets, it is not possible to add directories or subdirectories. To get a hierarchical structure, you need to include a slash (/) in the object name.

Examples:

  • website/images/example1.jpg
  • website/images/example2.jpg
  • backup/snapshot.bak
  • backup/mysqldump.dmp
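
With the MinIO Client, for example, you could create such a structure simply by including the prefix in the target object name (all names below are placeholders):

mc cp example1.jpg <alias_name>/<bucket_name>/website/images/example1.jpg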

How long are Bucket names blocked from being re-used after deletion?

When you delete a Bucket, the name will become available again after 30 days.

Are Buckets moveable?

Yes, it is possible to move Buckets from one project to another. Note that you have to do this via the Cloud Console; you cannot use the S3 API to change the project. In order to move a Bucket, you need the following permissions:

  • Source project: Owner
  • Target project: Member or above

As soon as the Bucket is moved to the target project, the owner of the target project will be billed for the Bucket.

How do I protect my Bucket from being deleted by accident?

You can protect your Buckets by enabling the "protected" property in the Cloud Console. This property disables deletion: before you can delete a protected Bucket, you have to deactivate the deletion protection.

In the Cloud Console, protected resources are indicated by a lock icon in the Bucket list view.

Does the visibility setting apply to all objects within a Bucket?

When you set the visibility to public during Bucket creation, we will automatically apply access policies that allow read access to all objects within the Bucket.

When you set up your own access policies, you have the option to exclusively allow read access to objects with a certain prefix (bucket_name/prefix/*) instead of allowing read access to all objects within the Bucket (bucket_name/*).

Note, however, that it is not recommended to change the visibility to public after you have already added data to your Bucket. If you have to do it, double-check the contents of your Bucket and remove any sensitive data before setting the visibility to public.

The best practice is to only set empty Buckets to public and add data afterwards.

Is it possible to change the visibility of existing Buckets?

Yes, this is possible, but only via an S3-compatible tool. The default visibility of every Bucket is "private". To grant access permissions, Object Storage uses access policies. The policies are defined in a JSON file which is applied to the Bucket.

In other words, the access policies override the default visibility, which is always "private".

  • If you set the Bucket visibility to "public" during Bucket creation, we create and apply the access policies for you.
  • If you set the Bucket visibility to "private" during Bucket creation, no access policies are added.
For example:

  • Bucket A, created with visibility "private": the default visibility ("private") applies, and no access policies are added.
  • Bucket B, created with visibility "public": the default visibility is still "private", but access policies that allow GET requests are applied on top.

How to change the visibility after the Bucket was already created:

  • private to public: Add your own access policies
  • public to private: Delete existing access policies

Instead of going fully private or fully public, you can apply access policies that simply restrict access (for example, to certain IPs).

For more information about policies and available access restrictions, check out the Amazon articles "Policies and permissions in Amazon S3" and "Examples of Amazon S3 bucket policies".

The command to apply the policies depends on the S3-compatible tool you're using, so we recommend reading their documentation (e.g. MinIO Client).
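
As a minimal sketch, a policy that allows anonymous read access to all objects under a certain prefix could look like this (Bucket name and prefix are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<bucket_name>/<prefix>/*"
    }
  ]
}

With the AWS CLI, for example, you could save the policy as policy.json and apply it like this; the endpoint URL format is an assumption and depends on your Bucket's location:

aws s3api put-bucket-policy --bucket <bucket_name> --policy file://policy.json --endpoint-url https://<location>.your-objectstorage.com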

Can I access a private Bucket via a web browser?

If you want to share individual files from a Bucket with someone who doesn't have their own S3 credentials, you can presign the URL to the file with your own S3 credentials and share the resulting URL. With the presigned URL, anyone can download the file via a web browser or tools like curl or wget without having to provide their own S3 credentials. You can set a time for how long the presigned URL should be valid. After that time, the presigned URL can no longer be used to access the file.

The command to sign a URL depends on the S3-compatible tool you're using (e.g. MinIO Client, AWS CLI, rclone). With the MinIO Client, S3cmd and AWS CLI, for example, you could use these commands:

mc share download <alias-name>/<bucket-name>/<file-name> --expire 12h34m56s # hours, minutes, seconds (default 168h = 7 days)
s3cmd signurl s3://<bucket-name>/<file-name> 1765541532           # use https://epochconverter.com/ to convert the time
aws s3 presign s3://<bucket-name>/<file-name> --expires-in 60480  # number of seconds (default 3600 seconds = 1 hour)
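
The resulting presigned URL can then be used directly, for example (quote the URL, as it contains special characters):

curl -o <file-name> "<presigned-url>"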

With the MinIO Client, you can automatically create several presigned URLs at once by not specifying a file name, which means you don't have to create those URLs one by one.

Example:

<bucket-name>
├─ example-file-1
└─ example-file-2

mc share download <alias-name>/<bucket-name>

With the example above, you will get two signed URLs, one for example-file-1 and one for example-file-2.
