What is Hetzner Cloud Object Storage?
With our S3-compatible Object Storage, you can save data in a self-contained environment called a "Bucket". You can set individual visibility settings for each Bucket.
Technical background:
When you save data (e.g. a text document or music file), it is conceptualized as an object. Each object includes the following information:
- Object key (unique identifier for the object)
- Data (e.g. image or text document)
- System metadata (e.g. file type, file size)
- Custom metadata (key-value pairs set during the upload of the object to store any additional information)
- Attributes (e.g. which users (keys) are allowed to download or delete the object)
On the storage disk, the object is saved as a whole (along with its data, metadata and attributes) under its unique key (name).
What is the difference between Buckets (object storage) and Cloud Volumes (block storage)?
General differences:
| Buckets | Cloud Volumes |
|---|---|
| Buckets offer storage space as a stand-alone product that customers can use independently of any other cloud resources. | Cloud Volumes are used to extend the storage space of a cloud server. |
| You can access the data directly via the API or, if the Bucket is publicly visible, via a URL in the web browser. | The only way to access data from a Cloud Volume is to mount it to a server and access it from there. |
| Since the data is accessed via the Internet, it inevitably comes with some latency, making it suitable for backups, database dumps, or logs. | Since the storage device is directly mounted to the server, it usually comes with very low latency, making it suitable for real-time databases and applications that are highly sensitive to latency. |
| The storage space is not limited by the size of a storage device and therefore offers high flexibility to scale up or down at any time. | If you need more storage space, you need to resize and reformat the block storage device, which can be a tedious task. |
| Primary purpose: Write once, read many (WORM) | Primary purpose: Edit, move, or delete files however you need. |
| Objects are immutable, which means it is not possible to modify an object. To "update" a file, you need to upload the new version. This will create a new object and automatically delete the old object. | You can open existing files with a text editor like nano or vi and edit the file directly. |
| A Bucket can only hold a list of files. You cannot add directories or subdirectories. To get a hierarchical structure, you would need to name the files accordingly, e.g. music/example.mp3. | You can save your files in different directories and subdirectories however you need. |
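Because the key namespace is flat, a "folder view" is something clients derive from the key names themselves: the S3 list operation groups keys by a `/` delimiter into common prefixes. A minimal local simulation of that grouping logic (not an API call):

```python
def list_with_delimiter(keys, prefix="", delimiter="/"):
    """Mimic S3 ListObjects: return (objects, common_prefixes) under `prefix`."""
    objects, common_prefixes = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # everything up to the first delimiter becomes a pseudo-folder
            common_prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return objects, sorted(common_prefixes)

keys = ["readme.txt", "music/example.mp3", "music/live/intro.mp3"]
print(list_with_delimiter(keys))            # (['readme.txt'], ['music/'])
print(list_with_delimiter(keys, "music/"))  # (['music/example.mp3'], ['music/live/'])
```

This is exactly why naming files `music/example.mp3` is enough to get a hierarchy in most S3 clients.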
Technical differences:
- With object storage, data is saved as an entire object. It is not possible to modify an object. For any changes, a new object is created.
- With block storage, data is split into multiple fixed-sized blocks that are saved separately. When you access your data, it is first "reassembled". When you modify data, the file system updates the specific blocks as needed.
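The two models above can be contrasted in a few lines. This is a toy sketch with an unrealistically small block size, purely to show update-in-place versus whole-object replacement:

```python
BLOCK_SIZE = 4  # toy value; real block devices use e.g. 4096-byte blocks

def to_blocks(data: bytes) -> list[bytes]:
    """Split data into fixed-size blocks, as block storage does."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

# Block storage: only the affected block is rewritten in place.
blocks = to_blocks(b"hello world!")
blocks[1] = b"O-WO"                      # update a single 4-byte block
print(b"".join(blocks))                  # b'hellO-WOrld!'

# Object storage: any "change" replaces the whole object under its key.
bucket = {"greeting.txt": b"hello world!"}
bucket["greeting.txt"] = b"hello there!"  # a new object takes the key
```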
Is Object Storage exclusively managed via the Hetzner S3 API?
The following is managed via our Hetzner Console and NOT via the S3 API:
- Create S3 credentials
- Add the `protected` label to Buckets
For everything else, meaning everything Bucket and object related, you should use our Hetzner S3 API.
Is Object Storage available in all Cloud locations?
No, Object Storage is currently only available in our European locations, which are:
- Nuremberg
- Falkenstein
- Helsinki
What location is data stored in?
The entire data of a Bucket is stored in the location you selected. In that location, the data is stored in a single data center. The power and network infrastructure is designed with built-in redundancy for high availability.
What kind of redundancy does Object Storage offer? How resilient is the product to failures?
Each uploaded data object is divided into chunks, which are distributed across multiple servers within the cluster. Using erasure coding, the system can ensure data integrity even if up to three storage servers fail.
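The actual erasure coding tolerates up to three failed servers; as a much simplified illustration of the underlying principle, a single XOR parity chunk already lets you rebuild any one lost data chunk:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

data_chunks = [b"ABCD", b"EFGH", b"IJKL"]
parity = reduce(xor_bytes, data_chunks)   # stored on an additional server

# Suppose the server holding chunk 1 fails:
surviving = [data_chunks[0], data_chunks[2], parity]
recovered = reduce(xor_bytes, surviving)
print(recovered == data_chunks[1])  # True
```

Real erasure codes generalize this idea with multiple parity chunks, so that any subset of failures up to the parity count can be reconstructed.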
As always, each of our products can only be one part of a secure backup strategy.
Actions we take:
| Action | Description |
|---|---|
| Redundant power | Each server is connected to two independent power rails. Additionally, we have a standby power system at Hetzner (see hetzner.com » data center). |
| Redundant network | The networks have redundant switches and uplinks to increase network reliability. |
Do you perform any kind of encryption? How secure is my data?
There is no default data-at-rest encryption of objects, but you can encrypt your data during the upload using SSE-C. This is explained in the how-to guide "Encrypting data with SSE-C". Replaced disk drives are physically destroyed on site and never leave our premises in a recoverable form that would allow data to be reconstructed.
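With SSE-C, you supply your own 256-bit key with every request; the standard S3 API expects the key (and an MD5 checksum of it) base64-encoded in three request headers. A sketch of how a client prepares them:

```python
import base64, hashlib, os

def sse_c_headers(key: bytes) -> dict:
    """Build the standard SSE-C request headers for a user-supplied 256-bit key."""
    assert len(key) == 32, "SSE-C requires a 256-bit (32-byte) key"
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }

headers = sse_c_headers(os.urandom(32))
```

Keep the key safe: with SSE-C the server does not store it, so losing the key means losing access to the encrypted objects.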
Why do the total file count and total size in Hetzner Console not update right after changes?
These Bucket statistics are not calculated in real-time. It can take up to 15-20 minutes for these values to be updated.
Why does the files list in Hetzner Console show an error message?
When you apply access policies that restrict access to specific access keys, our frontend is no longer able to retrieve the objects from this Bucket and list them in Hetzner Console, resulting in an error message.
Why are the total file count and total size in the Bucket overview higher than what is visible in the object overview?
The total file count includes all objects that utilize storage. There are two special cases that utilize storage but are not listed in the object overview:
- Previous versions of objects
- Objects from multipart uploads (see Amazon S3 documentation) that are either still in progress or have been aborted

You can automatically delete leftover objects from aborted multipart uploads with lifecycle policies (see this FAQ entry).
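Such a lifecycle rule might look like the following sketch. The rule ID and the 7-day window are example values; apply the resulting file with `aws s3api put-bucket-lifecycle-configuration`:

```python
import json

# Abort (and free the storage of) multipart uploads that have not
# completed within 7 days of initiation.
lifecycle = {
    "Rules": [
        {
            "ID": "cleanup-incomplete-multipart-uploads",
            "Status": "Enabled",
            "Filter": {},  # apply to the whole Bucket
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}

with open("lifecycle.json", "w") as f:
    json.dump(lifecycle, f, indent=2)

# then: aws s3api put-bucket-lifecycle-configuration \
#         --bucket <bucket_name> --lifecycle-configuration file://lifecycle.json
```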
If you suspect that the total file count is too high, we recommend checking the Bucket for any "invisible objects".
```shell
# List all versions
mc ls --versions <alias_name>/<bucket_name>
aws s3api list-object-versions --bucket <bucket_name>

# List ongoing or aborted multipart uploads
mc ls --incomplete <alias_name>/<bucket_name>
aws s3api list-multipart-uploads --bucket <bucket_name>
```

The total size represents the total storage used. This includes:
- All visible objects
- All "invisible" objects
Billing is based on the total size. Note that billing also takes metadata into account, as it consumes storage as well.
What configuration and security features are currently supported?
| Feature | Supported |
|---|---|
| AWS Signature version | v4 |
| Storage classes | STANDARD |
| Server-Side Encryption (SSE) | SSE-C |
What TLS protocols and cipher suites are currently supported by the API?
| Protocols | Cipher suites |
|---|---|
| TLS 1.3 | |
| TLS 1.2 (Support will end soon*) | |
*TLS version 1.2 is deprecated and we will discontinue support for it in the near future. Please upgrade your applications to TLS version 1.3 as soon as possible.
Do you support AWS SDKs and AWS CLI?
Yes, you can manage your Buckets via the AWS CLI and the AWS SDKs.
The getting started guide "Using libraries" provides example configurations for the AWS SDKs.
Do you offer different storage tiers similar to other Cloud providers?
No, we don't offer different storage tiers for specific use cases (e.g. frequent access, infrequent access, archive).
We currently only offer a "standard" storage tier, backed by HDDs.
Do you offer backup options that work across different locations?
No, there is no built-in functionality to replicate Bucket data from one location to another. However, you can set up replication manually using CLI tools that support bucket-to-bucket synchronisation (e.g. rclone). You can schedule this replication process with a tool such as cron.
Do you recognize or reward customers for developing open-source projects that support Hetzner Object Storage?
Yes! We believe that if someone else has already developed a solution, other developers should be able to benefit from that work too. For this reason, you can find a list of libraries created by fellow developers here: github.com/hetznercloud/awesome-hcloud.
Note: We only consider rewards for projects that provide Hetzner-specific functionality or integrations. For example, our Object Storage exposes a standard S3 API without any Hetzner-specific extensions. Projects that focus solely on generic S3 capabilities (e.g., general S3 clients or SDKs) are not Hetzner-specific and are therefore not eligible for Hetzner Rewards.
If you are developing an open-source project that supports or intends to add support for our S3-compatible Object Storage, you may be eligible for a free one-time credit of up to € 50 / $ 50 on your account. Please contact us via the support page on your Hetzner Console and let us know the following:
- The name of the project you are working on
- A short description of the project
- Link to the project website or repo where the project is hosted
- Affiliation with / role in the project (e.g. project maintainer)
- Link to other open-source work you have already done (if applicable)
Where can I report issues?
For issues with our Object Storage product, you can submit support tickets via Hetzner Console. Note that we do not provide support for configuring individual applications. If you wish to report an error or problem with our product, please include the following information so that we can investigate your issue as efficiently as possible:
- For Hetzner Console issues: a screenshot of the page in question.
- For problems with applications or CLI tools such as `s3cmd`, `mc`, etc.: an excerpt from the debug output with meaningful error messages, possibly log file entries that could help us to narrow down the error.
- For issues with specific Buckets: the Bucket ID (not the name!). You can obtain this via the URL in Hetzner Console: click on "Object Storage" in your project and then on the name of the Bucket so that the Bucket overview appears. The URL in the address bar of the browser contains the Bucket ID: `https://console.hetzner.com/projects/<project-id>/buckets/<bucket-id>/overview`
- To report bandwidth or latency issues: the output of the commands mentioned in the troubleshooting article called "Bandwidth or latency issues".
- For test API requests that reproduce the issue: log entries with timestamp (including time zone), source IP, full request URI, HTTP status code, and possibly response time.
Where can I discuss general questions?
You can discuss general questions and content related to our Object Storage in this dedicated forum:
Please do not share any personal data. When you share screenshots, please anonymize personal data such as your customer number in advance.
For security reasons, never post access keys or secret keys!
Can I serve objects directly via HTTP to a large number of clients?
Object Storage supports sharing data directly via public URLs, which is suitable for smaller-scale use cases. However, Object Storage is not designed to serve objects directly to thousands of clients over HTTP, as it does not provide the low latency and global coverage required for large-scale content delivery.
For use cases such as distributing images, videos, or other static assets to a large audience, we recommend using a third-party Content Delivery Network (CDN) in front of Object Storage.
A CDN caches objects on globally distributed edge servers, allowing clients to fetch content from locations closer to them. This significantly improves latency, reduces load on the Object Storage Bucket, and provides better performance at scale. In this setup, the Object Storage Bucket acts as the origin, and the CDN retrieves objects from it as needed.
For more information about Content Delivery Network (CDN), see this docs entry.
Simplified example visualisation: several CDN edge servers sit between the clients and the Bucket; each edge server fetches an object from the Bucket once and then serves it to nearby clients from its cache.
Why are my uploads failing? Why is Object Storage sometimes slow or unresponsive?
Since our object storage is a shared-resource product, the usage patterns of individual customers can sometimes affect the entire cluster. Particularly resource-intensive workloads (such as those involving large amounts of concurrent requests, consistently high load, or unusual access patterns) can contribute to bottlenecks that in turn could lead to slow response times or timeout errors.
Please see our FAQ "Which use cases and workloads are a good fit for Object Storage?" for some guidance about suitable applications and workloads and which usage patterns should be avoided.
Our storage clusters have grown faster than originally planned in recent months. We are addressing this growth by bringing additional storage clusters online in every location. Newly created Buckets are being placed on these, while existing Buckets continue to grow on the existing clusters.
To avoid exceeding critical utilization thresholds, we regularly migrate Buckets between clusters. Exceeding these thresholds can lead to poorer performance and issues such as timeouts.
Since Bucket migrations take time given the volume of data involved, improvements will not be immediately apparent after a new cluster is activated.
Our primary focus is on bringing additional clusters online as quickly as possible at all locations to restore load and utilization to stable levels.
We have implemented some temporary measures and are also working on a number of future improvements to mitigate the situation.
As a short-term measure, we have implemented temporary limits in Nuremberg (NBG) that reduce both the maximum upload speed and the number of concurrent requests. If these limits are exceeded, the cluster returns a 503 error. S3 clients without retry logic will abort uploads at this point.
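If your S3 client lacks built-in retry logic, a small wrapper with exponential backoff can keep uploads from aborting on a temporary 503. A minimal sketch; the `flaky_upload` callable and the `Throttled` exception are placeholders for your client of choice:

```python
import time, random

class Throttled(Exception):
    """Placeholder for your client's 503 / SlowDown error."""

def with_backoff(operation, max_attempts=5, base_delay=0.5):
    """Retry `operation` on throttling errors with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Throttled:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

# Example: a simulated upload that fails twice with a 503, then succeeds.
calls = {"n": 0}
def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise Throttled()
    return "ok"

print(with_backoff(flaky_upload, base_delay=0.01))  # ok
```

Most mature S3 clients (e.g. the AWS SDKs) already implement a comparable retry strategy; check your client's configuration before adding your own.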
We are also working on the ability to set limits with finer granularity in the future, so that individual Buckets with disproportionately high load can be specifically limited without affecting other customers on the same cluster.
We are developing a queueing mechanism that prioritizes operations which place a particularly heavy load on the cluster (e.g., uploads, deletions, and high volumes of requests), so that these can be processed during peak loads without timeouts or errors.
In addition, there is a long list of other measures, such as improvements to monitoring to resolve issues more easily and quickly, as well as even greater operational automation.
Which use cases and workloads are a good fit for Object Storage?
Our Object Storage service is a highly scalable and cost-effective solution for storing large amounts of data. Access is provided via an S3-compatible interface, ensuring broad compatibility with modern applications and tools.
While S3 is widely used for diverse workloads, our specific implementation is not ideal for all scenarios. This is primarily due to the HDD-based architecture of our storage clusters, which differs from faster Flash-based systems.
Object Storage is best suited for:
- Backups & Archiving: Long-term retention of data that is rarely accessed, such as database backups, log archives, or snapshots
- Static Content Storage: Images, videos, documents, or other media files that an application stores and retrieves at a moderate rate
- Software Distribution: Container images, binaries, or update packages that are uploaded regularly and downloaded on demand
- Cold & Warm Storage: Data that is not required in real time but should remain accessible at any time
- Disaster Recovery: Secondary storage for replication and recovery purposes, ensuring data availability in case of failure
Object Storage is less suited for:
- High-frequency read and write operations: Applications that write or read thousands of small files per second (e.g., databases, transaction systems) should use block or file storage instead
- Large numbers of very small files: Storing many files smaller than 1 MB at high frequency is inefficient and puts a disproportionate load on the service
- Latency-sensitive applications: Object Storage is not designed for applications that require single-digit millisecond response times
- Direct database usage: Object Storage does not replace a relational or NoSQL database and should not be used as the primary data store for applications
This service is not a CDN
Our Object Storage service is primarily optimized for data storage and retrieval. It is not designed to function as a Content Delivery Network (CDN). Using Object Storage to serve high volumes of static content directly to end-users at scale will lead to poor performance and can strain the infrastructure. For optimal delivery of content to a large audience with low latency, please utilize a dedicated CDN service in front of your Object Storage Bucket.
Recommendations for optimal usage
To ensure stable performance for all users, we recommend the following usage patterns:
- File sizes: The service is optimized for files of approximately 1 MB and larger. Where possible, very small files should be bundled or uploaded as archives.
- Access frequency: Continuous high-frequency mass access should be avoided. A steady and moderate access pattern is ideal, rather than short bursts of extreme load.
- Multipart Upload: For files larger than 100 MB, we strongly recommend using Multipart Uploads. This significantly improves transfer speed and upload reliability.
- Lifecycle Policies: Use lifecycle rules to automatically delete outdated or no longer needed data. This keeps your storage clean and helps avoid unnecessary costs.
- Parallelism: Instead of a few very large sequential requests, prefer multiple moderate parallel requests. This makes more efficient use of the underlying infrastructure and results in better overall performance.
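For the multipart recommendation above, the part count follows directly from the chosen part size. A quick helper; note that the standard S3 API caps uploads at 10,000 parts, so very large files need proportionally larger parts:

```python
import math

def part_count(object_size: int, part_size: int) -> int:
    """Number of parts a multipart upload of `object_size` bytes needs."""
    return math.ceil(object_size / part_size)

MIB = 1024 * 1024
# A 5 GiB file split into 100 MiB parts:
print(part_count(5 * 1024 * MIB, 100 * MIB))  # 52
```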
In a nutshell
Object Storage performs best when data is written and retrieved, and not when it is constantly modified or processed in real time. Being aware of this distinction allows you to get the most out of a reliable, scalable, and cost-effective storage service that remains performant in the long run.