⚠️ Object Storage is currently in Beta test. For more information, see this FAQ: Object Storage » Beta test
Step 1: Create your credentials
For a step-by-step guide, see the getting started article "Generating S3 credentials".
Make sure you save the keys locally right after you create them, as it is not possible to view the secret key again, neither via the Cloud Console nor via the API.
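If you want a quick, scriptable way to stash the keys the moment you create them, a minimal sketch could look like this (the file name and placeholders are examples, not a Hetzner convention):

```bash
# Hypothetical local key store; replace the placeholders with your real keys
cat > ~/.hetzner-s3-keys <<'EOF'
access_key=<your_access_key>
secret_key=<your_secret_key>
EOF
chmod 600 ~/.hetzner-s3-keys  # restrict the file to your own user
```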
Step 2: Install an Amazon S3 compatible API tool
There are several tools out there that you can choose from. Popular examples are:
- MinIO Client (`mc`)
- S3cmd (`s3cmd`)
- RCLONE (`rclone`)
- AWS CLI (`aws`)
For macOS or Linux users, the MinIO Client could be an interesting choice as it uses Unix-like commands (e.g. `cat`/`cp`/`rm`). Click on one of the options below to view the next steps that are relevant for the tool of your choice.

MinIO Client
- Understand the configuration file
After you install the MinIO Client, you need to create a configuration file. In the configuration file, you can create several aliases. Each alias includes the following information:
- Location-bound endpoint (e.g. `fsn1.your-objectstorage.com`)
- Project-bound access key and secret key

This means you need separate aliases for each location and each project.
Example:

| Project | Access key / Secret key | Buckets |
| --- | --- | --- |
| default | access_key_1 / secret_key_1 | bucket1.fsn1.your-objectstorage.com, bucket2.nbg1.your-objectstorage.com |
| holu | access_key_2 / secret_key_2 | bucket3.fsn1.your-objectstorage.com, bucket4.fsn1.your-objectstorage.com |
In this example, only the Buckets `bucket3` and `bucket4` are:

- In the same project (same access key and secret key)
- In the same location (same endpoint)

The other Buckets have either different keys or different endpoints. This means that in this example, you would need three different aliases in the `mc` configuration file.

Example configuration file:

```json
{
  "version": "10",
  "aliases": {
    "default-nbg1": {
      "url": "https://nbg1.your-objectstorage.com",
      "accessKey": "access_key_1",
      "secretKey": "secret_key_1",
      "api": "S3v4",
      "path": "off"
    },
    "default-fsn1": {
      "url": "https://fsn1.your-objectstorage.com",
      "accessKey": "access_key_1",
      "secretKey": "secret_key_1",
      "api": "S3v4",
      "path": "off"
    },
    "holu-fsn1": {
      "url": "https://fsn1.your-objectstorage.com",
      "accessKey": "access_key_2",
      "secretKey": "secret_key_2",
      "api": "S3v4",
      "path": "off"
    }
  }
}
```
- Create the configuration file
To create the configuration file and add the first alias, you need to run the `alias set` command. Later, you can also use this command to edit existing aliases. You can use any name of your choice for the alias. Just make sure you don't use the same name twice. In the command below, note that the Hetzner S3 endpoint has to include the location (in this example `fsn1`). If your Buckets are in a different location, make sure to adapt the endpoint accordingly.

```bash
mc alias set <alias-name> \
   https://fsn1.your-objectstorage.com \
   <your_access_key> <your_secret_key> \
   --api "s3v4" \
   --path "off"
```
Once you're done, you should find the new configuration file `.mc/config.json` in your home directory. You can also run this command to list all aliases:

```bash
mc alias list
```
- Test the configuration
Run this command to list all Buckets via the keys and endpoint you provided for the alias:

```bash
mc ls <alias_name>
```

Next, test if you can copy a file to one of the Buckets you just listed:

```bash
mc cp example-file.txt <alias_name>/<bucket_name>/
```

This will copy the file `example-file.txt` to your Bucket. For more commands, see "Command Quick Reference" in the official MinIO Client documentation.
S3cmd
- Understand the configuration file
After you install S3cmd, you need to create a configuration file. In the configuration file, specify the following information:
- Location-bound endpoint (e.g. `fsn1.your-objectstorage.com`)
- Project-bound access key and secret key

This means you need separate configuration files for each location and each project.
Example:

| Project | Access key / Secret key | Buckets |
| --- | --- | --- |
| default | access_key_1 / secret_key_1 | bucket1.fsn1.your-objectstorage.com, bucket2.nbg1.your-objectstorage.com |
| holu | access_key_2 / secret_key_2 | bucket3.fsn1.your-objectstorage.com, bucket4.fsn1.your-objectstorage.com |
In this example, only the Buckets `bucket3` and `bucket4` are:

- In the same project (same access key and secret key)
- In the same location (same endpoint)

The other Buckets have either different keys or different endpoints. This means that in this example, you would need three different configuration files.

Example configuration files:

`.s3cfg-default-fsn1`

```ini
[default]
access_key = access_key_1
secret_key = secret_key_1
host_base = fsn1.your-objectstorage.com
host_bucket = %(bucket)s.fsn1.your-objectstorage.com
```

`.s3cfg-default-nbg1`

```ini
[default]
access_key = access_key_1
secret_key = secret_key_1
host_base = nbg1.your-objectstorage.com
host_bucket = %(bucket)s.nbg1.your-objectstorage.com
```

`.s3cfg-holu-fsn1`

```ini
[default]
access_key = access_key_2
secret_key = secret_key_2
host_base = fsn1.your-objectstorage.com
host_bucket = %(bucket)s.fsn1.your-objectstorage.com
```
The default name for the configuration file is `.s3cfg`. If you run the `s3cmd` command without specifying a file name, S3cmd will automatically use the information (keys and endpoint) provided in the default configuration file `.s3cfg`. If you want to use the keys and endpoint you specified in a different configuration file, you will have to add the file name with the `-c` flag. Here's an example command to list Buckets:

```bash
s3cmd -c ~/.s3cfg-default-nbg1 ls
```
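To avoid typing the `-c` flag every time, you could define shell aliases for each configuration file; a small convenience sketch, assuming the example file names from above:

```bash
# Hypothetical shell aliases; add them to ~/.bashrc or ~/.zshrc to persist
alias s3-default-fsn1='s3cmd -c ~/.s3cfg-default-fsn1'
alias s3-default-nbg1='s3cmd -c ~/.s3cfg-default-nbg1'
alias s3-holu-fsn1='s3cmd -c ~/.s3cfg-holu-fsn1'
```

After that, `s3-holu-fsn1 ls` behaves like `s3cmd -c ~/.s3cfg-holu-fsn1 ls`.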
- Create the configuration file
To create a configuration file, you need to add the `--configure` flag. The command below will create the default file `.s3cfg`. If you want to create a configuration file with a different name, add `-c <file-name>` in the command below.

```bash
s3cmd --configure
```
You will be asked for two keys: the access key and the secret key. Enter the keys you created in "Step 1". You can leave the default region as is, as it does not affect the Hetzner S3 endpoint. For the S3 endpoint, enter the Hetzner S3 endpoint. Note that the Hetzner S3 endpoint has to include the location (in the example below `fsn1`). If your Buckets are in a different location, make sure to adapt the endpoint accordingly. When it asks about the "DNS-style template", enter the Hetzner S3 endpoint and add `%(bucket)s` at the beginning of the URL.

Example for Buckets in Falkenstein:

| Configuration Parameter | Value |
| --- | --- |
| Access Key | `<your_access_key>` |
| Secret Key | `<your_secret_key>` |
| Default Region | US |
| S3 Endpoint | `fsn1.your-objectstorage.com` |
| DNS-style template for accessing a Bucket (bucket+hostname:port) | `%(bucket)s.fsn1.your-objectstorage.com` |

Once you're done, you should find the new configuration file `.s3cfg` in your home directory.

- Test the configuration
Run this command to list all Buckets via the keys and endpoint you provided. If you didn't use the default file name, add `-c <file-name>` in the command below.

```bash
s3cmd ls
```

Next, you can test uploading a file to one of the Buckets you just listed. If you didn't use the default file name, add `-c <file-name>` in the command below.

```bash
s3cmd put example-file.txt s3://<bucket_name>/example.txt
```

This will copy the file `example-file.txt` to your Bucket and rename it to `example.txt`.
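To confirm the round trip, you could download the object again under a new name; `s3cmd get` is the standard counterpart to `put` (the local file name is an example):

```bash
# Downloads the object back from the Bucket into a local file
s3cmd get s3://<bucket_name>/example.txt downloaded-example.txt
```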
RCLONE
- Understand the configuration file
After you install Rclone, you need to create a configuration file. In the configuration file, you can create several "remotes". Each "remote" includes the following information:
- Location-bound endpoint (e.g. `fsn1.your-objectstorage.com`)
- Project-bound access key and secret key

This means you need separate "remotes" for each location and each project.
Example:

| Project | Access key / Secret key | Buckets |
| --- | --- | --- |
| default | access_key_1 / secret_key_1 | bucket1.fsn1.your-objectstorage.com, bucket2.nbg1.your-objectstorage.com |
| holu | access_key_2 / secret_key_2 | bucket3.fsn1.your-objectstorage.com, bucket4.fsn1.your-objectstorage.com |
In this example, only the Buckets `bucket3` and `bucket4` are:

- In the same project (same access key and secret key)
- In the same location (same endpoint)

The other Buckets have either different keys or different endpoints. This means that in this example, you would need three different "remotes" in the `rclone` configuration file.

Example configuration file:

```ini
[default-nbg1]
type = s3
provider = Other
access_key_id = access_key_1
secret_access_key = secret_key_1
endpoint = nbg1.your-objectstorage.com
acl = private
region = nbg1

[default-fsn1]
type = s3
provider = Other
access_key_id = access_key_1
secret_access_key = secret_key_1
endpoint = fsn1.your-objectstorage.com
acl = private
region = fsn1

[holu-fsn1]
type = s3
provider = Other
access_key_id = access_key_2
secret_access_key = secret_key_2
endpoint = fsn1.your-objectstorage.com
acl = private
region = fsn1
```
- Create the configuration file

To create the configuration file and add the first remote, you need to run the `config` command. You can use any name of your choice for the "remote". Just make sure you don't use the same name twice. When you enter the endpoint, note that the Hetzner S3 endpoint has to include the location (in this example `fsn1`). If your Buckets are in a different location, make sure to adapt the endpoint accordingly.

```bash
rclone config
```
Example for Buckets in Falkenstein:

| Configuration Parameter | Value |
| --- | --- |
| n) New remote, s) Set configuration password, q) Quit config | n |
| Name | You can set any name, but it should indicate what it is used for, such as project name and endpoint location: `default-fsn1` |
| Storage | 4 / Amazon S3 compliant storage providers |
| Provider | 32 / Any other S3 compatible provider |
| env_auth | 1 / Enter AWS credentials in the next step. |
| Access Key | `<your_access_key>` |
| Secret Key | `<your_secret_key>` |
| Region | 1 / Will use v4 signatures and an empty region. |
| Endpoint | `fsn1.your-objectstorage.com` |

Once you're done, you can "Quit config" and you should find the new configuration file `.config/rclone/rclone.conf` in your home directory. You can also run this command to list all remotes:

```bash
rclone listremotes
```
- Edit the configuration

Edit the file `.config/rclone/rclone.conf` and add the `region` line:

```ini
[holu-fsn1]
type = s3
provider = Other
access_key_id = <access_key>
secret_access_key = <secret_key>
endpoint = fsn1.your-objectstorage.com
acl = private
region = fsn1
```
- Test the configuration
Run this command to list all Buckets via the keys and endpoint you provided for the "remote":

```bash
rclone ls <remote_name>:
```

Next, test if you can copy a file to one of the Buckets you just listed:

```bash
rclone copy example-file.txt <remote_name>:<bucket_name>
```

This will copy the file `example-file.txt` to your Bucket. For more commands, see "Rclone Commands" in the official RCLONE documentation.
AWS CLI
- Understand the configuration file
After you install the AWS CLI, you need to create a configuration file and a credentials file. In those files, you can create several profiles. Each profile includes the following information:
- Location-bound endpoint (e.g. `fsn1.your-objectstorage.com`)
- Project-bound access key and secret key

This means you need separate profiles for each location and each project.
Example:

| Project | Access key / Secret key | Buckets |
| --- | --- | --- |
| hero | access_key_1 / secret_key_1 | bucket1.fsn1.your-objectstorage.com, bucket2.nbg1.your-objectstorage.com |
| holu | access_key_2 / secret_key_2 | bucket3.fsn1.your-objectstorage.com, bucket4.fsn1.your-objectstorage.com |
In this example, only the Buckets `bucket3` and `bucket4` are:

- In the same project (same access key and secret key)
- In the same location (same endpoint)

The other Buckets have either different keys or different endpoints. This means that in this example, you would need three different profiles in the `aws` configuration file and the credentials file.

Example configuration and credentials files:

Configuration file:

```ini
[default]
endpoint_url = https://nbg1.your-objectstorage.com

[profile hero-fsn1]
endpoint_url = https://fsn1.your-objectstorage.com

[profile holu-fsn1]
endpoint_url = https://fsn1.your-objectstorage.com
```

Credentials file:

```ini
[default]
aws_access_key_id=access_key_1
aws_secret_access_key=secret_key_1

[hero-fsn1]
aws_access_key_id=access_key_1
aws_secret_access_key=secret_key_1

[holu-fsn1]
aws_access_key_id=access_key_2
aws_secret_access_key=secret_key_2
```
The files include a default profile. If you run the `aws` command without specifying a profile name, the AWS CLI will automatically use the information (keys and endpoint) provided for the default profile `[default]`. If you want to use the keys and endpoint you specified for a different profile, you will have to add the profile name with `--profile`. Here's an example command to list Buckets:

```bash
aws s3 ls --profile holu-fsn1
```
- Create the configuration file and credentials file
To create the files, you need to use `configure`. The command below will create the default profile `[default]`. If you want to create a profile with a different name, add `--profile <profile-name>` in the command below.

```bash
aws configure
```
You will be asked for two keys: the access key and the secret key. Enter the keys you created in "Step 1". You can leave "Default region name" and "Default output format" empty.
The new profile should now be in the files `~/.aws/credentials` and `~/.aws/config`. The file `~/.aws/credentials` already includes your access key and your secret key. Now, you need to manually edit the file `~/.aws/config` to add the Hetzner S3 endpoint.

```bash
nano ~/.aws/config
```
Add the Hetzner S3 endpoint right below the profile. Note that the Hetzner S3 endpoint has to include the location (in the example below `fsn1`). If your Buckets are in a different location, make sure to adapt the endpoint accordingly.

Example:

```ini
[default]
endpoint_url = https://fsn1.your-objectstorage.com

# Keep the lines below commented out when you create Buckets
# Uncomment the lines below before you create presigned URLs
#s3 =
#  addressing_style = virtual
```
- Test the configuration
Run this command to list all Buckets via the keys and endpoint you provided. If your profile does not have the default name, add `--profile <profile-name>` in the command below.

```bash
aws s3 ls
```

Next, test if you can upload a file to one of the Buckets you just listed. If your profile does not have the default name, add `--profile <profile-name>` in the command below.

```bash
aws s3 cp example-file.txt s3://<bucket_name>/example.txt
```

This will copy the file `example-file.txt` to your Bucket and rename it to `example.txt`.
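Since the example configuration above mentions presigned URLs, you could finish by generating one for the file you just uploaded; remember to uncomment the `addressing_style = virtual` lines in `~/.aws/config` first, as noted in the example above. `aws s3 presign` is a standard AWS CLI command:

```bash
# Prints a temporary download URL for the object, valid for one hour (3600 seconds)
aws s3 presign s3://<bucket_name>/example.txt --expires-in 3600
```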
You should now be all set and ready to manage your Buckets. For more information about the available functions, see the article "List of supported actions". For detailed instructions on how to run supported actions, please check the official documentation of the tool you chose.