Using S3 compatible CLI tools

Last change on 2025-11-28 • Created on 2024-09-23 • ID: ST-CDAC7
  1. Create your credentials

    For a step-by-step guide, see the getting started article "Generating S3 credentials".

    Make sure you save the keys locally right after you create them, as it is not possible to view the secret key again, either via the Hetzner Console or via the API.

  2. Install an Amazon S3 compatible API tool

    There are several tools you can choose from. Popular examples are the MinIO Client, S3cmd, s5cmd, Rclone, and the AWS CLI.

    For macOS or Linux users, the MinIO Client can be an interesting choice, as it uses Unix-like commands (e.g. cat / cp / rm). Click on one of the options below to view the next steps that are relevant for the tool of your choice.


  3. Configure the client

    Click on one of the options below to view the respective steps.

    MinIO Client
    1. Understand the configuration file

      After you install the MinIO Client, you need to create a configuration file. In the configuration file, you can create several aliases. Each alias includes the following information:

      • Location-bound endpoint (e.g. fsn1.your-objectstorage.com)
      • Project-bound access key and secret key

      This means you need separate aliases for each location and each project.

      Example:

      Project default (access_key_1 / secret_key_1):

      • bucket1.fsn1.your-objectstorage.com
      • bucket2.nbg1.your-objectstorage.com

      Project holu (access_key_2 / secret_key_2):

      • bucket3.fsn1.your-objectstorage.com
      • bucket4.fsn1.your-objectstorage.com

      In this example, only the Buckets bucket3 and bucket4 are:

      • In the same project (same access key and secret key)
      • In the same location (same endpoint)

      The other Buckets have either different keys or different endpoints. This means that, in this example, you would need 3 different aliases in the mc configuration file.

      Example configuration file:

      {
         "version": "10",
         "aliases": {
            "default-nbg1": {
               "url": "https://nbg1.your-objectstorage.com",
               "accessKey": "access_key_1",
               "secretKey": "secret_key_1",
               "api": "S3v4",
               "path": "off"
            },
            "default-fsn1": {
               "url": "https://fsn1.your-objectstorage.com",
               "accessKey": "access_key_1",
               "secretKey": "secret_key_1",
               "api": "S3v4",
               "path": "off"
            },
            "holu-fsn1": {
               "url": "https://fsn1.your-objectstorage.com",
               "accessKey": "access_key_2",
               "secretKey": "secret_key_2",
               "api": "S3v4",
               "path": "off"
            }
         }
      }

    2. Create the configuration file

      To create the configuration file and add the first alias, you need to run the alias set command. Later, you can also use this command to edit existing aliases. You can use any name of your choice for the alias. Just make sure you don't use the same name twice. In the command below, note that the Hetzner S3 endpoint has to include the location (in this example fsn1). If your Buckets are in a different location, make sure to adapt the endpoint accordingly.

      mc alias set <alias_name> \
        https://fsn1.your-objectstorage.com \
        <your_access_key> <your_secret_key> \
        --api "s3v4" \
        --path "off"
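Because each alias is bound to a single location, you may want to script the setup if you use more than one location. A minimal sketch, assuming the same project keys for both locations (the key placeholders must be replaced with your actual keys before running):

```shell
# Create one alias per location for the same project credentials.
# <your_access_key> and <your_secret_key> are placeholders.
for loc in fsn1 nbg1; do
  mc alias set "default-${loc}" \
    "https://${loc}.your-objectstorage.com" \
    <your_access_key> <your_secret_key> \
    --api "s3v4" --path "off"
done
```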

      Once you're done, you should find the new configuration file .mc/config.json in your home directory. You can also run this command to list all aliases:

      mc alias list
    3. Test the configuration

      Run this command to list all Buckets via the keys and endpoint you provided for the alias:

      mc ls <alias_name>

      Next, test if you can create a new Bucket and copy a file to this Bucket:

      Replace <region> with an available location, e.g. fsn1.

      mc mb <alias_name>/<bucket_name> --region <region>
      mc cp example-file.txt <alias_name>/<bucket_name>/

      This will copy the file example-file.txt to your Bucket.

      For more commands, see "Command Quick Reference" in the official MinIO Client documentation.
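Beyond ls and cp, several other MinIO Client commands map to familiar Unix tools. A short sketch, assuming an existing alias and Bucket (all names are placeholders):

```shell
mc cp <alias_name>/<bucket_name>/example-file.txt .    # download a file
mc cat <alias_name>/<bucket_name>/example-file.txt     # print a file to stdout
mc rm <alias_name>/<bucket_name>/example-file.txt      # delete a file
mc mirror ./local-dir <alias_name>/<bucket_name>/dir   # sync a local folder to the Bucket
```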


    S3cmd
    1. Understand the configuration file

      After you install S3cmd, you need to create a configuration file. In the configuration file, specify the following information:

      • Location-bound endpoint (e.g. fsn1.your-objectstorage.com)
      • Project-bound access key and secret key

      This means you need separate configuration files for each location and each project.

      Example:

      Project default (access_key_1 / secret_key_1):

      • bucket1.fsn1.your-objectstorage.com
      • bucket2.nbg1.your-objectstorage.com

      Project holu (access_key_2 / secret_key_2):

      • bucket3.fsn1.your-objectstorage.com
      • bucket4.fsn1.your-objectstorage.com

      In this example, only the Buckets bucket3 and bucket4 are:

      • In the same project (same access key and secret key)
      • In the same location (same endpoint)

      The other Buckets have either different keys or different endpoints. This means that, in this example, you would need 3 different configuration files.

      Example configuration files:

      .s3cfg-default-fsn1:

      access_key = access_key_1
      secret_key = secret_key_1
      host_base = fsn1.your-objectstorage.com
      host_bucket = %(bucket)s.fsn1.your-objectstorage.com

      .s3cfg-default-nbg1:

      access_key = access_key_1
      secret_key = secret_key_1
      host_base = nbg1.your-objectstorage.com
      host_bucket = %(bucket)s.nbg1.your-objectstorage.com

      .s3cfg-holu-fsn1:

      access_key = access_key_2
      secret_key = secret_key_2
      host_base = fsn1.your-objectstorage.com
      host_bucket = %(bucket)s.fsn1.your-objectstorage.com

      The default name for the configuration file is .s3cfg. If you run the s3cmd command without specifying a file name, S3cmd will automatically use the information (keys and endpoint) provided in the default configuration file .s3cfg. If you want to use the keys and endpoint you specified in a different configuration file, you will have to add the file name with the -c flag. Here's an example command to list Buckets:

      s3cmd -c ~/.s3cfg-project2-nbg1 ls

    2. Create the configuration file

      To create a configuration file, you need to add the --configure flag. The command below will create the default file .s3cfg.

      If you want to create a configuration file with a different name, add -c <file-name> in the command below.

      s3cmd --configure

      You will be asked for two keys: the access key and the secret key. Enter the keys you created and saved in "Step 1". You can leave the default region as is, as it does not affect the Hetzner S3 endpoint. For the S3 endpoint, enter the Hetzner S3 endpoint. Note that the Hetzner S3 endpoint has to include the location (in the example below fsn1). If your Buckets are in a different location, make sure to adapt the endpoint accordingly. When you are asked about the "DNS-style template", enter the Hetzner S3 endpoint and add %(bucket)s at the beginning of the URL.

      Example for Buckets in Falkenstein:

      Access Key: <your_access_key>
      Secret Key: <your_secret_key>
      Default Region: US
      S3 Endpoint: fsn1.your-objectstorage.com
      DNS-style template for accessing a Bucket (bucket+hostname:port): %(bucket)s.fsn1.your-objectstorage.com

      Once you're done, you should find the new configuration file .s3cfg in your home directory.

    3. Test the configuration

      Run this command to list all Buckets via the keys and endpoint you provided:

      If you didn't use the default file name, add -c <file-name> in the command below.

      s3cmd ls

      Next, test if you can create a new Bucket and copy a file to this Bucket:

      If you didn't use the default file name, add -c <file-name> in the command below.

      s3cmd mb s3://<bucket_name> --region=fsn1
      s3cmd put example-file.txt s3://<bucket_name>/example.txt

      This will copy the file example-file.txt to your Bucket and rename it to example.txt.
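If you want to keep a whole local directory in sync with a Bucket, S3cmd also provides a sync command. A sketch (add -c <file-name> if you didn't use the default configuration file):

```shell
s3cmd sync ./local-dir/ s3://<bucket_name>/local-dir/
```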


    s5cmd
    1. Understand the configuration file

      After you install s5cmd, you need to create a credentials file. In the credentials file, you can create several profiles. Each profile includes the following information:

      • Project-bound access key and secret key

      This means you need separate profiles for each project.

      In addition, you have to create an environment variable with the default endpoint. The default endpoint is used for all profiles.

      Example:

      Project hero (access_key_1 / secret_key_1):

      • bucket1.fsn1.your-objectstorage.com
      • bucket2.nbg1.your-objectstorage.com

      Project holu (access_key_2 / secret_key_2):

      • bucket3.fsn1.your-objectstorage.com
      • bucket4.fsn1.your-objectstorage.com

      In this example, only the Buckets bucket3 and bucket4 are:

      • In the same project (same access key and secret key)
      • In the same location (same endpoint)

      The other Buckets have either different keys or different endpoints. This means that, in this example, you would need 2 different profiles in the credentials file, and the environment variable with the default endpoint should be set to fsn1.your-objectstorage.com.


    2. Create the configuration file and set the endpoint

      s5cmd reads the access credentials from a file. The endpoint is set via an environment variable.

      • Create the configuration file

        ~/.aws/credentials

        You need to add two keys: the access key and the secret key. Enter the keys you created and saved in "Step 1".

        In addition to the default profile, you can add further profiles with different S3 keys.

        Example:

        [default]
        aws_access_key_id=<your_access_key_1>
        aws_secret_access_key=<your_secret_key_1>
        
        [project-2]
        aws_access_key_id=<your_access_key_2>
        aws_secret_access_key=<your_secret_key_2>

      • Save the environment variable

        Note: the Hetzner S3 endpoint has to include the location (in the example below fsn1). If your Buckets are in a different location, make sure to adapt the endpoint accordingly.

        The example below sets fsn1 as the default. In addition, the endpoint for nbg1 is saved as a variable as well.

        The commands depend on the operating system of your device.

        • Linux

          echo 'export S3_ENDPOINT_URL="https://fsn1.your-objectstorage.com"' >> ~/.bashrc
          echo 'export S5_NBG="https://nbg1.your-objectstorage.com"' >> ~/.bashrc
          source ~/.bashrc
        • macOS

          echo 'export S3_ENDPOINT_URL="https://fsn1.your-objectstorage.com"' >> ~/.zshrc
          echo 'export S5_NBG="https://nbg1.your-objectstorage.com"' >> ~/.zshrc
          source ~/.zshrc
        • Windows (PowerShell)

          setx S3_ENDPOINT_URL "https://fsn1.your-objectstorage.com"
          setx S5_NBG "https://nbg1.your-objectstorage.com"
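To confirm that the variable is available, you can print it in a shell session. A minimal check (shown with the variable set inline so the snippet is self-contained):

```shell
export S3_ENDPOINT_URL="https://fsn1.your-objectstorage.com"
echo "$S3_ENDPOINT_URL"
```

Note that on Windows, setx only affects new sessions, so open a new PowerShell window before testing.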

    3. Test the configuration

      Run the commands below to list all Buckets via the keys and endpoint you provided.

      • Use the default profile and endpoint

        This command uses the default credentials provided in ~/.aws/credentials and the default endpoint defined in the S3_ENDPOINT_URL environment variable.

        s5cmd ls

      • Use a different profile or endpoint

        If you want to use a different profile, you have to specify the profile name. If you want to use an endpoint other than the default, you have to specify this as well.

        s5cmd \
          --profile project-2 \
          --endpoint-url https://nbg1.your-objectstorage.com \
          ls

        This command uses the credentials provided for the profile "project-2" in ~/.aws/credentials.

        If you created a variable for the alternative endpoint as explained in step 2, you can specify the variable instead of the full URL:

        • Linux / macOS

          s5cmd \
            --endpoint-url $S5_NBG \
            ls
        • Windows (PowerShell)

          s5cmd \
            --endpoint-url $env:S5_NBG \
            ls

      Next, test if you can create a new Bucket and copy a file to this Bucket. First, you have to set the location of the Bucket.

      If the Bucket location does not match the default endpoint, add --endpoint-url <endpoint> in the s5cmd commands below. If your profile does not have the default name, add --profile <profile-name> in the s5cmd commands below.

      • Linux / macOS

        export AWS_REGION=fsn1
        s5cmd mb s3://<bucket_name>
        s5cmd cp example-file.txt s3://<bucket_name>/example.txt
      • Windows (PowerShell)

        $env:AWS_REGION = "fsn1"
        s5cmd mb s3://<bucket_name>
        s5cmd cp example-file.txt s3://<bucket_name>/example.txt

      This will copy the file example-file.txt to your Bucket and rename it to example.txt.
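s5cmd also supports wildcards, which is useful for bulk operations. A short sketch, assuming an existing Bucket (add --endpoint-url or --profile as described above if needed):

```shell
s5cmd cp 's3://<bucket_name>/*' ./downloads/   # download all objects in the Bucket
s5cmd rm 's3://<bucket_name>/logs/*'           # delete all objects under a prefix
```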


    Rclone
    1. Understand the configuration file

      After you install Rclone, you need to create a configuration file. In the configuration file, you can create several "remotes". Each "remote" includes the following information:

      • Location-bound endpoint (e.g. fsn1.your-objectstorage.com)
      • Project-bound access key and secret key

      This means you need separate "remotes" for each location and each project.

      Example:

      Project default (access_key_1 / secret_key_1):

      • bucket1.fsn1.your-objectstorage.com
      • bucket2.nbg1.your-objectstorage.com

      Project holu (access_key_2 / secret_key_2):

      • bucket3.fsn1.your-objectstorage.com
      • bucket4.fsn1.your-objectstorage.com

      In this example, only the Buckets bucket3 and bucket4 are:

      • In the same project (same access key and secret key)
      • In the same location (same endpoint)

      The other Buckets have either different keys or different endpoints. This means that, in this example, you would need 3 different "remotes" in the rclone configuration file.

      Example configuration file:
      [default-nbg1]
      type = s3
      provider = Other
      access_key_id = access_key_1
      secret_access_key = secret_key_1
      endpoint = nbg1.your-objectstorage.com
      acl = private
      region = nbg1
      
      [default-fsn1]
      type = s3
      provider = Other
      access_key_id = access_key_1
      secret_access_key = secret_key_1
      endpoint = fsn1.your-objectstorage.com
      acl = private
      region = fsn1
      
      [holu-fsn1]
      type = s3
      provider = Other
      access_key_id = access_key_2
      secret_access_key = secret_key_2
      endpoint = fsn1.your-objectstorage.com
      acl = private
      region = fsn1

    2. Create the configuration file

      To create the configuration file and add the first remote, you need to run the config command. You can use any name of your choice for the "remote". Just make sure you don't use the same name twice. When you enter the endpoint, note that the Hetzner S3 endpoint has to include the location (in this example fsn1). If your Buckets are in a different location, make sure to adapt the endpoint accordingly.

      rclone config

      Example for Buckets in Falkenstein:

      n) New remote / s) Set configuration password / q) Quit config: n
      Name (choose a name that indicates what it is used for, such as project name and endpoint location): default-fsn1
      Storage: 4 / Amazon S3 compliant storage providers...
      Provider: 39 / Any other S3 compatible provider (Other)
      env_auth: 1 / Enter AWS credentials in the next step.
      Access Key: <your_access_key>
      Secret Key: <your_secret_key>
      Region: 1 / Will use v4 signatures and an empty region.
      Endpoint: fsn1.your-objectstorage.com

      Once you're done, you can "Quit config" and you should find the new configuration file .config/rclone/rclone.conf in your home directory. You can also run this command to list all remotes:

      rclone listremotes
    3. Edit the configuration

      Edit the file .config/rclone/rclone.conf and add the "region" line to each remote:

      [holu-fsn1]
      type = s3
      provider = Other
      access_key_id = <access_key>
      secret_access_key = <secret_key>
      endpoint = fsn1.your-objectstorage.com
      acl = private
      region = fsn1
    4. Test the configuration

      Run this command to list all Buckets via the keys and endpoint you provided for the "remote":

      rclone ls <remote_name>:

      Next, test if you can create a new Bucket and copy a file to this Bucket:

      rclone mkdir <remote_name>:<bucket_name>
      rclone copy example-file.txt <remote_name>:<bucket_name>

      This will copy the file example-file.txt to your Bucket.

      For more commands, see "Rclone Commands" in the official Rclone documentation.
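For recurring backups, rclone's sync command is often more convenient than copy, because it makes the destination match the source. A sketch, assuming an existing remote and Bucket:

```shell
rclone sync ./local-dir <remote_name>:<bucket_name>/backup --progress
rclone check ./local-dir <remote_name>:<bucket_name>/backup   # verify the transfer
```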


    AWS CLI
    1. Understand the configuration file

      After you install the AWS CLI, you need to create a configuration file and a credentials file. In those files, you can create several profiles. Each profile includes the following information:

      • Location-bound endpoint (e.g. fsn1.your-objectstorage.com)
      • Project-bound access key and secret key

      This means you need separate profiles for each location and each project.

      Example:

      Project hero (access_key_1 / secret_key_1):

      • bucket1.fsn1.your-objectstorage.com
      • bucket2.nbg1.your-objectstorage.com

      Project holu (access_key_2 / secret_key_2):

      • bucket3.fsn1.your-objectstorage.com
      • bucket4.fsn1.your-objectstorage.com

      In this example, only the Buckets bucket3 and bucket4 are:

      • In the same project (same access key and secret key)
      • In the same location (same endpoint)

      The other Buckets have either different keys or different endpoints. This means that, in this example, you would need 3 different profiles in the aws configuration file and the credentials file.

      Example configuration and credentials files:
      • Configuration file:

        [default]
        endpoint_url = https://nbg1.your-objectstorage.com
        
        [profile hero-fsn1]
        endpoint_url = https://fsn1.your-objectstorage.com
        
        [profile holu-fsn1]
        endpoint_url = https://fsn1.your-objectstorage.com
      • Credentials file:

        [default]
        aws_access_key_id=access_key_1
        aws_secret_access_key=secret_key_1
        
        [hero-fsn1]
        aws_access_key_id=access_key_1
        aws_secret_access_key=secret_key_1
        
        [holu-fsn1]
        aws_access_key_id=access_key_2
        aws_secret_access_key=secret_key_2

      The files include a default profile. If you run the aws command without specifying a profile name, the AWS CLI will automatically use the information (keys and endpoint) provided for the default profile [default]. If you want to use the keys and endpoint you specified for a different profile, you will have to add the profile name with --profile. Here's an example command to list Buckets:

      aws s3 ls --profile holu-fsn1

    2. Create the configuration file and credentials file

      To create the files, you need to run the configure command. The command below will create the default profile [default].

      If you want to create a profile with a different name, add --profile <profile-name> in the command below.

      aws configure

      You will be asked for two keys: the access key and the secret key. Enter the keys you created and saved in "Step 1". You can leave "Default region name" and "Default output format" empty.

      The new profile should now be in the files ~/.aws/credentials and ~/.aws/config. The file ~/.aws/credentials already includes your access key and your secret key. Now, you need to manually edit the file ~/.aws/config to add the Hetzner S3 endpoint.

      nano ~/.aws/config

      Add the Hetzner S3 endpoint right below the profile. Note: the Hetzner S3 endpoint has to include the location (in the example below fsn1). If your Buckets are in a different location, make sure to adapt the endpoint accordingly.

      Example:

      [default]
      endpoint_url = https://fsn1.your-objectstorage.com
      
      #Keep the lines below commented out when you create Buckets
      #Uncomment the lines below before you create presigned URLs
      #s3 =
      #  addressing_style = virtual
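Once the addressing_style lines are uncommented, you can generate presigned URLs with the aws s3 presign command. A sketch, assuming an existing object (add --profile <profile-name> if needed):

```shell
aws s3 presign s3://<bucket_name>/example.txt --expires-in 3600
```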
    3. Test the configuration

      Run this command to list all Buckets via the keys and endpoint you provided:

      If your profile does not have the default name, add --profile <profile-name> in the command below.

      aws s3 ls

      Next, test if you can create a new Bucket and copy a file to this Bucket:

      If your profile does not have the default name, add --profile <profile-name> in the command below.

      aws s3 mb s3://<bucket_name> --region fsn1
      aws s3 cp example-file.txt s3://<bucket_name>/example.txt

      This will copy the file example-file.txt to your Bucket and rename it to example.txt.
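To upload or update a whole directory rather than single files, the AWS CLI also offers a sync command. A sketch (add --profile <profile-name> if needed):

```shell
aws s3 sync ./local-dir s3://<bucket_name>/local-dir
```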



You should now be all set and ready to manage your Buckets. For more information about the available functions, see the article "List of supported actions". For detailed instructions on how to run supported actions, please check the official documentation of the tool you chose.