Rclone is a program that can be used to transfer files to and from more than forty different storage backends (e.g., Amazon S3, Box, Dropbox, FTP, Google Cloud Storage, Google Drive, Microsoft Azure Blob Storage, Microsoft OneDrive, Microsoft Sharepoint, SFTP, etc.).

The Mitto rclone plugin provides an rclone job type and a wizard for creating the configurations that control rclone jobs. Mitto's rclone job uses the rclone program to transfer files to and from the Mitto instance on which it runs, or between two remote systems.

Out of the box (as of Mitto 2.8), the Mitto rclone plugin wizard supports FTP and SFTP connections.

However, other rclone connection types, such as Amazon S3, Box, Dropbox, and Google Cloud Storage, can be configured as custom jobs.

Testing Rclone on a Local Machine

Generally speaking, the process for setting up any Mitto rclone job will involve configuring and testing rclone on a local machine (preferably not headless, since some backends use a browser-based OAuth flow).

Download and install Rclone on a local machine.
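For example, on Linux or macOS one common way to install rclone is with its install script (see rclone.org for other platforms and install methods):

curl https://rclone.org/install.sh | sudo bash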

Once you've successfully set up the connection, you will then translate the resulting rclone config's key/value pairs into global rclone flags.

Type rclone config show to display the details of your local rclone remotes.
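For example, a locally configured S3 remote might look something like this (all values here are placeholders):

[s3]
type = s3
provider = AWS
access_key_id = {access-key-id}
secret_access_key = {secret-access-key}
region = {region}

Each key in a remote's config generally corresponds to a global rclone flag of the form --{type}-{key-with-dashes}; for example, access_key_id on an s3 remote becomes --s3-access-key-id. Those flags are what go in the rclone_flags block of the Mitto job configs shown below.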

Using the Mitto Generic Plugin

In all of the examples below, create the custom Rclone job using the Mitto Generic plugin.

In your Mitto UI, click the orange Add Job button in the bottom left-hand corner of the screen, then select Generic Job from the wizard.

On the following screen (below), select rclone as the job type.

Manual rclone S3 job

Use the examples below as templates for your job's JSON config.

Mitto Rclone Job Examples

Below are a few simple examples of custom Mitto Rclone job configurations:

Amazon S3

To connect to Amazon S3 you will need the Access Key ID and Secret Access Key of a user with programmatic access to the buckets you want to use. You will also need the correct region.

Example job config

{
    "command": "copy",
    "credentials": null,
    "rclone_flags": [
        {
            "flag": "--s3-provider",
            "value": "AWS"
        },
        {
            "flag": "--s3-secret-access-key",
            "value": "{secret-access-key}"
        },
        {
            "flag": "--s3-access-key-id",
            "value": "{access-key-id}"
        },
        {
            "flag": "--s3-region",
            "value": "{region}"
        }
    ],
    "targets": {
        "destination": ":s3:{bucket-name}/{path/to/file/}",
        "source": "/var/mitto/data/{local-file-name}"
    },
    "timeout_seconds": 18000
}

This would be equivalent to the local rclone command:

rclone copy /var/mitto/data/{local-file-name} s3:{bucket-name}/{path/to/file/}

In the rclone_flags block, replace {secret-access-key} with your AWS Secret Access Key, {access-key-id} with your AWS Access Key ID, and {region} with your AWS region.

In the targets block, this copies a file from Mitto (source) to Amazon S3 (destination). Replace {bucket-name} and {path/to/file/} with your bucket and, if necessary, additional folder-like paths (do not escape spaces with \). Also replace {local-file-name} with the name of the file you want to copy.
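To copy in the other direction (from Amazon S3 down to Mitto), swap the source and destination in the targets block; a minimal sketch using the same placeholders:

    "targets": {
        "source": ":s3:{bucket-name}/{path/to/file/}",
        "destination": "/var/mitto/data/"
    }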

See Rclone's Amazon S3 documentation for all the available flags.

Box

To use rclone with Box you will need to create an access token.

At the end of the rclone process you should see something similar to this:

[box]
client_id = 
client_secret = 
token = {"access_token":"XXX","token_type":"bearer","refresh_token":"XXX","expiry":"XXX"}

Example job config

{
    "command": "copy",
    "credentials": null,
    "rclone_flags": [
        {
            "flag": "--box-token",
            "value": "{\"access_token\": \"{access_token}\", \"token_type\": \"bearer\", \"refresh_token\": \"{refresh_token}\", \"expiry\": \"{expiry}"}"
        }
    ],
    "targets": {
        "destination": ":box:{/path/to/file/}",
        "source": "/var/mitto/data/{local-file-name}"
    },
    "timeout_seconds": 18000
}

This would be equivalent to the local rclone command:

rclone copy /var/mitto/data/{local-file-name} box:{/path/to/file/}

In the rclone_flags block, replace the token JSON object with the token returned after configuring locally. Make sure to escape the double quotes with \.
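For example, if your local config shows a token like this (values here are placeholders):

token = {"access_token":"abc123","token_type":"bearer","refresh_token":"def456","expiry":"2025-01-01T00:00:00Z"}

the escaped value in the job JSON would be:

"value": "{\"access_token\":\"abc123\",\"token_type\":\"bearer\",\"refresh_token\":\"def456\",\"expiry\":\"2025-01-01T00:00:00Z\"}"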

In the targets block, this copies a file from Mitto (source) to Box (destination). Replace the destination {/path/to/file/} with the correct path in Box, and replace {local-file-name} with the file you want to copy.

See Rclone's Box documentation for all the available flags.

Dropbox

To use rclone with Dropbox you will need to create an access token.

At the end of the rclone process you should see something similar to this:

[dropbox]
app_key =
app_secret =
token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Example job config

{
    "command": "copy",
    "credentials": null,
    "rclone_flags": [
        {
            "flag": "--dropbox-token",
            "value": "{token}"
        }
    ],
    "targets": {
        "destination": ":dropbox:{/path/to/file/}",
        "source": "/var/mitto/data/{local-file-name}"
    },
    "timeout_seconds": 18000
}

This would be equivalent to the local rclone command:

rclone copy /var/mitto/data/{local-file-name} dropbox:{/path/to/file/}

In the rclone_flags block, replace {token} with the token returned after configuring locally.

In the targets block, this copies a file from Mitto (source) to Dropbox (destination). Replace the destination {/path/to/file/} with the correct path in Dropbox, and replace {local-file-name} with the file you want to copy.

See Rclone's Dropbox documentation for all the available flags.

Google Cloud Storage

There are several ways to configure rclone with Google Cloud Storage. In the example below we chose the service account route.

Learn more from Google on creating and managing service account keys.
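The downloaded service account key is a JSON file that looks roughly like this (abbreviated; a real key contains additional fields and actual values):

{
    "type": "service_account",
    "project_id": "{project-id}",
    "private_key_id": "...",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
    "client_email": "{service-account-name}@{project-id}.iam.gserviceaccount.com"
}

Upload this file to Mitto's file manager so the job can reference it with the --gcs-service-account-file flag.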

Example job config

{
    "command": "copy",
    "credentials": null,
    "rclone_flags": [
        {
            "flag": "--gcs-project-number",
            "value": "{project_number}"
        },
        {
            "flag": "--gcs-service-account-file",
            "value": "/var/mitto/data/{service_account_json_file}.json"
        }
    ],
    "targets": {
        "source": ":gcs:{bucket}/{path/to/file}",
        "destination": "/var/mitto/data/{file}"
    },
    "timeout_seconds": 18000
}

This would be equivalent to the local rclone command:

rclone copy gcs:{bucket}/{path/to/file} /var/mitto/data/{file} 

In the rclone_flags block, replace {project_number} with your GCP project's number. Drop the GCP service account JSON file into Mitto's file manager and replace {service_account_json_file} with the name of your service account JSON file.

In the targets block, this copies a file from Google Cloud Storage (source) to Mitto (destination). Replace the source's {bucket} and {path/to/file} with the correct bucket and file path in Google Cloud Storage, and replace {file} with the name of the file you want to create in Mitto.
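Copying from Mitto up to Google Cloud Storage works the same way with the source and destination swapped; a minimal sketch using the same placeholders:

    "targets": {
        "source": "/var/mitto/data/{local-file-name}",
        "destination": ":gcs:{bucket}/{path/to/file}"
    }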

See Rclone's Google Cloud Storage documentation for all the available flags.