Rclone

Overview

Rclone is a command-line application used to transfer files to and from more than forty different cloud storage backends (e.g., Amazon S3, Box, Dropbox, Google Cloud Storage, Google Drive, Microsoft Azure Blob Storage, Microsoft OneDrive, Microsoft SharePoint, SFTP). To review configuration details for many of these sources, see the Rclone Sources article.


The Zuar Runner rclone plugin provides an rclone job type and a wizard for creating the configurations that control rclone jobs. Zuar Runner’s rclone job uses the rclone program to transfer files to and from the Zuar Runner instance on which it runs, or between two remote systems.

Currently the Zuar Runner rclone plugin wizard supports creating FTP and SFTP rclone jobs. Custom rclone jobs can be created to use any other rclone functionality. For even more advanced rclone use cases, create command line jobs.

Zuar Runner Rclone API Documentation

See detailed Zuar Runner Rclone Reference documentation.

Create a Zuar Runner Rclone Job

You can set up the initial connection, or “remote”, using the wizard; alternatively, you can use the Credentials page in your Zuar Runner UI to set up the remote beforehand.

Select Rclone as the Job Type

To create the remote using the wizard, click on “Add Job” on the bottom left of the screen in your Zuar Runner UI. Then select “Rclone”:

Wizard Step 1

Choose a Backend

On the second screen of the wizard, type in the name of the job and select the rclone backend type:

Wizard Screen 2

Supply Credentials

On the third screen you will enter the remote name and credentials for the connection.

If you created the credentials separately, select “Use Existing Named Credentials”; otherwise choose “Provide Credentials Now…” and then type in your remote name, hostname, username, and password:

Wizard Screen 3

The Host Name field will only accept hostnames. This is a valid hostname: sftp.hostname.net

Do not include the protocol when entering a hostname. This is an invalid hostname, because it includes the sftp:// portion of the URL: sftp://sftp.hostname.net

When supplying a hostname, remove any protocol declaration: http://, sftp://, ftp://, etc.

Set Rclone Flags

On the fourth screen (not pictured) you can include any number of rclone flags. In addition to the flags available for the command you’re using (for example, “copy” or “copyto”), there are global flags that can be used with any command.
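For example, a job that copies only recently modified files over several parallel streams could add rclone’s global --transfers and --max-age flags. A minimal sketch of the resulting rclone_flags block (both are standard rclone global flags; the values shown are illustrative):

"rclone_flags": [
    {
        "flag": "--transfers",
        "value": "8"
    },
    {
        "flag": "--max-age",
        "value": "24h"
    }
]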

Set Source and Destination

The final screen of the wizard is where you define the source and destination files. In the following example we’re copying from the Zuar Runner file directory to a destination on the SFTP server. A remote source or destination is always an rclone remote name followed by a colon (sftpserver: in this case):

Wizard Screen 5
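Continuing this example, the resulting targets block would look something like the following sketch (the remote name sftpserver and both paths are illustrative):

"targets": {
    "source": "/var/mitto/data/example.csv",
    "destination": "sftpserver:uploads/"
}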

Zuar Runner Rclone Job Configuration Examples

Amazon S3

To connect to AWS S3 you need the Access Key ID and Secret Access Key of a user with programmatic access to the buckets you want to use. You will also need to know the correct AWS region.

Example job configuration:

{
    "command": "copy",
    "credentials": null,
    "rclone_flags": [
        {
            "flag": "--s3-provider",
            "value": "AWS"
        },
        {
            "flag": "--s3-secret-access-key",
            "value": "{secret-access-key}"
        },
        {
            "flag": "--s3-access-key-id",
            "value": "{access-key-id}"
        },
        {
            "flag": "--s3-region",
            "value": "{region}"
        }
    ],
    "targets": {
        "destination": ":s3:{bucket-name}/{path/to/file/}",
        "source": "/var/mitto/data/{local-file-name}"
    },
    "timeout_seconds": 18000
}

This configuration is equivalent to the following rclone command:

rclone copy /var/mitto/data/{local-file-name} s3:{bucket-name}/{path/to/file/}

In the rclone_flags block, replace {secret-access-key} with your AWS Secret Access Key, {access-key-id} with your AWS Access Key ID, and {region} with your AWS region.

The targets block copies a file from Zuar Runner (source) to Amazon S3 (destination). Replace {bucket-name} and {path/to/file} with the bucket and, if necessary, the folder-like path (do not escape spaces with \). Also replace {local-file-name} with the name of the file you want to copy.
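As a filled-in sketch, with a hypothetical bucket, path, and file name (the key placeholders still need your real credentials), the full command would read as follows, using rclone’s on-the-fly :s3: remote syntax since no named remote exists on the command line:

rclone copy /var/mitto/data/sales.csv :s3:example-bucket/exports/ \
    --s3-provider AWS \
    --s3-access-key-id {access-key-id} \
    --s3-secret-access-key {secret-access-key} \
    --s3-region us-east-1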

Read more information on rclone’s Amazon S3 documentation for all available flags.

Box

To use rclone with Box you need an access token.

Box’s access token is used to generate a refresh token. Because of this process, you cannot use a standard rclone job type in Zuar Runner. Instead, create an rclone.conf file with the information about the Box remote to refresh the token.

First, run rclone config to create the Box remote. You should see something similar to this at the conclusion of the command:

[box]
type = box
box_sub_type = user
token = {"access_token":"xxxxx","token_type":"bearer","refresh_token":"xxxx","expiry":"xxxx"}

  1. Create a file on your local computer with a text editor (e.g., Notepad++ or Sublime Text).

  2. Paste the config generated by the config command (see above) into the file and save it (e.g., as rclone.conf), to be referenced in a Zuar Runner command job.

  3. Drag and drop or upload the rclone.conf file into Zuar Runner’s File page.

  4. Create a new Command Job and use the following command: rclone copy box:/'Database Folder'/'Flat Files'/csv/ /var/mitto/data/ --config /var/mitto/data/rclone.conf

Here’s what this command does:

Use rclone to copy files from the /'Database Folder'/'Flat Files'/csv/ directory on the remote called box to Zuar Runner’s /var/mitto/data file directory, using the configuration file (--config) located at /var/mitto/data/rclone.conf.

  • If you need to navigate a folder/directory with capitalized letters and/or spaces, enclose the name in single quotes.

With the above rclone.conf file in place, rclone will automatically continue to refresh the access token using the refresh token.
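To confirm the uploaded configuration works before scheduling larger transfers, a quick listing can be run as a command job (a sketch; rclone lsd lists the directories on a remote):

rclone lsd box: --config /var/mitto/data/rclone.conf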

Read more information on rclone’s Box documentation for all the available flags.

Dropbox

To use rclone with Dropbox you will need to create an access token.

At the end of the rclone config process you should see something similar to this:

[dropbox]
app_key =
app_secret =
token = XXXXXXXXXXXXXXXXXXXXXXXXXXXXX_XXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Example job configuration:

{
    "command": "copy",
    "credentials": null,
    "rclone_flags": [
        {
            "flag": "--dropbox-token",
            "value": "{token}"
        }
    ],
    "targets": {
        "destination": ":dropbox:{/path/to/file/}",
        "source": "/var/mitto/data/{local-file-name}"
    },
    "timeout_seconds": 18000
}

This would be equivalent to the following local rclone command:

rclone copy /var/mitto/data/{local-file-name} dropbox:{/path/to/file/}

In the rclone_flags block replace the {token} with the token returned after configuring locally.
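To check the token before running the job, a quick listing can be run locally or as a command job. A sketch, assuming the same token value works on the command line via rclone’s on-the-fly remote syntax:

rclone lsd :dropbox: --dropbox-token '{token}'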

In the targets block, this would copy a file from Zuar Runner (source) to Dropbox (destination). Replace the destination {/path/to/file/} with the correct path in Dropbox and replace {local-file-name} with the file you want to copy.

Read more information on rclone’s Dropbox documentation (https://rclone.org/dropbox/) for all the available flags.

OneDrive

To use rclone with OneDrive you will need an access token.

First, run rclone config to create the OneDrive remote. You should see something similar to this at the conclusion of the command:

[onedrive]
type = onedrive
region = global
token = {"access_token":"eyJ0eXAiOiJKV1QiLCJ..."}
drive_id = ID
drive_type = business

Example job configuration:

{
    "command": "copy",
    "credentials": null,
    "rclone_flags": [
        {
            "flag": "--onedrive-token",
            "value": "{\"access_token\":\"token\",\"token_type\":\"Bearer\",\"refresh_token\":\"token\",\"expiry\":\"2021-03-24T10:33:52.0571522+11:00\"}"
        },
        {
            "flag": "--onedrive-drive-id",
            "value": "ID"
        },
        {
            "flag": "--onedrive-drive-type",
            "value": "business"
        },
        {
            "flag": "--onedrive-region",
            "value": "global"
        }
    ],
    "targets": {
        "destination": "/var/mitto/data/",
        "source": ":onedrive:/file.txt"
    },
    "timeout_seconds": 18000
}

This would be equivalent to the following local rclone command:

rclone copy onedrive:/file.txt /var/mitto/data/

In the rclone_flags block replace the value of --onedrive-token with the token returned after configuring locally. Be sure to escape the quotes in the JSON blob string as in the example.
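One way to produce the escaped string is to JSON-encode the raw token. A sketch, assuming the token JSON from your local rclone config has been saved to a hypothetical file named token.json and Python 3 is available:

python3 -c 'import json, sys; print(json.dumps(sys.stdin.read().strip()))' < token.json

The output, including the surrounding double quotes, can be pasted directly as the value of --onedrive-token.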

In the targets block, this would copy a file from OneDrive (source) to /var/mitto/data (destination). Replace /file.txt with the path to the file you want to copy.

Read more information on rclone’s OneDrive documentation for all the available flags.

Google Cloud Storage

There are several ways to configure rclone with Google Cloud Storage. The example below uses a service account.

See Google’s documentation on creating and managing service account keys.
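If you already have a service account and only need a key file, one way to generate it is with the gcloud CLI. A sketch (the account name, project ID, and output file name are placeholders):

gcloud iam service-accounts keys create service_account.json \
    --iam-account={service-account-name}@{project-id}.iam.gserviceaccount.com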

Example job configuration:

{
    "command": "copy",
    "credentials": null,
    "rclone_flags": [
        {
            "flag": "--gcs-project-number",
            "value": "{project_number}"
        },
        {
            "flag": "--gcs-service-account-file",
            "value": "/var/mitto/data/{service_account_json_file}.json"
        }
    ],
    "targets": {
        "source": ":gcs:{bucket}/{path/to/file}",
        "destination": "/var/mitto/data/{file}"
    },
    "timeout_seconds": 18000
}

This would be equivalent to the following local rclone command:

rclone copy gcs:{bucket}/{path/to/file} /var/mitto/data/{file}

In the rclone_flags block replace {project_number} with your GCP project number. Drop the GCP service account JSON file into Zuar Runner’s file manager and replace {service_account_json_file} with the name of your service account JSON file.

In the targets block, this would copy a file from Google Cloud Storage (source) to Zuar Runner (destination). Replace the source’s {bucket} and {path/to/file} with the correct bucket and file path in Google Cloud Storage, and replace {file} with the name of the file you want to create in Zuar Runner.

Read more information on rclone’s Google Cloud Storage documentation for all available flags.

WebDAV

WebDAV (Web Distributed Authoring and Versioning) is a protocol, similar to FTP and SFTP, that extends HTTP to allow clients to create, change, and move documents on a server. Apache, Nginx, and many other servers have modules for WebDAV, and it is supported by many sites, services, and software packages. Below are a few examples of rclone jobs for WebDAV, including Egnyte and SharePoint.

Read more information on rclone’s WebDAV documentation for all available flags.

Egnyte

To connect to Egnyte with rclone you will need your WebDAV URL (https://{yourcompany}.egnyte.com/webdav) and your Egnyte username and password.

First, configure a WebDAV Rclone remote locally with rclone config. For “Vendor”, select Other. At the end of the command you should see something like this:

[webdav]
type = webdav
url = https://{yourcompany}.egnyte.com/webdav
vendor = other
user = {user@email.com}
pass = your_password

Example job configuration:

{
    "command": "copy",
    "credentials": null,
    "rclone_flags": [
        {
            "flag": "--webdav-url",
            "value": "https://{yourcompany}.egnyte.com/webdav"
        },
        {
            "flag": "--webdav-user",
            "value": "{user@email.com}"
        },
        {
            "flag": "--webdav-pass",
            "value": "your_password"
        }
    ],
    "targets": {
        "destination": ":webdav:{path/to/folder}",
        "source": "/var/mitto/data/test.txt"
    },
    "timeout_seconds": 18000
}

This would be equivalent to the following local rclone command:

rclone copy /var/mitto/data/test.txt webdav:{path/to/folder}

In the rclone_flags block replace {yourcompany} with your company’s Egnyte URL. Replace the username and password with your Egnyte username and the obscured password from your local rclone config (rclone config show).
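If you only have the plain-text password, rclone can generate the obscured form itself; the output can be used as the value of --webdav-pass:

rclone obscure 'your_password'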

In the targets block, this would copy a file from Zuar Runner (source) to Egnyte (destination). Replace the source file with any file uploaded to Zuar Runner, and replace the destination path {path/to/folder} with the target folder in Egnyte.

SharePoint

To connect to SharePoint using WebDAV you will need the SharePoint site URL and a username and password with access to your site.

First, configure a WebDAV rclone remote locally with rclone config. For “Vendor”, select SharePoint. At the end of the process you should see something like:

[webdav]
type = webdav
url = https://{yourcompany}.sharepoint.com
vendor = sharepoint
user = {user@email.com}
pass = your_password

Example job configuration:

{
    "command": "copy",
    "credentials": null,
    "rclone_flags": [
        {
            "flag": "--webdav-vendor",
            "value": "sharepoint"
        },
        {
            "flag": "--webdav-url",
            "value": "https://{yourcompany}.sharepoint.com"
        },
        {
            "flag": "--webdav-user",
            "value": "{user@email.com}"
        },
        {
            "flag": "--webdav-pass",
            "value": "your_password"
        }
    ],
    "targets": {
        "source": ":webdav:{path/to/file}",
        "destination": "/var/mitto/data"
    },
    "timeout_seconds": 18000
}

This would be equivalent to the following local rclone command:

rclone copy webdav:{path/to/file} /var/mitto/data

In the rclone_flags block replace {yourcompany} with your organization’s portion of the SharePoint URL. Also replace the username and password with your SharePoint username and the obscured password from your local rclone config (rclone config show; see the rclone obscure example above).

In the targets block, this would copy a file from SharePoint (source) to Zuar Runner (destination /var/mitto/data). Replace the source path {path/to/file} with the path to the SharePoint file you want to download.

SFTP (using SSH Key File)

To connect to an SFTP server using an SSH key file, you need an SFTP username, a hostname for the SFTP server, and the private key file.

Configure an SFTP rclone remote locally with rclone config; when prompted for the key file, enter the path to your SSH key. At the end of the process you should see something like:

[sftp]
type = sftp
host = sftp.hostname.com
user = USERNAME
key_file = /path/to/private_key

Before you create an rclone job on Zuar Runner, you first need to upload your private key to the Zuar Runner server using the Zuar Runner Files page.

In order to do this safely, we suggest encrypting the private key before uploading it to Zuar Runner’s file manager:

Encrypt a file on Mac/Linux using OpenSSL: the following command encrypts the file private_key and outputs the encrypted version as encrypted.txt (here 3ncrypt, passed with -k, is the encryption passphrase; substitute your own):

openssl enc -k 3ncrypt -aes-256-cbc -md sha512 -pbkdf2 -iter 100000 -salt -in private_key -out encrypted.txt

Upload the encrypted file to Zuar Runner’s file manager, then create the following two command line jobs to move the file and decrypt it.

Click + Add to create a new Command job. In the wizard, add a job title, and then add the following commands, respectively. These jobs need to be run in order, and should only be run once.

Job #1

Make a hidden directory .ssh (if it doesn’t exist) and move the encrypted file to the hidden directory:

mkdir -p /var/mitto/etc/.ssh && mv /var/mitto/data/encrypted.txt /var/mitto/etc/.ssh

Job #2

Decrypt the file, creating a file named private_key in /var/mitto/etc/.ssh:

openssl enc -d -k 3ncrypt -aes-256-cbc -md sha512 -pbkdf2 -iter 100000 -salt -in /var/mitto/etc/.ssh/encrypted.txt -out /var/mitto/etc/.ssh/private_key

Example job configuration:

{
    "command": "copy",
    "credentials": null,
    "rclone_flags": [
        {
            "flag": "--sftp-key-file",
            "value": "/var/mitto/etc/.ssh/private_key"
        },
        {
            "flag": "--sftp-user",
            "value": "USERNAME"
        },
        {
            "flag": "--sftp-host",
            "value": "sftp.hostname.com"
        }
    ],
    "targets": {
        "destination": ":sftp:path/to/folder/",
        "source": "/var/mitto/data/text.csv"
    },
    "timeout_seconds": 18000
}

This would be equivalent to the following local rclone command:

rclone copy /var/mitto/data/text.csv sftp:path/to/folder/

In the rclone_flags block, replace the value of --sftp-key-file with the path to your key file, replace USERNAME with your SFTP username in --sftp-user, and replace the value of --sftp-host with your SFTP hostname.
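To verify the key, user, and host before running the copy, the same flags can be used with a directory listing. A sketch using rclone’s on-the-fly :sftp: remote syntax:

rclone lsd :sftp:path/to/folder/ \
    --sftp-host sftp.hostname.com \
    --sftp-user USERNAME \
    --sftp-key-file /var/mitto/etc/.ssh/private_key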