Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
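For example, assuming a remote named `remote` (as configured below) with a bucket called `bucket`, a subdirectory can be addressed directly:

```
rclone ls remote:bucket/path/to/dir
```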
Here is an example of making a QingStor configuration. First run `rclone config`.
This will guide you through an interactive setup process.
```
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
XX / QingStor Object Storage
   \ "qingstor"
[snip]
Storage> qingstor
Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter QingStor credentials in the next step
   \ "false"
 2 / Get QingStor credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> access_key
QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> secret_key
Enter a endpoint URL to connection QingStor API.
Leave blank will use the default value "https://qingstor.com:443"
endpoint>
Zone connect to. Default is "pek3a".
Choose a number from below, or type in your own value
   / The Beijing (China) Three Zone
 1 | Needs location constraint pek3a.
   \ "pek3a"
   / The Shanghai (China) First Zone
 2 | Needs location constraint sh1a.
   \ "sh1a"
zone> 1
Number of connnection retry.
Leave blank will use the default value "3".
connection_retries>
Remote config
--------------------
[remote]
env_auth = false
access_key_id = access_key
secret_access_key = secret_key
endpoint =
zone = pek3a
connection_retries =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
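If you would rather script the setup than answer the prompts, `rclone config create` can create the same remote non-interactively. This is only a sketch using the same values as the session above; substitute real credentials, and see `rclone config create --help` for the exact argument syntax your rclone version expects.

```
rclone config create remote qingstor \
    env_auth false \
    access_key_id access_key \
    secret_access_key secret_key \
    zone pek3a
```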
This remote is called `remote` and can now be used like this:
See all buckets

```
rclone lsd remote:
```

Make a new bucket

```
rclone mkdir remote:bucket
```

List the contents of a bucket

```
rclone ls remote:bucket
```
Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket.

```
rclone sync /home/local/directory remote:bucket
```
This remote supports `--fast-list`, which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
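For example, the sync shown earlier can take `--fast-list`, which builds the whole listing in memory instead of walking the bucket directory by directory:

```
rclone sync --fast-list /home/local/directory remote:bucket
```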
rclone supports multipart uploads with QingStor, which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.
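Multipart uploads happen automatically for files above the configured cutoff, so no special invocation is needed. As a sketch (the paths are illustrative):

```
rclone copy /path/to/large-file.iso remote:bucket
# Objects uploaded in multiple parts will have no stored MD5 to report
rclone md5sum remote:bucket
```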
With QingStor you can list buckets (`rclone lsd`) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, `incorrect zone, the bucket is not in 'XXX'`.
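If you need to reach a bucket in a different zone without creating a second remote, the configured zone can be overridden per command. This is a sketch assuming the standard `--qingstor-zone` flag and a bucket that lives in `sh1a` (the bucket name is illustrative):

```
rclone ls --qingstor-zone sh1a remote:bucket-in-sh1a
```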
There are two ways to supply `rclone` with a set of QingStor credentials. In order of precedence:

- Directly in the rclone configuration file: set `access_key_id` and `secret_access_key`.
- Runtime credentials: set `env_auth` to `true` in the config file, and rclone will pick up credentials from the environment (environment variables or IAM).
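As a sketch of the runtime route: with `env_auth = true` set for the remote, export the credentials before running rclone. The variable names below are those used by the QingStor SDK and are an assumption here; check your rclone version's documentation if they are not picked up.

```
# Assumed QingStor SDK variable names; adjust if your environment differs
export QS_ACCESS_KEY_ID=access_key
export QS_SECRET_ACCESS_KEY=secret_key
rclone lsd remote:
```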
Here are the standard options specific to qingstor (QingCloud Object Storage).
`--qingstor-env-auth`: Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key are blank.

`--qingstor-access-key-id`: QingStor Access Key ID. Leave blank for anonymous access or runtime credentials.

`--qingstor-secret-access-key`: QingStor Secret Access Key (password). Leave blank for anonymous access or runtime credentials.

`--qingstor-endpoint`: Endpoint URL to connect to the QingStor API. Leave blank to use the default value "https://qingstor.com:443".

`--qingstor-zone`: Zone to connect to. Default is "pek3a".
Here are the advanced options specific to qingstor (QingCloud Object Storage).
`--qingstor-connection-retries`: Number of connection retries.
`--qingstor-upload-cutoff`: Cutoff for switching to chunked upload.

Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB.
`--qingstor-chunk-size`: Chunk size to use for uploading.
When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size.
Note that `--qingstor-upload-concurrency` chunks of this size are buffered in memory per transfer.
If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.
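As a rough illustration (the figures are assumptions, not defaults): with a 16M chunk size and an upload concurrency of 4, each in-flight transfer buffers about 16M × 4 = 64M, so four simultaneous transfers (`--transfers 4`) would need on the order of 256M.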
`--qingstor-upload-concurrency`: Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded concurrently.
NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though).
If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
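For example, to push a handful of very large files harder, you might raise both the chunk size and the concurrency. The values here are illustrative, and note the checksum caveat above whenever the concurrency is greater than 1:

```
rclone copy --qingstor-chunk-size 64M --qingstor-upload-concurrency 4 /path/to/big-files remote:bucket
```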