Each cloud storage system is slightly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.
Here is an overview of the major features of each cloud storage system.
| Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type |
| ---- | ---- | ------- | ---------------- | --------------- | --------- |
| Enterprise File Fabric | - | Yes | Yes | No | R/W |
| Google Cloud Storage | MD5 | Yes | No | No | R/W |
| Mail.ru Cloud | Mailru ⁶ | Yes | Yes | No | - |
| Microsoft Azure Blob Storage | MD5 | Yes | No | No | R/W |
| Microsoft OneDrive | SHA1 ⁵ | Yes | Yes | No | R |
| pCloud | MD5, SHA1 ⁷ | Yes | No | No | W |
| SFTP | MD5, SHA1 ² | Yes | Depends | No | - |
| WebDAV | MD5, SHA1 ³ | Yes ⁴ | Depends | No | - |
| The local filesystem | All | Yes | Depends | No | - |
¹ Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.
² SFTP supports checksums if the same login has shell access and `md5sum` or `sha1sum` as well as `echo` are in the remote's PATH.
³ WebDAV supports hashes when used with Owncloud and Nextcloud only.
⁴ WebDAV supports modtimes when used with Owncloud and Nextcloud only.
⁵ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for Business and SharePoint Server support Microsoft's own QuickXorHash.
⁶ Mail.ru uses its own modified SHA1 hash.
⁷ pCloud only supports SHA1 (not MD5) in its EU region.
⁸ Opendrive does not support creation of duplicate files using their web client interface or other stock clients, but the underlying storage platform has been determined to allow duplicate files, and it is possible to create them with rclone. It may be that this is a mistake or an unsupported feature.
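The block-hash scheme in footnote ¹ (a SHA256 over the SHA256s of each 4 MB block) can be sketched in Python. This is an illustration of the idea only, not rclone's or Dropbox's implementation:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks, as described in footnote ¹

def dropbox_style_hash(data: bytes) -> str:
    """SHA256 of the concatenated SHA256 digests of each 4 MB block."""
    block_digests = b"".join(
        hashlib.sha256(data[i:i + BLOCK_SIZE]).digest()
        for i in range(0, len(data), BLOCK_SIZE)
    )
    return hashlib.sha256(block_digests).hexdigest()
```

For files of 4 MB or less this reduces to a SHA256 of the single block's SHA256 digest.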
The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the `--checksum` flag in syncs and in the `check` command.

To verify checksums when transferring between cloud storage systems, they must support a common hash type.
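Picking a common hash type is a set intersection over what each remote supports. A small sketch, using a hypothetical capability table drawn from the table above (the dictionary and function names are illustrative, not rclone's API):

```python
from typing import Optional

# Hypothetical per-remote hash support, taken from the feature table above.
SUPPORTED_HASHES = {
    "Google Cloud Storage": {"MD5"},
    "Microsoft OneDrive": {"SHA1"},
    "SFTP": {"MD5", "SHA1"},
}

def common_hash(src: str, dst: str) -> Optional[str]:
    """Return a hash type both remotes support (alphabetically first if
    several are shared), or None if checksum verification is impossible."""
    shared = SUPPORTED_HASHES[src] & SUPPORTED_HASHES[dst]
    return min(shared) if shared else None
```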
The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the `--checksum` flag.
All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.
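The decision logic described above (size always checked; modification time when the remote supports it; hashes with `--checksum`) can be sketched as follows. The function and field names are illustrative, not rclone's internals:

```python
def needs_transfer(src, dst, remote_supports_modtime=True, checksum=False):
    """Decide whether a file must be re-copied, mirroring the checks
    described above. src/dst are illustrative dicts of file attributes."""
    if src["size"] != dst["size"]:
        return True                          # size mismatch always re-copies
    if checksum:
        return src["hash"] != dst["hash"]    # --checksum compares hashes
    if remote_supports_modtime:
        return src["mtime"] != dst["mtime"]  # default: compare modtimes
    return False  # size-only check: equal sizes look in sync
```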
If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, e.g. `file.txt` and `FILE.txt`. If a cloud storage system is case insensitive then that isn't possible.
This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.
The local filesystem and SFTP may or may not be case sensitive depending on OS.
Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.
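The problem files are exactly those whose names collide under case folding. A quick way to find them in a listing (a sketch, not an rclone feature):

```python
from collections import defaultdict

def case_collisions(names):
    """Group names that differ only in case -- the files that make a sync
    to a case insensitive remote never complete fully."""
    groups = defaultdict(list)
    for name in names:
        groups[name.casefold()].append(name)
    return [group for group in groups.values() if len(group) > 1]
```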
If a cloud storage system allows duplicate files then it can have two objects with the same name.
This confuses rclone greatly when syncing - use the `dedupe` command to rename or remove duplicates.
Some cloud storage systems might have restrictions on the characters
that are usable in file or directory names.
When rclone detects such a name during a file upload, it will transparently replace the restricted characters with similar looking unicode characters. This process is designed to avoid ambiguous file names as much as possible and allow moving files between many cloud storage systems transparently.
The name shown by
rclone to the user or during log output will only
contain a minimal set of replaced characters
to ensure correct formatting and not necessarily the actual name used
on the cloud storage.
This transformation is reversed when downloading a file or parsing rclone arguments.
For example, when uploading a file named `my file?.txt` to Onedrive, it will be displayed as `my file?.txt` on the console, but stored as `my file？.txt` (the `?` gets replaced by the similar looking `？` character) to Onedrive.
The reverse transformation allows a file with such a name to be read from Google Drive, by passing a name on the command line in which the `/` has been replaced by the similar looking `／` character.
The table below shows the characters that are replaced by default.
When a replacement character is found in a filename, this character will be escaped with the `‛` character to avoid ambiguous file names (e.g. a file named `␀.txt` would be shown as `‛␀.txt`).
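The replace-and-escape scheme can be sketched with a tiny illustrative subset of the mapping. This is a simplified model of the idea, not rclone's actual encoder:

```python
# Illustrative subset: restricted characters map to similar looking
# fullwidth/symbol forms; literal specials are escaped with ‛.
REPLACEMENTS = {"?": "？", "/": "／", "\x00": "␀"}
REVERSE = {v: k for k, v in REPLACEMENTS.items()}
ESCAPE = "‛"

def encode_name(name: str) -> str:
    out = []
    for ch in name:
        if ch in REVERSE or ch == ESCAPE:
            out.append(ESCAPE + ch)        # escape a pre-existing special
        else:
            out.append(REPLACEMENTS.get(ch, ch))
    return "".join(out)

def decode_name(name: str) -> str:
    out, chars = [], iter(name)
    for ch in chars:
        if ch == ESCAPE:
            out.append(next(chars, ""))    # escaped char is kept literal
        else:
            out.append(REVERSE.get(ch, ch))
    return "".join(out)
```

Round-tripping any name through `encode_name` and `decode_name` returns the original, which is what makes the scheme unambiguous.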
Each cloud storage backend can use a different set of characters, which will be specified in the documentation for each backend.
The default encoding will also encode these file names as they are problematic with many cloud storage systems.
Some backends only support a sequence of well formed UTF-8 bytes as file or directory names.
In this case all invalid UTF-8 bytes will be replaced with a quoted representation of the byte value to allow uploading a file to such a backend. For example, the invalid byte `0xFE` will be encoded as `‛FE`.
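The quoting of invalid bytes can be sketched in Python using the `surrogateescape` error handler, which smuggles undecodable bytes through as code points U+DC80..U+DCFF. Again a sketch of the idea, not rclone's implementation:

```python
def quote_invalid_utf8(raw: bytes) -> str:
    """Decode a name, replacing each invalid UTF-8 byte with a quoted
    hex representation of its value."""
    # surrogateescape maps each invalid byte B to code point 0xDC00 + B
    text = raw.decode("utf-8", errors="surrogateescape")
    return "".join(
        "‛%02X" % (ord(ch) - 0xDC00) if 0xDC80 <= ord(ch) <= 0xDCFF else ch
        for ch in text
    )
```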
A common source of invalid UTF-8 bytes are local filesystems that store names in an encoding other than UTF-8 or UTF-16, such as latin1. See the local filenames section for details.
Most backends have an encoding option, specified as a flag `--backend-encoding` where `backend` is the name of the backend, or as a config parameter `encoding` (you'll need to select the Advanced config in `rclone config` to see it).
This will have a default value which encodes and decodes characters in such a way as to preserve the maximum number of characters (see above).
However this can be incorrect in some scenarios, for example if you have a Windows file system with fullwidth characters such as `＊` and `？` that you want to remain as those characters on the remote rather than being translated to `*` and `?`. The `--backend-encoding` flags allow you to change that. You can disable the encoding completely with `--backend-encoding None` or set `encoding = None` in the config file.
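As a concrete illustration of the config-file form, disabling encoding for a remote looks like this (the remote name `mylocal` is made up for this example):

```ini
# rclone.conf -- "mylocal" is a hypothetical remote name
[mylocal]
type = local
encoding = None
```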
Encoding takes a comma separated list of encodings. You can see the list of all available characters by passing an invalid value to this flag, e.g. `--local-encoding "help"`, and `rclone help flags encoding` will show you the defaults for the backends.
| Encoding | Characters |
| -------- | ---------- |
| CrLf | CR 0x0D, LF 0x0A |
| Ctl | All control characters 0x00-0x1F |
| InvalidUtf8 | An invalid UTF-8 character (e.g. latin1) |
| LeftCrLfHtVt | CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the left of a string |
| LeftSpace | SPACE on the left of a string |
| None | No characters are encoded |
| RightCrLfHtVt | CR 0x0D, LF 0x0A, HT 0x09, VT 0x0B on the right of a string |
| RightSpace | SPACE on the right of a string |
To take a specific example, consider the FTP backend's default encoding. Let's say the FTP server is running on Windows and can't have any of the invalid Windows characters in file names, while the Linux servers you are backing up to it do have those characters in file names. You would then add the Windows set of encodings to the FTP backend's existing ones.
This can be specified using the `--ftp-encoding` flag or using an `encoding` parameter in the config file.
Or let's say you have a Windows server but you want to preserve `？`; you would then have this as the encoding (the Windows encoding set minus the entry for that character).
This can be specified using the `--local-encoding` flag or using an `encoding` parameter in the config file.
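As an illustration of what a custom list looks like in the config file, here is a made-up remote whose encoding is built only from names shown in the table above:

```ini
# rclone.conf -- "myremote" and its encoding list are illustrative only
[myremote]
type = local
encoding = Ctl,InvalidUtf8,RightSpace
```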
MIME types (also known as media types) classify types of documents using a simple text classification, e.g. `text/html` or `application/pdf`.
Some cloud storage systems support reading (
R) the MIME type of
objects and some support writing (
W) the MIME type of objects.
The MIME type can be important if you are serving files directly to HTTP from the storage system.
If you are copying from a remote which supports reading (
R) to a
remote which supports writing (
W) then rclone will preserve the MIME
types. Otherwise they will be guessed from the extension, or the
remote itself may assign the MIME type.
All rclone remotes support a base command set. Other features depend upon backend specific capabilities.
| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | LinkSharing | About | EmptyDir |
| ---- | ----- | ---- | ---- | ------- | ------- | ----- | ------------ | ----------- | ----- | -------- |
| Enterprise File Fabric | Yes | Yes | Yes | Yes | Yes | No | No | No | No | Yes |
| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No | No | No |
| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | No | No | No |
| OpenStack Swift | Yes † | Yes | No | No | No | Yes | Yes | No | Yes | No |
| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | No | Yes | Yes |
This deletes a directory quicker than just deleting all the files in the directory.
† Note Swift, Hubic, and Tardigrade implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.
‡ StreamUpload is not supported with Nextcloud
Used when copying an object to and from the same remote. This is known as a server-side copy so you can copy a file without downloading it and uploading it again. It is used if you use `rclone copy` or `rclone move` if the remote doesn't support `Move` directly.

If the server doesn't support `Copy` directly then for copy operations the file is downloaded then re-uploaded.
Used when moving/renaming an object on the same remote. This is known as a server-side move of a file. This is used in `rclone move` if the server doesn't support `Move` directly.

If the server isn't capable of `Move` then rclone simulates it with `Copy` then delete. If the server doesn't support `Copy` then rclone will download the file and re-upload it.
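The fallback chain above (native `Move`, else `Copy` then delete, else download and re-upload) can be sketched against a toy in-memory remote. All class and method names here are illustrative, not rclone's API:

```python
class FakeRemote:
    """Toy in-memory remote used only to illustrate the fallback chain."""
    def __init__(self, files, can_move=False, can_copy=False):
        self.files = dict(files)
        self.can_move, self.can_copy = can_move, can_copy
        self.calls = []

    def server_side_move(self, src, dst):
        self.calls.append("Move")
        self.files[dst] = self.files.pop(src)

    def server_side_copy(self, src, dst):
        self.calls.append("Copy")
        self.files[dst] = self.files[src]

    def delete(self, name):
        self.calls.append("Delete")
        del self.files[name]

    def download(self, name):
        self.calls.append("Download")
        return self.files[name]

    def upload(self, name, data):
        self.calls.append("Upload")
        self.files[name] = data


def move_file(remote, src, dst):
    """Move a file using the fallback chain described above."""
    if remote.can_move:
        remote.server_side_move(src, dst)   # native server-side Move
    elif remote.can_copy:
        remote.server_side_copy(src, dst)   # simulate Move: Copy then delete
        remote.delete(src)
    else:
        data = remote.download(src)         # last resort: full re-upload
        remote.upload(dst, data)
        remote.delete(src)
```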
This is used to implement `rclone move` to move a directory if possible. If it isn't then it will use `Move` on each file (which falls back to `Copy` then download and upload - see the `Copy` section above).
This is used for emptying the trash for a remote by `rclone cleanup`.

If the server can't do `CleanUp` then `rclone cleanup` will return an error.
‡‡ Note that while Box implements this it has to delete every file individually so it will be slower than emptying the trash via the WebUI
The remote supports a recursive list to list all the contents beneath
a directory quickly. This enables the
--fast-list flag to work.
See the rclone docs for more details.
Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. `rclone rcat`.
Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider.
`about` prints quota information for a remote. Typical output includes bytes used, free, quota and in trash.

If a remote lacks about capability then `rclone about remote:` returns an error.
Backends without about capability cannot determine free space for an `rclone mount`, or use policy `mfs` (most free space) as a member of an `rclone union` remote.
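The `mfs` policy mentioned above simply picks the union member with the most free space. A sketch, where the mapping of remote names to free bytes is an illustrative stand-in for real `about` results:

```python
def pick_mfs(free_space):
    """Return the union member with the most free space, as the mfs
    (most free space) policy does. free_space maps remote name -> bytes
    free (hypothetical data for illustration)."""
    return max(free_space, key=free_space.get)
```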
The remote supports empty directories. See Limitations for details. Most Object/Bucket based remotes do not support this.