Yes they do. All the rclone commands (eg `sync`, `copy` etc) will work on all the remote storage systems.
Sure! Rclone stores all of its config in a single file. If you want to find this file, run `rclone config file` which will tell you where it is.
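For example, on a typical Linux install it will print something like this (the exact path varies by OS and user):

```
$ rclone config file
Configuration file is stored at:
/home/user/.config/rclone/rclone.conf
```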
See the remote setup docs for more info.
This has now been documented in its own remote setup page.
Rclone can sync between two remote cloud storage systems just fine.
Note that it effectively downloads the file and uploads it again, so the node running rclone would need to have lots of bandwidth.
The syncs would be incremental (on a file by file basis).
```
rclone sync drive:Folder s3:bucket
```
You can use rclone from multiple places at the same time if you choose a different subdirectory for the output, eg

```
Server A> rclone sync /tmp/whatever remote:ServerA
Server B> rclone sync /tmp/whatever remote:ServerB
```
If you sync to the same directory then you should use `rclone copy` otherwise the two rclones may delete each other's files, eg

```
Server A> rclone copy /tmp/whatever remote:Backup
Server B> rclone copy /tmp/whatever remote:Backup
```
The file names you upload from Server A and Server B should be different in this case, otherwise some file systems (eg Drive) may make duplicates.
Rclone stores each file you transfer as a native object on the remote cloud storage system. This means that you can see the files you upload as expected using alternative access methods (eg using the Google Drive web interface). There is a 1:1 mapping between files on your hard disk and objects created in the cloud storage system.
No cloud storage system I've come across yet supports partially uploading an object. You can't take an existing object and change some bytes in the middle of it.
It would be possible to make a sync system which stored binary diffs instead of whole objects like rclone does, but that would break the 1:1 mapping of files on your hard disk to objects in the remote cloud storage system.
All the cloud storage systems support partial downloads of content, so it would be possible to make partial downloads work. However, making this work efficiently would require storing a significant amount of metadata, which would break the desired 1:1 mapping of files to objects.
No, not at present. rclone only does uni-directional sync from A -> B. It may do in the future though since it has all the primitives - it just requires writing the algorithm to do it.
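As a partial workaround (a sketch only, not a true bidirectional sync), you can run uni-directional copies in both directions. New files propagate each way, but deletions are not propagated and conflicting edits can be overwritten:

```
rclone copy remoteA:path remoteB:path
rclone copy remoteB:path remoteA:path
```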
Yes. rclone will follow the standard environment variables for proxies, similar to cURL and other programs.
In general the variables are called `http_proxy` (for services reached over http) and `https_proxy` (for services reached over https). Most public services will be using https, but you may wish to set both.
The content of the variable is `protocol://server:port`. The protocol value is the one used to talk to the proxy server itself, and is commonly either `http` or `socks5`.
Slightly annoyingly, there is no standard for the name; some applications may use `http_proxy` but another one `HTTP_PROXY`. rclone will try both variations, but you may wish to set all possibilities. So, on Linux, you may end up with code similar to
```
export http_proxy=http://proxyserver:12345
export https_proxy=$http_proxy
export HTTP_PROXY=$http_proxy
export HTTPS_PROXY=$http_proxy
```
`NO_PROXY` allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance “foo.com” also matches “bar.foo.com”.
```
export no_proxy=localhost,127.0.0.0/8,my.host.name
export NO_PROXY=$no_proxy
```
Note that the ftp backend does not support `ftp_proxy` yet.

This means that rclone can't find the SSL root certificates. Likely you are running rclone on a NAS with a cut-down Linux OS, or possibly on Solaris.
Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.
"/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc. "/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL "/etc/ssl/ca-bundle.pem", // OpenSUSE "/etc/pki/tls/cacert.pem", // OpenELEC
So doing something like this should fix the problem. It also sets the time which is important for SSL to work properly.
```
mkdir -p /etc/ssl/certs/
curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
ntpclient -s -h pool.ntp.org
```
The two environment variables `SSL_CERT_FILE` and `SSL_CERT_DIR`, mentioned in the x509 package, provide an additional way to provide the SSL root certificates.
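For example (a sketch; the paths are placeholders for wherever your certificates actually live):

```
# point the Go runtime (and hence rclone) at a specific CA bundle file
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
# or at a directory of individual certificate files
export SSL_CERT_DIR=/etc/ssl/certs
```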
Note that you may need to add the `--insecure` option to the `curl` command line if it doesn't work without it.
```
curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
```
Likely this means that you are running rclone on a Linux kernel version not supported by the go runtime, ie earlier than version 2.6.23.
See the system requirements section in the go install docs for full details.
This is caused by uploading these files from a Windows computer which hasn't got the Microsoft Office suite installed. The easiest way to fix this is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats.
This happens when rclone cannot resolve a domain. Please check that your DNS setup is generally working, e.g.
```
# both should print a long list of possible IP addresses
dig www.googleapis.com          # resolve using your default DNS
dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
```
If you are using `systemd-resolved` (the default on Arch Linux), ensure it is at version 233 or higher. Previous releases contain a bug which prevents some domains from being resolved properly.
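One way to check the version (`systemd-resolved` is versioned together with systemd itself):

```
systemctl --version
```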
Additionally, the `GODEBUG=netdns=` environment variable can be used to influence which DNS resolver Go uses, which may also resolve certain issues with DNS resolution. See the name resolution section in the go docs.
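For example, to force one resolver or the other (using `rclone lsd` purely as an illustrative command):

```
GODEBUG=netdns=go rclone lsd remote:   # force the pure Go resolver
GODEBUG=netdns=cgo rclone lsd remote:  # force the system (cgo) resolver
```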
It is likely you have more than 10,000 files that need to be synced. By default rclone only gets 10,000 files ahead in a sync so as not to use up too much memory. You can change this default with the `--max-backlog` flag.
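For example, to let rclone queue more files ahead (at the cost of more memory; 200,000 here is just an illustrative value):

```
rclone sync /local/path remote:path --max-backlog 200000
```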