tld Veteran
Joined: 09 Dec 2003 Posts: 1845
Posted: Sat Apr 20, 2019 2:45 pm Post subject: |
Syl20 wrote: | krinn wrote: | The 2nd problem with rsync is that your server need to run it, while for scp you need a server ready for ssh. |
Not necessarily, if you use rsync over ssh ( -e). | To be clear, it uses ssh by default, and recognizes options in ~/.ssh/config etc. The -e is only really needed to override that.
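For what it's worth, a quick sketch (the host alias, hostname, port, and user below are all made up for illustration): an ~/.ssh/config entry applies to rsync exactly as it does to ssh and scp, and -e is only for one-off overrides:

```shell
# Hypothetical ~/.ssh/config entry (alias, hostname, port, user are illustrative):
#
#   Host backup
#       HostName backup.example.org
#       Port 2222
#       User tom
#
# rsync honors the client config automatically, no -e needed:
rsync file1 backup:

# -e is only for overriding it on the fly, e.g. a one-off port:
rsync -e 'ssh -p 2222' file1 user@backup.example.org:
```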
Tom
tld Veteran
Joined: 09 Dec 2003 Posts: 1845
Posted: Sat Apr 20, 2019 2:52 pm Post subject: |
Ant P. wrote: | And again: sshfs and gvfs work fine from a terminal and both let you use standard filesystem tools instead of the journalctl-ish scp. | I use sshfs sometimes as well...really nice. I have one Windows machine I need for work, and I run the Bitvise ssh server on that, and use an sshfs mount when I need to transfer files to it. 1000x better than screwing with CIFS etc.
Tom
tld Veteran
Joined: 09 Dec 2003 Posts: 1845
Posted: Sat Apr 20, 2019 3:00 pm Post subject: |
Tony0945 wrote: | It is possible to drive nails with a screwdriver instead of a hammer. Most carpenters carry both. | I think the analogy is more overkill than rsync. To be clear: Transferring a single file with scp: Code: | scp file1 user@host: | ...and transferring it with rsync: Code: | rsync file1 user@host: | ...and you could make it behave more like scp by adding --progress. I guess that's why I happened to get used to using just rsync. The complexity thing seems to be a bit overstated here.
Tom
Ant P. Watchman
Joined: 18 Apr 2009 Posts: 6920
Posted: Sat Apr 20, 2019 6:22 pm Post subject: |
Have a few more examples!
Code: | ssh -p 12345 user@bar foo
scp -p 12345 file user@bar:
# 12345: No such file or directory
# ssh: connect to host bar port 22: Connection refused
man scp
scp -P 12345 file user@bar: # ????? |
Code: | cp -a dir ~/mnt/user@host/
rsync -a dir user@host:
scp -a dir user@host:
# unknown option -- a
# usage: scp [-346BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file]
# [-l limit] [-o ssh_option] [-P port] [-S program] source ... target |
scp is discouraged because it's bad software by the authors' own admission and there are countless better ways to do the same thing, not because someone's trying to fluoridate your distro and steal its essence.
Tony0945 Watchman
Joined: 25 Jul 2006 Posts: 5127 Location: Illinois, USA
Posted: Sat Apr 20, 2019 6:58 pm Post subject: |
Code: | sync file1 user@host: |
I'm afraid this will wipe out everything on host, except file1. When I sync portage, it wipes out old ebuilds no longer in the tree.
EDIT: And where does it put it? root?
Code: | scp file1 user@host:`pwd`/file1 | or Code: | scp -r * user@host:`pwd`/ |
Are the forms I most often use.
Tony0945 Watchman
Joined: 25 Jul 2006 Posts: 5127 Location: Illinois, USA
Posted: Sat Apr 20, 2019 7:10 pm Post subject: |
tld wrote: | I think the analogy is more overkill than rsync. |
Well, to continue the argument, you could write all your code with a hex editor like we did in the early '80s. But to "modernize" it, the hex editor has to be embedded in systemd and communicate via dbus.
tld Veteran
Joined: 09 Dec 2003 Posts: 1845
Posted: Sat Apr 20, 2019 7:55 pm Post subject: |
Tony0945 wrote: | Code: | sync file1 user@host: |
I'm afraid this will wipe out everything on host, except file1. When I sync portage, it wipes out old ebuilds no longer in the tree.
EDIT: And where does it put it? root?
Code: | scp file1 user@host:`pwd`/file1 | or Code: | scp -r * user@host:`pwd`/ |
Are the forms I most often use. |
Wow...you've totally lost me. I assume you meant rsync as opposed to sync, but that aside, this: Code: | rsync file1 user@host: |
...will send file1 to the home directory of user "user" exactly as scp would. There's some serious misunderstanding going on here for sure.
EDIT: To clarify...even scp uses the user's home for any relative paths.
Without expressly using --delete, rsync never deletes anything, and you'd have to go way out of your way with that plus -r (--recursive) as well. I'm frankly not even sure offhand how you would make it do what you're describing. I've been doing this for years.
Tom
Last edited by tld on Sat Apr 20, 2019 8:14 pm; edited 1 time in total
tld Veteran
Joined: 09 Dec 2003 Posts: 1845
Posted: Sat Apr 20, 2019 8:05 pm Post subject: |
To clarify a little more: without -r, rsync will never send or receive anything but the files expressly specified. In addition, just like scp, for it to send anything anywhere other than the user's home you'd have to specify a full path, like user@host:/some/path.
This is why I've been stressing that, especially without -r and --delete, rsync is almost identical to scp. One thing I actually prefer is the syntax for specifying multiple remote files. With scp the syntax would be, for example: Code: | scp user@host:"dir1/file1 dir2/file2" . | ...whereas with rsync the syntax doesn't require quoting: Code: | rsync user@host:dir1/file1 :dir2/file2 . |
Tom
Tony0945 Watchman
Joined: 25 Jul 2006 Posts: 5127 Location: Illinois, USA
Posted: Sat Apr 20, 2019 8:40 pm Post subject: |
tld wrote: | There's some serious misunderstanding going on here for sure.
| I'm sure too. I'd like to use rsync for backups, but the backup device already has 2T of data. I'd hate to have all that copied again. I read the man page with about 20 options, and a webpage that suggested backing up using about ten options, and decided to do nothing to avoid losing the data. Thought about writing a script that would check file size/date, but it seemed like an awful lot of work.
Hu Administrator
Joined: 06 Mar 2007 Posts: 22659
Posted: Sat Apr 20, 2019 9:20 pm Post subject: |
When Portage calls rsync, the passed arguments include --delete to clear out old ebuilds.
If you are concerned about what rsync will do, use -ni (dry-run, itemize) to prevent changes and get a detailed listing of what would have been done.
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6148 Location: Dallas area
Posted: Sat Apr 20, 2019 10:22 pm Post subject: |
From my nightly rsync runs...
Options on root are "-aPxS --delete" because I want to keep it as much a mirror as possible.
Options on two other partitions are "-aPxO --delete" with some --excludes, because I generally want mirror copies of what's there but don't care about dir times.
Options on distfiles and packages are "-aPx" because I want to keep the packages, even if removed from source dirs.
Other than an occasional "-n" if I've changed something and want to make sure it does what I think I've told it to do I don't use the other flags.
Having said that, if I need to grab a file on the desktop when I'm on the laptop, I still use scp, but that's on a home network, not public.
Tony: as Hu said, use the -n flag for a dry-run, but if the modification times are different it will try to copy those files again, even if they are the same size.
If I were going to change to rsync, and the mod times weren't in sync, I would write a script to make the backup times match the source times.
I used to do my backups with "cp -a" to keep the times the same, so it wasn't a real problem to convert to rsync.
Edit to add: the "x" option I use so I don't cross mount points. _________________ UM780, 6.1 zen kernel, gcc 13, profile 17.0 (custom bare multilib), openrc, wayland
Last edited by Anon-E-moose on Sat Apr 20, 2019 10:59 pm; edited 1 time in total
Tony0945 Watchman
Joined: 25 Jul 2006 Posts: 5127 Location: Illinois, USA
Posted: Sat Apr 20, 2019 10:41 pm Post subject: |
Anon-E-moose wrote: | I used to do my backups with "cp -a" to keep the times the same, so it wasn't a real problem to convert to rsync. |
That's what I used for the original backup. Thanks very much for the examples. I hate to lose data. Programs can always be regenerated.
Hu, thanks for your tip also. I'll be sure to use it!
tld Veteran
Joined: 09 Dec 2003 Posts: 1845
Posted: Sat Apr 20, 2019 11:41 pm Post subject: |
Interesting. That example is a lot more complex than anything I've needed to use. I back up a development directory under my home directory to a few places, but in that case everything is owned by my user and there are no links or anything special. The options I use are just: Code: | rsync -rvut --delete --exclude '.*.swp' <source> <target> | With -t to update modify times it works bidirectionally as well. That is, I can back up from my workstation to my laptop, and if I modify things on the laptop when I'm in the office, I can go the other direction when I get home.
Tom
Akkara Bodhisattva
Joined: 28 Mar 2006 Posts: 6702 Location: &akkara
Posted: Sun Apr 21, 2019 5:52 am Post subject: |
If you're just starting to use rsync and the file times hadn't been preserved previously, you can use the --checksum option (-c is the short synonym) to force a comparison based on checksums of the actual file contents. This will read through the entire source (and destination, if you're copying recursively) computing checksums, so if you have a lot of data be prepared for a wait. Or do it piecemeal, on smaller subdirectories at a time. Or try the --size-only option to get started, but beware that that one will miss changes that result in a same-sized file.
I'll sometimes use the -c option even when I know little has changed, as a way of periodically checking my backups for bitrot (often in conjunction with the -n dry-run option for that purpose).
Unfortunately it isn't particularly intelligent in how it schedules the operations and it will read from the source device a long time checksumming it before turning attention to the destination and starting there, when both could have been proceeding in parallel and cut the time needed nearly in half.
Also note that (if I recall correctly) rsync uses md5 checksums. Collision attacks on md5 are known to exist, and thus rsync may be unreliable if operating in an adversarial environment. For normal everyday use, without an active agent trying to mess with you, md5 is fine; the odds of collision are minuscule. _________________ Many think that Dilbert is a comic. Unfortunately it is a documentary.
krinn Watchman
Joined: 02 May 2003 Posts: 7470
Posted: Sun Apr 21, 2019 10:30 am Post subject: |
Like I said, I wouldn't sync directories with scp myself; I'd use rsync for that.
But I would also never use rsync for something like "scp /etc/hosts faramir:/etc"; doing that with rsync scares me too much.
With scp it's an easy task. With rsync, I only use it for specific directories I like to keep in sync; its ability to delete files is dangerous, and there's no way I'll run an rsync server over /etc.
The right tool for the right task.
blubbi Guru
Joined: 27 Apr 2003 Posts: 564 Location: Halle (Saale), Germany
Posted: Sun Apr 21, 2019 10:54 am Post subject: |
One big plus for rsync is the ability to resume a transfer. Since I otherwise prefer scp for its simplicity:
Code: |
alias scpresume="rsync --rsh=ssh --partial --progress"
|
Apart from that: There's life in the old dog yet!
Cheers,
Bjoern _________________ -->Please add [solved] to the initial post's subject line if you feel your problem is resolved.
-->Help answer the unanswered
http://olausson.de
Tony0945 Watchman
Joined: 25 Jul 2006 Posts: 5127 Location: Illinois, USA
Posted: Sun Apr 21, 2019 2:36 pm Post subject: |
The files I back up are mostly video and databases, so checksum might be worthwhile, but it will take a long time checking terabytes of data moving between HDDs over USB.
When I moved root from a 10,000 RPM HDD to an SSD, I just booted sysrescuecd, partitioned the SSD, mounted both drives (/mnt/new and /mnt/old), then
Code: | cp -a /mnt/old/* /mnt/new/ | then I went and got a cup of coffee. The video and databases are on a separate big HDD.
mike155 Advocate
Joined: 17 Sep 2010 Posts: 4438 Location: Frankfurt, Germany
Posted: Sun Apr 21, 2019 4:27 pm Post subject: |
I am surprised that rsync is so much faster than scp:
Code: | cd /usr/src
# rm -rf /tmp/linux-4.19 on remote machine
time scp -rp linux-4.19 user@remote-machine:/tmp |
takes 115 seconds and
Code: | # rm -rf /tmp/linux-4.19 on remote machine
time rsync -av linux-4.19 user@remote-machine:/tmp |
takes only 10 seconds.
I guess it's time to sit down and learn all those rsync options.
krinn Watchman
Joined: 02 May 2003 Posts: 7470
Posted: Sun Apr 21, 2019 5:09 pm Post subject: |
mike155 wrote: | I am surprised that rsync is so much faster than scp: |
No benchmark needed: scp uses an encrypted channel, because that's ssh's primary job.
I never really dug into rsync, but by its design it should even be faster than cp when the file already exists (it should transfer only the changes made to the file, not the whole file as cp would).
tld Veteran
Joined: 09 Dec 2003 Posts: 1845
Posted: Sun Apr 21, 2019 5:09 pm Post subject: |
mike155 wrote: | I am surprised that rsync is so much faster than scp | Strange. I just tried a few tests here and they were actually the same within fractions of a second.
Tom
tld Veteran
Joined: 09 Dec 2003 Posts: 1845
Posted: Sun Apr 21, 2019 5:20 pm Post subject: |
krinn wrote: | mike155 wrote: | I am surprised that rsync is so much faster than scp: |
No benchmark needed: scp uses an encrypted channel, because that's ssh's primary job.
I never really dug into rsync, but by its design it should even be faster than cp when the file already exists (it should transfer only the changes made to the file, not the whole file as cp would). | As far as encryption goes, rsync uses ssh by default just as scp does. I think that's been the default behavior of rsync since around 2004(?). In any case, a few tests I tried were identical in cases where the file didn't exist.
I just did a test sending a file that already existed. I had assumed it would just overwrite the file unless the -u option was used, and take the same time, but that's not the case: it was considerably faster (about 1/4 the time in the test I did). Apparently there's a check there to see what content it actually needs to send.
Tom
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6148 Location: Dallas area
Posted: Sun Apr 21, 2019 6:35 pm Post subject: |
According to this, from Tecmint:
Quote: |
It’s faster than scp (Secure Copy) because rsync uses remote-update protocol which allows to transfer just the differences between two sets of files. First time, it copies the whole content of a file or a directory from source to destination but from next time, it copies only the changed blocks and bytes to the destination.
Rsync consumes less bandwidth as it uses compression and decompression method while sending and receiving data both ends.
|
Nice examples @ https://rsync.samba.org/examples.html
Nice set of facts and other stuff at https://rsync.samba.org/ ; the FAQ link there has nice talking points and simple problem resolutions.
miket Guru
Joined: 28 Apr 2007 Posts: 497 Location: Gainesville, FL, USA
Posted: Mon Apr 22, 2019 3:09 am Post subject: |
I'm a big fan of using tar streams when copying multiple files between hosts. Unlike scp, this technique deals happily with multiple files and/or directory trees. It works really well for me in moving files to and from my web host, which allows ssh but not the Secure Copy Protocol. If both computers are local, I use the technique without encryption (netcat instead of ssh).
Example of transport over ssh: Code: | ssh otherhost 'cd some/directory; tar cz file1 file2 file3' | tar xz |
Note there is no need for rsyncd or fuse. All it takes is ssh and the ability to run commands on the other computer.
If both computers are on a trusted network and there's a lot of data to transfer, I forgo the encryption for an increase in throughput. On one computer run a command like this Code: | tar cz bigTree | nc -l -p 8888 -q0 | and on the other Code: | nc otherhost 8888 | tar xz |
(Note that it does not matter which machine is in listening mode. You do have to start the listening side first. It helps to apply the -q0 switch to whichever side pipes data into netcat.)
1clue Advocate
Joined: 05 Feb 2006 Posts: 2569
Posted: Mon Apr 22, 2019 4:57 am Post subject: |
It's interesting that most of the discussion here is around:
- sshfs (mount an ssh share, then perform your copy, then unmount -- 3x the things to type) but STILL ssh protocol
- rsync (but it seems to use ssh by default, so STILL ssh protocol)
My most common use case is to transfer a single file or a recursive directory to a remote host, which is very often across the Internet. Very often that data contains customer information, which means I am bound by contract to keep the data private. So any non-encrypted transfer option is out of the question.
My earlier observation was that I find scp command structure very convenient. I do, although rsync can use the same exact structure and do the job. I also said that if this is just a client-side security hole, then the client should be able to be rewritten from scratch and remove all the badness.
The thing is, every solution I see here as a viable option uses ssh protocol for the transfer. So either all those solutions are flawed with the same vulnerability, or the scp client could be rewritten from scratch and get every bit as much security as any of these other solutions have.
Based on my understanding of the issue at https://seclists.org/fulldisclosure/2019/Jan/43, this is just a buggy client based on an antique implementation. Nothing wrong with the ssh protocol or server. I don't see why they can't fix it or replace it.