Goverp Advocate
Joined: 07 Mar 2007 Posts: 2185
Posted: Thu Feb 25, 2021 1:18 pm Post subject: The really simple way to use the portage tree on squashfs ? |
IMHO the portage tree doesn't sit well on an SSD, nor indeed on rotating rust - too many small files, too many changes in each sync. So I read up on using Squashfs to hold it. Most of the solutions get involved in ways to update the squashed file system after an "emerge --sync", and so drag in overlayfs and the like. Too complex for a bear of little brain such as me.
I notice the portage rsync mirrors contain snapshots apparently taken a bit after midnight each day, with the latest called "current". There's also an sha512sum.txt file of checksums. So I wondered if you can replace "emerge --sync" with rsync of the squashfs snapshot, and then an sha512sum to check it. Yes, it works, just like that. rsync is clever with large files, and just transfers changed blocks, so the sync is damn fast.
OK, you get a read-only portage tree, and syncing during the day won't work, but my weekly "emerge --update" should be fine. And you need to set up the PACKAGES and DISTFILES directories in their new /var locations, not in the portage tree.
Here's some output, and timings for today's snapshot versus yesterday's, and an emerge --sync today versus a sync yesterday (note these are not particularly comparable, but they give some idea):
Code: | packager@ryzen ~ $ rsync -v --copy-links rsync://mirrors.gethosted.online/gentoo/snapshots/squashfs/gentoo-current.xz.sqfs Downloads/gentoo-current.xz.sqfs
sent 51,703 bytes received 29,611 bytes 32,525.60 bytes/sec
total size is 54,431,744 speedup is 669.40
# I should add an rsync update on sha512sum.txt - to be investigated - below, I just downloaded it
packager@ryzen ~ $ sha512sum -c --ignore-missing sha512sum.txt
gentoo-current.xz.sqfs: OK
sha512sum: WARNING: 22 lines are improperly formatted |
This compares with (to skip rather a lot of rsync output)
Code: | packager@ryzen ~ $ emerge --sync
...
Number of files: 146,441 (reg: 119,956, dir: 26,485)
Number of created files: 75 (reg: 75)
Number of deleted files: 225 (reg: 215, dir: 10)
Number of regular files transferred: 1,068
Total file size: 209.09M bytes
Total transferred file size: 8.16M bytes
Literal data: 8.16M bytes
Matched data: 0 bytes
File list size: 3.36M
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 51.82K
Total bytes received: 9.99M
sent 51.82K bytes received 9.99M bytes 647.86K bytes/sec
total size is 209.09M speedup is 20.82 |
So, the sent amount is about the same (~52KB), but the amount received is over 300 times less!
I'll try this all again next week, when there will be more to compare. _________________ Greybeard |
pjp Administrator
Joined: 16 Apr 2002 Posts: 20506
Posted: Thu Feb 25, 2021 6:34 pm Post subject: |
Goverp wrote: | I notice the portage rsync mirrors contain snapshots apparently taken a bit after midnight each day, with the latest called "current". | Do you download the file named with "current" or do you download the file with the timestamp in the file name? I briefly looked at using those snapshots, but I want the version with the timestamp in the filename. It wasn't immediately obvious to me how that could be automated.
Having the timestamp versions of snapshots makes it easier to attempt incremental updates on systems that have not been updated in months (or worse).
Goverp wrote: | Most of solutions get involved in ways to update the squashed file system after an "emerge --sync", and so get involved in overlayfs and the like. | I use squashfs, but not unionfs. When I update, I extract my current ::gentoo squashfs to a tmpfs (currently 800M works), sync to that tmpfs, then create a new squashfs. _________________ Quis separabit? Quo animo? |
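The extract-to-tmpfs cycle pjp describes above can be sketched roughly like this. This is a sketch, not his actual script: the paths, filenames, and mirror are assumptions, and it needs root plus squashfs-tools:

```shell
#!/bin/bash
# Sketch of the extract-to-tmpfs update cycle: unpack the current squashed
# tree into a tmpfs, rsync it up to date, then re-squash it.
# Assumed paths -- adjust to taste; requires root, squashfs-tools and rsync.
SQFS=/var/portage/gentoo.sqfs        # current squashed tree (assumed location)
WORK=/tmp/portage-work               # tmpfs scratch space (800M per the post above)

update_squashfs() {
    mkdir -p "$WORK"
    mount -t tmpfs -o size=800M tmpfs "$WORK" || return 1
    # 1. extract the existing squashfs into the tmpfs
    unsquashfs -f -d "$WORK/tree" "$SQFS" || return 1
    # 2. sync the extracted tree in place
    rsync -a --delete rsync://rsync.gentoo.org/gentoo-portage/ "$WORK/tree/" || return 1
    # 3. squash the updated tree and swap it in atomically
    mksquashfs "$WORK/tree" "$SQFS.new" -comp xz -no-progress || return 1
    mv "$SQFS.new" "$SQFS"
    umount "$WORK"
}
# call as root: update_squashfs
```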
Goverp Advocate
Joined: 07 Mar 2007 Posts: 2185
Posted: Fri Feb 26, 2021 10:18 am Post subject: |
pjp wrote: | ... Do you download the file named with "current" or do you download the file with the timestamp in the file name? I briefly looked at using those snapshots, but I want the version with the timestamp in the filename. It wasn't immediately obvious to me how that could be automated. ... |
The --copy-links option means rsync fetches the target of the "current" version (which is a symbolic link on the mirror). Without it, rsync just barfs at reading the non-regular file. You can work out which snapshot you got from the sha512sum.txt file - the "current" entry has the same sum as the file it points at (which happens to be the penultimate group of entries in sha512sum.txt). Alternatively, the file "Manifest" in the snapshot contains a datestamp as well as other checksums.
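That checksum lookup can be automated with a short shell function; a sketch, where the "checksum filename" line layout and the gentoo-YYYYMMDD naming are assumptions based on the listings in this thread:

```shell
# Resolve the dated snapshot name that gentoo-current points at, by
# matching its checksum against the dated entries in sha512sum.txt.
# Assumes "checksum filename" lines; anything else is ignored.
resolve_current() {
    local sumfile=$1
    awk '
        $2 ~ /^gentoo-current\./        { cur = $1 }       # sum of "current"
        $2 ~ /^gentoo-[0-9]+\..*sqfs$/  { name[$1] = $2 }  # sum -> dated file
        END { if (cur in name) print name[cur] }
    ' "$sumfile"
}
```

e.g. `resolve_current sha512sum.txt` prints the dated filename whose sum matches the "current" entry.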
pjp wrote: | ... I use squashfs, but not unionfs. When I update, I extract my current ::gentoo squashfs to a tmpfs (currently 800M works), sync to that tmpfs, then create a new squashfs. |
Doing it my way, I just rsync the snapshot file itself; that gives the new squashfs; no need to extract it, just mount it; something like:
Code: | mount -t squashfs -o loop Downloads/gentoo-current.xz.sqfs /usr/portage |
(you can even have the appropriate entry in /etc/fstab to mount it at boot).
Obviously, you need to unmount it during the rsync, and remount it afterwards. _________________ Greybeard |
pjp Administrator
Joined: 16 Apr 2002 Posts: 20506
Posted: Fri Feb 26, 2021 7:36 pm Post subject: |
Interesting, thanks. I had only tried with wget.
If I did read how you used rsync, I read too quickly. That looks a LOT easier. I'll have to try that as a default.
I don't recall the last time I "needed" to sync at some random time, so I doubt I'll need my current method. But it isn't too difficult to fall back on if it is needed.
I think this will solve a not so insignificant problem I had. Thanks again! _________________ Quis separabit? Quo animo? |
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6171 Location: Dallas area
Posted: Fri Feb 26, 2021 8:42 pm Post subject: |
Thanks, I changed the way I was syncing.
I added this to crontab (replacing a regular emerge --sync); it could have more checks in it.
Code: | #!/bin/bash
dout=`date "+%Y-%m-%d"`
umount /usr/portage
rsync -v --copy-links rsync://mirrors.gethosted.online/gentoo/snapshots/squashfs/gentoo-current.xz.sqfs /var/portage
rc_sqfs=$?
rsync -v --copy-links rsync://mirrors.gethosted.online/gentoo/snapshots/squashfs/sha512sum.txt /var/portage
rc_sha=$?
if [ $rc_sqfs -eq 0 ] && [ $rc_sha -eq 0 ]
then
cd /var/portage
sha512sum -c --ignore-missing sha512sum.txt --status
rc=$?
else
rc=1
fi
mount /usr/portage
if [ $rc -eq 0 ]
then
/usr/bin/eix-update -q
/usr/bin/rsync -aix /var/portage/gentoo-current.xz.sqfs nas:/mnt/backup/bkup/$dout.portage.xz.sqfs
fi
exit $rc |
Note: nas is my backup server for portage related stuff
fstab entry for squashfs
Code: | /var/portage/gentoo-current.xz.sqfs /usr/portage squashfs user,loop 0 0 |
_________________ UM780, 6.1 zen kernel, gcc 13, profile 17.0 (custom bare multilib), openrc, wayland |
pjp Administrator
Joined: 16 Apr 2002 Posts: 20506
Posted: Fri Feb 26, 2021 9:24 pm Post subject: |
Anon-E-moose wrote: | could have more checks in it. | Code: | egrep -v '^$|^#|^\s+#' pdm |wc -l
133 | Includes "help", timestamp checks / creation, unmounting, temporary write space, unsquashfs, sync, (re)squashfs, re-linking to the generic mountpoint (timestamp vs. fstab no timestamp), and remounting the squashfs repo. :) I still had some cleanup to do and was considering a rewrite. _________________ Quis separabit? Quo animo? |
Goverp Advocate
Joined: 07 Mar 2007 Posts: 2185
Posted: Sat Feb 27, 2021 10:35 am Post subject: |
Glad to be of service! _________________ Greybeard |
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6171 Location: Dallas area
Posted: Sat Feb 27, 2021 10:57 am Post subject: |
I did my first pull of sqfs this morning (automated), check, and save sqfs file to my backup area.
Syncing the sqfs was indeed faster than a regular portage rsync (even on a tmpfs which I was using) so I'm happy.
I'm going to ask that this thread be moved to Documentation/tips. I think it belongs there. _________________ UM780, 6.1 zen kernel, gcc 13, profile 17.0 (custom bare multilib), openrc, wayland |
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54615 Location: 56N 3W
Posted: Sat Feb 27, 2021 11:32 am Post subject: |
Moved from Portage & Programming to Documentation, Tips & Tricks. _________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail. |
Goverp Advocate
Joined: 07 Mar 2007 Posts: 2185
Posted: Sat Feb 27, 2021 1:28 pm Post subject: |
I looked at writing a suitable module for emaint, but my python fu lacks, and I'm sure I couldn't generate as much code as /usr/lib/python3.8/site-packages/portage/sync/modules/rsync/rsync.py !
Anon-e-mouse's script looks more my level. _________________ Greybeard |
Leonardo.b Guru
Joined: 10 Oct 2020 Posts: 308
Posted: Sat Feb 27, 2021 2:54 pm Post subject: |
Nice, thanks; even here on the other side of the planet, this is going to be useful. |
Goverp Advocate
Joined: 07 Mar 2007 Posts: 2185
Posted: Sat Feb 27, 2021 5:59 pm Post subject: |
FWIW: A bit of digging on t' web shows that lzop compression uses significantly fewer resources than xz; the Snap guys noticed significant startup delays for images using xz compression. Of course, if you're short of "disk" space, xz is better. _________________ Greybeard |
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6171 Location: Dallas area
Posted: Sat Feb 27, 2021 6:38 pm Post subject: |
Goverp wrote: | FWIW: A bit of digging on t' web shows that lzop compression uses significantly less resource than xz; the Snap guys noticed significant startup delays for images using xz compression. Of course, if you're short of "disk" space, xz is better. |
My squashfs loading seems pretty fast, but I did set things in the kernel this way
Code: | CONFIG_SQUASHFS=m
CONFIG_SQUASHFS_FILE_CACHE=y
CONFIG_SQUASHFS_DECOMP_MULTI=y
CONFIG_SQUASHFS_XATTR=y
CONFIG_SQUASHFS_ZLIB=y
CONFIG_SQUASHFS_LZ4=y
CONFIG_SQUASHFS_LZO=y
CONFIG_SQUASHFS_XZ=y
CONFIG_SQUASHFS_ZSTD=y
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3 |
_________________ UM780, 6.1 zen kernel, gcc 13, profile 17.0 (custom bare multilib), openrc, wayland |
mv Watchman
Joined: 20 Apr 2005 Posts: 6780
Posted: Sat Feb 27, 2021 7:07 pm Post subject: |
Goverp wrote: | FWIW: A bit of digging on t' web shows that lzop compression uses significantly less resource than xz |
If you have a choice, I would always prefer zstd: Almost the quality of xz and speed in the order of lzop. |
Goverp Advocate
Joined: 07 Mar 2007 Posts: 2185
Posted: Sat Feb 27, 2021 8:11 pm Post subject: |
mv wrote: | If you have a choice, I would always prefer zstd: Almost the quality of xz and speed in the order of lzop. |
The only choice in the mirrors is .lzop and .xz!
(For expansion, I agree. However AFAIR lzop is faster for compression, so I use it for filesystem dumps to USB drives; anything else is slower than the IO it saves. lz4 is possibly about as good.) _________________ Greybeard |
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6171 Location: Dallas area
Posted: Sat Feb 27, 2021 8:37 pm Post subject: |
For compression one could use pxz/pixz (parallel xz): same compression, faster end result (albeit with more memory/processors).
I wonder if lzop uses multiple cores?
Edit to add: It looks like lzop is similar to gzip: not terribly great on compression, but pretty fast (de)compression. _________________ UM780, 6.1 zen kernel, gcc 13, profile 17.0 (custom bare multilib), openrc, wayland |
mv Watchman
Joined: 20 Apr 2005 Posts: 6780
Posted: Sun Feb 28, 2021 8:58 am Post subject: |
Goverp wrote: | (For expansion, I agree. However AFAIR lzop is faster for compression, so I use if for filesystem dumps to USB drives, anything else is slower than the IO saved; lz4 is possibly about as good) |
On my system, for mksquashfs, lz4 is fastest, and then lzo or zstd are similar with lzo being only slightly faster.
Once I had made a table https://github.com/vaeth/squashmount/blob/master/compress.txt, but apparently I did not add zstd; probably because my hardware had changed since then.
Note that squashfs uses parallelization by default.
Anyway, one must be careful with such comparisons since lzo and zstd have different compression levels which can easily influence the speed up to a factor of 10.
While the best compression level on lzo hardly saves any space, the best compression level on zstd is about the quality of xz or even brotli in some cases.
Oh yes: When doing comparisons, the disk cache has an enormous influence. It seems even drop_caches does not really drop all read caches (maybe also local caches on the harddisk itself play a role), so you have to repeat in many different orders of compression algorithms and take the median. |
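The comparison method mv describes above (interleave the compressor order, drop caches between runs, take the median of many repeats) can be sketched as a small harness. The tree path and compressor list are placeholders, and writing to drop_caches needs root:

```shell
# Rough mksquashfs benchmark harness: drop caches before every run and
# interleave the compressor order, since back-to-back runs of one
# compressor are skewed by the disk cache. Requires root and squashfs-tools.
bench_squashfs() {
    local tree=$1; shift
    local comp pass
    for pass in 1 2 3; do                 # repeat; take the median by eye
        for comp in "$@"; do              # interleave compressor order
            sync
            echo 3 > /proc/sys/vm/drop_caches
            rm -f /tmp/bench.sqfs
            echo "== $comp pass $pass"
            time mksquashfs "$tree" /tmp/bench.sqfs -comp "$comp" -no-progress >/dev/null
        done
    done
}
# e.g. as root: bench_squashfs /var/db/repos/gentoo lzo zstd xz
```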
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6171 Location: Dallas area
Posted: Sun Feb 28, 2021 11:04 am Post subject: |
Some comparisons
https://community.centminmod.com/threads/round-3-compression-comparison-benchmarks-zstd-vs-brotli-vs-pigz-vs-bzip2-vs-xz-etc.17259/ --- missing lzo but does look at parallel compressors
https://catchchallenger.first-world.info/wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZMA_vs_XZ_vs_LZ4_vs_LZO --- no tests of parallel compressors
https://larryreznick.com/2019/09/19/comparing-compression/ --- some parallel compressors but does mention how many cores per compressor/results
And I've seen a few others with various tests and various compressors.
What I take away: for single (non-parallel) compressors, lzo/lz4 are best in speed, xz best in compression.
For parallel, it changes a lot; even gzip gets spirited when done in a group. _________________ UM780, 6.1 zen kernel, gcc 13, profile 17.0 (custom bare multilib), openrc, wayland |
mv Watchman
Joined: 20 Apr 2005 Posts: 6780
Posted: Sun Feb 28, 2021 2:07 pm Post subject: |
Anon-E-moose wrote: | What I take away, for single (non-parallel) lzo/lz4 best in speed, xz best in compression |
At least in the first test, zstd with highest level is about the same as xz with highest level. This corresponds to my experience.
It is strange that brotli at highest level is worse than xz at highest level. This is unusual and can indicate that the test data archive was unusual in the sense that e.g. it contained some identical or almost identical large files which did fit into the window of xz and zstd but just not into the window of brotli. Or it contained a lot of clearly binary files (like sound or video) on which the plain-text based pre-initialization from brotli does more harm than good. I had a few such cases, but these are rare: Usually, brotli is the best in compression. |
Goverp Advocate
Joined: 07 Mar 2007 Posts: 2185
Posted: Sun Feb 28, 2021 8:11 pm Post subject: |
Warning:
I've been beefing up my current "squashfs-sync" script to check dates and so forth.
Today's gentoo-current.lzo.sqfs turns out to be linked to gentoo-20210226.lzo.sqfs, which is not the most recent file: gentoo-20210227.lzo.sqfs.
If your scripts assume that gentoo-current points to the most recent file, you'd be disappointed.
I'll test mine a few times before posting it.
FWIW, the rsync between gentoo-20210220.lzo.sqfs and gentoo-20210226.lzo.sqfs is about 50MB, so the saving's not as good as I'd hoped, though probably still worthwhile. It's still simpler than setting up unionfs and running a full emerge --sync. _________________ Greybeard |
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6171 Location: Dallas area
Posted: Sun Feb 28, 2021 8:17 pm Post subject: |
I find that just downloading the squashfs and mounting it provides a lot of benefits (over the way I did have it).
As far as dates, it's easy enough to grab today's date and apply that to the fetch so that you pull the right date, then just rename it current on your end.
something like
Code: | dout=`date "+%Y%m%d"`
rsync -v rsync://mirrors.gethosted.online/gentoo/snapshots/squashfs/gentoo-$dout.lzo.sqfs /var/portage/
mv /var/portage/gentoo-$dout.lzo.sqfs /var/portage/gentoo-current.lzo.sqfs |
ETA: This morning the tarballs were out of date, but the squashfs was newer ~le sigh~
So I think I'll go back to a modified version of my old setup: rsync the portage tree (on tmpfs), and from that create a backup xz-compressed tarball and a squashfs to use for /usr/portage. A little bit more trouble than just downloading a tarball/sqfs, but at least I know it's up to date, and I do run it from cron in the middle of the night. _________________ UM780, 6.1 zen kernel, gcc 13, profile 17.0 (custom bare multilib), openrc, wayland |
Goverp Advocate
Joined: 07 Mar 2007 Posts: 2185
Posted: Tue Mar 02, 2021 7:37 pm Post subject: |
Yup, trying my new script, the gentoo-current link seems to be one day behind - I wonder if the process that builds the snapshots directory sets the link BEFORE it creates the new snapshot, maybe for "safety", and then somehow fails to update it for the new one. Whatever. To add to the fun, the TIMESTAMP in the Manifest file is yet another day back. So today I have
from the sha512sum.txt file entries for .lzo files:
Code: | d94925dce716d81f025031387e63debdd5b4dbdd00de63ee033434b4a20cdc4b2ae4b8106e697aa686e29a12280b574c7c3b3bd96839e90967ec0859eb478b96 gentoo-20210227.lzo.sqfs
f54d7cfe625dadb8bca084e1b724947d06dd4666f59b13a6e838f8a7195e85326b49c3d81a3197764a1af5272418ad2131e09377ab260e1bca01012843c7fcb6 gentoo-20210228.lzo.sqfs
a30995ba0a9011fce03eb80331033549582e5669ae3c9c8f29439fb7fcc25266bd7e1ed161214c6207d918ad9038133b89d058c846390dc30c3e010a77c97e0e gentoo-20210301.lzo.sqfs
a30995ba0a9011fce03eb80331033549582e5669ae3c9c8f29439fb7fcc25266bd7e1ed161214c6207d918ad9038133b89d058c846390dc30c3e010a77c97e0e gentoo-current.lzo.sqfs |
so the current claims to be 2021/03/01
but with my latest rsynced gentoo-current.lzo.sqfs, sha512sum returns
Code: | f54d7cfe625dadb8bca084e1b724947d06dd4666f59b13a6e838f8a7195e85326b49c3d81a3197764a1af5272418ad2131e09377ab260e1bca01012843c7fcb6 gentoo-current.lzo.sqfs |
which you can see is the one for gentoo-20210228.lzo.sqfs, which the web page for ftp://rsync.uk.gentoo.org/gentoo/snapshots/squashfs/ claims was created 2021-03-01 01:45 (which is reasonable if the snapshot creation started at midnight on 02/2). But having mounted the snapshot, the Manifest file within it says
Code: | grep TIMESTAMP /var/db/repos/gentoo/Manifest
TIMESTAMP 2021-02-27T01:38:33Z |
so it seems to be a day older again!
OK, I'm not too bothered about being a day (or even 2) out, but clearly summat is going weird. I'll see if I can work out who to contact in the infrastructure world. _________________ Greybeard |
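For scripts that care about this drift, the Manifest TIMESTAMP inside the mounted snapshot can be checked directly; a sketch, assuming GNU date and the /usr/portage mount point used earlier in the thread (the two-day tolerance is arbitrary):

```shell
# Report the age, in whole days, of a snapshot from the TIMESTAMP line in
# its Manifest (format: "TIMESTAMP 2021-02-27T01:38:33Z"; GNU date assumed).
snapshot_age_days() {
    local manifest=$1 ts snap_epoch now_epoch
    ts=$(awk '/^TIMESTAMP/ { print $2; exit }' "$manifest")
    snap_epoch=$(date -d "$ts" +%s) || return 1
    now_epoch=$(date +%s)
    echo $(( (now_epoch - snap_epoch) / 86400 ))
}

# Warn if the mounted tree (mount point is an assumption) is over 2 days stale.
check_snapshot() {
    local age
    age=$(snapshot_age_days /usr/portage/Manifest) || return 1
    if [ "$age" -gt 2 ]; then
        echo "WARNING: portage snapshot is $age days old" >&2
        return 1
    fi
}
```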
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6171 Location: Dallas area
Posted: Tue Mar 02, 2021 8:48 pm Post subject: |
I was looking at that this morning, and that's when I decided to rewrite my script, to just do a regular rsync (against tmpfs) and then make a squashfs from that for mounting as /usr/portage. _________________ UM780, 6.1 zen kernel, gcc 13, profile 17.0 (custom bare multilib), openrc, wayland |
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6171 Location: Dallas area
Posted: Wed Mar 03, 2021 10:41 am Post subject: |
Ok, first test of new rewrite from last night, worked fine.
Code: | #!/bin/bash
# Paths for different things
# /n/tmp/nas -- save/read dir on hd
# /tmp/portage -- tmpfs for updating portage tree
# /usr/portage -- where squashfs is mounted
# /var/portage -- where the squashfs file is kept
# nas related -- the backup box
exit_msg() {
echo $2
exit $1
}
main() {
cur_date=`date "+%Y-%m-%d"`
cur_txz="$cur_date.portage.txz"
yday_date=`date -d "yesterday" '+%Y-%m-%d'`
yday_txz="$yday_date.portage.txz"
file_save_path="/n/tmp/nas"
tmp_port_path="/tmp/portage"
# check to see if todays file has been copied to nas bkup
if [ "$1" != "f" ]
then
ssh nas "ls /mnt/backup/bkup/$cur_txz" >/dev/null 2>&1
if [ $? -eq 0 ]; then exit_msg 0 "$cur_txz exists"; fi
fi
# check for existence of ydays tarball and untar it to $tmp_port_path
if [ ! -e $file_save_path/$yday_txz ]; then exit_msg 1 "$file_save_path/$yday_txz doesn't exist !!!"; fi
mkdir $tmp_port_path
tar xfC $file_save_path/$yday_txz $tmp_port_path || exit_msg $? "problem untarring $file_save_path/$yday_txz to $tmp_port_path"
# rsync the latest portage against $tmp_port_path -- limit of 5 retries
rm -f $file_save_path/rsync[1-5].log
count=0
while [ $count -lt 5 ]; do
rc=-1
/usr/bin/rsync -ai --delete --timeout=300 rsync://rsync.gentoo.org/gentoo-portage $tmp_port_path >$file_save_path/rsync$count.log 2>&1
rc=$?
if [ $rc -eq 0 ]; then break; fi
let count=count+1
sleep 60
done
# check Manifest(s)
if [ $rc -ne 0 ]
then
exit_msg $rc "rsync failed after $count tries"
else
chown -R portage:portage $tmp_port_path
echo "-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-" >> $file_save_path/rsync$count.log 2>&1
gemato verify -K /usr/share/openpgp-keys/gentoo-release.asc $tmp_port_path >> $file_save_path/rsync$count.log 2>&1
fi
# change to /tmp to make tarballs/sqfs
cd /tmp
# create xz compressed tarball for backup server
{ tar cfC - /tmp/portage . | pixz > $cur_txz ; } || exit_msg $? "couldn't create $cur_txz"
# copy to hd for tomorrows run
cp $cur_txz $file_save_path
# sync to backup server
/usr/bin/rsync -aix $cur_txz nas:/mnt/backup/bkup/ || exit_msg $? "*** rsync of $cur_txz to nas has failed***"
# check for shits and giggles
ssh nas "ls /mnt/backup/bkup/$cur_txz"
# create sqfs for mounting on /usr/portage
mksquashfs portage gentoo-current.xz.sqfs -comp xz -no-progress || exit_msg $? "error creating sqfs for mounting on /usr/portage"
# set up for mounting new /usr/portage
umount /usr/portage && mv gentoo-current.xz.sqfs /var/portage && mount /usr/portage && /usr/bin/eix-update -q
rc=$?
if [ $rc -eq 0 ]
then
rm -r portage $cur_txz $file_save_path/$yday_txz || echo could not remove portage files/dir in /tmp
else
exit_msg $rc "problems with unmount/move/mount of /usr/portage or eix-update failed"
fi
}
time main
exit $rc |
With these results
Code: | Wed 03 Mar 2021 02:30:04 AM CST
-- just the rysnc against gentoo on tmpfs
Wed 03 Mar 2021 02:30:09 AM CST
-- copy file to backup
<f+++++++++ 2021-03-03.portage.txz
/mnt/backup/bkup/2021-03-03.portage.txz
-- pixz of /tmp/portage into xz.sqfs
Parallel mksquashfs: Using 16 processors
Creating 4.0 filesystem on gentoo-current.xz.sqfs, block size 131072.
Exportable Squashfs 4.0 filesystem, xz compressed, data block size 131072
compressed data, compressed metadata, compressed fragments,
compressed xattrs, compressed ids
duplicates are removed
Filesystem size 52880.24 Kbytes (51.64 Mbytes)
24.93% of uncompressed filesystem size (212072.62 Kbytes)
Inode table size 1165960 bytes (1138.63 Kbytes)
24.92% of uncompressed inode table size (4679570 bytes)
Directory table size 1336310 bytes (1304.99 Kbytes)
34.49% of uncompressed directory table size (3874653 bytes)
Number of duplicate files found 9270
Number of inodes 146104
Number of files 119610
Number of fragments 1482
Number of symbolic links 0
Number of device nodes 0
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 26494
Number of ids (unique uids + gids) 1
Number of uids 1
portage (250)
Number of gids 1
portage (250)
-- time for main() total run time minus the original sourcing of the shell script.
real 0m42.140s
user 2m24.132s
sys 0m6.701s |
all in all it worked well, now I'm current (as of the rsync) and I've got the benefits of a sqfs. _________________ UM780, 6.1 zen kernel, gcc 13, profile 17.0 (custom bare multilib), openrc, wayland |