Gentoo Forums
gpkg to xpak conversion? (solved)
miket
Guru

Joined: 28 Apr 2007
Posts: 496
Location: Gainesville, FL, USA
PostPosted: Sat Jul 27, 2024 9:41 pm    Post subject: gpkg to xpak conversion? (solved)

I had a nice game plan for moving to profile 23.0 with minimal pain: start a fresh build root on my build host with a profile 23.0 stage tarball, edit the parent file of my custom profile to point to the needed 23.0 profile, emerge my world set, and have an easy transition on my client machines. After resyncing portage on the client machines needing upgrades, I didn't even have to change the /etc/portage/make.profile symlinks (ah, the beauty of having that all under /var/db/repos, making it so that there's only one thing to sync). Now, all I needed was to do the --emptytree emerge. (Well yes, there were minor issues, but they were easy to work out.)
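For anyone following the same route: the parent file of a custom profile is just a list of profiles to inherit, so the edit amounts to one line. A hypothetical example (the repo name, profile path, and arch here are made up for illustration):

```text
# /var/db/repos/local/profiles/myprofile/parent  (hypothetical path)
gentoo:default/linux/amd64/23.0
```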

But then I never noticed the transition from xpak to gpkg container format for binary packages. That didn't get to be a problem until today.

I'm doing an upgrade of a Gentoo installation that wasn't updated for a couple of years. Preliminary steps of the upgrade went pretty well: removal of packages no longer in Portage, depclean, database backup, adjustment of configuration files, and working out issues when running emerge -vpe before committing to the Big Emerge.
Things went south, though, when version 3.0.30 of emerge on that machine barfed on the gpkg files it pulled from the build host. What do you know--version 3.0.30 was the very last Portage version that did not understand gpkg.

I understand the reasoning behind the gpkg format, and after having read GLEP 78, I endorse it wholeheartedly. The thing remains, though, that I need to update this machine. Is there some kind of converter that would let me convert the set of .gpkg.tar files to .xpak format for the sake of this one upgrade? The version of Portage on that machine doesn't seem to be so old as to require the intermediate-chroot approach, which, the time I used it, left me with not-quite-satisfactory results.


Last edited by miket on Tue Jul 30, 2024 1:27 pm; edited 1 time in total
bstaletic
Guru

Joined: 05 Apr 2014
Posts: 361
PostPosted: Sat Jul 27, 2024 11:54 pm

I don't think there's a ready-made tool for this, but that shouldn't be a problem.
All you need in order to create and extract gpkg binary packages is tar.
All you need in order to create and extract xpak binary packages is tar, qtbz2 and qxpak.

https://wiki.gentoo.org/wiki/Binary_package_guide#Understanding_the_binary_package_format

Alternatively, get your binhost to generate XPAK packages instead.
See man emerge and look for BINPKG_FORMAT.
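To make the container formats concrete, here is a toy sketch using nothing but tar. The package name foo-1.0-1, the gzip compression, and the file contents are invented for illustration; a real gpkg follows GLEP 78 and carries real checksums in its Manifest. The point is that the outer container is a plain, uncompressed tar:

```shell
#!/bin/bash
# Build a toy archive with the gpkg layout (names and contents invented).
set -e
work=$(mktemp -d)
cd "$work"

mkdir foo-1.0-1
echo hello > payload
tar -czf foo-1.0-1/metadata.tar.gz payload   # stand-in for the metadata member
tar -czf foo-1.0-1/image.tar.gz payload      # stand-in for the image member
: > foo-1.0-1/gpkg-1                         # format-version marker
: > foo-1.0-1/Manifest                       # checksums would go here

tar -cf foo-1.0-1.gpkg.tar foo-1.0-1         # the outer container is plain tar
tar -tf foo-1.0-1.gpkg.tar                   # list the members
```

Going the other way, as noted above, the binhost can simply be told to emit the old format with BINPKG_FORMAT="xpak" in make.conf.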
bstaletic
Guru

Joined: 05 Apr 2014
Posts: 361
PostPosted: Sun Jul 28, 2024 4:14 am

I couldn't sleep, so I have written your script. Completely untested. For starters, the mapping from the chosen compression method to archive extension only works with XZ.

Code:
#!/bin/bash

compression=${BINPKG_COMPRESS-xz}
test -n "${compression}" && compression_ext=.${compression}
pkgdir=${PKGDIR-/var/cache/binpkgs}
pushd ${pkgdir}
for category in *; do
   pushd ${category}
   for pkg in *.gpkg.tar; do
      tar xf ${pkg}
      pvr=${pkg%*.gpkg.tar}
      pushd ${pvr}* # glob because the directory could have an optional suffix
      for name in image metadata; do
         if [[ -f ${name}.tar${compression_ext}.sig ]]; then
            if ! gpg --homedir /etc/portage/gnupg/ --verify ${name}.tar${compression_ext}.sig ${name}.tar${compression_ext} 2>/dev/null; then
               echo "Verification of ${name} signature for ${pvr} failed. Skipping."
               popd        # a "continue" inside ( ... ) runs in a subshell and skips nothing
               continue 2  # skip this package
            fi
         fi
      done
      tar xf image.tar${compression_ext}
      tar xf metadata.tar${compression_ext}
      pushd image
      tar caf ${OLDPWD}/${pvr}.tar.bz2 .
      popd
      pushd metadata
      qxpak -c ${OLDPWD}/${pvr}.xpak *
      popd
      qtbz2 -j ${pvr}.tar.bz2 ${pvr}.xpak ${pkgdir}/${category}/${pvr}.tbz2
      popd
      rm -rf ${OLDPWD}
   done
   popd
done
miket
Guru

Joined: 28 Apr 2007
Posts: 496
Location: Gainesville, FL, USA

PostPosted: Tue Jul 30, 2024 1:27 pm

bstaletic wrote:
I couldn't sleep, so I have written your script. Completely untested. For starters, the mapping from the chosen compression method to archive extension only works with XZ.


Thank you very much, bstaletic, for your hints as to how to approach the problem and your sample script. I about never have trouble sleeping--and had a few other things to do--so it took me a bit longer.

I wanted a converter that was a bit more bulletproof than bstaletic's sample script, so I rolled my own that converts a single file. The man pages for qxpak and qtbz2 confused me as to what an xpak was: was it the binary-package container as a whole, or just the metadata part appended to the compressed stream? Finally, I saw that it comes down to the setting of FEATURES in make.conf: if binpkg-multi-instance is set, the extension is .xpak; otherwise it's .tbz2. Since I have that FEATURE set, I went with .xpak in my script.
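That rule is easy to encode; a minimal sketch, assuming the extension choice works the way described above (the FEATURES value here is an invented example, not read from a real make.conf):

```shell
#!/bin/bash
# Pick the binary-package extension the way described above:
# binpkg-multi-instance in FEATURES selects .xpak, otherwise .tbz2.
features="binpkg-multi-instance buildpkg"   # invented example value

case " $features " in
    *" binpkg-multi-instance "*) pkgext=xpak ;;
    *)                           pkgext=tbz2 ;;
esac
echo "$pkgext"
```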

I made some tests with my converter script (final version shown below) and then used a little driver script (also shown below) to iterate it over /var/cache/binpkgs on the build host. First mistake (but not evident until later): I ran it as a non-root user; second mistake: it did not handle package-version indicators correctly; third mistake: the commands to untar .gpkg components omitted switches to save extended attributes and hence file capabilities.

I can tell you for sure a system becomes really wonky if suddenly everything that ought to be owned by root is owned by another user. Fortunately, I tested first in a virtual machine, so starting over was easy. Solution to my "I really don't want to risk running this as root" problem: run the conversion in a chroot.

After a run of emaint -f binhost, I was good to go. (On the initial converted-by-user attempt, I transferred all the new .xpak files to the VM and ran emaint there; on the second go-round I ran emaint in the chroot after placing the converted files into /var/cache/binpkgs).


=====

Here is the converter. It takes two arguments: the name of the input package file and the name of the target directory. The converter deals with extensions automatically. One oversight: it does not recognize .tbz2 files.
Code:
#!/bin/bash

fatal () {
        echo $@ >&2
        if [ -n "$workdir" ]; then
                echo "Partial results are in temporary directory $workdir" >&2
        fi
        exit 1
}

mkWorkdir () {
        workdir=$(mktemp --tmpdir -d 'gpkg2xpak.XXXXXX') || \
                fatal "Error creating temporary directory"
}

getBasenameAndExtension () {
        local nm=$(basename "$1")
        if [ -z "$nm" ]; then
                nm="$1"
        fi
        if [[ "$nm" =~ \.([a-zA-Z][a-zA-Z0-9._-]*)$ ]]; then
                ext="${BASH_REMATCH[1]}"
                base=${nm:0: -${#ext}-1}
        else
                base="$1"
                ext=
        fi
}

inputFile="$1"
outputDir="$2"

if [ -z "$inputFile" -o -z "$outputDir" ]; then
        fatal "Need gpkg-filename and output-directory-name arguments"
fi

if [ ! -f "$inputFile" ]; then
        fatal "$inputFile does not exist"
fi
if [ ! -d "$outputDir" ]; then
        fatal "$outputDir is not a directory"
fi

getBasenameAndExtension "$inputFile"
if [ "$ext" != xpak -a "$ext" != gpkg.tar ]; then
        fatal "$inputFile has unknown filename extension $ext"
fi
inputBasename="$base"
inputExt="$ext"

outputName="$inputBasename.xpak"

if [ "$ext" != gpkg.tar ]; then
        cp -p "$inputFile" "$outputDir"
        exit 0
fi


mkWorkdir
tar -C $workdir -xf "$inputFile" || \
        fatal "Error untarring $inputFile"

gpakdir="$workdir/$inputBasename"
if [ ! -d "$gpakdir" ]; then
        fatal "Expected top-level directory $inputBasename; invalid gpkg"
fi

for n in gpkg-1 Manifest; do
        if [ ! -f "$gpakdir/$n" ]; then
                fatal "Missing $inputBasename/$n"
        fi
done

for member in metadata image; do
        mapfile -t arr < <(compgen -G "$gpakdir/$member.tar.*")
        if [ ${#arr[*]} -ne 1 ]; then
                if [ ${#arr[*]} -eq 0 ]; then
                        fatal "Missing $inputBasename/$member.tar*; invalid gpkg"
                fi
                fatal "Multiple $inputBasename/$member.tar members; invalid gpkg"
        fi
        memfile="${arr[0]}"
        tar -C "$workdir" -xf "$memfile" --xattrs-include='*.*' --numeric-owner || \
                fatal "Error untarring $memfile"
done

pushd "$workdir/image" >/dev/null || \
        fatal "Could not change to image directory; invalid gpkg"
tar cjf "$workdir/image.tar.bz2" . || \
        fatal "Error tarring image files"
popd >/dev/null

pushd "$workdir/metadata" >/dev/null || \
        fatal "Could not change to metadata directory; invalid gpkg"
qxpak -c "$workdir/metadata.xp" * || \
        fatal "Error gathering metadata"
popd >/dev/null

qtbz2 -j "$workdir/image.tar.bz2" "$workdir/metadata.xp" "$workdir/$outputName" || \
        fatal "Error creating .xpak"

fdate=$(stat -c %y "$inputFile")
touch -d "$fdate" "$workdir/$outputName"

cp -p "$workdir/$outputName" "$outputDir" || \
        fatal "Error copying $outputName to $outputDir"

rm -rf "$workdir"



And this is the walk-through-the-tree script. It places the new binary-package tree in a new binpkgs directory in the current directory. The run took a bit under an hour on my build host. Since it doesn't stop if there is a problem converting a file, I teed its output into a log file to spot any problems. I didn't have any.
Code:
#!/bin/bash

inbase=/var/cache/binpkgs
outbase=binpkgs

for cat in $(compgen -G "$inbase/*"); do
        if [ ! -d "$cat" ]; then
                continue
        fi
        cat=$(basename "$cat")
        for pkg in $(compgen -G "$inbase/$cat/*"); do
                pkg=$(basename "$pkg")
                catpkg="$cat/$pkg"
                echo "$catpkg"
                mkdir -p "$outbase/$catpkg"
                for ver in $(compgen -G "$inbase/$cat/$pkg/*"); do
                        ./gpkg2xpak.sh "$ver" "$outbase/$catpkg"
                done
        done
done
bstaletic
Guru

Joined: 05 Apr 2014
Posts: 361

PostPosted: Tue Jul 30, 2024 5:35 pm

miket wrote:
I about never have trouble sleeping

Oh, how I envy you...
You have definitely made a more robust script
miket wrote:
Code:
mapfile -t arr < <(compgen -G "$gpakdir/$member.tar.*")


The "$gpakdir/$member.tar.*" glob looks like it could match image.tar.xz.sig? I don't think you want to fail because the image archive was properly signed.

This is why I got lazy about automatically handling extensions.

Now I realize you're not verifying the signature, if it happens to exist in the gpkg archive. I'm guessing you are not signing your own gpkgs.
miket wrote:
Code:
 qtbz2 -j "$workdir/image.tar.bz2" "$workdir/metadata.xp" "$workdir/$outputName" || \

The XPAK metadata usually has the extension .xpak, but that's neither here nor there.