Gentoo Forums
Why does it seem like dynamic linking is going out of style?

 
Zeault
n00b


Joined: 03 Aug 2019
Posts: 23
Location: New England, United States

PostPosted: Wed Feb 07, 2024 11:53 pm    Post subject: Why does it seem like dynamic linking is going out of style?

I typed this title into my search engine hoping to find some discussion, but nothing really relevant came up, so I am posting here.

When I look at the trends in the broader Linux/Unix world, I see a lot of technology simply doing away with dynamic linking of executable code. Correct me if I am wrong, but here are some examples:

  • Rust by default statically links all dependent libs.

  • Go (the Google one) statically links everything by default.

  • One of the killer features of the rapidly growing NixOS is how it ships each package with its own set of libs instead of installing all shared libs into the system folder. The executables may still be compiled with dynamic linking enabled, but since they will only ever link to the libs that they are shipped with, I consider this to effectively be a form of static linking; see the illustration just after this list. (Or at least this is how it was explained to me. I still have not used NixOS yet, so I would love it if someone could chime in and start a more informed discussion.)

  • The growing popularity of containers which also ship with their own set of libs independent from the system set.

  • The shift of flatpak from being an alternative method of installing software that may not be supported by your distro to being THE preferred method of installing software by some people (e.g. some GNOME developers).
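As an illustration of the NixOS point (sketched from documentation, since I haven't run it myself; the store hash placeholder is made up): the binary stays dynamically linked in the ELF sense, yet every dependency resolves to one pinned store path, so the library set is effectively frozen.

Code:
$ ldd $(which hello)
        libc.so.6 => /nix/store/<hash>-glibc-2.38/lib/libc.so.6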

I am not really against any of the technologies above. I use all of them! I just want to hear some perspectives from developers on why things are going this way. I guess I would say that I prefer dynamic linking because it makes for a small, efficient system. I am aware that dependency management, and especially ABI management, can be really annoying for packagers ("Dependency Hell" still has its own Wikipedia article). I've never had that problem on Gentoo, but of course that's probably because we rebuild our packages all the time.

I like flatpak and its competitors. I don't really see the big advantage of one over the other, besides flatpak being the most popular and widely supported. I do think it is a bit ironic and silly, though, how some flatpaks pull in other dependent flatpaks. Like damn, are they gonna reinvent dependency hell or what?

I find it annoying how Rust and Go packages on Gentoo with the Meson build system will check whether their dependent libs are installed and then go ahead and redownload their own versions of them anyway to statically link against. I don't understand this. I wish they could just use the library I already have installed.

What are some engineering problems that prevent the continued usage of plain old dynamic linking? Do programming languages need a more unified system of specifying ABI and maintaining compatibility through updates? Does the old Executable and Linkable Format (ELF) need an upgrade? Do object file formats need new features? Do the various runtime linkers used across different systems need to behave more similarly?

Share your thoughts, experiences, and opinions!
flexibeast
Guru


Joined: 04 Apr 2022
Posts: 429
Location: Naarm/Melbourne, Australia

PostPosted: Thu Feb 08, 2024 12:27 am    Post subject:

Interesting topic; i'm looking forward to reading people's thoughts on this.

My feeling is that it's at least partly driven by the sort of issues described in this 2015 post by Michael Orlitzky: "Motherfuckers need package management".

Regarding:

Quote:
I don't really see the big advantage of one over the other besides flatpak being the most popular and widely supported.

i rarely use Flatpak myself, but for me the big advantage it has over Snap is that the latter depends on systemd, which i don't use. (On Gentoo i'm currently using OpenRC, but when i was using Void, i used runit and then s6+66.)
szatox
Advocate


Joined: 27 Aug 2013
Posts: 3407

PostPosted: Thu Feb 08, 2024 12:51 am    Post subject:

I suppose it's another mark of open source development being taken over by companies.
Static linking (and app containers) provides the convenience of build-once-run-everywhere and removes a bunch of variables, like the versions of libraries installed on your system. This translates to more control (by developers, not by you) and predictability, and thus easier project management (within company structures), as well as reliable (and paid) tech support.

Basically, the old ecosystem used to be like:
A user creates/modifies a piece of software which scratches his own itch, and shares it with the world because copying is free
and now it's shifting towards:
A company modifies software it doesn't need or intend to use, hoping someone else will look at a shiny thing, pay for it, and then pay again for a support dude to walk him through an instruction manual, and it shares the code because the original project it hijacked was already released under the GPL and it's cheaper to suck it up than to start from scratch.


In the first model, the original developer doesn't care what setup you have, because it's not his problem. At the same time, interfaces don't change every day, so multitudes of unsupported setups will work just fine, and package managers tend to do a pretty good job navigating the dependency hell.
In the second model, the developer doesn't care about your setup because he forces the only supported option on you. It's your responsibility as the user to provide the extra storage, and if the libs provided by the app happen to be outdated and buggy... well, it sucks to be you. You'll update your dependencies when the app's dev decides to update the dependencies.
_________________
Make Computing Fun Again
GDH-gentoo
Veteran


Joined: 20 Jul 2019
Posts: 1676
Location: South America

PostPosted: Thu Feb 08, 2024 12:55 am    Post subject:

Relevant thread from Debian's developers mailing list. Popcorn worthy in places.

Note: it was started by someone who saw this in Debian bookworm's release notes:
Quote:
5.2.1.2. Go- and Rust-based packages
The Debian infrastructure currently has problems with rebuilding packages of types that systematically use static linking. With the growth of the Go and Rust ecosystems it means that these packages will be covered by limited security support until the infrastructure is improved to deal with them maintainably.

In most cases if updates are warranted for Go or Rust development libraries, they will only be released via regular point releases.

_________________
NeddySeagoon wrote:
I'm not a witch, I'm a retired electronics engineer :)
Ionen wrote:
As a packager I just don't want things to get messier with weird build systems and multiple toolchains requirements though :)
flexibeast
Guru


Joined: 04 Apr 2022
Posts: 429
Location: Naarm/Melbourne, Australia

PostPosted: Thu Feb 08, 2024 1:47 am    Post subject:

GDH-gentoo wrote:
Relevant thread from Debian's developers mailing list.

Further to szatox's comment above, an excerpt from this message:

Quote:
[T]he Rust/Go ecosystems are not designed to be compatible with the Linux distributions model, and are instead designed to be as convenient as possible for a _single_ application developer and its users - at the detriment of everybody else - and for large corporations that ship a handful of applications with swathes of engineers that can manage the churn, and it is not made nor intended to scale to ~60k packages or whatever number we have today in unstable.
GDH-gentoo
Veteran


Joined: 20 Jul 2019
Posts: 1676
Location: South America

PostPosted: Thu Feb 08, 2024 2:17 am    Post subject:

This one has a funny part as well:

Quote:
It's mostly an issue with the deployment scheme at Google: I heard they recompile all their software and reinstall it on their fleet EVERY WEEK. This is about the only case where non-special-case (/sbin/ldconfig etc) static linking works.

Then they shared their internal project with the world (good) without caring how it affects others (bad). Unlike them, we can't rebuild the world every time a bug is fixed (security or not).

_________________
NeddySeagoon wrote:
I'm not a witch, I'm a retired electronics engineer :)
Ionen wrote:
As a packager I just don't want things to get messier with weird build systems and multiple toolchains requirements though :)
e8root
Tux's lil' helper


Joined: 09 Feb 2024
Posts: 94

PostPosted: Mon Feb 19, 2024 7:02 am    Post subject:

Coming from the Windows world, I found it convenient when an application had all its dependencies included: binaries either statically linked or sitting right there in the installation folder, so that I didn't need to install the program again and could just keep it in a folder with other such programs. Likewise, having source or prebuilt libs in the source repository means it is easy to modify and rebuild the application.

Open source projects focusing on Windows have such a setup more often than those focusing on Linux.
Meanwhile, something which on Linux needs a bazillion apt (or whatever) commands is usually trivial to build there (though not bulletproof, especially after some time has passed), while on Windows it is genuine hell to deal with such projects.

Then there is the issue of reproducibility. It is nice to have a known working starting point, even when you intend to update dependencies.

Now we come to the crux of the issue: the current/mainstream way it's handled by Linux distros is too focused on a single use case, which is having all packages (binary or source) in the package manager with all dependencies defined, and then optimizing for fast updates and for saving disk space by not keeping old and potentially buggy/insecure code around. This is all great, but only when it works, and I cannot blame people for designing the system in such a way as to keep it working like that forever and ever. The issue is that sooner or later it does lead to dependency hell and a lack of reproducibility. In Gentoo we added USE flags, so we crafted an additional ring of our dependency hell. I'm not saying it's an issue for me or that it is unmanageable (break it into smaller chunks by rebuilding the world weekly like Google does and it can even be considered a form of fun :lol:), but rather that unlike our world, which we rebuild ourselves, the real world often has different needs.

Even Linus himself said that distributing binary packages on Linux is terrible. This is the same issue I had with building programs on Windows, but on steroids: you add a dependency on distribution maintainers to make sure they update all the dependencies to the minimum required versions and that those updates do not break anything. Otherwise you might release a new version of your application but never see it released in all distros. They (the distro maintainers) might try to build your app, but when it fails they will need to spend effort on making it work, delaying things, and possibly delaying them forever.

The solution for developers is to set things up so that you just clone the repository and it all compiles with the fewest outside dependencies. And if someone doesn't like what you do, they can fork your project and apply patches. For developers it means their applications will be easier to get deployed, be it in distributions or even by just providing binaries directly.

The solution for us Gentoo users is either to suck it up and use programs as upstream designed them, or to find people doing backports of applications built with statically linked code to make them build in our linuxy way of doing things. BTW, I think that is how it already works, and why some packages have USE flags to use system versions of libraries; see the examples below.
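For example (hypothetical /etc/portage/package.use entries; the flag names vary per package and ebuild version, so check emerge -pv first):

Code:
# prefer system copies of libs these packages would otherwise bundle
dev-lang/rust       system-llvm
www-client/firefox  system-icu system-harfbuzz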

If we do it like that, it is we users who have the choice to give in to trends or to keep things nice and tidy in the good old Linux package management way. In the end, if done right (read: someone spends the effort to do it right), it gives us less dependency hell. For example, we might find it hard to build a package against external dependencies due to conflicts and whatnot, but still be able to build a statically linked version (or one with libs in the program directory).

In fact, imho it would be nice to always have this very option! What I mean by that is being able to build self-sufficient packages like some package managers (Guix, Nix, etc.) allow us to do. Of course we can always use these already existing solutions, but it might be worth considering adding similar capabilities to Gentoo.

My 2 cents.
_________________
Unix Wars - Episode V: AT&T Strikes Back
Hu
Administrator


Joined: 06 Mar 2007
Posts: 22598

PostPosted: Mon Feb 19, 2024 1:49 pm    Post subject:

Zeault wrote:
I find it annoying how Rust and Go packages on Gentoo with the meson build system will check to see if their dependent libs are installed and then go ahead and redownload their own version of it anyways to statically link with. I don't understand this. I wish they could just use the library I already have installed.
I will go a step further and argue that it is a design flaw in the package if its build process downloads anything from the Internet. Builds should have a specific list of things they need, which can be downloaded in advance by wget or similar clients, and which, once downloaded, enable building offline. Needing Internet access for the test suite is undesirable too, but at least that is forgivable in certain circumstances. Rust and Google Go fail twice here. First, their "specific list" is canonically stored in a form that is not suitable for use with wget, and instead requires special processing to convert to a list of URLs. (If I am wrong here, someone please show a way, without the use of the Rust language toolchain, to take Rust's Cargo.lock and reliably produce a useful invocation of wget that will download everything cargo will need for the build. Use of shell, sed, and coreutils is also allowed.) Second, their build system actively encourages developers to design for the download-at-build model.
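For the record, a rough sketch of such an invocation is possible with awk (POSIX, though admittedly not coreutils), assuming the documented crates.io download endpoint; it only handles registry crates, i.e. [[package]] entries that carry a source line, and ignores git sources entirely:

Code:
awk -F'"' '
  /^name = /    { n = $2 }
  /^version = / { v = $2 }
  /^source = /  { printf "wget -O %s-%s.crate https://crates.io/api/v1/crates/%s/%s/download\n", n, v, n, v }
' Cargo.lock | sh

Even then, this only fetches the .crate archives; getting cargo to actually build offline from them still requires laying out a vendor directory with checksum metadata, which rather proves the point about special processing.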
Zeault wrote:
What are some engineering problems that prevent the continued usage of plain old dynamic linking?
Dynamic linking has its downsides, but they are well understood, and we have good tooling to handle them, at least for the well established languages. Young languages like Rust think dynamic linking is too much trouble to support well, so they just push everyone to static linking because that was easier for the toolchain developers to implement.
Zeault wrote:
Do programming languages need a more unified system of specifying ABI and maintaining compatibility through updates?
The C ABI has its faults, and some extensions might improve things here. However, we do quite well with what we have. As I understand it, the newcomers just cannot be bothered to implement usable dynamic linking, even within their own ecosystem, much less make it compatible with C.
Zeault wrote:
Do the various runtime linkers used across different systems need to behave more similarly?
At this point, the worst offenders refuse to even pursue dynamic linking within their own area using a dynamic linker specific to their language, so I think we are quite a ways off from worrying about cross-platform behaviors.
wanne32
n00b


Joined: 11 Nov 2023
Posts: 69

PostPosted: Thu Feb 22, 2024 9:34 am    Post subject:

flexibeast wrote:
My feeling is that it's at least partly driven by the sort of issues described in this 2015 post by Michael Orlitzky: "Motherfuckers need package management".
He is wrong about apt, though. apt is astonishingly good at installing things from source; it just hides it well. You can even easily install a whole Debian source-only, e.g. if your hardware is missing a few instructions that binary Debian requires. :lol:
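For example (the package name is arbitrary, and deb-src entries are needed in sources.list):

Code:
apt-get build-dep hello           # install the build dependencies
apt-get source --compile hello    # fetch the source and build .debs locally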

Otherwise I fully agree.

I think the reason for that is more a shift in politics/society. This applies to both the closed source and the open source world. Back in the day, programmers were seen as order-takers: they were paid and had to provide what was ordered. While Stallman saw the power that comes with copyright ownership, he feared the companies and their management behind the software dictating licenses and how software works, not so much the programmers, as he assumed that programming would be a skill that everybody acquires. So the obvious move was to oppose this power by creating projects like GNU, KDE or Debian that would give the power to the users and programmers (whom he considered to be the same people) to make programs behave in a way they like. Customizable for everybody, adaptive to every environment... Stallman was rightfully assuming that companies would see these advantages and also support this concept.

But times have changed. While the Anglo-American, trade-oriented style of copyright was more or less adopted by the whole world, it is in almost every case overridden by license agreements (be it an EULA in proprietary software or the GPL in OSS). We value the work of programmers (and creators in general) much more now; you see this change in valuation in that programmers usually earn much more than admins or salesmen. Society has shifted to a much more European way of seeing things: programs as an unsalable extension of human personality, owned only by their creator, very much like dignity or health is bound to one person and cannot be sold. Even in language we usually keep saying "his/her" program when we speak about already sold (or gifted) programs. We also assume now that the programmers know best how to use their programs. Zawinski got lots of support when he tried to deliberately break XScreenSaver on Debian because he thought they were using it wrong. This is just one example; there are various others. See for example Apple's decisions on which features are allowed in iOS apps and which are not. Or Tesla limiting the acceleration of "their" cars. 30 years ago such things would have been unthinkable: MY car, MY decision! Now it's Tesla's car, Tesla's decision. This idea is not new. "Sharing economy" is just a new word for a feudal system, where things are acquired once and then stay forever owned by their "rightful owner", who has a direct line to the first acquirer, and can be given to others in the form of fiefs or fideicommissa but never sold, detaching them from their original owner.

This kind of thinking leads to the conclusion that the creator, as the only one who understands "his" program, is the one who gets to decide how his program is run. So the user has the obligation to provide an environment where this highly valued program can run; it is no longer the user telling the programmer what kind of environment the program has to run in. So distributions, which apply a set of quality standards (first and foremost: a common way to install and update programs, and assurance that they work together afterwards) to the programs they distribute, are replaced by distribution systems that ensure that the system environment fits the requirements of the program. Since interoperability, easy management, security and resource saving are not in the interest of the programmer, there is no need for dynamic linking. Static linking provides much more predictable results and is therefore more in the interest of the programmer.
pa4wdh
l33t


Joined: 16 Dec 2005
Posts: 881

PostPosted: Thu Feb 22, 2024 4:24 pm    Post subject:

As far as i know there are two main reasons for dynamic linking:
1) Save some disk space because all code is present only once. E.g. all programs that need crypto just use openssl.
2) Security. E.g. if a bug is found in openssl it is updated and all programs that use the shared library immediately benefit

Reason 1 might not be very valid anymore. TB's are cheap, so no one cares about a few extra MB used because of some code duplication.

Reason 2 i think is still very valid. Consider all the types of "static" linking listed in the OP, and now assume a vulnerability has been found in a library included in those programs:
1) Do you know which programs are affected? Keep in mind that real static linking makes it hard to figure out which libs were originally included.
2) How can you make sure all programs that need to be updated are updated? Keep in mind that the program's version doesn't change; it's just one shipped library that has changed.
3) And what if the project is dead and the dev doesn't make the update?
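For contrast, with dynamic linking question 1 has a mechanical answer. On Gentoo, scanelf from app-misc/pax-utils lists every ELF whose NEEDED entries reference a given soname (the soname and search paths here are just examples); a statically linked copy of the same code is invisible to this kind of query:

Code:
# which installed ELFs dynamically link this soname?
scanelf -qRN libssl.so.3 /usr/bin /usr/sbin /usr/lib64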
_________________
The gentoo way of bringing peace to the world:
USE="-war" emerge --newuse @world

My shared code repository: https://code.pa4wdh.nl.eu.org
Music, Free as in Freedom: https://www.jamendo.com
GDH-gentoo
Veteran


Joined: 20 Jul 2019
Posts: 1676
Location: South America

PostPosted: Thu Feb 22, 2024 4:29 pm    Post subject:

pa4wdh wrote:
TB's are cheap [...]

In which currency? :wink:
_________________
NeddySeagoon wrote:
I'm not a witch, I'm a retired electronics engineer :)
Ionen wrote:
As a packager I just don't want things to get messier with weird build systems and multiple toolchains requirements though :)
dmpogo
Advocate


Joined: 02 Sep 2004
Posts: 3414
Location: Canada

PostPosted: Thu Feb 22, 2024 5:41 pm    Post subject:

It is about control during distribution. The downside is that it breeds a proliferation of slightly different and potentially incompatible versions of the libraries.
Hu
Administrator


Joined: 06 Mar 2007
Posts: 22598

PostPosted: Thu Feb 22, 2024 5:44 pm    Post subject:

pa4wdh wrote:
1) Save some disk space because all code is present only once. E.g. all programs that need crypto just use openssl.
Also, there is a memory savings angle. The kernel is able to share a single page of RAM for all processes that map a given code page from a position-independent dynamic shared object. If two programs statically link openssl, and both programs are open, the kernel must give each program its own copy of that page, because the pages are not identical, even though the source code that was used to generate the objects was the same. Even if the pages were identical, if they came from separate files, Kernel Samepage Merging would be required to combine them.
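A rough way to see that sharing (the library name is an example): every process whose memory map references the same libssl image is executing one physical copy of its read-only code pages.

Code:
# count the processes currently mapping libssl; with dynamic linking
# they all share a single in-RAM copy of the library's code pages
grep -l 'libssl\.so' /proc/[0-9]*/maps 2>/dev/null | wc -l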
dmpogo
Advocate


Joined: 02 Sep 2004
Posts: 3414
Location: Canada

PostPosted: Thu Feb 22, 2024 5:50 pm    Post subject:

pa4wdh wrote:

Reason 1 might not be very valid anymore. TB's are cheap, so no one cares about a few extra MB used because of some code duplication.



I am actually not sure this is literally correct. My bet would be that the cheapest terabyte (adjusted for inflation) was at the end of the mechanical drive era, circa 2014 or so. Everybody then had 2-3 terabytes or more in their consumer computers. I feel now people rarely go above 1TB with NVMe's and SSDs.


Last edited by dmpogo on Thu Feb 22, 2024 11:39 pm; edited 1 time in total
szatox
Advocate


Joined: 27 Aug 2013
Posts: 3407

PostPosted: Thu Feb 22, 2024 8:03 pm    Post subject:

Quote:
I am actually not sure this is literally correct. My bet would be that the cheapest terabyte (adjusted for inflation) was at the end of the mechanical drive era, circa 2014 or so. Everybody then had 2-3 terabytes or more in their consumer computers. I feel now people rarely go above 1TB with NVMe's and SSDs.
Yeah, there was a time when you'd buy a 4TB drive just because it was right there on a shelf in the store, while 1 or 2 TB had to be ordered, or would cost only a few $$ less than the 4TB one staring at you from behind the counter.

Quote:
Or Tesla limiting the acceleration of "their" cars. 30 years ago such things would have been unthinkable: MY car, MY decision!
I'm pretty sure this particular decision was made by accountants rather than programmers. I think you can just pay extra and have the limit removed, so it's clearly not for safety or mechanical integrity or other real reasons.
_________________
Make Computing Fun Again
Zucca
Moderator


Joined: 14 Jun 2007
Posts: 3683
Location: Rasi, Finland

PostPosted: Fri Feb 23, 2024 8:34 am    Post subject:

pa4wdh wrote:
2) Security. E.g. if a bug is found in openssl it is updated and all programs that use the shared library immediately benefit
I think this is one of the most important features of dynamic linking.

I've heard that Electron projects have suffered from "static linking" too. Devs are too lazy to update the underlying WebKit stuff even if a newer Electron would fix some security issues.

I've heard you can build Rust binaries with dynamic linking... but how limited is it?
_________________
..: Zucca :..

My gentoo installs:
init=/sbin/openrc-init
-systemd -logind -elogind seatd

Quote:
I am NaN! I am a man!
GDH-gentoo
Veteran


Joined: 20 Jul 2019
Posts: 1676
Location: South America

PostPosted: Fri Feb 23, 2024 2:55 pm    Post subject:

Zucca wrote:
I've heard you can build rust binaries with dynamic linking... but how limited is it?

It looks like there are guidelines for making a library crate of type cdylib not only installable as a shared library, but also callable from code written in other languages.

Library crates of type dylib can be installed as shared libraries, but they probably don't have a stable ABI.
_________________
NeddySeagoon wrote:
I'm not a witch, I'm a retired electronics engineer :)
Ionen wrote:
As a packager I just don't want things to get messier with weird build systems and multiple toolchains requirements though :)
Zucca
Moderator


Joined: 14 Jun 2007
Posts: 3683
Location: Rasi, Finland

PostPosted: Fri Feb 23, 2024 5:35 pm    Post subject:

GDH-gentoo wrote:
Library crates of type dylib can be installed as shared libraries, but they probably don't have a stable ABI.
That mostly/partly defeats the purpose of a shared library.
_________________
..: Zucca :..

My gentoo installs:
init=/sbin/openrc-init
-systemd -logind -elogind seatd

Quote:
I am NaN! I am a man!
GDH-gentoo
Veteran


Joined: 20 Jul 2019
Posts: 1676
Location: South America

PostPosted: Fri Feb 23, 2024 9:23 pm    Post subject:

Zucca wrote:
GDH-gentoo wrote:
Library crates of type dylib can be installed as shared libraries, but they probably don't have a stable ABI.
That mostly/partly defeats the purpose of shared library.

Yes. But crate type cdylib (plus the guidelines?) should result in a shared library with the standard C ABI, if I understood correctly.
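A minimal sketch of what that looks like from the outside, assuming a hypothetical crate mylib with crate-type = ["cdylib"] in its Cargo.toml and #[no_mangle] pub extern "C" functions:

Code:
cargo build --release
file target/release/libmylib.so    # an ordinary ELF shared object
nm -D target/release/libmylib.so   # exported symbols with unmangled C names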
_________________
NeddySeagoon wrote:
I'm not a witch, I'm a retired electronics engineer :)
Ionen wrote:
As a packager I just don't want things to get messier with weird build systems and multiple toolchains requirements though :)
Leonardo.b
Guru


Joined: 10 Oct 2020
Posts: 307

PostPosted: Sat Feb 24, 2024 12:58 pm    Post subject:

I don't mind static linking.

pa4wdh wrote:
2) Security. E.g. if a bug is found in openssl it is updated and all programs that use the shared library immediately benefit

The solution is to rebuild all linked packages.
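On Gentoo, the candidates for such a rebuild can at least be enumerated with gentoolkit (the atom is an example; this lists declared reverse dependencies, whether statically linked or not):

Code:
equery depends dev-libs/openssl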

I have seen a few lean and mean all-statically-linked Linux distros out there. For example:
https://github.com/oasislinux/oasis
I never tried an all-statically-linked Gentoo. Hmm, it may be a project for a rainy moon.
Leonardo.b
Guru


Joined: 10 Oct 2020
Posts: 307

PostPosted: Sat Feb 24, 2024 1:03 pm    Post subject:

Hu wrote:
pa4wdh wrote:
1) Save some disk space because all code is present only once. E.g. all programs that need crypto just use openssl.
Also, there is a memory savings angle.

On the other hand, all this kernel stuff is quite a bit of extra complexity.

It's not like I have a strong opinion, just pointing out another point of view.
Hu
Administrator


Joined: 06 Mar 2007
Posts: 22598

PostPosted: Sat Feb 24, 2024 4:09 pm    Post subject:

Leonardo.b wrote:
pa4wdh wrote:
2) Security. E.g. if a bug is found in openssl it is updated and all programs that use the shared library immediately benefit
The solution is to rebuild all linked packages.
Even the closed source ones which use an open library, but for which the vendor went bankrupt and lost the source years ago? ;) Or the ones that have not been maintained in so long that modern tooling refuses to rebuild the main program without first "modernizing" it to current standards? Or the ones that no one even knows what to rebuild, because the original builder failed to document that this program is statically linked to a security-sensitive library, so no one knows it should be rebuilt? There are plenty of scenarios where rebuilding the main program is infeasible or outright impossible. Even for cases where it is possible, rebuilding every consumer over one bug in a popular library may mean a significant amount of CPU time and a lot of bandwidth to redistribute the fixed static versions.
Leonardo.b wrote:
On the other side, all this kernel stuff is quite a bit of extra complexity.
What kernel stuff adds complexity here, that could be removed if we used only statically linked programs?
dmpogo
Advocate


Joined: 02 Sep 2004
Posts: 3414
Location: Canada

PostPosted: Sat Feb 24, 2024 5:51 pm    Post subject:

Leonardo.b wrote:
I don't mind static linking.

pa4wdh wrote:
2) Security. E.g. if a bug is found in openssl it is updated and all programs that use the shared library immediately benefit

The solution is to rebuild all linked packages.

I have seen a few lean and mean all-statically-linked Linux distros out there. For example:
https://github.com/oasislinux/oasis
I never tried an all-statically-linked Gentoo. Hmm, it may be a project for a rainy moon.



But who said that the bundled libraries are updated? Only if the vendor bothered doing it.
dmpogo
Advocate


Joined: 02 Sep 2004
Posts: 3414
Location: Canada

PostPosted: Sat Feb 24, 2024 6:10 pm    Post subject:

Leonardo.b wrote:
Hu wrote:
pa4wdh wrote:
1) Save some disk space because all code is present only once. E.g. all programs that need crypto just use openssl.
Also, there is a memory savings angle.

On the other side, all this kernel stuff is quite a bit of extra complexity.

It's not like I have a strong opinion, just pointing out another point of view.


This complexity is nothing relative to the responsibility that would fall on distributions to check that everything bundled with the software they distribute is safe. Basically, they would have to go into Apple mode and allow installation only of pre-approved software through "official" channels, or risk quickly becoming known as unsafe platforms.
Leonardo.b
Guru


Joined: 10 Oct 2020
Posts: 307

PostPosted: Sun Feb 25, 2024 1:05 pm    Post subject:

Hu wrote:
Even the closed source ones which use an open library, but for which the vendor went bankrupt and lost the source years ago? ;) Or the ones that have not been maintained in so long that modern tooling refuses to rebuild the main program without first "modernizing" it to current standards? Or the ones that no one even knows what to rebuild, because the original builder failed to document that this program is statically linked to a security-sensitive library, so no one knows it should be rebuilt? There are plenty of scenarios where rebuilding the main program is infeasible or outright impossible.

Yeah. All I wrote applies only to a source-based distro.
Here you run only programs you built yourself, and dependencies are tracked by another tool (like Portage).
In this case static linking is not so evil; it's just an unusual choice.

Quote:
 Even for cases where it is possible, rebuilding every consumer over one bug in a popular library may mean a significant amount of CPU time and a lot of bandwidth to redistribute the fixed static versions.

Fair point: if a bug is found in libc, you have to reinstall a ton of stuff.

Quote:
What kernel stuff adds complexity here, that could be removed if we used only statically linked programs?

Aww. I don't know, it just looks more complicated than loading a self-contained executable, doesn't it?