Gentoo Forums
Bloated Gnome!?

Gentoo Forums Forum Index » Gentoo Chat
Author Message
EldermysticRazorsnout
n00b


Joined: 06 Mar 2005
Posts: 41

PostPosted: Mon May 09, 2005 1:41 am

Quote:
This is not true. Consider the Unix philosophy of many small apps that do one thing well.

The "UNIX philosophy" is not a shining example of good software engineering. UNIX was known to those who came before for its inconsistent utilities with meaningless names. For some fun, go read the venerable UNIX-Haters Handbook. It's aged, and not every complaint is still valid, but many valid complaints remain true, especially when using something with a direct line of descent from UNIX, like Solaris or a BSD. Most of them apply to the GNU utilities as well. The old Lisp hackers hated UNIX for these worst practices. In fact, the phrase "worse is better" is not a compliment to the UNIX philosophy. It comes from the Lisp community, and it means that although UNIX is "worse," that worst practice makes it better suited to survival (but not usage). UNIX did not win out because it was good. It won because everything else did and does suck so much worse.

Quote:
While it is true that CPU time is not consumed by functionality not active at runtime, extra functionality does lead to greater processing at load time, or the initial execution. Global variables are often initialized, for example, even for features which do not necessarily come into play immediately (or even at all). Functions also exist for any additional services provided. These functions become compiled into the executable or shared library, and increase its ultimate (final) size. A larger binary (depending on the magnitude of the size increase) takes either a minutely longer time to load into RAM from disk

In other words, you don't understand what demand paging is either. Program A, at 16 kB, takes the same amount of time to load as Program A plus 700 MB of uncalled junk functions. The junk functions won't be paged in. The same philosophy is used for things such as fork(). In most cases fork() is followed immediately by exec(). Do you think the kernel actually copies the memory byte by byte when you call fork()? The virtual addresses are mapped to the same physical memory as the original process, and a page is only copied when it is written to. Hence you can call fork() with little cost. The same thing happens for zero-initialized memory, like the stack or heap allocated when a program begins: a single read-only page of all zeros always exists, and initially the virtual addresses map to it. It is only copied when written to.

File size is actually significant, unlike your ignorant views on memory. But not for our specific conversation, since we addressed bloat, which refers to unnecessary size. Unless that excess code does not benefit you, or is spent on a feature totally unrelated to the product, it's not bloat.

Quote:
Mr. Elder can argue all he likes that iexplore.exe is not loaded on initial windows startup. That is often true, and is the default. So long as the vast majority of its dependencies come pre-loaded however,

Good work agreeing with me, although quite verbosely.

ElderMysticRazersnout wrote:
The real story on IE is that Microsoft decided that IE would be an integral part of Windows, in that it would always be available. (Emphasis in original)

Taking liberties with the above quote, Kagerato wrote:
The real story on IE is that Microsoft decided that IE would be an integral part of Windows...

:lol:

Quote:
the word integral means incorporated

:lol:

Quote:
virtual memory (the use of swap) is a primary cause of reduced performance

Ehehe, I can't wait for this one to end up on funroll-loops. It contains two jokes in one: first that virtual memory actually hurts performance, and secondly that virtual memory is the same thing as swap. Perhaps you need to hit the books again?

Quote:
slowest UNIX/UNIX-derived/UNIX-like operating system there is save OS X

Quote:
I honestly can't tell whether he is grouping OS X as an operating system which uses Linux or not.

UNIX/UNIX-derived/UNIX-like means Linux? An interesting claim...I'm afraid I cannot endorse it.

Quote:
Mac OS X does not use the linux kernel. It uses mach

The OS X kernel is called XNU. And before you say it, XNU != Mach. Mach is only a portion of the kernel, with another portion coming from FreeBSD, and a third being Apple's own IOKit.
http://en.wikipedia.org/wiki/XNU

Quote:
Notice that he provides no actual figures

Figures like vi using 1.1 MB of memory, nvi using 1.8 MB, and vim using 5.2 MB, on a Solaris 9 machine? I guess you're right: vim only uses 4.7x more memory than vi, not 5x. I retract my statement in shame. :roll:

Quote:
By the way, I'm sure the extended features of vim in no way offset the minor increases in memory usage

I agree that the increase is minor, since the official definition of "major" with respect to increases in memory usage is an increase greater than 5x, and vim only increases memory usage by 4.7x.

Quote:
He defeats himself.

Really? Pointing out that he was contradicting himself means I defeat myself? Seriously, try harder.

Quote:
The effective use of shared objects (*.so), the equivalent of Windows' dynamic link libraries (*.dll), is by no means theory.

While you struggle to read the definitions of integral and incorporated, you may want to look up theory. It's not uncommon for it to be used to refer to a practice or principle. Shadow Skill probably assumed those reading it understood English, although that assumption turned out to be incorrect.

UNIX-style shared objects are not equivalent to Windows dynamic link libraries. A DLL, unlike a UNIX .so, does not use position-independent code; it prefers a fixed base address and must be rebased via relocation when that address is taken.

Ironically, I originally came here as a troll to mock Gentoo users for constantly talking about how much they understand Linux when they really know nothing (so you were right, codergeek), after a friend was apparently banned for pointing out the well-known fact that Python has a bad garbage collector. And rather than being banned myself (even when I attacked the Python GC in this thread), I was rewarded with a perfect example of the stereotype of the Gentoo pseudo-techie. :P

Edit: http://docs.hp.com/en/5965-4641/ch01s02.html#d0e103
An HP paper on virtual memory; this particular link covers what paging is, since so few here seem very familiar with it.
slougi
Apprentice


Joined: 12 Nov 2002
Posts: 222
Location: Oulu, Finland

PostPosted: Mon May 09, 2005 8:16 am

EldermysticRazorsnout wrote:
Quote:
This is not true. Consider the Unix philosophy of many small apps that do one thing well.

The "UNIX philosophy" is not a shining example of good software engineering. UNIX was known to those who came before for its inconsistent utilities with meaningless names. For some fun, go read the venerable UNIX-Haters Handbook. It's aged, and not every complaint is still valid, but many valid complaints remain true, especially when using something with a direct line of descent from UNIX, like Solaris or a BSD. Most of them apply to the GNU utilities as well. The old Lisp hackers hated UNIX for these worst practices. In fact, the phrase "worse is better" is not a compliment to the UNIX philosophy. It comes from the Lisp community, and it means that although UNIX is "worse," that worst practice makes it better suited to survival (but not usage). UNIX did not win out because it was good. It won because everything else did and does suck so much worse.


But isn't that a testament to the effectiveness of Unix's design philosophy? I agree that many things could be done better; I don't particularly like job control, for example, and some of the early Unix APIs are just bad (gets before fgets was available, etc.). Terminal handling remains in a continuous state of suckage (although curses alleviates this somewhat). Some of these problems stem from the C language: char* as strings, for example. Localisation mostly sucks, although with Unicode it is slowly getting better.

You could look at systems like VMS or ITS, which arguably are better. But in the end they lost to Unix; whether due to cost, performance or the software culture around these operating systems.

At the time, worse is better was good for a few reasons: limited computing power; technically more competent users; and the plain lack of quality system APIs, and in some sense programming languages (Lisp was and is slow, although beautiful in design). But in many cases it still works: software does not have to be perfect, just good enough to become widely used, at which point it will be worked on until it approaches perfection. The Linux kernel might be the ultimate example of this.

And I have read the Unix Haters Handbook, and agree with many points therein. However, equally many points do not apply anymore; Unix, and Unix-like systems have moved on since then.

I am sure you have read Richard Gabriel's essay "Worse is better", but I am just pointing it out again in case you haven't, and for the other posters who might not have. Very nice read.

[edit] And I agree, way too many people have no real concept of how VM and paging work and interact [/edit]
Shadow Skill
Veteran


Joined: 04 Dec 2004
Posts: 1023

PostPosted: Mon May 09, 2005 9:53 am

Not really, because it's the same exact situation with the install-method theory/prototype currently in vogue in the Linux world: it doesn't really work save in very narrow constraints, and once you move out of those constraints [which is very likely to happen due to the structure of Linux, especially if people are content to whine about it being easier for developers] what looks excellent on paper falls apart miserably. The only reason the rather dirty way Windows handles things is "better" is that it's been all but ensured that you absolutely will have one sure-fire way to get things installed quickly, with very little user interaction necessary. The system is structured so that the developer is probably going to have a working binary available, simply because compiling tools are not a default package of the system; by effectively limiting the developers in this respect they made the damned thing work.

Linux is the exact opposite: the developer has total freedom, at the potential expense of the end user's ability to make use of the tool the developer has put out. However, this hardly means that the Windows method of dependency handling [throwing everything into one exe, then using the registry to prevent dll confusion between versions of the same dependencies, and fostering a practice of locally defining library locations (making code that looks for dlls in the main program directory for a given app)] is actually good. All it is, is effective, unlike the Linux method.

Having something be considered good simply because everything else sucks absolute ass is not the mark of a program that IS, in reality, well made. As for doing one thing well, I sometimes get the feeling that people are more interested in a tool that does one thing and does it badly. [See Winamp2 clones; every version of MPlayer prior to the latest hard-masked one, which supposedly allows you to change streams on the fly like every other important video player that has supported dual-audio formats for the past two years, maybe more; and the iRiver H340 mp3 player.]


The worst 350 USD I ever spent in my life: you can't even browse inside a playlist like you can on the iPod or every single half-useful software audio player in existence for about five or six years! Those moron firmware coders actually sold this thing expecting you to hammer the fast-forward and rewind buttons to move through a playlist, as opposed to being able to SCROLL through the list like you could with an iPod [which also sucks, mind you] and other similar mp3/ogg players. I mean, WTF, was it meant to be a glorified CD player? Anyway, I think you get the point now.
_________________
Ware wa mutekinari.
Wa ga kage waza ni kanau mono nashi.
Wa ga ichigeki wa mutekinari.

"First there was nothing, so the lord gave us light. There was still nothing, but at least you could see it."
EldermysticRazorsnout
n00b


Joined: 06 Mar 2005
Posts: 41

PostPosted: Mon May 09, 2005 1:51 pm

Quote:
But isn't that a testament to the effectiveness of Unix's design philosophy? I agree that many things could be done better; I don't particularly like job control, for example, and some of the early Unix APIs are just bad (gets before fgets was available, etc.). Terminal handling remains in a continuous state of suckage (although curses alleviates this somewhat). Some of these problems stem from the C language: char* as strings, for example. Localisation mostly sucks, although with Unicode it is slowly getting better.

Depends on what you mean by "effectiveness." A lot of things in UNIX are very ineffective for the end user, but good for the businesses that really drive OS adoption. The old Lisp machines, being based around a more dynamic language, could do a lot of things that would astonish the UNIX world. They let you modify the system's code while the system was still up and running, for example. Imagine if changing your kernel in UNIX didn't mean recompiling it, installing it, and rebooting, but simply applying a patch. Or if a bug meant you could immediately debug the program, fix the error, and restart execution where it left off, instead of just dumping core. These systems were fundamentally superior to UNIX in a technical way, but not at survival. In the end businesses didn't want to shell out tens of thousands of dollars for a system that could run Lisp effectively, so they turned to companies like Sun to give them cheap workstations and an OS that worked on them (at least Sun got theirs, again at end-user expense, when X won out over NeWS for ostensibly the same reason). You are correct to point out C as one source of UNIX's problems. Operating systems are fundamentally shaped by the language they are based on. UNIX, like C, is a very static and rigid system, in contrast to the Lisp-oriented computers of the past. If there is any good news, it's that desktop UNIX is now second to none, and high-level languages are being rediscovered on a much bigger level than just Perl, and this time without hardware limits; so at least you can put a more pleasant layer on top of the traditional UNIX system and its core language.
Lokheed
Veteran


Joined: 12 Jul 2004
Posts: 1295
Location: /usr/src/linux

PostPosted: Mon May 09, 2005 5:48 pm

I get bloated when I drink pop in the morning.
_________________
You're not afraid of the dark are you?
slougi
Apprentice


Joined: 12 Nov 2002
Posts: 222
Location: Oulu, Finland

PostPosted: Mon May 09, 2005 10:08 pm

EldermysticRazorsnout wrote:
Quote:
But isn't that a testament to the effectiveness of Unix's design philosophy? I agree that many things could be done better; I don't particularly like job control, for example, and some of the early Unix APIs are just bad (gets before fgets was available, etc.). Terminal handling remains in a continuous state of suckage (although curses alleviates this somewhat). Some of these problems stem from the C language: char* as strings, for example. Localisation mostly sucks, although with Unicode it is slowly getting better.

Depends on what you mean by "effectiveness." A lot of things in UNIX are very ineffective for the end user, but good for the businesses that really drive OS adoption. The old Lisp machines, being based around a more dynamic language, could do a lot of things that would astonish the UNIX world. They let you modify the system's code while the system was still up and running, for example. Imagine if changing your kernel in UNIX didn't mean recompiling it, installing it, and rebooting, but simply applying a patch. Or if a bug meant you could immediately debug the program, fix the error, and restart execution where it left off, instead of just dumping core. These systems were fundamentally superior to UNIX in a technical way, but not at survival. In the end businesses didn't want to shell out tens of thousands of dollars for a system that could run Lisp effectively, so they turned to companies like Sun to give them cheap workstations and an OS that worked on them (at least Sun got theirs, again at end-user expense, when X won out over NeWS for ostensibly the same reason). You are correct to point out C as one source of UNIX's problems. Operating systems are fundamentally shaped by the language they are based on. UNIX, like C, is a very static and rigid system, in contrast to the Lisp-oriented computers of the past. If there is any good news, it's that desktop UNIX is now second to none, and high-level languages are being rediscovered on a much bigger level than just Perl, and this time without hardware limits; so at least you can put a more pleasant layer on top of the traditional UNIX system and its core language.


Hmm, you mention that Unix is ineffective for the end user, and then go on to talk about how Lisp machines allow for easy kernel hacking. I think it's important to differentiate between two main groups of users: 1) Hackers 2) The rest (lusers :P)

Now, a Lisp machine might be a hacker's wet dream, and I confess I have thought about trying to dig one up from somewhere. Lisp is a nice language, as I said before; I get somewhat confused by the parentheses sometimes (I guess I'm not a particularly good programmer :)). And changing the kernel code at runtime is pretty neat; but seriously, how often do you really need that? At least today, I am very satisfied with the free kernels available.

Also remember that C does allow for replacing standard library calls like malloc with your own versions. (There are exceptions, though; some old systems fail at this.)

The NeWS vs. X debate is quite interesting. I think price, licensing and sentiment against Sun (after NFS) definitely played a role in NeWS' demise. But maybe the biggest problem was complexity: developing NeWS applications meant that you had to 1) write the client-side code in C and 2) write the server-side code in PostScript. Talk about a pain in the ass. Additionally, there was not a large pool of existing code to draw from. Xlib is pretty damn fugly, but still beat NeWS in this regard, and there has always been a lot of Xlib code out there that you could look at. I think X is also the technology that would have aged better in any case, due to its inherent flexibility and extensibility.

I won't talk too much about high-level (interpreted?) languages. They are good for utilities where performance is not critical, rapid prototyping (for today's measure of buzzwords), etc. Really, to talk about the merits of these languages you'd have to talk about some language group or feature in particular, like static vs. dynamic typing.
Kagerato
Tux's lil' helper


Joined: 01 Dec 2004
Posts: 81

PostPosted: Mon May 09, 2005 11:25 pm

Shadow Skill wrote:
The reason I consider Linux dependency handling theoretical in nature is that you can still see various dependency-related error messages/difficulties within any of these systems; they simply are not as effective as the Windows method of handling things. When was the last time you saw a Windows app complain about a missing dll file from a program that was already installed, barring file system or registry corruption? If you answered once in a blue moon or never, I would be inclined to agree with you.


Windows programs love to complain about missing dependencies even to this day. (It was so much worse back when 9x was the primary base.) The matter has certainly improved over time, since Microsoft took specific steps such as the implementation of the "Side-by-Side" library installation system on XP, but it simply doesn't compare to the GNU/Linux model (or that of any unix-like, it seems).

In GNU/Linux, if a program needs to install a library, there is rarely any need to interfere with any of the existing shared objects. Multiple side-by-side installations of different versions of various libraries have been possible for quite a number of years now. Windows has only recently implemented this functionality effectively, and it is hardly used at all by third-party developers. Microsoft has employed it to protect certain core libraries, like the newer common controls, but it doesn't go far beyond that.

Personally, the idea that every program installation should package all of its dependencies into itself or in adjacent archives has always seemed asinine. To a limited extent you would be right to say it works and it is simpler, but you cannot ignore the unintended side effects of such a method. Those installers get quite bulky at times, and end up with the capability of interfering with other programs on the system. This is partly due to the fact that Windows explicitly stores around 90% (or more; just taking an estimate) of its DLLs in %SystemRoot%\system32. Programs most often either copy their libraries there or place them in the same directory as the core launch executable. /usr/lib is somewhat similar, except that the model was designed from the ground up to avoid name conflicts (and subsequently, overwrites).

You may still feel that Windows ultimately implements a more practical solution to the shared library "dilemma". That's fine; a great deal of this problem is philosophy.

Shadow Skill wrote:
Before quoting people and screaming about how .so and dll are similar, actually understand what the person is talking about; why do you think I was talking about the Windows method of dependency handling as opposed to Linux's method? Will people please stop screaming "troll" whenever someone has a point that does not necessarily cater to the mass psychosis that seems to grip some people in the computer world, leading them to worship sorely outdated and/or featureless applications because they have managed to twist gui and/or modern features into "bloat"?


In terms of being shared libraries, their concept and reason for existence is identical. But in terms of the content of your paragraph, that's irrelevant. It seems to me that a casual observer would agree you're in the far more emotional position here.

Please note that I never once associated you with the term 'troll'. I'm not convinced your intent is primarily to stir up useless flames.

Shadow Skill wrote:
I never said or implied that you needed a JVM to use OO; however, it IS still required for some functionality, so it could be said that it is required for a fully functioning OO.


That would, of course, depend on how one defines "fully functioning". While it is necessary to have the JVM to run the extra, add-on code, all of the core of the office suite is usable without it. Some would declare that they have a fully functioning office environment without java, and therefore deeming the JVM required is not the most general truth.

In any case, that's a rather minor point.

Shadow Skill wrote:
I will give you this: you did actually present counter-information, as opposed to simply screaming "troll" and running around like a fool, unlike some of the posters in this thread. But I would still love to know why having a different viewpoint [even if he is not entirely correct, which of course he probably is not] is to be considered trolling without question, while not even reading what someone has actually said in response to you or the thread and then flaming them is not called trolling.


On the first point, thank you. On the second, the core reason why I declared EldermysticRazorsnout to be a troll is that his comments carry an extreme tone, even if interpreted with an open, unbiased mind. He openly insulted and attacked a fellow poster, pretending to possess a far superior degree of knowledge. While he has demonstrated that he knows a decent fact here and there and is capable of reasonable argument, I still have absolutely no doubt his original intent was simply to create trouble.

EldermysticRazorsnout wrote:
In other words, you don't understand what demand paging is either. Program A, at 16 kB, takes the same amount of time to load as Program A plus 700 MB of uncalled junk functions. The junk functions won't be paged in. The same philosophy is used for things such as fork(). In most cases fork() is followed immediately by exec(). Do you think the kernel actually copies the memory byte by byte when you call fork()? The virtual addresses are mapped to the same physical memory as the original process, and a page is only copied when it is written to. Hence you can call fork() with little cost. The same thing happens for zero-initialized memory, like the stack or heap allocated when a program begins: a single read-only page of all zeros always exists, and initially the virtual addresses map to it. It is only copied when written to.


It is true that a program with megabytes of data (which can be executable code) tagged to the end of its file will normally have no effect on runtime performance. In many cases, it will also not affect the same program's load time, dependent partly on the seek time of the hard disk in finding what must be loaded as part of the startup routines.

However, you will be hard pressed to find a program that contains the extreme example you present. Most of the heavier programs do not contain excesses of uncalled routines.

If you really wanted to prove your point, instead of presenting excesses of technical jargon you would have laid out the results of an experimental test. Simply find two programs, one of which has a far greater code size (and resultingly, feature list), that load in precisely the same real time under identical conditions. I'm certain that somewhere out there, two such programs exist. However, this is beyond the scope of the original argument, as I shall present.

Since we have diverged from my core, original premise, I'll present it again: extra functionality does make for larger programs that tend to load more slowly and ultimately consume more memory.

Whereas Elder's original premise boils down to this: because it is possible to increase the efficiency of programs and this is a common and on-going goal in many projects, you should choose to use programs which have more features since the difference is so incredibly miniscule in any measurable performance factors.

While his concept holds in case for much of the newer hardware being sold today, it simply can't stand for all scenarios. If Elder never intended it to be generally and widely interpreted, that would have been crucial to point out in the early stages of his argument.

EldermysticRazorsnout wrote:
Good work agreeing with me, although quite verbosely.


Do not take a concession of a minor fact as an admission of defeat. That is utter nonsense; you're merely drawing the attention away from the conclusion that mattered (does anyone smell red herring?).

EldermysticRazorsnout wrote:
Ehehe, I can't wait for this one to end up on funroll-loops. It contains two jokes in one: first that virtual memory actually hurts performance, and secondly that virtual memory is the same thing as swap. Perhaps you need to hit the books again?


First, let's define virtual memory in simple terms without needless jargon: virtual memory is an address space onto which real storage from physical devices may be mapped. My statements were not intended to present virtual memory as synonymous with swap, though I see how you drew that conclusion; it was ambiguous indeed. Swap, meaning hard disk space, is one way in which the virtual mapping extends the area of memory available for application data. Because the hard disk is significantly slower at reads and writes, and especially because of the overhead of seeking to the appropriate sector, the usage of swap ultimately degrades system performance. Thus, if at all possible, it is highly recommended to avoid it -- especially in general computing. High-stress scenarios are another matter entirely.

EldermysticRazorsnout wrote:
UNIX/UNIX-derived/UNIX-like means Linux? An interesting claim...I'm afraid I cannot endorse it.


Linux and Unix-likes have an IS-A relationship, which I understand. I was relatively certain I saw a particular mention of Linux in your statement. However, at this point I simply appear to be mistaken. I apologize for any needless confusion, particularly because it is a rather useless tangent anyway.

EldermysticRazorsnout wrote:
Figures like vi using 1.1 MB of memory, nvi using 1.8 MB, and vim using 5.2 MB, on a Solaris 9 machine? I guess you're right: vim only uses 4.7x more memory than vi, not 5x. I retract my statement in shame.


In large part, a good deal of that "excess" memory is due to the graphical components of vim (X11 and mouse support, among other things). It also includes syntax highlighting for an enormous number of languages, the capability to integrate with Perl/Tcl/Python, and do OLE automation under Windows. Yes, those features do require more memory. And indeed, they are not present in vanilla vi or nvi.

So while 4.7 is a rather large factor (I'm assuming it's accurate, of course), it is not so unbelievable, nor such a sure-fire indictment of a "bloated" application, when you consider the full capabilities of the programs being compared. The significance of the multiplier is also reduced when you recognize that 1.1 MB is not much of a base memory usage in the first place. If the original program's memory usage were significantly higher, say tenfold, then you would certainly have a valid concern, so long as our feature lists remain constant.

For the overall picture here, I simply do not see it as reasonable or consistent for you to advocate the usage of some programs and systems (in some cases this is Windows, in others it is not) over others while still maintaining the point you presented about application efficiency not suffering significantly as a result of additional features.

EldermysticRazorsnout wrote:
While you struggle to read the definitions of integral and incorporated, you may want to look up theory. It's not uncommon for it to be used to refer to a practice or principle. Shadow Skill probably assumed those reading it understood English, although that assumption turned out to be incorrect.


Here stands another example of 'ad hominem' tactics. I'm curious; is this your favorite technique? On further review, it's probably diversion.

Quote:
Ironically, I originally came here as a troll to mock Gentoo users for constantly talking about how much they understand Linux when they really know nothing (so you were right, codergeek), after a friend was apparently banned for pointing out the well-known fact that Python has a bad garbage collector. And rather than being banned myself (even when I attacked the Python GC in this thread), I was rewarded with a perfect example of the stereotype of the Gentoo pseudo-techie.


First, let me say your discrimination against members of a particular community is extremely admirable. Might I advise running for political office? Your techniques would surely earn votes.

Secondly, I am glad we're finally in agreement on the trolling.

As a tertiary but still relatively important point, I am nowhere close to being representative of an advanced Gentoo user, or indeed an advanced GNU/Linux user for the general case. My post count alone would have been one clue to that conclusion. The mere fact that I take a good deal of time to present a relatively convincing argument and demonstrate reasonable logic does not classify me as an expert in any technical field, nor did I ever intend to present that idea. I responded to this thread because I found reprehensible bashing supported by flawed logic or a clear lack of evidence. Personally, I feel most of that has now been resolved.


Last edited by Kagerato on Mon May 09, 2005 11:28 pm; edited 1 time in total
Back to top
View user's profile Send private message
EldermysticRazorsnout
n00b
n00b


Joined: 06 Mar 2005
Posts: 41

PostPosted: Mon May 09, 2005 11:27 pm    Post subject: Reply with quote

slougi wrote:
Hmm, you mention that Unix is ineffective for the end user, and then go on to talk about how Lisp machines allow for easy kernel hacking. I think it's important to differentiate between two main groups of users: 1) Hackers 2) The rest (lusers :P)

Changing the kernel is not necessarily kernel hacking. Most Linux users will never touch kernel code themselves, but plenty will recompile a kernel to add or remove features, to upgrade, etc. The average user would benefit greatly from a simplified upgrade process.

Quote:
Now, a Lisp machine might be a hacker's wet dream, and I confess I have thought about trying to dig one up from somewhere. Lisp is a nice language, as I said before; I get somewhat confused by the parentheses sometimes (I guess I'm not a particularly good programmer :)). And changing the kernel code at runtime is pretty neat; but seriously, how often do you really need that? At least today, I am very satisfied with the free kernels available.

Don't bother trying to find a Lisp machine. It's not worth the trouble. If you really want to try it out, there is an emulator for the MIT CADR Lisp machine. The CADR was one of the early ones, the last before Symbolics/LMI came to be, IIRC, so don't expect the really cool stuff that came later, like Genera. But it might be fun to look at out of historical curiosity.

Changing the kernel at runtime might not seem like a necessity, but think of how often you have run into userland programs that have bugs. Maybe not severe ones, perhaps just annoyances. Wouldn't it be cool to be able to fix it on the spot? Even if normal users didn't do that, it would make things much easier for developers. If you program in C, you are probably well aware of how annoying it can be to constantly debug, edit, recompile, debug again, and so on.

Quote:
The NeWS vs. X debate is quite interesting. I think price, licensing and sentiments against Sun (after NFS) definitely played a role in NeWS' demise. But maybe the biggest problem was complexity: developing NeWS applications meant that you had to 1) write the client side code in C and 2) write the server side code in PostScript. Talk about a pain in the ass. Additionally, there was not a large pool of existing code to draw from. Xlib is pretty damn fugly, but still beat NeWS in this regard, and there has always been a lot of Xlib code out there that you could look at. I think X is additionally the technology that would have aged better in any case, due to its inherent flexibility and extensibility.

PostScript was not a good choice of primary language, but that is not why NeWS lost out in the industry. Complexity was actually one thing NeWS had going for it: it reduced the complexity of application implementation. X applications need to track more information themselves, since X does nothing but tell the application about events, and the X protocol is a mess. Most programs separate the interface and backend already, so separating the display is not such a big deal. What was a big deal was that NeWS needed several times the amount of RAM that any affordable workstation had in its day, at a time when people only wanted a windowing system to run a terminal and a clock. It would be like a program today requiring two gigabytes of RAM to run (now that is something that would be called bloated). And it didn't help that everyone would have to pay the dominant UNIX vendor to use NeWS in their own UNIX.
Back to top
View user's profile Send private message
slougi
Apprentice
Apprentice


Joined: 12 Nov 2002
Posts: 222
Location: Oulu, Finland

PostPosted: Mon May 09, 2005 11:53 pm    Post subject: Reply with quote

EldermysticRazorsnout wrote:
slougi wrote:
Hmm, you mention that Unix is ineffective for the end user, and then go on to talk about how Lisp machines allow for easy kernel hacking. I think it's important to differentiate between two main groups of users: 1) Hackers 2) The rest (lusers :P)

Changing the kernel is not necessarily kernel hacking. Most Linux users will never touch kernel code themselves, but plenty will recompile a kernel to add or remove features, to upgrade, etc. The average user would benefit greatly from a simplified upgrade process.

Ok, I had not considered it from this point of view.

EldermysticRazorsnout wrote:

Changing the kernel at runtime might not seem like a necessity, but think of how often you have run into userland programs that have bugs. Maybe not severe ones, perhaps just annoyances. Wouldn't it be cool to be able to fix it on the spot? Even if normal users didn't do that, it would make things much easier for developers. If you program in C, you are probably well aware of how annoying it can be to constantly debug, edit, recompile, debug again, and so on.

It's definitely an annoyance, although there are ways to mitigate that. Both Xcode and Visual C++ support patching code at run time while debugging; the free IDEs are behind here.

EldermysticRazorsnout wrote:

Quote:
The NeWS vs. X debate is quite interesting. I think price, licensing and sentiments against Sun (after NFS) definitely played a role in NeWS' demise. But maybe the biggest problem was complexity: developing NeWS applications meant that you had to 1) write the client side code in C and 2) write the server side code in PostScript. Talk about a pain in the ass. Additionally, there was not a large pool of existing code to draw from. Xlib is pretty damn fugly, but still beat NeWS in this regard, and there has always been a lot of Xlib code out there that you could look at. I think X is additionally the technology that would have aged better in any case, due to its inherent flexibility and extensibility.

PostScript was not a good choice of primary language, but that is not why NeWS lost out in the industry. Complexity was actually one thing NeWS had going for it: it reduced the complexity of application implementation. X applications need to track more information themselves, since X does nothing but tell the application about events, and the X protocol is a mess. Most programs separate the interface and backend already, so separating the display is not such a big deal. What was a big deal was that NeWS needed several times the amount of RAM that any affordable workstation had in its day, at a time when people only wanted a windowing system to run a terminal and a clock. It would be like a program today requiring two gigabytes of RAM to run (now that is something that would be called bloated). And it didn't help that everyone would have to pay the dominant UNIX vendor to use NeWS in their own UNIX.

I disagree. An event loop for X can be coded very easily and quickly, with a simple switch statement, in simple C code. Here is an example I wrote some time ago for an app I hacked up quickly. And maintaining state is not a very big burden on the programmer.
Also, the X protocol is decidedly not a mess, though some of the areas it covers are confusing; color handling is something I have never understood, for example. The protocol itself is designed very well, and at 200 pages or so it's not very long for a completely network-transparent, transport-independent windowing system.
Back to top
View user's profile Send private message
Shadow Skill
Veteran
Veteran


Joined: 04 Dec 2004
Posts: 1023

PostPosted: Tue May 10, 2005 5:24 am    Post subject: Reply with quote

What part of "technical elegance does not matter" do you not understand? The Linux method is technically more elegant than the Windows method, but it does not yet actually work. By the same token, the concept of a single place for config files to reside and be edited is more elegant than simply having one general directory for config files [/etc]; however, the implementation is so utterly pathetic that it creates the need for a reformat every few months as the registry becomes totally unmanageable. If the registry were actually done right, it would make the current Linux setup look like a joke; but because the Windows implementation is so awful, I would conclude that the current method Linux uses is much better than the one Windows employs, despite the Windows method being technically more elegant.


Just because one thing is bad doesn't mean one should do another thing badly, because oftentimes the "new" or "better" way ends up breaking things in a rather massive way. [See adoption of black children in the US, and the entire compulsory school system in the US, for perfect examples.]
_________________
Ware wa mutekinari.
Wa ga kage waza ni kanau mono nashi.
Wa ga ichigeki wa mutekinari.

"First there was nothing, so the lord gave us light. There was still nothing, but at least you could see it."
Back to top
View user's profile Send private message
slougi
Apprentice
Apprentice


Joined: 12 Nov 2002
Posts: 222
Location: Oulu, Finland

PostPosted: Tue May 10, 2005 10:29 am    Post subject: Reply with quote

Shadow Skill wrote:
What part of "technical elegance does not matter" do you not understand? The Linux method is technically more elegant than the Windows method, but it does not yet actually work. By the same token, the concept of a single place for config files to reside and be edited is more elegant than simply having one general directory for config files [/etc]; however, the implementation is so utterly pathetic that it creates the need for a reformat every few months as the registry becomes totally unmanageable. If the registry were actually done right, it would make the current Linux setup look like a joke; but because the Windows implementation is so awful, I would conclude that the current method Linux uses is much better than the one Windows employs, despite the Windows method being technically more elegant.


Just because one thing is bad doesn't mean one should do another thing badly, because oftentimes the "new" or "better" way ends up breaking things in a rather massive way. [See adoption of black children in the US, and the entire compulsory school system in the US, for perfect examples.]

What are you talking about?
Back to top
View user's profile Send private message
Shadow Skill
Veteran
Veteran


Joined: 04 Dec 2004
Posts: 1023

PostPosted: Tue May 10, 2005 7:47 pm    Post subject: Reply with quote

I was talking to Kagerato, not you sorry if that was not clear. :)
_________________
Ware wa mutekinari.
Wa ga kage waza ni kanau mono nashi.
Wa ga ichigeki wa mutekinari.

"First there was nothing, so the lord gave us light. There was still nothing, but at least you could see it."
Back to top
View user's profile Send private message
Kagerato
Tux's lil' helper
Tux's lil' helper


Joined: 01 Dec 2004
Posts: 81

PostPosted: Tue May 10, 2005 9:30 pm    Post subject: Reply with quote

Ah, but Shadow Skill: you're defending what can only be perceived here as a side topic (and the same one that you originally brought up, by the way). My focus has not been placed on the elegance of the Linux implementation, nor on theories about whether it does or does not work in any and all situations. The central point I was attempting to present, though judging by your response it hasn't come across effectively, is that in practical terms the Linux solution to dependencies works better. If you disagree with the reasons I gave for that opinion, your response would be better spent contradicting the evidence behind them.

As I implied, arguing the philosophy itself doesn't lead anywhere useful.

The configuration management systems of Windows and Linux/GNU are quite a separate topic from dependencies, also. I didn't address those subsystems, nor do I intend to.
Back to top
View user's profile Send private message
Shadow Skill
Veteran
Veteran


Joined: 04 Dec 2004
Posts: 1023

PostPosted: Tue May 10, 2005 11:27 pm    Post subject: Reply with quote

So we have a system in which the user is bound either to destroy his or her system because binaries are incompatible with source tarballs, or to propagate a methodology that guarantees some applications just won't install properly — not because the program is in any way broken, but simply because the OS can't handle the installation method properly, despite the necessary tools being installed by default. It's ridiculous to claim that the Linux methodology is superior to the Windows one [in this particular case] when it creates a highly constrained environment: the people who make the tools are free to package them any way they choose [which they should be, as long as the rest of these systems can actually handle the chosen method with minimal user interaction (no hacking ebuilds or rpmbuild scripts, for example)], at the expense of the user being able to actually use those tools without risking breaking his or her system in order to use the computer.

We can have the above, or we can have Windows, where the user is able to install virtually anything without incident, without hacking the exe or having to patch the code and rebuild the exe. [Normally, that is; I'm sure it's possible if the coder screws up really badly or there is some unforeseen bug triggered by some setting the user happens to have.] The developer is pretty much restricted in the methodology he or she can use, so the problems that Linux experiences are circumvented almost entirely.
_________________
Ware wa mutekinari.
Wa ga kage waza ni kanau mono nashi.
Wa ga ichigeki wa mutekinari.

"First there was nothing, so the lord gave us light. There was still nothing, but at least you could see it."
Back to top
View user's profile Send private message
CoffeeMonster
n00b
n00b


Joined: 11 Apr 2005
Posts: 22
Location: /opt/nwn

PostPosted: Tue May 10, 2005 11:37 pm    Post subject: Reply with quote

Hate to interrupt, (Yes, that was a lie).

I use openbox, with torsmo, pypanel and composite extensions.

How much memory do I use up:

50MB

EDIT:

@EldermysticRazorsnout and the other Pseudo Intellectuals on this forum

I think one thing we can all agree on is:

Nobody Cares.
Back to top
View user's profile Send private message
Shadow Skill
Veteran
Veteran


Joined: 04 Dec 2004
Posts: 1023

PostPosted: Wed May 11, 2005 12:44 am    Post subject: Reply with quote

Wooptydoo, what a freaking miracle. That is like me screaming that I only use 50 MB when I am running Fluxbox, which is a very BASIC WM that does not have many of the features that many people want or need on their systems. To talk about being a pseudo-intellectual: you run in here like a fool screaming about Openbox, when it is designed to be minimalist in nature and lacks many features, so your achievement of sorts is not really in any way surprising. If you pulled that off with a default install of Xfce, then I would be impressed.

There is and was some relatively intelligent discussion going on for these past couple of pages, yet you feel the need to pollute it with your foolishness, screaming about your 50 MB RAM usage as if that magically means Openbox is the greatest thing in the friggin' universe.
_________________
Ware wa mutekinari.
Wa ga kage waza ni kanau mono nashi.
Wa ga ichigeki wa mutekinari.

"First there was nothing, so the lord gave us light. There was still nothing, but at least you could see it."
Back to top
View user's profile Send private message
CoffeeMonster
n00b
n00b


Joined: 11 Apr 2005
Posts: 22
Location: /opt/nwn

PostPosted: Wed May 11, 2005 12:59 am    Post subject: Reply with quote

Indeed, it is. Oh yeah, try to spell properly.
Actually, I just wanted to bitch about you dipshit pseudo-intellectuals, so I thought I could get away with it by the Openbox bit. You say that Openbox doesn't have many features; yes, I agree. And what are you going on about (Xfce)? That's the point, dumbass.

You need these tools to get set up on Openbox:

pypanel, torsmo (optional), feh and menumaker.

Good Job.
Back to top
View user's profile Send private message
codergeek42
Bodhisattva
Bodhisattva


Joined: 05 Apr 2004
Posts: 5142
Location: Anaheim, CA (USA)

PostPosted: Wed May 11, 2005 1:34 am    Post subject: Reply with quote

CoffeeMonster wrote:
Indeed, it is. Oh yeah, try to spell properly.
Actually, I just wanted to bitch about you dipshit pseudo-intellectuals, so I thought I could get away with it by the Openbox bit. You say that Openbox doesn't have many features; yes, I agree. And what are you going on about (Xfce)? That's the point, dumbass.

You need these tools to get set up on Openbox:

pypanel, torsmo (optional), feh and menumaker.

Good Job.
Reported. Please refrain from posting such offensive or harassing posts.
_________________
~~ Peter: Programmer, Mathematician, STEM & Free Software Advocate, Enlightened Agent, Transhumanist, Fedora contributor
Who am I? :: EFF & FSF


Last edited by codergeek42 on Wed May 11, 2005 2:52 am; edited 1 time in total
Back to top
View user's profile Send private message
pilla
Bodhisattva
Bodhisattva


Joined: 07 Aug 2002
Posts: 7729
Location: Underworld

PostPosted: Wed May 11, 2005 2:15 am    Post subject: Reply with quote

CoffeeMonster wrote:
Indeed, it is. Oh yeah, try to spell properly.
Actually, I just wanted to bitch about you dipshit pseudo-intellectuals, so I thought I could get away with it by the Openbox bit. You say that Openbox doesn't have many features; yes, I agree. And what are you going on about (Xfce)? That's the point, dumbass.


Oh, you're so l33t that I feel the urge to ban you so I don't look so dumb.
_________________
"I'm just very selective about the reality I choose to accept." -- Calvin
Back to top
View user's profile Send private message
thagame
Apprentice
Apprentice


Joined: 07 Mar 2004
Posts: 210
Location: Windsor, Ontario, Canada

PostPosted: Wed May 11, 2005 3:56 am    Post subject: Reply with quote

wow. this is like sitting in a windows xp forum bitching that kde don't run. if i recall, the guy asked if gnome is using a lot of ram for you, not for a bunch of %&$^&*# to start defining bloat and what xp loads and doesn't load. this is a gentoo forum, so who cares what xp does.
Back to top
View user's profile Send private message
Shadow Skill
Veteran
Veteran


Joined: 04 Dec 2004
Posts: 1023

PostPosted: Wed May 11, 2005 4:46 am    Post subject: Reply with quote

Not really. The thread has moved on from the original post, as most long threads tend to, mainly because someone tried to distinguish between features and code bloat, so it is still on topic. If you consider that the initial post was talking about RAM usage, and that the actual title of the thread asks whether Gnome is bloated or not, you will see that it is very much on topic for the most part. Bringing up how things do and don't work in other environments that have similar applications, when you are describing how people confuse featurelessness with efficiency, is hardly invalid either. If you can't see how the discussion moved on from what the OP initially asked, you have not read the thread properly, because I can clearly see how it became the topic of discussion within about four posts on the first page.
_________________
Ware wa mutekinari.
Wa ga kage waza ni kanau mono nashi.
Wa ga ichigeki wa mutekinari.

"First there was nothing, so the lord gave us light. There was still nothing, but at least you could see it."
Back to top
View user's profile Send private message
superstoned
Guru
Guru


Joined: 17 Dec 2004
Posts: 432

PostPosted: Wed May 11, 2005 7:36 am    Post subject: Reply with quote

i did really enjoy the discussion (at least i feel like i'm learning some things) until coffeemonster told us openbox uses only 50 mb ram.

wake up, man, don't use X at all and you can run with 4 mb. openbox sux, the console uses way less!!!

damn


anyway, i would like to add a bit to the discussion about the dependency handling win has, as opposed to linux.

i guess it's very DUH that every windows user can install whatever he wants without having to rebuild the exe. windows simply does not change its core libraries. at least, almost never. and if some small changes are made, they are backwards compatible. binary. most linux libs, in contrast, change almost daily. and they change a lot. so at least binary compatibility can't be guaranteed. so you have to recompile your apps. that's the price you pay for the fast pace of development under linux.

now i would be the last to say there shouldn't be some thought on how to do this better. the linux libs could at least try to give more backwards binary compatibility, and it should be easier to have several versions of the same lib on your pc. but the first option (binary compatibility) would hamper development, while the second (having 1.01, 1.02 and 1.02differentcompile on your pc) destroys the purpose of *shared* libraries.

sorry for my english but i guess its clear what i try to say :D
and if i'm wrong, please correct me...
Back to top
View user's profile Send private message
Shadow Skill
Veteran
Veteran


Joined: 04 Dec 2004
Posts: 1023

PostPosted: Wed May 11, 2005 4:38 pm    Post subject: Reply with quote

I think we can keep the faster pace of development if we just allow the systems to handle these different things. Source code should already be the de facto, sure-fire standard way to get things going, but judging from the pain and anguish you go through when using a binary distro, this has not happened, even though the compiler tools are all but standard on every system. What I think happened is that it was made a de facto standard without anyone taking measures to ensure it actually worked on every system, every time, so now we are left with this mess.
_________________
Ware wa mutekinari.
Wa ga kage waza ni kanau mono nashi.
Wa ga ichigeki wa mutekinari.

"First there was nothing, so the lord gave us light. There was still nothing, but at least you could see it."
Back to top
View user's profile Send private message
slougi
Apprentice
Apprentice


Joined: 12 Nov 2002
Posts: 222
Location: Oulu, Finland

PostPosted: Wed May 11, 2005 8:51 pm    Post subject: Reply with quote

superstoned wrote:
most linux libs, in contrast, change almost daily. and they change a lot. so at least binary compatibility can't be guaranteed. so you have to recompile your apps. that's the price you pay for the fast pace of development under linux.

now i would be the last to say there shouldn't be some thought on how to do this better. the linux libs could at least try to give more backwards binary compatibility, and it should be easier to have several versions of the same lib on your pc. but the first option (binary compatibility) would hamper development, while the second (having 1.01, 1.02 and 1.02differentcompile on your pc) destroys the purpose of *shared* libraries.

Glibc nowadays supports a thing called symbol versioning. In C, every function or global variable is a so-called symbol, which is looked up when a shared library is loaded; the address of that symbol is then used for function calls or variable access. Using symbol versioning, you can keep several versions of the same symbol in one shared library. This would allow old programs to run on new glibc versions, for example, and any shared library can use the mechanism. Pity it's so rarely used :?
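The mechanism is driven by a GNU ld version script at link time. A sketch for a hypothetical libfoo (the library name and symbol are made up for illustration; in the C source, `.symver` directives mark which definition belongs to which version node):

```
/* foo.map -- hypothetical version script for libfoo.so,
   passed to the linker as: gcc -shared -Wl,--version-script=foo.map ... */
VERS_1.0 {
    global:
        foo_init;       /* the original entry point */
    local:
        *;              /* everything else stays hidden */
};
VERS_1.1 {
    global:
        foo_init;       /* a newer, incompatible foo_init */
} VERS_1.0;             /* VERS_1.1 inherits from VERS_1.0 */
```

Old binaries keep binding to foo_init@VERS_1.0 while newly linked ones pick up foo_init@@VERS_1.1, so both generations run against the same .so.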

[edit]
On the other hand, you don't need that for binary compatibility. Look at Xlib; it's been binary compatible since forever, and the X11 protocol has been binary compatible on the wire level since 1987.
[/edit]
Back to top
View user's profile Send private message
GerManson
Tux's lil' helper
Tux's lil' helper


Joined: 17 Mar 2005
Posts: 86
Location: Sonora, Mexico.

PostPosted: Wed May 11, 2005 9:32 pm    Post subject: Reply with quote

amm.. btw.. has anyone found the solution? :roll:
_________________
GerManson - Gentoo Linux i686 Dual Intel(R) Pentium(R) 4 CPU 2.80GHz Processor
http://www.ebrios.com.mx

Do The Evolution!! -> ∂f/∂y = ∫e^(x^2)+y^2 dx + ∫x^2+y^2 dy
Back to top
View user's profile Send private message
Reply to topic    Gentoo Forums Forum Index Gentoo Chat All times are GMT
Goto page Previous  1, 2, 3, 4, 5, 6, 7  Next
Page 6 of 7

 