steveL Watchman
Joined: 13 Sep 2006 Posts: 5153 Location: The Peanut Gallery
Posted: Mon Nov 02, 2015 5:58 pm Post subject:

gwr wrote: | Computers are completely different, now. </sarcasm> |
LOL. We don't need no steenkin' modularity^W transistors.. ;-)
gwr Apprentice
Joined: 19 Nov 2014 Posts: 194
Posted: Mon Nov 02, 2015 7:09 pm Post subject:

steveL wrote: | gwr wrote: | Computers are completely different, now. </sarcasm> |
LOL. We don't need no steenkin' modularity^W transistors.. ;-) |
Computers don't use transistors any more. They are all Software As A Service in The Cloud.
tld Veteran
Joined: 09 Dec 2003 Posts: 1845
Posted: Mon Nov 02, 2015 11:22 pm Post subject:

Does anyone else think that Redhat is managing to game Google's search? For example, when you search Google News for systemd, it sure seems that way to me. There was a time when doing that brought up all the critical articles etc., and very likely would have brought up things like the slashdot thread on that busybox commit.
Lately, searching Google News for systemd brings up just about nothing but pro-systemd softballs... currently at the top, an article on opensource.com titled "Why systemd is a practical tool for sys admins", followed by a ton of softball fluff pieces on softpedia.com. I mean FFS!... there's no way that's happening naturally given the level of controversy... I just don't buy it.
gwr Apprentice
Joined: 19 Nov 2014 Posts: 194
Posted: Mon Nov 02, 2015 11:43 pm Post subject:

tld wrote: | Does anyone else think that Redhat is managing to game Google's search? For example, when you search Google News for systemd, it sure seems that way to me. There was a time when doing that brought up all the critical articles etc., and very likely would have brought up things like the slashdot thread on that busybox commit.
Lately, searching Google News for systemd brings up just about nothing but pro-systemd softballs... currently at the top, an article on opensource.com titled "Why systemd is a practical tool for sys admins", followed by a ton of softball fluff pieces on softpedia.com. I mean FFS!... there's no way that's happening naturally given the level of controversy... I just don't buy it. |
Google search has sucked since about 2010 or so, when they started preferring brand web-site results over the unknown web, even if the results are less relevant.
arnvidr l33t
Joined: 19 Aug 2004 Posts: 629 Location: Oslo, Norway
Posted: Tue Nov 03, 2015 7:35 am Post subject:

DDG's top results are mostly from official pages and wikis, with a few articles along the lines of "latest controversy", "harbinger of the apocalypse" and such. I've been very happy with it, unless I'm searching for something very obscure that Google's crawlers might be better able to find for me.
krinn Watchman
Joined: 02 May 2003 Posts: 7470
Posted: Tue Nov 03, 2015 9:07 am Post subject:

Thank you NeddySeagoon, there's just no better way to have that topic split and start with the busybox commit.
steveL Watchman
Joined: 13 Sep 2006 Posts: 5153 Location: The Peanut Gallery
Posted: Tue Nov 03, 2015 10:59 am Post subject:

gwr wrote: | Computers are completely different, now. </sarcasm> |
steveL wrote: | LOL. We don't need no steenkin' modularity^W transistors.. ;-) |
gwr wrote: | Computers don't use transistors any more. They are all Software As A Service in The Cloud. |
Oh man, I hated "SaaS" when M$ Marketing first came up with it. (I first heard of it in 1999, though no doubt it'd been in "development" as a "concept" for a while.)
SaaS == metered software.
Like, WTF? First off, if you don't control the physical setup, you simply cannot pretend to any due diligence with your data security.
Secondly, and more to the executive mindset, corporate data is the corporation for the vast majority of businesses.
You simply cannot "outsource your IT", or you are effectively a subsidiary of whomever you outsource to.
Nor can you "outsource your security", or you have none, for the core of what your business actually is when it comes to legal existence.
IOW, purely from a business pov, it's a completely stupid idea, akin to falling for an email scam and sending your company reserves to an account in Nigeria.
Tony0945 Watchman
Joined: 25 Jul 2006 Posts: 5127 Location: Illinois, USA
Posted: Tue Nov 03, 2015 3:14 pm Post subject:

steveL wrote: | You simply cannot "outsource your IT", or you are effectively a subsidiary of whomever you outsource to.
Nor can you "outsource your security", or you have none, for the core of what your business actually is, when it comes to legal existence. |
I recall a Dilbert cartoon. The company CEO was explaining his new vision to the pointy-haired boss: "I'll fire them all and outsource the entire operation; no employees whatsoever." The pointy-haired boss says, "Sounds great! When do we start?" The CEO looks at him and says, "What do you mean, we?"
depontius Advocate
Joined: 05 May 2004 Posts: 3522
Posted: Tue Nov 03, 2015 7:33 pm Post subject:

Thinking back to the "Unix philosophy": something happened to a co-worker's machine recently, and I helped him out today.
Basically, I had a good idea what his problem was; it was something that I'd fixed another time, several years ago. Simply put, I'd forgotten what to do. I just knew that I had done it before, and had half an idea of where to start. But it was really no problem. I had a window open with a root prompt, and another window where I was perusing man pages as needed. He was back up and running in a few minutes.
This is a stupidly simple scenario, and that's the really good thing about it. It was discoverable, hackable, and all that stuff.
To be fair, I only ever ran systemd a few times, about three years ago. But I have on occasion fought with the XML mess that seems to pervade so many freedesktop.org tools, and it's anything but discoverable, hackable, and all that stuff. Everything I've read puts systemd squarely into that camp. With the freedesktop XML stuff, the answer so often seems to be a magic string whose value looks obvious once you've seen it, but there's never any idea of how to get there when you know nearly nothing. Unlike what I just did with my co-worker's workstation.
_________________ .sigs waste space and bandwidth
khayyam Watchman
Joined: 07 Jun 2012 Posts: 6227 Location: Room 101
Posted: Tue Nov 03, 2015 9:24 pm Post subject:

gwr wrote: | Neil Brown wrote: | One of the big weaknesses of the "do one job and do it well" approach is that those individual tools didn't really combine very well | |
gwr ... hehe, way to construct a fallacious argument. There is absolutely no relation between "combin[ing] well" (whatever that might mean ... someone care to provide an example of tools "combining well"?) and the "clear big picture" touted. With such a mismatch of ideas you can pretty much come to any conclusion you like ...
best ... khay
Tony0945 Watchman
Joined: 25 Jul 2006 Posts: 5127 Location: Illinois, USA
Posted: Fri Nov 06, 2015 6:31 pm Post subject:

khayyam wrote: | gwr wrote: | Neil Brown wrote: | One of the big weaknesses of the "do one job and do it well" approach is that those individual tools didn't really combine very well | |
gwr ... hehe, way to construct a fallacious argument. There is absolutely no relation between "combin[ing] well" (whatever that might mean ... someone care to provide an example of tools "combining well"?) and the "clear big picture" touted. With such a mismatch of ideas you can pretty much come to any conclusion you like ...
best ... khay |
I dunno. It seems to me that I pipe one tool into another a lot.
khayyam Watchman
Joined: 07 Jun 2012 Posts: 6227 Location: Room 101
Posted: Fri Nov 06, 2015 7:24 pm Post subject:

Tony0945 wrote: | khayyam wrote: | gwr wrote: | Neil Brown wrote: | One of the big weaknesses of the "do one job and do it well" approach is that those individual tools didn't really combine very well | |
gwr ... hehe, way to construct a fallacious argument. There is absolutely no relation between "combin[ing] well" (whatever that might mean ... someone care to provide an example of tools "combining well"?) and the "clear big picture" touted. With such a mismatch of ideas you can pretty much come to any conclusion you like ... |
I dunno. It seems to me that I pipe one tool into another a lot. |
Tony ... you're speaking of "combining well"? That would be the use of stdout and stdin; it doesn't really explain what this "combine very well" means in the above argument, or how the "unix philosophy" misses the "big picture".
He might as well have said that "this stuffed polar bear and the Empire State Building don't really combine well in this martini" ;)
best ... khay
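[Editor's note: the "combining" via stdout and stdin that Tony and khayyam are talking about is easy to make concrete. A minimal sketch in any POSIX shell; the word list is made up for the demo:

```shell
# Each tool does one job; the pipe wires the stdout of one to the stdin of
# the next. Find the most frequent word in a stream -- a classic one-liner.
printf 'foo\nbar\nfoo\nbaz\nfoo\nbar\n' |  # emit one word per line
    sort |                                 # group identical lines together
    uniq -c |                              # collapse groups, prefix each with its count
    sort -rn |                             # order by count, highest first
    head -n 1                              # keep only the most frequent: "3 foo"
```

None of the five tools knows anything about the others; the composition is the whole point.]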
steveL Watchman
Joined: 13 Sep 2006 Posts: 5153 Location: The Peanut Gallery
Posted: Fri Nov 06, 2015 7:58 pm Post subject:

I think Tony's point is the same as mine: the argument "that those individual tools didn't really combine very well" is completely false, prima facie, for anyone who's ever used a *nix.
There's no need to analyse it on a deeper level: it's obviously horse-shit. ;)
If we want to explain why in a more basic fashion: it's because the whole of UNIX is about combining tools, with the output of one as input to the next.
The reason it doesn't appeal to "rockstar" developers is that they want to prove how important they are, by rewriting everything and showing how much code they've produced ("I made this!").
Whereas the older ones know that it's all about how little code, and time, you outlay. Because it's not about you: it's about the CPU, and if you can do it with the stdlib, or a standard library, rather than rewriting it all, then it will perform well.
On the one hand: think icache. More importantly: simple, clean code is much easier to optimise, in the very rare cases where you actually need to. (It opens the door to algorithmic optimisations, which are the most effective, and which open up all the other ones.)
It's also much easier for the compiler to optimise the rest of the time, so it tends to run faster overall, and tends to stay fast when moved to another platform.
It still takes at least a decade to grow out of ego-attachment and the feeling that you need to produce more, which is so prevalent in every other part of society, when in fact you need to produce less, while still fulfilling the brief. (Necessary complexity.)
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54577 Location: 56N 3W
Posted: Fri Nov 06, 2015 8:25 pm Post subject:

Tony Hoare wrote: | There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult. |
While the first quote is interesting, systemd/kdbus fails both options in the latter quote. There are obvious deficiencies to some reviewers, or it would be in the kernel already.
_________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
steveL Watchman
Joined: 13 Sep 2006 Posts: 5153 Location: The Peanut Gallery
Posted: Sat Nov 07, 2015 8:26 am Post subject:

The first quote is more interesting when you supply it in full, and switch it round:
Knuth wrote: | We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. |
Service startup is not in the critical 3%, whereas most of the stdlib is: which is why it's there.
The thing I find remarkable is that no-one bothers with input times to programming languages, where parsing has always taken up about 40% of the time.
Instead everyone gets their knickers in a twist about service startup, which takes a great deal less than a tenth of a percent of the useful time.
And ofc everyone still complains about compilation time, and "script startup" time. Extraordinary.
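[Editor's note: the "script startup" cost being argued about here is easy to eyeball. A rough, entirely unscientific sketch using nothing but the shell's `time` keyword (assumes bash, where `time` can prefix a loop; the iteration count is arbitrary):

```shell
# Spawning a shell 50 times pays fork/exec/parse overhead on every iteration.
time for _ in $(seq 50); do
    sh -c ':'      # start a shell, parse a no-op, exit
done

# The same 50 no-ops in the *current* shell are effectively free by comparison.
time for _ in $(seq 50); do :; done
```

The absolute numbers are machine-dependent; only the relative gap between the two timings matters.]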
NeddySeagoon Administrator
Joined: 05 Jul 2003 Posts: 54577 Location: 56N 3W
Posted: Sat Nov 07, 2015 9:23 am Post subject:

steveL,
steveL wrote: | And ofc everyone still complains about compilation time, and "script startup" time. Extraordinary |
Aww :( I did supply a link.
Those two times are visible to users as waiting time, and it's human nature to try to avoid waiting.
Think supermarket queues and traffic jams. The other queues all seem to move faster than yours.
I agree that it's not rational.
_________________ Regards,
NeddySeagoon
Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
steveL Watchman
Joined: 13 Sep 2006 Posts: 5153 Location: The Peanut Gallery
Posted: Sat Nov 07, 2015 11:28 am Post subject:

It's not an issue to (want to) speed these things up; the irrational part is listening to hoodoo about speeding up the wrong things, from people who could not code a shell script efficiently if their lives depended on it, and who clearly are still wet behind the ears, as well as in love with web-bloat rather than crafted software.
Gossip is good and all, but I don't want to be treated by a surgeon who listens to gossip about how "hygiene is unnecessary", and follows fashion rather than science.
Tony0945 Watchman
Joined: 25 Jul 2006 Posts: 5127 Location: Illinois, USA
Posted: Sat Nov 07, 2015 3:21 pm Post subject:

steveL wrote: | It's not an issue to (want to) speed these things up; the irrational part is listening to hoodoo about speeding up the wrong things, from people who could not code a shell script efficiently if their lives depended on it, and who clearly are still wet behind the ears, as well as in love with web-bloat rather than crafted software. |
When I was a consultant, I worked on a semi-large project. It took about six months. I wrote it in C. When it was all working fine, I carefully benchmarked it and timed it with an oscilloscope. I found the critical path, which was in one subroutine. In fact, it was in one loop in that subroutine. I rewrote that loop in assembler, and did a good job of it, even if I say so myself. A couple of years later, I returned to that company and talked with the regular employee who had taken over maintenance. He proudly told me that he had rewritten the entire project in assembler "for speed".
orlfman n00b
Joined: 31 Jul 2006 Posts: 68
Posted: Sun Nov 08, 2015 5:12 am Post subject:

When systemd first came out I was extremely interested in it. The idea of it was great, and something Linux needed: an updated init system that could easily replace sysvinit and finally bring more standardization across distros regarding the init system. Linux was heavily fragmented in this regard. They all used sysvinit underneath, but each had their own flavor... enough flavors that made you sit there wondering why. Did you really have to go that far just to be different?
When systemd came out it was great... it wasn't "bloat", it wasn't slow. But now... it's grown... tremendously. It's incorporating so much stuff that it's no longer an init system, but the system itself. People have been saying "what's next? The kernel being taken over by systemd?" when really it's a question of "when?"
They're making a lot of design choices that mimic Windows... choices that even Windows administrators absolutely hate. Linux being modular is what makes it so great.
The problem wasn't the different flavors and modularity. The issue was going too far, causing too much fragmentation. Now with systemd it's becoming too monolithic.
_________________ Orlfman
digi_owl n00b
Joined: 04 Oct 2015 Posts: 9
Posted: Sun Nov 08, 2015 7:21 am Post subject:

Best I can tell, the reason outside devs disliked distro "fragmentation" was the issue of dependencies.
This is because the major distros could not make up their minds about package handling.
Should an upstream source archive that produces multiple binaries be treated as one package or multiple?
Never mind that all but a few have problems with library versions.
For most, if you want to install libx-1.1 and libx-1.2 side by side, you have to pull crap like naming the packages libx-1.1 and libx-1-1.2.
This is not a problem at the OS level, because it can use things like SONAME to keep the lib versions separate. Nope, it is squarely a package manager and package format issue.
And the "fix"? The fad of the day: containers. Stuff everything every binary needs into its own container and call it a day.
You can really tell that most of the systemd people are coming out of desktop, web and devops backgrounds, slowly working their way down to the kernel with (free)desktop-provided blinders firmly in place.
steveL Watchman
Joined: 13 Sep 2006 Posts: 5153 Location: The Peanut Gallery
Posted: Sun Nov 08, 2015 12:18 pm Post subject:

steveL wrote: | It's not an issue to (want to) speed these things up; the irrational part is listening to hoodoo about speeding up the wrong things.. |
Tony0945 wrote: | When I was a consultant, I worked on a semi-large project. It took about six months. I wrote it in C. When it was all working fine, I carefully benchmarked it and timed it with an oscilloscope. I found the critical path, which was in one subroutine. In fact, it was in one loop in that subroutine. I rewrote that loop in assembler, and did a good job of it. |
Lovely :-)
I wanted to mention profiling as the essential pre-requisite before you begin "optimalising", but figured I was ranting as it was.. ;)
Quote: | A couple of years later, I returned to that company and talked with the regular employee who had taken over maintenance. He proudly told me that he had rewritten the entire project in assembler "for speed". |
Lol; precisely what Knuth warns against: Knuth wrote: | Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. | which I think is the quote that people should really use, along with:
Kernighan & Plauger, 1978 wrote: | It is more important to make the purpose of the code unmistakable than to display virtuosity. The problem with obscure code is that debugging and modification become much more difficult, and these are already the hardest aspects of computer programming. | or to put it more directly:
Kernighan wrote: | Everyone knows that debugging is twice as hard as writing a program in the first place.
So if you are as clever as you can be when you write it, how will you ever debug it? | all of which is summed up in the principle:
Code: | Write clearly -- don't be too clever. |
The best overview I've read on optimisation is a combination of "The Practice of Programming" (Kernighan & Pike, 1999), which is essential, and Bentley's "Programming Pearls" (1st vol.) to see how algorithmic optimisations are applied, first and foremost, before we get specific.
The overriding principle to bear in mind is: YAGNI.
It's very unlikely your code is the bottleneck: even where it could be done "better", it's very unlikely your code runs often enough for it to be an issue. When it does, profile the whole task/process/job first, or you'll end up "optimising the idle loop" and render the project unmaintainable.
And be very wary of rewriting the stdlib, or someone else's implementation of whatever it is that you could use; this is what will get your code a label of "reinventing the wheel", rather than "a smart algorithm".
Remember: simple, clean code is simple to maintain, debug, and understand; far simpler for the compiler to optimise; and much more likely to stay fast on different platforms, and in situations you never envisaged.
That is why the code which survives from the last 20-30 years is so "boring": because it's clean, and simple, at least in form.[1]
And that doesn't appeal to intellectuals looking for a puzzle, or to hoodwink the "masses"; only to coders looking for a result, who know in their bones that, per Dijkstra: Code: | the computing scientist's main challenge is not to get confused by the complexities of one's own making. |
[1] It's also robust: that means errors are expected, and handled where we know what they mean.
"Fail early, fail hard" is much more useful in the overall scheme of things, ime of bash-scripting, than "let's try to be smart."
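[Editor's note: the "fail early, fail hard" idiom steveL describes has a well-known bash preamble. A minimal sketch; the config-check logic and file names are invented for the demo:

```shell
#!/bin/bash
set -eu -o pipefail   # -e: die on unchecked non-zero status; -u: unset
                      # variables are fatal, not silent empty strings;
                      # pipefail: a pipeline fails if ANY stage fails

die() { printf 'error: %s\n' "$*" >&2; exit 1; }

check_config() {
    # Validate loudly up front, instead of limping on with bad state.
    [ -r "$1" ]            || die "cannot read $1"
    grep -q '^key=' "$1"   || die "no key= setting in $1"
    echo "ok: $1"
}

printf 'key=value\n' > demo.cfg   # a throwaway config for the demo
check_config demo.cfg             # prints: ok: demo.cfg
( check_config missing.cfg ) || echo "caught: bad config rejected"
```

The subshell on the last line is only there so the demo script itself survives the deliberate failure; in real use the script would simply die at the first bad check, which is the point.]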
Tony0945 Watchman
Joined: 25 Jul 2006 Posts: 5127 Location: Illinois, USA
Posted: Sun Nov 08, 2015 6:35 pm Post subject:

steveL wrote: | Remember: simple clean code, is simple to maintain, debug, and understand, and far simpler for the compiler to optimise, and much more likely to stay fast on different platforms, and in situations you never envisaged. |
Another job, at another client, was a demo board for an offset printer. I knew, and still know, nothing about offset printers, so I had to program strictly to their requirements. I think that project was all assembly language; for an 8051, I think. The deadline was for a trade show. I finished the project and went on vacation. They had a spec change that required a program change. They couldn't get hold of me, and changed it on their own. The program manager later complimented me on writing it in a clear manner, with function comments that were clear and appropriate, function names that made sense, and overall clarity. They figured out what to do in an hour. We had had no walkthroughs or design reviews, because everyone was busy. I was rather proud of that praise. It takes skill to make it simple, just like it takes writing skill for an author to write a novel that flows well. L.P. would probably be proud of obscurity rather than clarity.
Although, as my boss at the consulting firm once said, "If they knew how to write software, they wouldn't need us."
EDIT: "complimented" not "completed"
Last edited by Tony0945 on Tue Nov 10, 2015 2:34 am; edited 1 time in total
Anon-E-moose Watchman
Joined: 23 May 2008 Posts: 6147 Location: Dallas area
Posted: Mon Nov 09, 2015 10:43 am Post subject:

I was on one of the Linux news sites this morning and ran across this link. It deals with a corporation that we mention regularly, so IMO it ties into the discussion about sysd/(k)dbus:
http://www.fosspatents.com/2015/11/hypocritical-red-hat-hopes-to-leverage.html
Quote: | While I don't mean to endorse everything Dr. Roy Schestowitz has written about Microsoft on his TechRights blog (and certainly not everything he's ever written about me), I agree with him that media reports on the Microsoft-Red Hat deal could have dug deeper, especially into the patent aspects of that deal. I furthermore agree that Red Hat is apparently happy about making it easier for Microsoft to impose a patent tax on Linux and that Red Hat has simply sold out FOSS values. According to TechRights, Red Hat executives tried to dissuade Dr. Schestowitz from his vocal criticism of the deal, but failed.
I've been saying for years that Red Hat is utterly hypocritical when it comes to patents. It has a history of feeding patent trolls and fooling the open source community. There is, to put it mildly, no assurance that all of its related dealings actually comply with the GPL. |
It is an interesting read.
_________________ UM780, 6.1 zen kernel, gcc 13, profile 17.0 (custom bare multilib), openrc, wayland