[lug] Re: recycling code [WAS Fwd: NICHOLAS PETRELEY: "The Open Source" ]
mec at dotorg.org
Thu Mar 22 12:25:46 MST 2001
On Thursday 22 March 2001 11:49, you wrote:
> Matt Clauson wrote:
> > You miss the point I'm trying to make here. I contend that by re-using
> > someone else's still-functional code instead of rewriting all of it from
> > scratch (even after looking at someone else's to understand it -- I'll
> > cover that below) you can save MASSIVE amounts of time and effort, and
> > devote time
> That depends on how the code was designed. Not just quality, but
> consider some programs are designed for OOP, others procedural. Some
> code depends on libraries that are desirable to avoid. Sometimes the
> goal isn't just the functionality, but a more modular means of doing
> something for later extension. Sometimes reinventing the wheel is just
> for one's own fun when not doing things for money. Sometimes it is
> necessary to rewrite something just to understand it better. One of my
> favorite versions of Murphy's laws is: "Interchangeable parts DON'T".
> From a programmer's point of view, there are usually goals that the end
> user doesn't see, that go far beyond the end appearance or functionality
> that the user will see.
All of these are valid points. However, making something more modular, or
converting it from OOP to procedural, does not necessarily mean that you have
to rewrite the damn thing from scratch. Yes, some pieces will have to be
rewritten. That's an acceptable casualty. But it doesn't mean having to
rewrite from the ground up, now does it? Splitting a class up into several
procedures, as well as the data... Ah, hell, I forget the term now... (kids:
don't use Perl to the exclusion of C -- it rots your brain) ah, yes,
structures. Anyway, splitting a class into its components doesn't
necessarily mean that you have to rewrite supermassive amounts of code from
the ground up. Some rewriting is needed, yes... But do we have to rewrite
Pine from scratch to make PIMP?
Rewriting code to avoid bad libraries (whether the problem is the code, the
licensing, or something else) is also sometimes unavoidable, and falls into
the category of 'acceptable casualty'. A good example of this would be the
entire KDE vs. Gnome debacle. The Qt library had some issues, and still
does. The competition is healthy, and Gnome is
making progress. Nautilus looks pretty damn sweet overall, and I'd probably
choose it over kfm fairly rapidly. But my interests and needs have changed.
Nautilus is (as I understand it -- I may need to recheck my facts) pretty
much all-GPL software. I could take a huge chunk of it and port it into kfm,
should I want to. I could also submit KDE functionality patches to Eazel for
inclusion into Nautilus. Both are acceptable. Both are easy. Neither
requires rewriting something from scratch, let alone rewriting the entire
damn thing. And projects done for the pure hack value, while cool and
sometimes VERY worthwhile, are pulling programmers and eyeballs away from
projects that have much more real usage value AS OF RIGHT NOW.
Anyway, back to my original argument, which was KDE/Qt becoming the "Gold
Standard" over Gnome/GTK+...
Where KDE has features and integration, and is rapidly approaching
"ready for prime time" status, I see Gnome foundering. Why? App bloat.
Everyone and his brother is putting out apps and designs... But they aren't
getting finished. Scratching an itch, writing a Gnome mailer in Perl because
of the hack value -- this is all well and good... But the 'user friendliness'
of Gnome has suffered. I find apps that are somewhat unstable (I averaged a
crash with Balsa about 3 times a day -- not catastrophic, no data loss, just
forcing me to reload the app), that lack massive chunks of functionality
(the multiple-personality feature still isn't in the widely released code,
and the KDE equivalent works better), and that just seem somewhat 'unpolished'.
The Gnome UI still feels 'kludgy', and has some rendering glitches. It's
still a major "developer's platform". This isn't a bad thing, because the
people who use it will fix the bugs that annoy them. But Gnome doesn't feel
like it's progressing as fast as KDE did at the same stage. There are lots
of apps still in the 0.x stage... And Gnome is trying to push v2.0 out the
door. The backend framework is there, but the USABLE apps that
[(boss)|(parents)|(kids)|(joe sixpack)] can use JUST AREN'T there.
Admittedly, with KDE, the entry barrier that you encounter with Unix
(multi-user over single-user, different concepts, etc.) is still there...
Even so, I find KDE much easier to use and friendlier, especially when
looking at it from a novice's perspective.
You bring up good points about the need to rewrite code, or even the desire
to. However, this is costing Gnome a lot in user share, and the app bloat
that Gnome is seeing may eventually cost the project dearly. Multiple apps
are a good thing given the right circumstances... Scratching an itch or
doing something for the 'hack value' is a good thing, given the right
circumstances... But doing it to excess may destroy the goals you REALLY want
to achieve, just for the sake of 'being cool'.