[lug] software engineering

Michael J. Hammel mjhammel at graphics-muse.org
Wed Nov 15 09:31:30 MST 2006

On Wed, 2006-11-15 at 01:13 -0700, Nate Duehr wrote:
> Ahh, yeah -- you've hit something here, that leads to my 
> curiosity-based question about underlying motivation.  No one's paid 
> (or paid too little) to work on certain parts of Linux?

OSDL.  Red Hat.  Even Novell (where the GNOME guys are, those bunch of
sellouts).  Not to mention IBM and HP.  There are plenty of people paid
to work on Linux and the core technologies related to the well-known
distributions.

> But... if the installer's 
> bad, there's a darn good possibility there are other serious bugs 
> lurking, right?  

Actually, the installer has next to nothing to do with most of the rest
of the runtime distribution.  The installer is a value-add product of
the distribution maker.  Linux, GNOME, KDE, the GNU utilities - all of
these are created completely outside the control of the distribution
maker.  The distribution maker simply packages them, and often that
process has been automated outside of their control too.

> Our installers are "our" first and best chance to put a 
> good foot forward, and if the darn stuff won't even install -- that 
> bodes very badly for Linux in general.  

That's true.  It's a problem with perception for the distribution
makers.  And it's a problem that the Linux community will have to
resolve over the long term to have a greater impact on the desktop.  But
in the embedded world, installers are irrelevant.  

> I truly haven't found an 
> installer I really think works well on all hardware, etc... 

You'll find many who would argue this is true of Windows too.  Macs are
probably (I say with a grain of salt, since I haven't done a Mac install
in years) less of a problem to install because the software is tied a
bit more closely to the hardware.

> You think there are that many?  I think that's the mystery I'm trying to 
> solve there... just how much testing REALLY goes into things? 

Keep in mind that not all the testing you think happens in buildings
actually happens.  Do you think they test every bolt, every beam?  Nope.
They test based on statistical probabilities of a group.  Consider that
every piece of code has *some* testing done before it's shipped (unless
the guy who wrote it never ran it at all).  The same is not true of
every bolt.

So how much testing is enough?  Who defines this?  How do we know
software has been "tested enough"?  Is quantity of testing sufficient to
satisfy you?  Or would the quality of the testing suffice?  Who measures
what quality testing is?  I ask all this because I spent about 10 years
as a software tester and spent a lot of time trying to define bounds and
meaning to test environments; it's not as easy as you might think.  As a
developer, I know there are huge areas of software that go under-tested.
I also know that many of those areas really don't need the effort
expended.  90% (my estimate) of your test effort goes into the code paths
that are most likely to be run every day.  The exception areas - places
you don't expect or want to reach except in extreme cases - are where
the quality of your testing matters most.  The reason is simple:  those
less-traveled paths exist to handle the worst possible cases.

But in the end, those important areas have much less test time applied
than the rest of the code.  How much is enough testing?

>  I know 
> I'll never get that answer from a commercial shop -- or only a 
> half-assed one.  

Define "half-assed."

> But I'm surprised that some Linux project somewhere 
> hasn't attempted to document testing done to Linux's various components. 

This is what OSDL is attempting to do, at least in some respects.

>   Would be interesting.  Lots of "consortiums" out there to come up with 
> standards like LFS, etc... between organizations that pay people to work 
> on Linux, but no standard testing benchmarks for the core "stuff". 

Benchmarks are meaningless in most cases because they can be tweaked to
show "best performance" under your ideal conditions.  However, test
harnesses that shake the bejeezers out of software can be very eye
opening.  A very long time ago I had a test lab of my own (just me, no
other helpers) at Dell where I ran, among other things, NFS tests with
PCNFS and Dell Unix using bonnie.  At the time, bonnie did a very nice
job kicking the crap out of both sides.  Another time, I ran DOS to spew
data as fast as I could out a serial port on one PC into a Dell Unix
box's serial port on the other to find out why the serial driver was
crapping out at one of NASA's simulators.  Turns out I could spew faster
using a DOS box than a Unix box because DOS got out of the way when I
ran my test, while Unix kept swapping the test tool.  So who needs
benchmarks?  You just need to kick the tires the right way.
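
That kind of tire-kicking is easy to sketch.  A bonnie-style sequential
write-then-read timing loop fits in a page of Python (a minimal
illustration of the idea, not bonnie itself; the file and block sizes
are arbitrary defaults):

```python
import os
import tempfile
import time

def throughput_mb_s(size_mb=64, block_kb=64):
    """Time a sequential write then read of a scratch file, bonnie-style.

    Returns (write_mb_per_s, read_mb_per_s).
    """
    block = b"\0" * (block_kb * 1024)
    blocks = size_mb * 1024 // block_kb
    fd, path = tempfile.mkstemp()
    try:
        # Sequential write, fsync'd so the clock includes hitting the disk.
        t0 = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
        write_s = time.perf_counter() - t0

        # Sequential read back, same block size.
        t0 = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(block_kb * 1024):
                pass
        read_s = time.perf_counter() - t0
    finally:
        os.unlink(path)
    return size_mb / write_s, size_mb / read_s
```

On a loaded box the numbers swing wildly, which is exactly the point:
you learn more from where it falls over than from the peak figure.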

That said, there are plenty of benchmarks for core systems.  The
graphical environment (specifically video device drivers) has all kinds
of benchmarks to show conformance and performance.  It's questionable
whether these also prove stability.

> Layer 7, yes... benchmarks for databases, network I/O & throughput, 
> etc... kinda a "proof is in the results" approach, but no good way to 
> tell if the last 100 patches broke something major in an installer?  Do 
> you kinda see where I'm headed here?

Thorough development environments use regression testing techniques to
make sure that recent changes to the source repository do not break what
was not broken before.  Sanity tests are used to make sure the source is
stable enough at any point to even run the regression test suite.  GCC
has both built into it.  The kernel has some, but I think OSDL just
posted a position announcement for someone to do more work in this area
(also related to documentation issues, apparently).  I'm sure there are
other open source projects with large numbers of developers that have
similar environments.  But then, does an open source project with 3
developers need automated regression testing, or can the three guys just
agree not to do stupid things, like forget to test their changes
before checking them in?  Depends on the group, and the project.
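
The sanity-gate-then-regress loop described above can be sketched in a
few lines.  This is a toy, not GCC's or OSDL's harness; `echo` stands in
for the program under test so the sketch is self-contained:

```python
import subprocess

# Hypothetical table of regression cases: (command, expected stdout).
# A real suite would drive the project's own binaries and compare
# against golden output checked into the repository.
CASES = [
    (["echo", "hello"], "hello\n"),
    (["echo", "regression"], "regression\n"),
]

def sanity():
    """Sanity gate: if the tool can't even start, skip the full suite."""
    return subprocess.run(["echo", "ok"], capture_output=True).returncode == 0

def regress():
    """Run every case against its golden output; return the failure count."""
    failures = 0
    for args, expected in CASES:
        out = subprocess.run(args, capture_output=True, text=True).stdout
        if out != expected:
            failures += 1
            print(f"FAIL {args!r}: got {out!r}, wanted {expected!r}")
    return failures
```

A nightly job that runs the gate, then the suite, against the latest
checkout and mails the FAIL lines to the list is about all a
three-developer project needs.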

> Heh.  Nice.  But only one guy, and no army of testers wanting to take 
> his spot?  

Typically there is one tester for 5-10 developers for a software
project.  How many test pilots were there for every engineer working on
the X-15?  An army's worth?  How many crash test drivers are cramming
their way into the test labs of Ford?  I bet more people want to be
software testers than crash test dummies, but you don't seem concerned
about the quality of your car.

> There's still a motivation problem underlying all of this 
> here, somewhere.  Mostly probably that software testing sucks and 
> doesn't include much to inspire.  

Testing is very unglamorous, but it can pay quite nicely.  It doesn't suck,
because you can, depending on your management, write test harnesses that
are only used internally, which means your only customers are people
you know (well, not always, but often).  That sure can reduce the pressure on
you.  Also, your test harness is a known cost center - people expect it
to cost money and (directly) generate none.  Again, less pressure.  But
the pressure builds back up because the meat of the testing (not the
planning for testing) occurs at the end of a development cycle and the
ship date tends not to move out as fast as the date for delivery to
test.  That means your test period is almost always squeezed down with
every delivery cycle.  You plan for 4 weeks of solid end user testing.
You're left (after development delays) with 3 days, and oh, by the way,
we took away half of your hardware so the developers could use it.
Annoying as all hell.  If only the developers had done unit testing
along the way....
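
For contrast, the unit testing being wished for here is cheap.  A sketch
using Python's stdlib unittest, with a made-up parse_version as the
function under test:

```python
import unittest

def parse_version(s):
    """Toy function under test: '2.6.18' -> (2, 6, 18)."""
    return tuple(int(part) for part in s.split("."))

class ParseVersionTest(unittest.TestCase):
    def test_release_string(self):
        self.assertEqual(parse_version("2.6.18"), (2, 6, 18))

    def test_rejects_garbage(self):
        # A non-numeric component should blow up loudly, not silently pass.
        with self.assertRaises(ValueError):
            parse_version("2.6.x")
```

Run it with `python -m unittest`; checked in next to the code, it rides
along in the regression suite for free instead of squeezing test at the
end of the cycle.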

But I enjoyed it.  Just about equal to being a build master, though you
don't get the accolades the developers get.  But hey, who needs the ego
trip?  Even so, I'm having more fun as a developer now.

> Flight testing on the other hand, is 
> monotonous and boring too, but you get to see a sunrise over the Mojave 
> once in a while.  :-)

At Dell, I got to watch the sunset over the foothills.  Dell had a big
building at the Arboretum in Austin and I was on the 6th floor with one
full wall of windows facing south.  Kinda cool.  Of course, then they
moved all the engineers to converted factory space, took away our
windows and gave the nice offices to marketing droids.  Dell went
downhill really fast after that (for engineers, that is).

> Okay, but is experimentation and goofing off really disciplined formal 
> testing?  I get it... I do, but I'm asking where the next level is. 
> Doesn't Linux *have* to go there, someday?  Or is it a lost cause, 
> meaning there's not much hope for stupid installer bugs, ever?  

You need to stop associating installer bugs with Linux.  Linux is a
kernel.  Installer bugs are the distributor's problem.  Both need
testing.  Different parties are responsible for each, and they are only
marginally related from an engineering perspective.

> Does the 
> current situation define the next steps in growing up for Linux, or is 
> it the end game?

Depends on what your goals are.  Personally, I don't care if we ever
achieve desktop domination.  I'd prefer a Mac for my wife and kid.  I,
personally, have been Windows free since about 1990 (starting with Dell
Unix, pre-Linux) on my desktop.  As for embedded systems, Linux (the
kernel) and GNU (the utilities) are already grown up.

Michael J. Hammel                                    Senior Software Engineer
mjhammel at graphics-muse.org                           http://graphics-muse.org
The Dixie Chicks for President!
     -- Anyone but Bush in 2004 --
