[lug] RAID installation on Fedora 6 Zod
nate at natetech.com
Wed May 16 21:21:32 MDT 2007
On May 16, 2007, at 2:27 PM, Sean Reifschneider wrote:
> On Mon, May 14, 2007 at 01:49:09PM -0600, Nate Duehr wrote:
>> How about, under most "normal" disk replacements, zero commands typed
> Sure, that's hardware. Linux also has hardware that does "zero
> rebuild". So, it seems that on the software side it really isn't
> much easier than under Linux, and ditto on the hardware side.
> Sure, you
> can say that Linux software RAID sucks compared to Solaris hardware
> but that's just trolling...
I was careful to state that there are newer software implementations
(commercial) that are also zero-command rebuilds, or so I hear.
You seem very reluctant to admit that Linux software RAID, until
very recently, wasn't very mature, and required -- for at least five
to seven years longer than the commercial versions -- far more
manual intervention than commercial Unix software RAID did.
Today, it's better -- but it only "got there" recently. People
starting with Linux RAID today have probably 90% of the features of
the commercial flavors, plus a somewhat more complex setup and
removal process if the array needs to be modified with no downtime.
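For reference, the manual intervention on the Linux side today comes
down to a handful of mdadm invocations. A rough sketch of swapping a
failed member out of an md mirror (the array and device names here
are hypothetical, chosen only for illustration):

```shell
# Mark the failing member faulty, then remove it from the array.
# The array keeps running in degraded mode throughout.
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# Add the replacement disk; the kernel rebuilds the mirror in the
# background while the filesystem stays mounted.
mdadm --manage /dev/md0 --add /dev/sdc1

# Watch the resync progress.
cat /proc/mdstat
```

So it's a few typed commands rather than zero, which is exactly the
gap being argued about here.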
Five years ago, you couldn't have said anything good about Linux
software RAID other than that it was cheap. You also couldn't easily
get 24/7/365 support for it (other than from fine small companies
like yours) if you were using it in a mission-critical environment.
In fact, you probably still can't -- from anyone "big enough" for
most large corporations to sign deals with -- other than Red Hat.
We're not trolling -- we're stating our preference for hardware RAID
because we've seen how easy it makes things, long-term.
No offense to you or your organization, but most companies today
would prefer not to have to hire talent to set up RAID 5. They'd
rather buy a more expensive hardware RAID solution, that comes with a
24/7/365 800 number... plug it in, turn it on, format it and put
their data on it.
An example from my day job, just today... Sun Enterprise 440 with
four disks installed in it internally, one external StorEdge 3310
series JBOD. Two internal disks unused, available as spares.
The application is installed on a RAID 0 stripe of internal disks 1
and 2, which is then RAID 1 mirrored to two disks, also in a RAID 0
stripe, in the 3310. Internal drive 2 failed today. (In Sweden,
actually.)
A couple of Solaris "meta" commands later, internal Disk 3 was now
being used instead of Disk 2, and mirroring occurred online with no
downtime and little impact to performance.
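For comparison, those Solaris Volume Manager "meta" commands look
roughly like this (the metadevice and slice names below are
hypothetical, not the actual ones from the box described above):

```shell
# Check metadevice state; the failed submirror component shows up
# as "Needs maintenance".
metastat d10

# Swap the spare slice in for the failed one; SVM resyncs the
# mirror online, with the filesystem still mounted.
metareplace d10 c1t2d0s0 c1t3d0s0

# Confirm the resync is running.
metastat d10
```

One command does the replacement and kicks off the resync, which is
why the swap took only "a couple of commands".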
But here's the kicker... that box was a Solaris 8 box. The OS and
all the commands to do that were available in February of 2000.
Linux software RAID in February of 2000 was atrocious.
The newer commercial stuff is even better, and smarter.
We're not saying Linux software RAID is "bad", or "hasn't gotten
better" -- we're saying we trust what we've been using since the
beginning of the millennium (and which has a huge install base) more
than we trust Linux's "stuff", which still seems to be a bit of a
moving target.
And ZFS is flat-out brilliant. It's really too bad Sun's so
mismanaged these days... they still put out a very nice OS and lots
of tools from people who really understand a zero-downtime mentality.