[lug] Server Partitioning Recommendation
blug-mail at duboulder.com
Fri Jan 19 21:18:00 MST 2007
Hugh Brown wrote:
> Daniel Webb wrote:
>> On Thu, Jan 18, 2007 at 06:44:03PM -0700, dio2002 at indra.com wrote:
>> I'm curious, though, what happens when you put LVM on RAID5 and then
>> add a drive to it?
> I suspect the answer is that it depends. I know that there's a maximum
> LV size and that the size of the PE depends on a variety of factors. I
> assume that if you don't exceed any limits by adding the disk, you'd
> just have more PEs available to allocate. I'd want to be sure to test
> this before doing it in production.
As far as I understand things, the fs drivers mostly work with block
devices that are registered by device drivers. Some drivers consume
block devices (e.g. device mapper, sw raid) and register a new block
device on top of them.
A whole disk is different from a partition in that different drivers
register the block devices. I observed this while working out the
driver load order needed for an initramfs. The working sequence is:

    ide-core -> via82cxxx -> ide-disk

Kernel messages for /dev/hda and /dev/hdb showed up when via82cxxx was
loaded. Messages for hda1, hda2, ... appear in the kernel log only
after ide-disk is loaded, and only if via82cxxx was loaded first.
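A minimal initramfs /init honoring that order might look like the sketch
below. The module names and device paths follow my example hardware
above; the script layout itself is just an illustration, not what any
particular distro's initramfs generator actually emits:

```shell
#!/bin/sh
# Sketch of an initramfs /init loading IDE drivers in dependency order.
# ide-core must come first; via82cxxx makes the whole-disk devices
# /dev/hda and /dev/hdb appear; ide-disk then scans the disks and
# registers the partition devices hda1, hda2, ...
modprobe ide-core
modprobe via82cxxx   # controller driver: whole-disk block devices register
modprobe ide-disk    # disk driver: partition block devices register

# Wait for the partition holding the root fs before trying to mount it.
while [ ! -b /dev/hda1 ]; do sleep 1; done
mount -o ro /dev/hda1 /newroot
exec switch_root /newroot /sbin/init
```

Load via82cxxx before ide-disk and the partitions show up on their own;
reverse the two and you have to rescan before /dev/hda1 exists.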
So the point is that block devices can be created from other block devices.
The registration and notification mechanism depends on the driver load order.
For example in gentoo, this /etc/conf.d/rc parameter:
# RC_VOLUME_ORDER allows you to specify, or even remove the volume setup
# for various volume managers (MD, EVMS2, LVM, DM, etc). Note that they are
# stopped in reverse order.
RC_VOLUME_ORDER="raid evms lvm dm"
lets you control whether LVM volumes are built on raid volumes or the
other way around.
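To answer the original question concretely: with LVM sitting on an md
RAID5 device, adding a drive is a grow of the array followed by a grow
of the PV, which just gives the VG more free PEs. A sketch of the
sequence (the names /dev/md0, /dev/sde1, and vg0/home are made up for
illustration; only try this on real hardware with current backups):

```shell
# Add the new disk to the RAID5 array and grow it from 3 to 4 members.
mdadm /dev/md0 --add /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4
# Wait for the reshape to finish; watch /proc/mdstat.

# Tell LVM the PV underneath got bigger -- this only adds free PEs,
# existing LVs are untouched.
pvresize /dev/md0

# Spend the new PEs however you like, e.g. extend one LV and its fs.
lvextend -L +100G /dev/vg0/home
resize2fs /dev/vg0/home
```

Nothing in the LVM layer cares that the PV grew because a RAID member
was added; it sees only a larger block device, which is the stacking
point made above.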
> Web Page: http://lug.boulder.co.us
> Mailing List: http://lists.lug.boulder.co.us/mailman/listinfo/lug
> Join us on IRC: lug.boulder.co.us port=6667 channel=#colug