[lug] cd-rw for backups - verify? fast compress?

Neal McBurnett nealmcb at avaya.com
Thu Aug 3 11:56:08 MDT 2000

I finally got a CD-RW drive (hp cdwriter plus 9210 e) attached to an
adaptec slimscsi 1480 (ultra, 50 pin high density) pccard for my NEC
Versa 6260 laptop (266 MHz Pentium), Red Hat 6.1.  Seems to be working
just fine for the most part.

I have two basic issues.  First, how to verify what I've written,
and second, how to write a compressed backup in one pass.

One important use for it is backups.  I've had no trouble making and
writing uncompressed backups in real time (mkisofs piped into
cdrecord) as discussed at

I don't have space to hold an image, so creating the archive
on-the-fly is important.
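For concreteness, the on-the-fly pipe looks roughly like this (the
source path and the SCSI device address are placeholders; `cdrecord
-scanbus` reports the real address):

```shell
# Build the ISO image on stdout and burn it directly, with no
# intermediate image file on disk.  dev=0,6,0 is a placeholder --
# substitute whatever cdrecord -scanbus reports for your drive.
mkisofs -r -J /home/backup |
    cdrecord -v dev=0,6,0 speed=2 -data -
```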

What I would like to do, before wiping out what I've archived,
is to verify all the bits.  I hoped I could just read
the raw CD device /dev/scd0 and use `cmp` to check it, but unfortunately
mkisofs doesn't produce the same output each time (different
time stamps??).  Is there any way to get reproducible results,
or to know what to safely ignore when comparing?
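One way to sidestep the non-reproducible image entirely: checksum the
image once, as it streams to the burner, then checksum what the drive
reads back.  A sketch, with the source path and device as placeholders:

```shell
# Number of 2048-byte blocks mkisofs will produce, so the read-back
# stops at the end of the image rather than the end of the disc.
# (Use the same mkisofs options here as in the real run.)
blocks=`mkisofs -q -print-size -r /home/backup`

# Checksum the image in flight via a fifo while burning it.
mkfifo /tmp/iso.fifo
md5sum < /tmp/iso.fifo > /tmp/written.md5 &
mkisofs -r /home/backup | tee /tmp/iso.fifo | cdrecord -v speed=2 -data -
wait

# Read back exactly the image-sized prefix of the disc and compare.
dd if=/dev/scd0 bs=2048 count=$blocks | md5sum > /tmp/readback.md5
cmp /tmp/written.md5 /tmp/readback.md5 && echo verified
```

Since the same bits that went to the drive are what get checksummed,
the timestamp problem never comes up.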

My system is also able to make a regular tar file on-the-fly and pipe
that into cdrecord.

But when I try to pipe tar using compression (so a 2 GB filesystem
would be likely to fit on one CD) I run out of horsepower and can't
keep up with the speed I'm writing at:

 GZIP="--fast" tar -c --one-file-system --atime-preserve \
	--exclude /proc --use-compress-program gzip / |
    cdrecord -v -data blank=fast speed=1 -

Actually cdrecord says it is running at "speed 2" even though I ask for
speed=1.  Using "speed=0" like the howto suggests also doesn't help.
I bet the system could keep up if I could get it to write at single
speed.  Or if I had a faster compression command.  But compress,
zip, etc.  seem slower.  How does bzip2 compare in speed to
gzip --fast?
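For what it's worth, bzip2 compresses tighter but runs substantially
slower than gzip --fast, so it would make the keep-up problem worse,
not better; gzip --fast (i.e. gzip -1) is about the fastest of the
common compressors.  A quick way to measure on your own data (the
sample file is a placeholder):

```shell
# Time each compressor on the same representative input;
# /tmp/sample.tar stands in for a chunk of real backup data.
time gzip --fast < /tmp/sample.tar > /dev/null
time bzip2       < /tmp/sample.tar > /dev/null
```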

Any other tips or programs for doing full and/or incremental backups with
cd-rw's would be appreciated.  I saw http://cdbl.linuxave.net/ but it
is targeted to a particular scenario (runs in stand-alone mode, writes
to spare partition, later copy to cd-r's).

E.g. how about a backup scheme that splits filesystems into
medium-size compressed tar archives suitable for writing
multi-session to CD-R media?  That should require less free disk
space and should make recovery of individual files faster.
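One way to get medium-size pieces with standard tools is tar piped
through gzip and split; the sizes and paths below are placeholders:

```shell
# Dump, compress, and cut into fixed-size pieces
# (/backup/part.aa, /backup/part.ab, ...).
tar -c --one-file-system / | gzip --fast |
    split -b 100m - /backup/part.

# Recovery: concatenate the pieces in order and unpack.
cat /backup/part.* | gunzip | tar -x
```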

Or taking advantage of rpm's --verify stuff: only back up changed
files, or those that didn't come from an rpm, after recording the
packages, their locations, and their md5's.
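A sketch of that idea: rpm -Va prints one line per file that fails
verification, with a flags column in which a "5" means the md5
changed, so a candidate backup list can be built like this (the output
paths are placeholders):

```shell
# Files whose md5 differs from what their package installed.
rpm -Va 2>/dev/null | awk '$1 ~ /5/ {print $NF}' > /tmp/changed-files

# Record the installed package set so it can be reinstalled at
# recovery time instead of being backed up.
rpm -qa > /tmp/package-list
```

Files not owned by any package would still need a separate pass, e.g.
comparing a find listing against the union of the packages' file lists.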

Of course good automated recovery schemes for these more complicated
backups would be important.


Neal McBurnett <neal at bcn.boulder.co.us>  303-538-4852
Avaya Communication / Internet2 / Bell Labs / Lucent Technologies
http://bcn.boulder.co.us/~neal/      (with PGP key)
