blug-mail at duboulder.com
Sat Sep 17 12:05:07 MDT 2005
Alan Robertson wrote:
> George Sexton wrote:
>>> -----Original Message-----
>>> From: lug-bounces at lug.boulder.co.us
>>> [mailto:lug-bounces at lug.boulder.co.us] On Behalf Of Alan Robertson
>>> Sent: Wednesday, September 14, 2005 10:50 PM
>>> To: Boulder (Colorado) Linux Users Group -- General Mailing List
>>> Subject: Re: [lug] Bacula
>>> A simple example:
>>> If you want to back up 10 million small files, that means
>>> probably 10 million inserts to a database - which is a HUGE
>>> amount of work - and incredibly slow when compared to writing
>>> 10 million records to a flat file.
>>> And when you recycle a backup, that means 10 million deletions.
> Plus the 10 million inserts. If each averages a lightning fast 10ms
> (which it won't), then updating the indexes will take about 28 hours.
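The 28-hour figure above is straightforward back-of-envelope arithmetic,
assuming the quoted 10 ms per insert done serially:

```python
# Back-of-envelope check of the 28-hour estimate:
# 10 million serial inserts at 10 ms apiece.
inserts = 10_000_000
seconds_per_insert = 0.010
hours = inserts * seconds_per_insert / 3600
print("%.1f hours" % hours)  # about 27.8 hours
```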
10ms/row seems slow. I have seen DTS on 4-year-old hardware exceed
10,000 rows/sec on MSSQL 2000 (the dataset was larger than memory by a
factor of at least 20). Perl code I wrote 2 years ago inserted 100-field
rows at >1000 rows/sec against PostgreSQL, with transactions enabled and
dynamic SQL generation.
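Grouping the inserts into a single transaction is what makes rates like
that possible: the database syncs to disk once per commit rather than once
per row. A minimal sketch of the technique, using SQLite as a stand-in for
PostgreSQL and an illustrative `files` table and row count (not anything
from Bacula itself):

```python
import sqlite3
import time

# Illustrative transaction-batched insert: one commit for all rows.
N_ROWS = 10_000

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, path TEXT)")

start = time.perf_counter()
with conn:  # opens one transaction; commits once on exit
    conn.executemany(
        "INSERT INTO files VALUES (?, ?)",
        ((i, "/backup/file%d" % i) for i in range(N_ROWS)),
    )
t_batched = time.perf_counter() - start
print("batched: %.0f rows/sec" % (N_ROWS / t_batched))
```

Committing per row instead (one fsync per insert) is where estimates like
10 ms/row come from on a disk-backed database.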