Subject: Re: [IB-Architect] Next ODS change (was: System table change)
Author: Dalton Calford
Hi Jason,

The blob operations that I have dealt with involved backing up files
over the network into a single database. We used MD5 signatures
combined with the bit length to identify unique files. This allowed us
to store just pointers, so that a client machine would never insert a
duplicate file into the database even if the date/time/name differed
from what was already in the database. We found that backing up 293
machines in this manner resulted in a 5 GB database. We had a
recursive tree structure that allowed us to get any backup set for
any period in time. This gave us a style of incremental backup while
still allowing any backup to give us a complete snapshot. We extended
this concept to allow for full security and web access. It was there
that the original replication concepts were developed (and over 80
different styles of problems were found and solved).
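
For anyone curious, the dedup check was conceptually no more than the
following rough Python-style sketch (written against a DB-API driver such
as kinterbasdb; the table and column names are invented for illustration,
not our actual schema, and the generation of file_id is assumed to be
handled by a trigger/generator on the server):

    import hashlib
    import os
    import socket

    def file_signature(path):
        """Return the (md5_hex, length) pair used to identify a file's content."""
        md5 = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(64 * 1024), b""):
                md5.update(chunk)
        return md5.hexdigest(), os.path.getsize(path)

    def backup_file(cur, path):
        """Insert the blob only if no file with the same signature exists;
        otherwise just record a pointer to the existing copy."""
        digest, length = file_signature(path)
        cur.execute(
            "SELECT file_id FROM file_content "
            "WHERE md5_sum = ? AND byte_length = ?",
            (digest, length),
        )
        row = cur.fetchone()
        if row:
            file_id = row[0]   # same content already stored: reuse the pointer
        else:
            with open(path, "rb") as f:
                data = f.read()
            cur.execute(
                "INSERT INTO file_content (md5_sum, byte_length, data) "
                "VALUES (?, ?, ?)",
                (digest, length, data),
            )
            cur.execute(
                "SELECT file_id FROM file_content "
                "WHERE md5_sum = ? AND byte_length = ?",
                (digest, length),
            )
            file_id = cur.fetchone()[0]
        # The per-machine catalogue row stores only the pointer plus the
        # name/date metadata, which may differ from machine to machine.
        cur.execute(
            "INSERT INTO backup_entry (machine, file_name, file_date, file_id) "
            "VALUES (?, ?, ?, ?)",
            (socket.gethostname(), os.path.basename(path),
             os.path.getmtime(path), file_id),
        )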

Overall, we found that we would get different performance ratios
depending upon the block size of the underlying array, the block size of
the file system and the block size of the database.
We did all of our original design using IB 4 for Linux.
We also ran into one problem that never went away and remained a
mystery. I left the project before it was solved and I do not know if a
solution was ever found.

Using IBO, the IB 5.1 client library and IB 4 running on Linux, we found
that if the total size of all the blobs in a single transaction exceeded
100 MB, the connection would drop before the transaction could commit.
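
We never found the cause. If I were writing it today, I would simply cap
the blob volume per transaction and commit in batches, something like the
sketch below (again only illustrative; the 100 MB figure is the limit we
observed, and backup_file is the routine from the earlier sketch):

    import os

    MAX_BLOB_BYTES_PER_TXN = 100 * 1024 * 1024  # the ~100 MB ceiling we hit

    def backup_files(conn, paths):
        """Commit in batches so no single transaction carries more blob
        data than the cap - a workaround, not a fix for the dropped
        connection itself."""
        cur = conn.cursor()
        pending = 0
        for path in paths:
            size = os.path.getsize(path)
            if pending and pending + size > MAX_BLOB_BYTES_PER_TXN:
                conn.commit()   # flush before the running total crosses the cap
                cur = conn.cursor()
                pending = 0
            backup_file(cur, path)   # dedup routine sketched earlier
            pending += size
        conn.commit()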

In my current position we rarely use blobs, as most of our records are
under 65 bytes in size; we just have billions of records. We actually
hit the maximum number of records in a table and had to split the table
into multiple tables, keeping a separate table that recorded which
table held which range of data.
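
The routing is nothing clever - conceptually just a lookup against the
range table before each query, roughly like this (Python sketch;
table_ranges, range_start, range_end and record_key are invented names
for illustration):

    def table_for_key(cur, key):
        """Find which physical table holds the given key."""
        cur.execute(
            "SELECT table_name FROM table_ranges "
            "WHERE range_start <= ? AND range_end >= ?",
            (key, key),
        )
        row = cur.fetchone()
        if row is None:
            raise LookupError("no table registered for key %r" % (key,))
        return row[0]

    def fetch_record(cur, key):
        """Route the query to the right physical table, then fetch the row."""
        table = table_for_key(cur, key)
        # Table names cannot be passed as parameters, so the name is
        # interpolated; it comes from our own routing table, never from
        # user input.
        cur.execute("SELECT * FROM %s WHERE record_key = ?" % table, (key,))
        return cur.fetchone()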

We have a lot of I/O, and through my many different positions I have
had the opportunity to put IB through its paces.
(I have a metadata build script that takes 89 hours to complete on a 450
MHz, 500 MB RAM, IB 5.5 SuperServer/NT system - I was thinking of
donating this as part of the test suite.)

A friend of mine has beaten my record for the largest IB database I have
ever seen.
He built a small system in his basement - over a terabyte of SCSI
storage.
He has an operational 980 GB IB database that he uses to crack security
codes - he can reverse an MD5 XOR hash in under 3 seconds, with all
password possibilities displayed. He can even grab a run-block encoded
stream mid-stream and use it to decrypt all remaining packets.

He has done some funky work with UDFs and OCTET character sets.

Oh well, I am getting off topic. The long and the short of it is that
I/O-bound applications and large databases are becoming the standard
rather than the exception. I would suggest that if 6.1 or FB 1.0 is
going to be released, and the larger block sizes are not that big a job,
16 KB and 32 KB block sizes be added (64 KB too, if possible).

best regards

Dalton



Jason Chapman wrote:
>
> Dalton,
>
> Good to hear from you, I have a non-index-related 2c worth. One of my
> engineers, who shall remain nameless (but not sacked), created a database
> with a 1 KB page size on NT4 to store TIFF files only. The insertion rate
> was 32 per minute. By moving from 1 KB to 8 KB the insert rate went up to
> 300 per minute.
>
> I wonder what a 32 KB page size would do.
>
> And as another interesting aside, by pre-allocating the space within the GDB
> it went from 300 to 600 per minute.
>
> JAC.
>
> Dalton,
>
> We are producing a DB growing at 10 GB per week; how big are your image DBs?
> jason@...
>
> JAC.
>