Subject: Massive INSERT test
Author: arbitragex <>
I am trying to build a one-billion-row table, or at least the biggest
one I can manage for now on a desktop machine.

I am logging the time it takes to insert a 48-byte row plus three

I am using an 800 MHz machine with 300+ MB of RAM and a Barracuda IDE
drive, running Win2K SP3, Firebird 1.0, the Phoenix ODBC driver, and
SQL dialect 3.

I set up the database with forced writes off and a local connection;
the 4096-byte page size matches my NTFS cluster size.
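For reference, this is how I toggle forced writes with the gfix utility (the database path here is just a placeholder, not my actual file):

```shell
# Turn forced writes off for the bulk load (asynchronous writes).
gfix -write async C:\data\bigtable.gdb

# Restore forced writes once the load is done.
gfix -write sync C:\data\bigtable.gdb
```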

First, note that I restarted the test after a few hundred thousand
rows and emptied the table.

Adding 10,000 records took 34 seconds at first.

After emptying the table it took 50 seconds. I read somewhere that it
is more efficient for Firebird to reuse allocated disk space than to
allocate new space; I cannot confirm that with my setup.
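The timing harness is roughly the following. This is a minimal Python sketch using SQLite in memory as a stand-in for Firebird over ODBC; the table name `t`, the `payload` column, and `make_row` are made up for illustration, not my actual schema.

```python
import random
import sqlite3
import string
import time

def make_row():
    """Build one semi-random 48-byte (ASCII) payload."""
    return ''.join(random.choices(string.ascii_letters + string.digits, k=48))

def timed_batch(conn, n=10_000):
    """Insert n semi-random rows in one transaction; return elapsed seconds."""
    cur = conn.cursor()
    start = time.monotonic()
    cur.executemany("INSERT INTO t (payload) VALUES (?)",
                    ((make_row(),) for _ in range(n)))
    conn.commit()
    return time.monotonic() - start

# Stand-in database; with Firebird this would be an ODBC connection instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (payload VARCHAR(48))")
elapsed = timed_batch(conn)
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

Each batch of 10,000 rows is committed as one transaction, and only the insert-plus-commit time is measured.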

Insert semi-random data benchmark:

Total rows    Batch time
2,000,000     0:53
2,200,000     1:08
2,300,000     1:19
2,400,000     1:48
2,500,000     1:51
2,600,000     2:07
2,700,000     2:07
2,800,000     2:19
2,860,000     4:51
2,900,000     4:47

ibserver.exe was using 40 MB of memory at that point.

I stopped the test and restarted the service.

Total rows    Batch time
3,000,000     1:28
3,100,000     1:43
3,200,000     1:50
3,300,000     2:14
3,400,000     2:06

ibserver.exe is back up to 10 MB of memory usage at this point.

I'd appreciate any comments on this test. Is there some type of
buffer that keeps growing and never gets reset? Is there a way to
dynamically flush certain buffers that would be equivalent to a
service restart? In particular, is there a tweak to limit that
buffer's size, or a way to reset it dynamically?
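One thing I may try is capping the page cache with gfix. My assumption, which may be wrong, is that the growing memory is the page cache; -buffers sets the default cache size in pages for the database (path again a placeholder):

```shell
# Assumption: the growth is page cache. Cap it at 2048 pages
# (2048 * 4096-byte pages = 8 MB) and see if memory stays flat.
gfix -buffers 2048 C:\data\bigtable.gdb
```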