Subject Re: [ib-support] Massive INSERT test
Author Svein Erling Tysvaer
At 12:36 11.02.2003 +0000, you wrote:
>I am trying to build a one-billion-row table - or at least the
>biggest I can manage for now on a desktop machine.

Good idea!

>I am logging the time it takes to insert a 48-byte row plus three
>indices.

How selective are your indexes? I.e. will there be lots of duplicates? For
a bulk insert like this, your best bet is probably to deactivate the
indexes during the insert and reactivate them afterwards.
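Something like this (the index names are made up - use your own; note that
you cannot deactivate indexes that enforce primary key or unique
constraints this way):

  /* before the load */
  ALTER INDEX IDX_BIGTABLE_A INACTIVE;
  ALTER INDEX IDX_BIGTABLE_B INACTIVE;
  ALTER INDEX IDX_BIGTABLE_C INACTIVE;

  /* ... bulk insert here ... */

  /* afterwards - reactivating also rebuilds each index in one pass */
  ALTER INDEX IDX_BIGTABLE_A ACTIVE;
  ALTER INDEX IDX_BIGTABLE_B ACTIVE;
  ALTER INDEX IDX_BIGTABLE_C ACTIVE;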

>I am using an 800 MHz, 300MB+ RAM machine with a Barracuda IDE drive.
>Win2K SP3. Firebird 1.0, Phoenix ODBC driver, SQL dialect 3.
>
>I set the Database with forced write off and local connection, page
>size 4096 matches my NTFS cluster size.

On Windows, forced writes off is suicidal. Turn it back on - even if
performance suffers a bit.
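If I remember correctly, gfix can switch it back without recreating the
database (path and credentials are placeholders for your own):

  gfix -user SYSDBA -password masterkey -write sync C:\data\bigtable.gdb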

>First let's note that I restarted the test after a few 100K rows and
>emptied the table.
>
>Adding 10,000 records took 34 seconds at first.
>
>After emptying the table it took 50 seconds. I read somewhere that
>it is more efficient for Firebird to reuse allocated disk space than
>to allocate new space. I cannot confirm that with my setup.

10,000 records in 34 seconds sounds too slow - I'm not used to testing
massive inserts, but selects can manage 10,000 records in 1 or 2 seconds.
How are you inserting? Parameterized dSQL I presume, but do you use
ParamByName or Params or whatever (I come from an IBO background)? And what
about transactions? I assume committing every 10,000 or 20,000 rows would
be OK.
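Just to illustrate the pattern I mean - a rough sketch using the
kinterbasdb Python driver rather than your ODBC setup (the path, table and
column names are invented):

  import kinterbasdb

  con = kinterbasdb.connect(dsn='C:/data/bigtable.gdb',
                            user='SYSDBA', password='masterkey')
  cur = con.cursor()

  # One parameterized statement, reused for every row.
  sql = "INSERT INTO BIGTABLE (ID, A, B) VALUES (?, ?, ?)"

  COMMIT_EVERY = 10000
  for i in xrange(1000000):
      cur.execute(sql, (i, i % 97, 'x' * 40))  # dummy row data
      if (i + 1) % COMMIT_EVERY == 0:
          con.commit()  # keep the transaction from growing huge

  con.commit()  # commit the final partial batch
  con.close()

The point is to prepare the statement once, bind new parameter values for
each row, and commit in batches rather than once per row (or never).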

>Insert semi-random data benchmark:
>
>2,000,000 0:53 sec
>2,200,000 1:08
>2,300,000 1:19
>2,400,000 1:48
>2,500,000 1:51
>2,600,000 2:07
>2,700,000 2:07
>2,800,000 2:19
>2,860,000 4:51
>2,900,000 4:47
>
>ibserver.exe used 40 MB memory at that point.
>
>I stopped the test and restarted the service.
>
>3,000,000 1:28
>3,100,000 1:43
>3,200,000 1:50
>3,300,000 2:14
>3,400,000 2:06
>
>ibserver.exe is up to 10 MB memory usage at this time.

Sorry, I don't understand your benchmarks at all. Row counts increasing
from 2,000,000 up to 3,400,000, followed by times that mostly, but not
exclusively, increase as the row count grows??

>I'd appreciate any comment on this test. Is there some type of
>buffer that keeps growing and never gets reset? Is there a way to
>dynamically flush certain buffers that would be equivalent to a
>service restart?
>
>In particular, is there any tweak to limit the size, or a way to
>dynamically reset that buffer?

I've already recommended committing every 10,000 inserts or so and turning
off your indexes. Beyond that, I have no good suggestions at the moment.

Hopefully others will have more comments,
Set