Subject Re: [firebird-support] Occasional unexplained Slow down
Author Thomas Steinmaurer
>>> The problem is the size of the page buffers. The page buffers is set
>>> too high, for classic. Change the page buffers, ideally for classic,
>>> anywhere between 75 and 150 pages.
>
>> Thanks, I haven't gone there yet, as customer says memory usage is lower than normal when the slow down happens (but, being 32 bit, maybe there is some internal memory limitation occurring...)
>
>> I'll give it a try...
>
> Aha!! We changed the page size to 150 last night, and this morning (it's the busiest at the start of the day, and their slowest period) the speed was fine!
>
> NOW, they tell me: "And the paged memory is much less as well"
>
> So it WAS using a lot of memory!
>
> For lack of a more scientific method, they are now going to play around with various methods between 75 and 150, to see which is going to be optimal for them

A while ago, I did some tests on insertion speed when using a GUID as a
primary key, and played with page buffers a bit. There was only one
session connected etc., so probably not really a real-world scenario.
Anyway, the following is an email I wrote to someone else in the context
of this scenario.

- Prerequisites

* Firebird 2.5 SuperClassic 64-bit on Win7 Prof.
* Table with one primary key column CHAR(16) OCTETS
* Inserting 100,000 records within an EXECUTE BLOCK, using GEN_UUID()
* Data pages are pre-allocated, thus the database file won't grow during
the test
* In all tests, the isql test session was the only connection to the
database
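
The insert loop looked roughly like this. This is a sketch matching the
prerequisites above, not the exact script used; the table name and
variable name are illustrative:

```sql
-- Hypothetical table matching the setup: single CHAR(16) OCTETS PK
CREATE TABLE t_guid (
  id CHAR(16) CHARACTER SET OCTETS NOT NULL PRIMARY KEY
);

SET TERM ^ ;

-- Insert 100,000 random GUID keys in one EXECUTE BLOCK
EXECUTE BLOCK AS
  DECLARE VARIABLE i INTEGER = 0;
BEGIN
  WHILE (i < 100000) DO
  BEGIN
    INSERT INTO t_guid (id) VALUES (GEN_UUID());
    i = i + 1;
  END
END^

SET TERM ; ^
```

Because GEN_UUID() returns essentially random bytes, every insert lands
on a random leaf page of the primary key index, which is what makes the
page cache size matter so much here.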


- Result for SuperClassic

I tested with various page buffers values and found that a higher value
can improve execution time A LOT.

Page Buffers    Execution time [s]
==================================
          75                 70.24
         256                  7.41
         512                  2.26
        1024                  1.86
        2048                  1.68
        4096                  1.75
       10000                  1.75
       50000                  2.19


As you can see, there is a huge step forward between a page buffers
value of 75 and 256. When a database is newly created (as in your case),
it pretty much uses the architecture's default (e.g. 75 with
SuperClassic), so the initial slowness I reported on the list was caused
by this. With a low page cache you can also see that CPU utilization is
pretty low; I guess the server is mostly busy reading from disk. CPU
utilization increases with a higher page cache like 256 or 512.
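
For a Classic setup like the one at the start of this thread, the
per-database default can be changed with gfix and checked with gstat.
The database path and credentials below are placeholders; pick a buffers
value suited to your architecture and workload:

```shell
# Set the default page buffers value stored in the database header page
gfix -buffers 150 /data/mydb.fdb -user SYSDBA -password masterkey

# Verify: gstat -h prints the header, including the "Page buffers" field
gstat -h /data/mydb.fdb -user SYSDBA -password masterkey
```

Note that with Classic each connection gets its own cache of this size,
so the total memory cost is page size x page buffers x connections.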

At the same time, an ever higher page cache won't make things better
beyond a point. That point is probably related to the number of records
inserted; e.g. if I inserted 1 million records, an even higher page
cache might help.

That's for GUID values in a primary key column. I guess it's pretty much
the same scenario with random numeric values, although possibly a bit
faster.

So, for that kind of batch operation with random values stored in an
indexed column, I suspect a higher page cache could help. The idea would
be to increase the page cache temporarily for such operations. There
should be a way to do this per connection, without making the setting
permanent in the header page, e.g. via the CACHE option of the CONNECT
statement in isql, but for whatever reason this doesn't work as expected
across all architectures (SS, CS, SC) in 2.5 (I haven't tried previous
versions).
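
For reference, this is the CACHE clause of isql's CONNECT statement
(path and credentials are placeholders); in my tests, whether the
requested size actually takes effect per connection varies by
architecture:

```sql
-- Ask for 2048 page buffers for this connection only
CONNECT 'localhost:/data/mydb.fdb'
  USER 'SYSDBA' PASSWORD 'masterkey'
  CACHE 2048;
```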

It's interesting that CS is about 10 seconds faster than SC with a page
cache of 75. I don't know whether this is generally the case or whether
other processes (antivirus etc.) influenced the result. Results across
the architectures get pretty close with an increased page cache.

I hope this is useful for you.




--
With regards,

Thomas Steinmaurer
Upscene Productions
http://www.upscene.com
http://blog.upscene.com/thomas/

Download LogManager Series, FB TraceManager today!
Continuous Database Monitoring Solutions supporting
Firebird, InterBase, Advantage Database, MS SQL Server
and NexusDB!