Subject: Re: FB performance
Author: Marek Konitz
Thanks a lot for the help!

After some changes it's a little better, but there's still work to do.

> Disk configuration? It's a very important piece in databases...
It should be sufficient. It's at least SATA RAID 1, or maybe even SCSI;
I'm not sure, as it's the customer's machine. Is it really so critical?

> do you commit frequently ? (not commit retain, I mean hard commits)
Yes, almost only hard commits. I do commit-retaining only during data
import, and after the whole operation there is a regular commit.
I have one main transaction, read_committed, nowait, used only for
reading data. It's active only in the 3-4 client apps that are logged
in as supervisor; I need it to read DB events and the current clients'
status. All other apps keep this transaction inactive and create
short-lived snapshot transactions just for reading/writing data.

> Show us the problem queries, the used plan, indices and database
> statistics.
The big problem is not a single SELECT but rather an UPDATE statement
(1 record affected), which can take several seconds.

> Are you sure you are not indexing any columns with low selectivity?
Quite sure I don't; I checked with IBSurgeon. There was one foreign key
with very low selectivity, but I decided to throw it away.

> Note that, with IBX, if you use the RecordCount property, you will be
> using select count(*)....
Not really, unless you do a FetchAll. RecordCount stores the number of
records already fetched; in a for loop, for example, it increases after
Next(), until EOF of course.
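The incremental behaviour described here can be sketched in a few lines of Python (an analogy of the dataset semantics, not the actual IBX code):

```python
# Toy model of a dataset whose record count reflects only the rows
# fetched so far; only a "fetch all" forces the whole result set in.
class Dataset:
    def __init__(self, rows):
        self._rows = iter(rows)
        self.record_count = 0   # rows fetched so far, not total rows
        self.eof = False

    def next(self):
        try:
            row = next(self._rows)
        except StopIteration:
            self.eof = True
            return None
        self.record_count += 1
        return row

    def fetch_all(self):
        while not self.eof:
            self.next()

ds = Dataset(range(100))
ds.next(); ds.next(); ds.next()
print(ds.record_count)   # 3 -- only what has been fetched so far
ds.fetch_all()
print(ds.record_count)   # 100 -- full count only after fetching all
```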

> You mean ibx components, right? ibx doesn't have any controls.
Naturally you're right, my apologies :)

>> - after backup/restore works slightly better, but only for several
>> hours
> Then look closely at how transactions are being committed. The IBX
> default for AutoCommit is to use CommitRetaining.
I do not use any default action (which is set, by the way, to commit).
I always commit/rollback intentionally myself, even with the default
transaction.
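The CommitRetaining point is worth dwelling on. A toy model (my own simplification in Python, not the actual engine logic) shows how a commit-retaining transaction pins the oldest-active marker that gstat reports, which is what blocks garbage collection:

```python
# Simplified model of Firebird's transaction markers: a hard commit
# removes the transaction from the active set, while CommitRetaining
# keeps its original transaction number alive, so the oldest-active
# marker (OAT) cannot advance past it.
class TxnCounters:
    def __init__(self):
        self.next_txn = 1
        self.active = set()

    def start(self):
        txn = self.next_txn
        self.next_txn += 1
        self.active.add(txn)
        return txn

    def hard_commit(self, txn):
        self.active.discard(txn)

    def commit_retaining(self, txn):
        # Changes are committed, but the transaction keeps its number
        # and stays in the active set.
        return txn

    @property
    def oldest_active(self):
        return min(self.active, default=self.next_txn)

c = TxnCounters()
t1 = c.start()            # long-lived transaction using CommitRetaining
for _ in range(1000):
    t = c.start()
    c.hard_commit(t)      # short writers commit hard
c.commit_retaining(t1)
gap = c.next_txn - c.oldest_active
print(gap)                # 1001 -- t1 still pins the oldest-active marker
```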

> Check how much RAM each client is getting (multiply page size
> by the figure "Page Buffers" in the gstat -h readout.)
What does it mean if I have Page Buffers 0? See the statistics below.
Shouldn't it be 75 for Classic?
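Assuming the Classic default of 75 buffers applies when the header shows Page buffers 0, the per-process cache works out to very little (a back-of-the-envelope calculation in Python):

```python
# Per-process page cache when the database header shows "Page buffers 0"
# and the server falls back to a default -- assumed to be 75 buffers for
# Classic, as discussed above.
page_size = 8192           # from the gstat header below
default_buffers = 75       # assumed Classic default
cache_bytes = page_size * default_buffers
print(cache_bytes)         # 614400 bytes
print(cache_bytes / 1024)  # 600.0 -- only ~600 KiB of cache per process
```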

> I don't know whether you are aware that any dataset you are holding
> on a client (TIBQuery, TIBTable) in a read-write transaction (the
> default) will block garbage collection. You might like to look at how
> you are displaying sets to users....if you are using explicit DML for
> your operations anyway, in preference to dataset methods (Insert,
> Edit, Delete) it would be worth considering running your SELECTs in a
> read-only transaction.

I do not use TIBTable, which is said to be very inefficient in a
client-server architecture. I do not overuse TIBQuery either; it
appears in only a few places and only for reading data. Any
modifications I make via DML in separate snapshot transactions.


> Format of disk? (cluster/RW block size)
> If it is the default then I think it is 4096. This means that
> when FB wants to read or write any page to disk, the OS
> transparently reads and writes 2 different clusters from disk.
> No one gives you a guarantee they are on the same track and in the
> wanted order. Any active disk will be fragmented in some minutes.
Do you suggest changing the cluster size to 16 kB? (I've changed the DB
page size to 16 kB in the meantime.)
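The mismatch described above is easy to quantify; with the common 4096-byte cluster size, one database page spans several clusters (a quick Python check):

```python
# How many filesystem clusters one database page spans, assuming the
# default 4096-byte allocation unit the quoted poster mentions.
cluster_size = 4096
for page_size in (4096, 8192, 16384):
    clusters = page_size // cluster_size
    print(page_size, "->", clusters, "cluster(s) per page")
# A 16 kB page needs 4 contiguous 4 kB clusters; formatting the volume
# with a 16 kB cluster makes page I/O and cluster I/O line up 1:1.
```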

> Check your queries or the TEMP directory. Is there a SORT file?
There is no sort file.

Task Manager lists ca. 25 fb_inet_server processes, each consuming ca.
6.5 MB of RAM (still not much memory usage...)

Changes made to firebird.conf:
-----------
SortMemBlockSize = 2097152
SortMemUpperLimit = 16777216
LockMemSize = 1048576
LockHashSlots = 911
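For reference, the memory sizes among the new values are plain powers of two (a quick Python conversion to MiB):

```python
# The byte values set in firebird.conf above, expressed in MiB.
settings = {
    "SortMemBlockSize": 2097152,
    "SortMemUpperLimit": 16777216,
    "LockMemSize": 1048576,
}
for name, value in settings.items():
    print(name, "=", value // (1024 * 1024), "MiB")
# SortMemBlockSize = 2 MiB, SortMemUpperLimit = 16 MiB, LockMemSize = 1 MiB
```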


DB Statistics, header and most used table:
-----------
Database header page information:
Flags 0
Checksum 12345
Generation 1628496
Page size 8192
ODS version 10.1
Oldest transaction 1626103
Oldest active 1626104
Oldest snapshot 1626104
Next transaction 1628475
Bumped transaction 1
Sequence number 0
Next attachment ID 0
Implementation ID 16
Shadow count 0
Page buffers 0
Next header page 0
Database dialect 3
Creation date Jan 1, 2006 22:46:15
Attributes

Variable header data:
Sweep interval: 20000
*END*
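Reading those header numbers, the usual transaction-health figures are simple arithmetic (Python, using the values above):

```python
# Transaction marker gaps from the gstat -h output above.
oldest_txn     = 1626103   # OIT: oldest interesting transaction
oldest_active  = 1626104   # OAT: oldest active transaction
next_txn       = 1628475
sweep_interval = 20000

oat_oit_gap  = oldest_active - oldest_txn  # 1: no stuck transaction
next_oit_gap = next_txn - oldest_txn       # 2372: well under the
print(oat_oit_gap, next_oit_gap)           # sweep interval of 20000
```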


CONTACTS (142)
Primary pointer page: 170, Index root page: 171
Average record length: 131.58, total records: 7667504
Average version length: 28.74, total versions: 761, max versions: 3
Data pages: 163050, data page slots: 163050, average fill: 86%
Fill distribution:
0 - 19% = 0
20 - 39% = 0
40 - 59% = 1
60 - 79% = 0
80 - 99% = 163049

Index CONTACTS_IDX1 (1)
Depth: 3, leaf buckets: 8372, nodes: 7667526
Average data length: 2.00, total dup: 3920257, max dup: 1242
Fill distribution:
0 - 19% = 0
20 - 39% = 0
40 - 59% = 296
60 - 79% = 1
80 - 99% = 8075

Index CONTACTS_IDX2 (2)
Depth: 3, leaf buckets: 8372, nodes: 7667526
Average data length: 2.00, total dup: 3920257, max dup: 1242
Fill distribution:
0 - 19% = 0
20 - 39% = 0
40 - 59% = 296
60 - 79% = 1
80 - 99% = 8075

Index CONTACTS_IDX3 (3)
Depth: 3, leaf buckets: 6598, nodes: 7667504
Average data length: 1.00, total dup: 0, max dup: 0
Fill distribution:
0 - 19% = 0
20 - 39% = 1
40 - 59% = 0
60 - 79% = 0
80 - 99% = 6597

----> index thrown away (foreign key)
Index CONTACTS_IDX4 (4)
Depth: 3, leaf buckets: 5676, nodes: 7667504
Average data length: 0.00, total dup: 7631472, max dup: 527259
Fill distribution:
0 - 19% = 0
20 - 39% = 0
40 - 59% = 2
60 - 79% = 0
80 - 99% = 5674
<----
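Plugging the gstat figures into a quick duplicate-ratio check (a small Python sketch) shows why that foreign-key index had to go, compared with a healthy index like CONTACTS_IDX5:

```python
# Duplicate ratio = total dup / nodes, from the gstat index stats above.
def dup_ratio(nodes, total_dup):
    return total_dup / nodes

idx4 = dup_ratio(7667504, 7631472)  # dropped FK index: ~99.5% duplicates,
                                    # with a max chain of 527259 rows
idx5 = dup_ratio(7667505, 14698)    # healthy index: max chain of 4 rows
print(round(idx4, 3), round(idx5, 4))  # 0.995 0.0019
```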

Index CONTACTS_IDX5 (6)
Depth: 3, leaf buckets: 6775, nodes: 7667505
Average data length: 1.00, total dup: 14698, max dup: 4
Fill distribution:
0 - 19% = 0
20 - 39% = 0
40 - 59% = 19
60 - 79% = 0
80 - 99% = 6756

Index CONTACTS_IDX6 (5)
Depth: 3, leaf buckets: 6780, nodes: 7667505
Average data length: 1.00, total dup: 14698, max dup: 4
Fill distribution:
0 - 19% = 0
20 - 39% = 0
40 - 59% = 28
60 - 79% = 1
80 - 99% = 6751

Index RDB$PRIMARY1 (0)
Depth: 3, leaf buckets: 6598, nodes: 7667504
Average data length: 1.00, total dup: 0, max dup: 0
Fill distribution:
0 - 19% = 0
20 - 39% = 1
40 - 59% = 0
60 - 79% = 0
80 - 99% = 6597