Subject: Re: FB performance
Author: Ali Gökçen
Hi Marek,

--- In firebird-support@yahoogroups.com, Marek Konitz <marekk@s...> wrote:
>
>
> Thanks a lot for the help!
>
> After some changes it's a little bit better, but there's still work to do.
>
> > Disk configuration? It's a very important piece in databases...
> Should be sufficient. It's at least SATA RAID 1, or maybe even SCSI;
> not sure, it's the customer's machine. Is it really so critical?
>
> > do you commit frequently? (not commit retaining, I mean hard commits)
> Yes, almost only hard commits. I do commit retaining only during data
> import, and after the whole operation there is a regular commit.

OK, your transaction intervals look fine. I don't think COMMIT RETAINING
is such a bad command, and I don't think it reduces performance:
releasing and re-allocating system resources would be more expensive.
COMMIT RETAINING also increments the transaction number.
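
For clarity, the two forms side by side (a minimal sketch; the column
names on your CONTACTS table are made-up examples):

UPDATE CONTACTS SET NOTES = 'x' WHERE ID = 1; /* hypothetical columns */
COMMIT;           /* hard commit: ends the transaction completely */

UPDATE CONTACTS SET NOTES = 'y' WHERE ID = 2;
COMMIT RETAINING; /* changes become durable, but the transaction context
                     stays open and a new transaction number is taken */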

>
> > Format of disk? (cluster/RW-block size)
> > If it is the default then I think it is 4096. This means that
> > when FB wants to read or write any page to disk,
> > the OS transparently reads and writes 2 different clusters.
> > No one gives you a guarantee they are on the same track and in the
> > wanted order. Any active disk will be fragmented in some minutes.

> Do you suggest changing the cluster size to 16 kB? (I've changed the db
> page size to 16 kB in the meantime.)

Yup.
The OS reads and writes the disk in sector/cluster blocks.
If your partition holds only the FB database file, fragmentation may be
minimal, but usually there are a lot of other active files allocating
and freeing clusters in 4 KB units, so the disk becomes fragmented in a
short time. The OS then has to access 4 different clusters to deliver
one 16 KB page of your FB database file, just as for any flat file;
that means 4 * 10 ms = roughly 40 ms of average access time to disk
sectors for each DB page. If you format the partition with 16 KB
clusters, the OS does only one cluster operation on disk per page.
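
If the server is Windows with NTFS (an assumption on my part, based on
your Task Manager output), the cluster size can only be chosen when the
partition is formatted, for example (hypothetical drive letter, and
remember that formatting erases the partition):

format D: /FS:NTFS /A:16K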

>
> > check your queries or TEMP directory. Is there a SORT file?
> There is no sort file

OK, but I hope you looked for them under heavy load, because
FB deletes these files after hard commits.
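
By the way, if you want those sort files on a specific disk, there is a
TempDirectories parameter in firebird.conf (the path below is just an
example):

TempDirectories = C:\fbtemp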

>
> TaskManager lists ca. 25 fb_inet_server procs, and each consumes ca.
> 6.5 MB of RAM (still not much memory usage...)
>
> Changes made to firebird.conf:
> -----------
> SortMemBlockSize = 2097152
> SortMemUpperLimit = 16777216
> LockMemSize = 1048576
> LockHashSlots = 911
>

2 MB for each allocation and a 16 MB total heap size?
I have no idea whether these parameters are per process or shared in
Classic server. Dmitry (the author) or the documentation may help.
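
If they are per process, a back-of-envelope worst case with your 25
fb_inet_server processes would be 25 * 16 MB = 400 MB of sort memory
alone, so it is worth confirming before raising these limits further.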

>
> DB Statistics, header and most used table:
> -----------
> Database header page information:
> Flags 0
> Checksum 12345
> Generation 1628496
> Page size 8192
> ODS version 10.1
> Oldest transaction 1626103
> Oldest active 1626104
> Oldest snapshot 1626104
> Next transaction 1628475
> Bumped transaction 1
> Sequence number 0
> Next attachment ID 0
> Implementation ID 16
> Shadow count 0
> Page buffers 0
> Next header page 0
> Database dialect 3
> Creation date Jan 1, 2006 22:46:15
> Attributes
>
> Variable header data:
> Sweep interval: 20000
> *END*
>
>

Looks regular to me: next transaction (1628475) minus oldest active
(1626104) is only about 2371, so old record versions are not piling up.

> CONTACTS (142)
> Primary pointer page: 170, Index root page: 171
> Average record length: 131.58, total records: 7667504
> Average version length: 28.74, total versions: 761, max versions: 3
> Data pages: 163050, data page slots: 163050, average fill: 86%
> Fill distribution:
> 0 - 19% = 0
> 20 - 39% = 0
> 40 - 59% = 1
> 60 - 79% = 0
> 80 - 99% = 163049
>
> Index CONTACTS_IDX1 (1)
> Depth: 3, leaf buckets: 8372, nodes: 7667526
> Average data length: 2.00, total dup: 3920257, max dup: 1242
> Fill distribution:
> 0 - 19% = 0
> 20 - 39% = 0
> 40 - 59% = 296
> 60 - 79% = 1
> 80 - 99% = 8075
>
> Index CONTACTS_IDX2 (2)
> Depth: 3, leaf buckets: 8372, nodes: 7667526
> Average data length: 2.00, total dup: 3920257, max dup: 1242
> Fill distribution:
> 0 - 19% = 0
> 20 - 39% = 0
> 40 - 59% = 296
> 60 - 79% = 1
> 80 - 99% = 8075
>
> Index CONTACTS_IDX3 (3)
> Depth: 3, leaf buckets: 6598, nodes: 7667504
> Average data length: 1.00, total dup: 0, max dup: 0
> Fill distribution:
> 0 - 19% = 0
> 20 - 39% = 1
> 40 - 59% = 0
> 60 - 79% = 0
> 80 - 99% = 6597
>
> ----> index thrown away (foreign key)
> Index CONTACTS_IDX4 (4)
> Depth: 3, leaf buckets: 5676, nodes: 7667504
> Average data length: 0.00, total dup: 7631472, max dup: 527259
> Fill distribution:
> 0 - 19% = 0
> 20 - 39% = 0
> 40 - 59% = 2
> 60 - 79% = 0
> 80 - 99% = 5674
> <----
>
> Index CONTACTS_IDX5 (6)
> Depth: 3, leaf buckets: 6775, nodes: 7667505
> Average data length: 1.00, total dup: 14698, max dup: 4
> Fill distribution:
> 0 - 19% = 0
> 20 - 39% = 0
> 40 - 59% = 19
> 60 - 79% = 0
> 80 - 99% = 6756
>
> Index CONTACTS_IDX6 (5)
> Depth: 3, leaf buckets: 6780, nodes: 7667505
> Average data length: 1.00, total dup: 14698, max dup: 4
> Fill distribution:
> 0 - 19% = 0
> 20 - 39% = 0
> 40 - 59% = 28
> 60 - 79% = 1
> 80 - 99% = 6751
>
> Index RDB$PRIMARY1 (0)
> Depth: 3, leaf buckets: 6598, nodes: 7667504
> Average data length: 1.00, total dup: 0, max dup: 0
> Fill distribution:
> 0 - 19% = 0
> 20 - 39% = 1
> 40 - 59% = 0
> 60 - 79% = 0
> 80 - 99% = 6597
>

I think your problems are index-centric.
Remove all duplicated indexes and create unique compound indexes
instead of them.
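
As a sketch (CONTACTS_IDX2 looks like an exact duplicate of
CONTACTS_IDX1 in your gstat output; the column names here are made-up
assumptions):

DROP INDEX CONTACTS_IDX2;
CREATE UNIQUE INDEX CONTACTS_IDX1_U
  ON CONTACTS (LAST_NAME, ID); /* appending the PK column makes it unique */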
Don't put a foreign-key index on a field with a lot of duplicates
(this is fixed in FB 2.0 and later).
Foreign-key indexes are fine when the number of duplicate rows does not
keep growing. For example:

/* master */
CREATE TABLE INVOICE (
  ID INTEGER NOT NULL PRIMARY KEY
  /* ... */
);

/* detail */
CREATE TABLE INVOICE_ITEM (
  ID INTEGER NOT NULL PRIMARY KEY,
  INVOICE_ID INTEGER REFERENCES INVOICE (ID)
  /* ... */
);

You can safely put the reference on INVOICE because the number of
detail rows per invoice is limited in practice, so the duplicate chains
in that index stay short.

Regards.
Ali