Subject Re: [firebird-support] Re: One table taking a long time to insert/update
Author Ann W. Harrison
robertgilland wrote:
>
>
> Database "E:\Data\DBASE.GDB"

This is the path to the database file.

> Database header page information:

The first elements are part of the common page header.

> Flags 0

All pages have a space for flags. Several flags are defined for
the header page. I don't know quite why the flag value is zero
if you have forced writes enabled. My guess is a problem with
gstat, but those features may now be handled through header
attributes. You'll notice holes in the flag bit lists. They
reflect features that were never completed.

hdr_active_shadow = 0x1; /* 1 file is an active shadow file */
hdr_force_write = 0x2; /* 2 database is forced write */
hdr_no_checksums = 0x10; /* 16 don't calculate checksums */
hdr_no_reserve = 0x20; /* 32 don't reserve space for versions */
hdr_SQL_dialect_3 = 0x100; /* 256 database SQL dialect 3 */
hdr_read_only = 0x200; /* 512 Database in RO. If not, RW */
/* backup status mask - see bit values in nbak.h */
hdr_backup_mask = 0xC00;
hdr_shutdown_mask = 0x1080;

// Values for shutdown mask
hdr_shutdown_none = 0x0;
hdr_shutdown_multi = 0x80;
hdr_shutdown_full = 0x1000;
hdr_shutdown_single = 0x1080;
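
For illustration, here's a small C sketch of how those bits are
meant to be read - this is not engine code, just the masks above
at work on a flag word like the one gstat printed:

#include <stdio.h>

#define hdr_active_shadow   0x1
#define hdr_force_write     0x2
#define hdr_no_checksums    0x10
#define hdr_no_reserve      0x20
#define hdr_SQL_dialect_3   0x100
#define hdr_read_only       0x200
#define hdr_shutdown_mask   0x1080
#define hdr_shutdown_multi  0x80
#define hdr_shutdown_full   0x1000
#define hdr_shutdown_single 0x1080

static void decode_flags(unsigned flags)
{
    if (flags & hdr_active_shadow) printf("active shadow file\n");
    if (flags & hdr_force_write)   printf("forced writes\n");
    if (flags & hdr_no_checksums)  printf("no checksums\n");
    if (flags & hdr_no_reserve)    printf("no reserve\n");
    if (flags & hdr_SQL_dialect_3) printf("SQL dialect 3\n");
    if (flags & hdr_read_only)     printf("read only\n");

    switch (flags & hdr_shutdown_mask) {
    case hdr_shutdown_multi:  printf("shutdown: multi\n");  break;
    case hdr_shutdown_full:   printf("shutdown: full\n");   break;
    case hdr_shutdown_single: printf("shutdown: single\n"); break;
    default:                  printf("shutdown: none\n");   break;
    }
}

int main(void)
{
    decode_flags(0);   /* the value gstat reported here */
    return 0;
}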

> Checksum 12345

Checksums haven't been supported for more than 10 years,
but every page has the value 12345 in a known location.
I used that years ago to recreate a database that had
been badly copied - one extra byte per page... Don't ask.

> Generation 4526942

The number of times this page has been written. This is
the last of the common page header fields.

> Page size 16384

When a database is opened, Firebird reads only the
number of bytes in the smallest possible page - 1K.
It checks the page size (which must be in the first
part of the page) then re-reads the header at the
actual page size. 16,384 is the largest possible
page size.
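
In C, that bootstrap read looks roughly like this. The offset of
the page-size field is a placeholder - the real layout of the
header page is defined in ods.h:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define MIN_PAGE_SIZE        1024
#define HDR_PAGE_SIZE_OFFSET 16    /* hypothetical - see ods.h */

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *db = fopen(argv[1], "rb");
    if (!db) return 1;

    /* step 1: read the smallest possible page - 1K */
    unsigned char small[MIN_PAGE_SIZE];
    if (fread(small, 1, MIN_PAGE_SIZE, db) != MIN_PAGE_SIZE) return 1;

    /* step 2: pick the page size out of the fixed header
       (native byte order - databases aren't portable across it) */
    uint16_t page_size;
    memcpy(&page_size, small + HDR_PAGE_SIZE_OFFSET, sizeof page_size);
    printf("page size: %u\n", (unsigned)page_size);

    /* step 3: re-read the header page at its actual size */
    unsigned char *header = malloc(page_size);
    rewind(db);
    fread(header, 1, page_size, db);

    free(header);
    fclose(db);
    return 0;
}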

> ODS version 11.0

This is a 2.0 database.

> Oldest transaction 4391647

The oldest transaction that is currently active.

> Oldest active 4391648

One number more than the oldest transaction
currently active. I've traced the code and
don't know why we keep this value.

> Oldest snapshot 4390774

The oldest transaction that was open when the oldest
transaction currently running started.

> Next transaction 4526929

The next available transaction number. You've got
a gap of 136,155 transactions (4,526,929 - 4,390,774)
between the next available number and the top of the list
of transactions whose results can be garbage collected.
Nothing that was active when the current oldest active
started can be garbage collected.

> Bumped transaction 1

Left over from a failed project to add logging to
InterBase.

> Sequence number 0

This is the primary file in this database. Secondary
files have ascending sequence numbers.

> Next attachment ID 0

There are no active attachments at this instant.

> Implementation ID 16

Firebird databases use the native alignment and endianness
of the platform they run on, as well as native datatypes
(which hasn't been much of a concern since the death of the
VAX, but could be if someone finds a platform with non-IEEE
floating point). The implementation number indicates which
combination of alignment, endianness, and datatypes this
build supports.
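
If you're curious what those properties look like on your own
platform, here's a trivial probe. It doesn't reproduce Firebird's
actual numbering scheme - it just shows the ingredients that go
into it:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct probe { char c; double d; };

int main(void)
{
    uint16_t x = 1;
    /* first byte of a 16-bit 1 tells you the byte order */
    printf("endianness: %s\n",
           *(unsigned char *)&x ? "little" : "big");
    /* padding before the double shows the alignment rule */
    printf("double alignment: %zu\n", offsetof(struct probe, d));
    return 0;
}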

> Shadow count 0

No shadows - shadowing has largely disappeared from use -
it was a poor-man's (slow) RAID.

> Page buffers 65535

That's a big cache. I believe the version your running
does not have the linear scan in the cache handling algorithms
that make anything over 10K pages slow.

> Next header page 0

A multi-file database has consistent page numbering - it
doesn't restart with each file. So if this were a multi-file
database, the next header page would have a non-zero number.

> Database dialect 3

Dialect 3. You're officially part of the modern era.

> Creation date Jun 15, 2008 3:19:06

This is a new database.

> Attributes force write

That looks good.
>
> Variable header data:

Following the fixed portion of the header is a string
of clumplets that describe other attributes. A clumplet
has a type byte, a length, and data. In theory, a
database manager that doesn't understand a particular
clumplet type can just skip it.
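
A skip-what-you-don't-know walker is about this simple. This is
a sketch only - the type codes and the end marker value here are
assumptions; the real ones live in ods.h:

#include <stdio.h>

#define HDR_end 0   /* assumed end-of-data marker */

static void walk_clumplets(const unsigned char *p,
                           const unsigned char *end)
{
    while (p < end && *p != HDR_end) {
        unsigned type = *p++;   /* one type byte */
        unsigned len  = *p++;   /* one length byte */
        printf("clumplet type %u, %u bytes of data\n", type, len);
        p += len;               /* skip data we don't understand */
    }
}

int main(void)
{
    /* a made-up buffer: type 5 with 2 bytes, type 6 with 1 byte, end */
    const unsigned char hdr[] = { 5, 2, 0xAB, 0xCD, 6, 1, 0xEF, 0 };
    walk_clumplets(hdr, hdr + sizeof hdr);
    return 0;
}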

> Sweep interval: 0

Sweeping is off.

> *END*

And that's all for the header page.
>
> Database file sequence:
> File E:\Data\DBASE.GDB is the only file

That final report is gathered from the earlier information.
>
> Analyzing database pages ...

And on to the meat of the issue.

> TABLE1 (130)

Table1 has an RDB$RELATION_ID (table number) of 130. User
table numbers start at 129. System tables start at 1.

> Primary pointer page: 311, Index root page: 312

You can check papers on IBPhoenix to find out more about the ODS.
This is saying that to read Table1, you start with a page of page
numbers on page 311. To find indexes, read page 312.

> Average record length: 126.03, total records: 31114063

Records are run-length compressed. Your records average 126.03 bytes,
and you have 31,114,063 in this table. Your page size is 16K, so you'll
get about 110 records on a page, counting the page header, plus the
record size, plus the page line index for each record and the record
header. That's assuming its all new data - no old versions, and
completely fill the page. If you have reserved space on pages (which
you do), each record reserves space for a second record header - about
16 bytes.
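
Roughly, with assumed overheads - a page header near 32 bytes, a
16-byte record header, and a 4-byte line index entry per record;
none of these figures come from ods.h, they're just for the
estimate:

#include <stdio.h>

int main(void)
{
    const double page_size   = 16384;
    const double page_header = 32;     /* assumed */
    const double rec_header  = 16;     /* assumed */
    const double line_index  = 4;      /* assumed */
    const double avg_record  = 126.03; /* from gstat */

    double per_record = avg_record + rec_header + line_index;
    printf("~%.0f records per page\n",
           (page_size - page_header) / per_record);
    /* prints ~112 - in the neighborhood of "about 110" */
    return 0;
}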

> Average version length: 9.00, total versions: 202075, max
> versions: 3

This is the report on old versions. You've got more than 200 thousand
of them, but compared to the total number of records, that's about
0.65%. And they're deltas - meaning that they contain only enough
information to recreate the old version from the new, not a full copy
of the record. Old versions are stored in the space reserved on the
page.

"Max versions" is the number of record versions in the longest chain
of old versions. That's a strong indication of the amount of time
that garbage collecting the record will take. 3 is a pretty good
number. I've seen "max versions" in the hundreds of thousands.

> Data pages: 317927, data page slots: 317927, average fill: 86%

This table uses 317,927 data pages. My guess (I haven't checked the
code) is that data page slots and data pages are the same because no
data pages have been released from the table. Data page slots are
the page numbers stored on a pointer page.

You're actually storing about 99 records or record versions per page
((31,114,063 + 202,075) / 317,927 is about 98.5), which is consistent
with the fill distribution:

> Fill distribution:
> 0 - 19% = 0
> 20 - 39% = 0
> 40 - 59% = 0
> 60 - 79% = 8
> 80 - 99% = 317919

This is a histogram of page fill levels. The vast majority of the
data pages are nearly full.


My suggestion would be to reduce the page cache to 20K pages and
see if that helps.


Good luck,


Ann