Subject: Re: [firebird-support] Re: One table become very slow
Author: Ann W. Harrison
Post date: 2008-09-15T17:11:41Z
marrtins_dqdp wrote:
>
> I quite don`t understand what is "worst" record and what could be
> typical scenarious for those 3000 versions per record?
>

Let me see if I can explain this. When you update or delete a
record in one transaction, Firebird keeps the older version around
until all the transactions that were running before you committed
the update or delete have finished, so those older transactions
can maintain a stable view of the data.
For example:
Transaction 0: updates table1.record1.field1 to "A new day dawns"
Transaction 0: commits
Transaction 1: reads table1.record1.field1 sees "A new day dawns"
Transaction 2: updates table1.record1.field1 to "The sun at midday"
Transaction 2: commits
Transaction 3: updates table1.record1.field1 to "The sultry afternoon"
Transaction 3: commits
Transaction 4: updates table1.record1.field1 to "The evening breezes"
Transaction 4: commits
Transaction 5: updates table1.record1.field1 to "The dusk closes in"
Transaction 5: commits
Transaction 6: updates table1.record1.field1 to "The stars at midnight"
Transaction 6: commits
Transaction 1: reads table1.record1.field1 sees "A new day dawns"
What is meant by "a stable view of data" is that every time
Transaction 1 reads table1.record1.field1, it sees the same value.
At this point, since Transaction 1 is still active, the database
contains six versions of table1.record1. When Transaction 1 finally
commits, there's nobody around who cares about the first five
versions. They're garbage and can be removed. But before Transaction
1 finally commits (or rolls back), all versions must stay.
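
For anyone who wants to watch this happen, the same sequence can be
played out in two isql sessions (the table, column, and key are
invented for the sketch):

  /* Session A: open a snapshot transaction and read the row. */
  SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
  SELECT field1 FROM table1 WHERE id = 1;  -- "A new day dawns"

  /* Session B, a second connection: change the row and commit. */
  UPDATE table1 SET field1 = 'The sun at midday' WHERE id = 1;
  COMMIT;

  /* Session A again: its snapshot still shows the old value, so
     Firebird must keep the back version until A ends. */
  SELECT field1 FROM table1 WHERE id = 1;  -- still "A new day dawns"
  COMMIT;  -- now that back version is garbage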
To make its point, this example uses a record that Transaction 1
actually read. In fact, all changes made after Transaction 1 started
keep their old versions, because Transaction 1 might still read those
records and their old values might be related to the value of
table1.record1.field1 that it already saw.
So, if during normal daily processing you have a long-lasting
transaction, it blocks garbage collection, and Firebird keeps old
versions of records in case that old transaction wants to see them.
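
In isql terms, the damaging pattern is nothing more exotic than
this (a sketch; the table name is invented):

  SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
  SELECT count(*) FROM clients;
  /* ... and then the connection idles for hours with neither COMMIT
     nor ROLLBACK, so every record version created anywhere in the
     database since it started has to be kept. */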
To the specific case at hand....
> CLIENTS (132)

This is the table name and the table's RDB$RELATION_ID.

> Primary pointer page: 207, Index root page: 208

This line is critical for the internal function of Firebird but
of no interest to you.
> Average record length: 288.79, total records: 9403

This line says that the compressed length of a record in the
CLIENTS table is 288.79 bytes and the table contains 9403 primary
records. Multiplying that out and including overhead, you should
be using about 710 4K pages for the table.
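
As a rough check on the 710 (my arithmetic, assuming a 4K page holds
about 3,800 bytes of row data once the page header and the per-row
index entries are subtracted):

  9,403 records x 288.79 bytes = ~2,715,000 bytes
  2,715,000 bytes / ~3,800 usable bytes per page = ~710 pages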
> Average version length: 9.00, total versions: 27565987,

This line says that the average old version of a record is only
nine bytes, but that you have almost 28 million back versions
(27,565,987).

> max versions: 6180

This line says that one particular record has over six thousand
back versions.
> Data pages: 209068, data page slots: 209849, average fill: 86%

This line says that the 9403 records and their 28 million back versions
are using two hundred thousand pages, vs. the 710 required for primary
records.
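
Those two numbers are consistent with each other: at 86% average fill,
209,068 pages x 4,096 bytes x 0.86 comes to roughly 736 MB, which
works out to about 27 bytes per back version: the nine data bytes plus
an estimated 18 bytes of per-version record header and page index
entry (that overhead figure is my estimate; gstat doesn't report it).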
>
> It will be difficult to catch the problem because I don`t fully
> understand how garbage is built/cleaned up. Recently I added field
> that is constantly updated - timestamped. Could it be the problem?
> It`s kinda silly... It is fairly basic Apache/PHP application.
Yes, that field is a problem. If you describe what you're doing,
we can probably describe a way to do it that doesn't cause this
problem.
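
One common shape for such a fix (a sketch with invented names, not a
prescription): give the frequent timestamp write its own short
transaction, and make any long-lived reader read-only read-committed,
which Firebird pre-commits so that it doesn't hold up garbage
collection:

  /* Writer: one short transaction per update. */
  SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
  UPDATE clients SET last_seen = CURRENT_TIMESTAMP WHERE id = 42;
  COMMIT;

  /* Long-lived reader: does not pin back versions. */
  SET TRANSACTION READ ONLY ISOLATION LEVEL READ COMMITTED;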
> Maybe should I change something with page size or buffers?

No, it has nothing to do with page size or buffers. It has to
do with frequent modifications and keeping transactions alive
for hours.
> ...did full metadata dump and recreated database "from scratch" - to
> FB version 2.1.1 and pumped data back.

Any time you use a data pump or a backup/restore, you eliminate all
the back versions, so everything is fine, until you start running.
Then you've got a long-running transaction somewhere, and lots of
modifications.
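
That cycle, for reference, is just (the file names are placeholders):

  gbak -b -user SYSDBA -password masterkey mydb.fdb mydb.fbk
  gbak -c -user SYSDBA -password masterkey mydb.fbk mydb_new.fdb

and the restored file starts life with primary record versions only.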
>
> Yes I`ts the same, but I grabbed it after all clients were shut down,
> so I suppose nothing big has changed.

What we wanted to find was the difference between the oldest active
transaction and the next transaction ... which is pretty boring on
a recently shut-down system.
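
Those markers live on the database header page; on a live system,
something like the following (the numbers here are invented) is what
gives the game away:

  gstat -h mydb.fdb

  Oldest transaction      12345
  Oldest active           12346
  Oldest snapshot         12346
  Next transaction        987654

A six-figure gap between "Oldest active" and "Next transaction" means
some transaction has been sitting open while the rest of the system
churns.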
Cheers,
Ann