| Subject | Re: One table become very slow |
|---|---|
| Author | marrtins_dqdp |
| Post date | 2008-09-16T10:17:57Z |
Thank you all for such a comprehensive explanation. I'll try to monitor
the database, see if something strange happens again, and report back.
The backup is finished (it took more than 24h, during which it wrote
~4MB); the database is restored and has now been running fine for 24h.
1) it is a PHP+Apache application
2) using ibase_pconnect() - always with the same user, same password,
same role (PUBLIC)
3) never closing the connection at the end of the script - the manual
says I can rely on PHP to manage and close all opened resources
(files, databases, etc.). Doing this for years has never caused problems.
4) do not use ibase_trans() (or extremely rarely - I cannot remember if
it is used anywhere), so everything is done via a single database
connection.
5) Script which timestamps CLIENTS records:
<?php
// $q is a result set already fetched from MySQL
if($q2 = ibase_prepare("UPDATE CLIENTS SET LAST_EMAIL_DATE = ? WHERE CLIENT_ID = ?"))
{
    while($r = db_fetch_object($q)) // fetching from MySQL
        ibase_execute($q2, date('Y-m-d H:i:s', $r->mm_timetosend), $r->mm_iduser);
}
?>
Maybe, if using PHP+Apache with ibase_pconnect() and 2-3 people are
working simultaneously, they are actually sharing a single transaction
and that causes problems? However, I have worked this way before and
have seen ~30 simultaneous users working without problems.
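(Editorial sketch, not part of the original post: one way to rule out a
lingering transaction under ibase_pconnect() is to run the update loop
inside an explicit transaction and commit it at the end. The connection
handle $db and the MySQL result $q are hypothetical placeholders.)
<?php
// Run all updates inside one explicit transaction and commit it, so
// nothing stays open for the lifetime of the pooled Apache process.
$tr = ibase_trans(IBASE_DEFAULT, $db);
$q2 = ibase_prepare($tr, "UPDATE CLIENTS SET LAST_EMAIL_DATE = ? WHERE CLIENT_ID = ?");
while($r = db_fetch_object($q)) // fetching from MySQL
    ibase_execute($q2, date('Y-m-d H:i:s', $r->mm_timetosend), $r->mm_iduser);
ibase_commit($tr); // release the transaction so garbage collection can proceed
?>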
--- In firebird-support@yahoogroups.com, "Ann W. Harrison"
<aharrison@...> wrote:
> marrtins_dqdp wrote:
>
> >
> > I quite don't understand what the "worst" record is and what could be
> > typical scenarios for those 3000 versions per record?
> >
>
> Let me see if I can explain this. When you update or delete a
> record in one transaction, Firebird keeps the older version around
> until all the transactions that were running before you committed
> the update or delete have finished, so those older transactions
> can maintain a stable view of the data.
>
> For example:
> Transaction 0: updates table1.record1.field1 to "A new day dawns"
> Transaction 0: commits
> Transaction 1: reads table1.record1.field1 sees "A new day dawns"
> Transaction 2: updates table1.record1.field1 to "The sun at midday"
> Transaction 2: commits
> Transaction 3: updates table1.record1.field1 to "The sultry afternoon"
> Transaction 3: commits
> Transaction 4: updates table1.record1.field1 to "The evening breezes"
> Transaction 4: commits
> Transaction 5: updates table1.record1.field1 to "The dusk closes in"
> Transaction 5: commits
> Transaction 6: updates table1.record1.field1 to "The stars at midnight"
> Transaction 6: commits
> Transaction 1: reads table1.record1.field1 sees "A new day dawns"
>
> What is meant by "a stable view of data" is that every time
> Transaction 1 reads table1.record1.field1, it sees the same value.
>
> At this point, since Transaction 1 is still active, the database
> contains six versions of table1.record1. When Transaction 1 finally
> commits, there's nobody around who cares about the first five
> versions. They're garbage and can be removed. But before Transaction
> 1 finally commits (or rolls back) all versions must stay.
>
> To make its point, this example uses a record that Transaction 1
> actually read. In fact, all changes made after Transaction 1 started
> keep old versions because those old versions might be related to
> the value of table1.record1.field1.
>
> So, if during the normal daily processing you have a long-lasting
> transaction, it blocks garbage collection, and Firebird keeps old
> versions of records in case that old transaction wants to see them.
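(Editorial illustration, not from the original message: a minimal PHP
sketch of the kind of long-lived snapshot transaction described above;
the database path, credentials, and table are placeholders.)
<?php
// A snapshot transaction that is opened and then left uncommitted.
// While $tr stays open, Firebird must keep every back version this
// transaction might still need for a consistent read.
$db = ibase_connect('localhost:/data/app.fdb', 'SYSDBA', 'masterkey');
$tr = ibase_trans(IBASE_CONCURRENCY, $db); // snapshot isolation
$r  = ibase_query($tr, "SELECT FIRST 1 field1 FROM table1");
// ...hours pass with neither ibase_commit($tr) nor ibase_rollback($tr)...
// every UPDATE committed elsewhere meanwhile leaves a back version
// that garbage collection cannot remove yet
ibase_commit($tr); // only now can those back versions be collected
?>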
>
>
> To the specific case at hand....
>
> This is the table name and the table's RDB$RELATION_ID
> > CLIENTS (132)
>
> This line is critical for the internal function of Firebird but
> of no interest to you.
> > > Primary pointer page: 207, Index root page: 208
>
> This line says that the compressed length of a record in the
> CLIENTS table is 288.79 bytes and the table contains 9403 primary
> records. Multiplying that out and including overhead, you should
> be using about 710 4K pages for the table.
> > > Average record length: 288.79, total records: 9403
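(Editorial check of the arithmetic above: 9403 records x 288.79 bytes
is about 2.7 MB; 2.7 MB / 4096 bytes per 4K page is about 663 pages,
and allowing for page fill and per-record overhead gives roughly the
710 pages cited.)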
>
> This line says that the average old version of a record is only
> nine bytes, but that you have almost 28 million back versions
> (27,565,987).
> > > Average version length: 9.00, total versions: 27565987,
>
> This line says that one particular record has over six thousand
> back versions.
> > > max versions: 6180
>
> This line says that the 9403 records and their 28 million back versions
> are using two hundred thousand pages, vs. the 710 required for primary
> records.
> > > Data pages: 209068, data page slots: 209849, average fill: 86%
>
>
> >
> > It will be difficult to catch the problem because I don't fully
> > understand how garbage is built/cleaned up. Recently I added a field
> > that is constantly updated - timestamped. Could it be the problem?
> > It's kinda silly... It is a fairly basic Apache/PHP application.
>
> Yes, that field is a problem. If you describe what you're doing
> we can probably describe a way to do it that doesn't cause this
> problem.
> >
> > Maybe I should change something with page size or buffers?
>
> No, it has nothing to do with page size or buffers. It has to
> do with frequent modifications and keeping transactions alive
> for hours.
>
> > ...did full metadata dump and recreated database "from scratch" - to
> > FB version 2.1.1 and pumped data back.
>
> Any time you use a data pump or a backup/restore, you eliminate all
> the back versions, so everything is fine, until you start running.
> Then you've got a long running transaction somewhere, and lots of
> modifications.
> >
> >
> > Yes, it's the same, but I grabbed it after all clients were shut down,
> > so I suppose nothing big has changed.
>
> What we wanted to find was the difference between the oldest active
> transaction and the next transaction ... which is pretty boring on
> a recently shutdown system.
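(Editorial note, not part of the original message: on a live system that
difference can be read from the database header page, e.g.

gstat -h /path/to/database.fdb

which prints "Oldest active" and "Next transaction" among other counters;
a large, growing gap between the two means some transaction has stayed
open and is blocking garbage collection. The path is a placeholder.)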
>
>
> Cheers,
>
>
> Ann
>