| Subject | Re: [firebird-support] database corruption without system restart |
| --- | --- |
| Author | Ann W. Harrison |
| Post date | 2005-10-28T15:51:02Z |
Michal Rzewuski wrote:
> so the history of changes looks as follows. the last stat
> is taken after the sweep process is finished, the first is
> made on a fresh copy of the database file, just after restore.
>
>          data pages    data page slots
>   1      4,424,805       4,424,805
>   2      4,310,775       4,435,591
>   3      4,300,972       4,440,072
>   4      4,288,493       4,440,163
>   5      4,270,984       4,440,120
>   6      4,258,194       4,440,151
>
>          unused slots    pages +/-    slots +/-
>   1            0
>   2      124,816         -114,030       10,786
>   3      139,100           -9,803        4,481
>   4      151,670          -12,479           91
>   5      169,136          -17,509          -43
>   6      181,957          -12,790           31
Case 1 is immediately after a backup restore and the
number of slots is the same as the number of data pages.
> after restore i deleted some millions of records and after
> that i started the sweep process (before the crash i had about
> 100e6 and i deleted about 20e6).

Is case 2 immediately after deleting records? Deleting
records actually creates a new record version. When
the delete is committed, and the commit is "mature"
and no active transactions need the old version, the
deleted stub and back version can be removed when next
it's visited. A massive delete will cause some
garbage collection, but it also creates new entries.
What were the conditions for cases 3-5?
Case 6 is after the sweep is complete. At that point
you've got 166,611 fewer data pages but 15,346 more
data page slots. I think that is very odd and I need to
think about it.
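Just to show where those two numbers come from - they are the
differences between the case 1 and case 6 rows above:

    # quick check of the figures above, case 1 vs case 6
    pages_1, slots_1 = 4424805, 4424805
    pages_6, slots_6 = 4258194, 4440151
    print(pages_1 - pages_6)   # 166611 fewer data pages
    print(slots_6 - slots_1)   # 15346 more data page slots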
Let me bore you a bit longer about pointer pages and slots.
Firebird finds records by decomposing the db_key into a
sequence, a slot, and an offset.
It uses the sequence to look up a pointer page for the
table in the RDB$PAGES table. That record gives it the
page number of a pointer page for the table. Each pointer
page contains an array of 4-byte page numbers of data
pages plus an array of 2-bit quantities that indicate
whether the page is empty, nearly empty, nearly full, or
full. An 8192-byte pointer page points to 1920 data
pages.
The slot is the offset in the array of page numbers on
the pointer page that holds the page number for the
record we're looking for. Firebird then reads that
data page and uses the offset to find the index entry
for the record on the data page. Every data page has
an index that holds the offset and length of records
(and blobs, and fragments and old record versions) stored
on the page.
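If a sketch helps, here is roughly the shape of that lookup in
Python. The flat record-number encoding and the records-per-page
constant are illustrative assumptions of mine, not Firebird's actual
db_key layout; only the sequence -> slot -> line chain follows the
description above.

    # Illustrative sketch only; the encoding and constants are assumptions,
    # not Firebird's real on-disk format.
    SLOTS_PER_POINTER_PAGE = 1920   # from the 8192-byte pointer page example
    RECORDS_PER_DATA_PAGE = 100     # hypothetical, depends on record size

    def decompose_db_key(record_number):
        # split a flat record number into (sequence, slot, line)
        per_pointer_page = SLOTS_PER_POINTER_PAGE * RECORDS_PER_DATA_PAGE
        sequence, rest = divmod(record_number, per_pointer_page)
        slot, line = divmod(rest, RECORDS_PER_DATA_PAGE)
        return sequence, slot, line

    def fetch_record(pointer_pages, data_pages, record_number):
        # sequence -> pointer page (standing in for the RDB$PAGES lookup),
        # slot -> data page number, line -> (offset, length) index entry
        sequence, slot, line = decompose_db_key(record_number)
        data_page_number = pointer_pages[sequence][slot]
        offset, length = data_pages[data_page_number][line]
        return offset, length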
> the other thing i'm worrying about is the OIT, it's not
> changing and the difference is huge - normally in the past
> the transaction number was a small step behind the OAT.

The oldest interesting transaction is either rolled back
or in limbo. A sweep will move the oldest interesting transaction
forward as long as the oldest interesting is rolled back
and there are no active transactions that can see older records.
My thought is you might have a transaction in limbo. Gfix
can find it and you can change its state to committed or
rolled back - your choice. Committed probably makes sense.
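If you want to check for a limbo transaction before the weekend,
gfix can do that from the command line - roughly like this, where
<database>, <password> and <transaction-id> are placeholders and the
exact switch spellings may vary a little between versions. The first
form lists limbo transactions, the other two resolve one of them:

    gfix -user sysdba -password <password> -list <database>
    gfix -user sysdba -password <password> -commit <transaction-id> <database>
    gfix -user sysdba -password <password> -rollback <transaction-id> <database>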
> Oldest transaction 25367
> Oldest active 244699
> Oldest snapshot 244699
> Next transaction 257758
>
> the problem is that it has to be working because of consumers
> that start to be very nervous. i'm waiting for the weekend
> to backup/restore but it has to be working for the next 24h :(
> every day i get about 0.5 - 1e6 records.
>
> and my questions:
> -are new pages for fresh data allocated from data page slots?
> (i mean, are data page slots reused? why is this space not
> given back to the database for general use but always attached
> to the same table?).

When a new page is allocated for a database, its page number will
be stored in an empty slot on an existing pointer page, if one can
be found. The system keeps a high water mark for each table that
contains the sequence number of the first pointer page with unused
slots. Without the high water mark, finding a free slot becomes
increasingly expensive as the table gets larger.
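As a rough illustration of why the high water mark matters (my own
sketch, not the engine's code): the search for a free slot can start
at the first pointer page known to have one, instead of scanning
every pointer page from sequence 0.

    # Illustration only: None marks an unused slot on a pointer page.
    def find_free_slot(pointer_pages, high_water_mark):
        for sequence in range(high_water_mark, len(pointer_pages)):
            for slot, page_number in enumerate(pointer_pages[sequence]):
                if page_number is None:
                    return sequence, slot
        return None   # no free slot anywhere; a new pointer page is needed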
When gstat counts slots, it counts unused slots on pointer pages if
there are slots in use further down on the page. So, if you started
with a full pointer page (1920 entries) and deleted entries 1 through
1000, and 1920, gstat would report that there were 1919 slots
on the page. The other slots would be available for reuse as new
pages were allocated to the table.
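Here is a toy model of that counting rule - my reading of it, not
gstat's actual code - using the 1920-slot example:

    # a slot is counted if it, or any later slot on the page, is in use
    def counted_slots(in_use):
        last_used = max((i for i, used in enumerate(in_use) if used), default=-1)
        total = last_used + 1                 # trailing empty slots are not counted
        unused = sum(1 for u in in_use[:total] if not u)
        return total, unused

    page = [True] * 1920
    for n in list(range(0, 1000)) + [1919]:   # delete entries 1-1000 and 1920
        page[n] = False
    print(counted_slots(page))                # (1919, 1000)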
When a pointer page is completely empty, it continues to be attached
to the table because its sequence number is built into the db_keys
of records in the table that were stored after it was created. Db_keys,
in turn, are stored in indexes, so stability is important. The space
taken by pointer pages is not significant.
The number of unused slots in your database is a concern and I will
look at the algorithm for finding an unused slot. If the high water
mark is set incorrectly, we may not be seeing some unused slots.
> -should i start the sweep process again to reduce the OIT-OAT
> difference or do anything else?

Yes, but a backup and restore will work better.
> -how much space do i have in reserve before the next database
> corruption?
> -is it a good idea to turn on 100% filling of data pages?

Look for posts from Dalton Calford on this list about three months
ago - he's handled the problem of record number overflow for some
time and has good suggestions.
> the average size of a record i can calculate:
> 4424805*8192/100e6 = 360 bytes (but in reality *0.8 because of
> the 80% filling of data pages).

Gstat gives you the average record size, I think.
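For what it's worth, spelling your calculation out - whether
multiplying by 0.8 is exactly the right adjustment for the fill
factor, I'd have to check:

    pages, page_size, records = 4424805, 8192, 100e6
    space_per_record = pages * page_size / records      # ~362 bytes of page space
    print(space_per_record, space_per_record * 0.8)     # ~362.5, ~290 at 80% fill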
Regards,
Ann