Subject: Re: [firebird-support] Fb1.5 RC7 CS: Pointer page vanished from DPM_next (249);
Author: Helen Borrie
Post date: 2004-01-12T11:27:17Z
At 12:01 PM 12/01/2004 +0100, you wrote:
>(Sorry for crossposting)
>
>I have encountered a serious problem with Firebird 1.5 RC7 Classic
>Server. Inserting a large number of rows in a table corrupts the
>database. The problem is only detected once selections are made or when
>a backup is made.
>
>The table is filled by a client app using IBPP as connection software.
>It pumps roughly 800,000,000 rows into the table (specs below).
>The final database size is approximately 80GB.
>
>I have tried various options which made no difference:
>* One big transaction (took about 26 hours)
>* One transaction per 2000 rows (almost twice as slow)
>* ForceWrites on and off
>
>Below are details. Any help is welcome.
>
>Last few lines from the gbak operation which show the failure to backup
>the current database:
>
>gbak: 449200000 records written
>gbak: 449220000 records written
>gbak: 449240000 records written
>gbak: 449260000 records written
>gbak: ERROR: internal gds software consistency check (pointer page
>vanished from DPM_next (249))
>gbak: ERROR: gds_$receive failed
>gbak: Exiting before completion due to errors
>gbak: ERROR: internal gds software consistency check (can't continue
>after bugcheck)
>
>The tail of firebird.log shows the following:
>
>littlebeast (Client) Thu Jan 8 02:57:16 2004
> SCH_validate -- not entered
>
>littlebeast (Client) Thu Jan 8 02:57:16 2004
> SCH_validate -- not entered
>
>littlebeast Thu Jan 8 04:21:56 2004
> Database: /var/firebird/data/db.fdb
> internal gds software consistency check (pointer page vanished
>from DPM_next (249))
>
>The database looks like this: (Note that I don't require modelling
>support, because I need this data to convert it into a properly
>structured relational model. That's why I need the rowkey mentioned in
>the q table.)
>
>create database '/var/firebird/data/db.fdb' PAGE_SIZE 16384
>default character set none;
>
>create table q
>(
> id integer not null,
> moment timestamp not null,
> l numeric(16,4),
> v integer,
> o numeric(16,4),
> h numeric(16,4),
> lo numeric(16,4),
> b numeric(16,4),
> a numeric(16,4),
> t integer,
> rowkey integer not null,
> primary key (rowkey)
>);
>
>create generator rowkey_gen;
>
>set term ^ ;
>
>create trigger q_bi0 for q
>active before insert position 0
>as
>begin
>  new.rowkey = gen_id(rowkey_gen, 1);
>end ^
>
>set term ; ^
>
>The system configuration on which Firebird is installed:
>* RedHat 8.0
>* Filesystem ext3
>* 1GB swapspace
>* 2.8GHz Pentium4
>* 512k cache
>* 512MB RAM
>* 250GB IDE disk

Reality checks:

Your rowkey uses a generator which will pass the limit of a 32-bit
integer before the third pass through your source records is complete:
three passes of 800 million rows is 2.4 billion, and a signed 32-bit
integer tops out at 2,147,483,647. Past that point, you will overflow
your integer primary key. The gbak messages seem to be telling us that
you have only about half of your records stored.
Check: are you resetting the generator to zero on each attempt?
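From isql, resetting it is a one-liner, and GEN_ID with a zero step
lets you read the current value without bumping it:

set generator rowkey_gen to 0;
select gen_id(rowkey_gen, 0) from rdb$database;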
Check: are you sure that the other 400 million rows were processed, or
that the server didn't crash?
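A straight count would settle that one, though on an 80GB table it
will take a while:

select count(*) from q;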
800 million is waaaay too many inserts for one transaction; 2000, on
the other hand, is far too few. Blocks of 10,000 seem to be a good
size, from general experience.
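In outline, each block your pump issues would look something like this
(just a sketch; your IBPP app would prepare the insert once, execute
it 10,000 times with fresh values, then commit):

insert into q (id, moment, l, v, o, h, lo, b, a, t)
  values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?);
/* ...repeat for the rest of the block; rowkey comes from the trigger */
commit;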
The vanished index pointer page looks to be at least a symptom, if not
a source, of your problems. I would want to drop that PK constraint
until after you have loaded the data; the index is probably a
nightmare of imbalance. And I really WOULD want that rowkey to be a
64-bit integer.
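Assuming dialect 3, that means declaring the column as bigint
(generators already return 64-bit values there, so your trigger needs
no change) and adding the constraint only once the load is done. A
sketch, with pk_q being just a name I picked:

create table q
(
  id integer not null,
  moment timestamp not null,
  l numeric(16,4),
  v integer,
  o numeric(16,4),
  h numeric(16,4),
  lo numeric(16,4),
  b numeric(16,4),
  a numeric(16,4),
  t integer,
  rowkey bigint not null
);

/* after the load completes: */
alter table q add constraint pk_q primary key (rowkey);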
Those are what I can think of at the moment...
/h