| Subject | Re: Large row count in tables |
|---|---|
| Author | roar4711 |
| Post date | 2009-08-04T11:43:55Z |
I've tried a bit different approach using bulk import (external file), with indexes deactivated.
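For context, the setup was roughly as follows (the file name and index name below are placeholders, not the actual ones):

```sql
-- External table mapped onto the bulk-import file
-- (file name is a placeholder)
CREATE TABLE EXT128 EXTERNAL FILE 'ext128.dat' (
  RID   INTEGER,
  RTIME INTEGER,
  RDATA FLOAT
);

-- Deactivate the index on the target table before loading
-- (index name is a placeholder)
ALTER INDEX IDX_TYPE128_ID_TIME INACTIVE;

-- Load from the external table, then reactivate,
-- which rebuilds the index in a single pass
INSERT INTO TYPE_128 (id, time_stamp, data)
  SELECT rid, rtime, rdata FROM ext128;
ALTER INDEX IDX_TYPE128_ID_TIME ACTIVE;
```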
Here is the result:
Starting transaction...
Preparing query: SELECT a.RID, a.RTIME, a.RDATA, a.RDB$DB_KEY
FROM EXT128 a
Prepare time: 00:00:00.
Field #01: EXT128.RID Alias:RID Type:INTEGER
Field #02: EXT128.RTIME Alias:RTIME Type:INTEGER
Field #03: EXT128.RDATA Alias:RDATA Type:FLOAT
Field #04: EXT128.DB_KEY Alias:DB_KEY Type:STRING(8)
PLAN (EXT128 NATURAL)
Executing...
Done.
83 fetches, 0 marks, 5 reads, 0 writes.
0 inserts, 0 updates, 0 deletes, 18 index, 0 seq.
Delta memory: 11372 bytes.
Execute time: 00:00:00.
Script execution finished.
Preparing query: INSERT INTO TYPE_128 (id, time_stamp, data)
SELECT rid, rtime, rdata FROM ext128
Prepare time: 00:00:00.
PLAN (EXT128 NATURAL)
Script execution finished.
Preparing query: INSERT INTO TYPE_128 (id, time_stamp, data)
SELECT rid, rtime, rdata FROM ext128
Prepare time: 00:00:00.
PLAN (EXT128 NATURAL)
Executing...
Done.
197255 fetches, 45140 marks, 25000 reads, 28780 writes.
9660 inserts, 0 updates, 0 deletes, 0 index, 0 seq.
Delta memory: 10964 bytes.
TYPE_128: 9660 inserts.
9660 rows affected directly.
Execute time: 00:01:23.
Script execution finished.
Record count: 19459137
As you can see, it still takes over a minute to insert those 9660 rows, which is too slow for my needs.
I'm going to try re-arranging the index fields and increasing the page size to see if it makes any difference. If someone sees anything suspicious in the output above, I would be very grateful for comments.
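In case it helps, the index change I have in mind is roughly this (index names are made up):

```sql
-- Recreate the range indexes with time_stamp leading, as Mark suggested,
-- so time-ranged queries (and SELECT MAX(time_stamp)) can use them
CREATE INDEX IDX_TYPE128_TIME_ID ON TYPE_128 (time_stamp, id);
CREATE DESCENDING INDEX IDX_TYPE128_TIME_ID_DESC ON TYPE_128 (time_stamp, id);
```

The page size, as I understand it, cannot be changed in place; it takes a backup/restore cycle, e.g. `gbak -b db.fdb db.fbk` followed by `gbak -c -p 8192 db.fbk db.fdb`.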
--- In firebird-support@yahoogroups.com, Mark Rotteveel <Avalanche1979@...> wrote:
>
> roar4711 wrote:
> > I got a database with 5 different tables, each containing around 19 million records. Every five minutes, between 1500 and 2000 records are added to each table. Right now the database is 12 GB, but I expect it to grow to 80-90 GB for a whole year's worth of data.
> >
> > A table looks like this:
> >
> > integer ID
> > integer Time
> > float Data
> >
> > I have an asc and desc index on ID/Time for (time-)ranged queries. Can't have a primary key since the data isn't unique.
>
> I'd say you'd better reverse the order to Time/ID (or create it next to
> the existing) as that will probably give better selectivity (although
> that depends on your exact usage), and allow Firebird to use that index
> for your SELECT MAX(time)... as well.
>
> > Default page size is set to 4096 bytes.
>
> You might want to consider moving your database to a larger page size.
>
> --
> Mark Rotteveel
>