Subject | Re: Pinning Table data in memory |
---|---|
Author | Adam |
Post date | 2006-07-05T23:45:57Z |
> Is there any way in Firebird that will enable the database to hold / pin
> all records in memory so that disk I/O is reduced and performance is
> improved?
>
> Our application requires data to be fetched from 30 Million+ records in
> the table and the response time needs to be very good.

30 million records at what size per record? (You may be crossing the
2GB process size limit of a 32-bit process if it is all in cache.) For
Classic server it would be a really bad thing, because the cache cannot
be shared between connections, but even with Superserver I imagine you
may get into trouble.
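As a back-of-envelope sketch of that limit (the 100-byte average record size below is an assumed figure for illustration, not something stated in the thread):

```python
# Rough estimate of the cache needed to hold 30 million records in RAM.
# BYTES_PER_RECORD is an assumption; real Firebird rows also carry
# record-version and page overhead, so the true figure would be higher.
RECORDS = 30_000_000
BYTES_PER_RECORD = 100              # assumed average row size
LIMIT_32BIT = 2 * 1024 ** 3         # 2 GiB address space of a 32-bit process

total_bytes = RECORDS * BYTES_PER_RECORD
print(f"{total_bytes / 1024 ** 3:.2f} GiB")  # prints "2.79 GiB", over the limit
```

Even at a modest 100 bytes per row the data alone exceeds what a 32-bit server process can address, before counting indexes and cache bookkeeping.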
Holding records in memory does other bad things too, such as introducing
the possibility of data loss.
The slowness of hard drives is NOT their transfer rate; they can pull
in data very quickly. The problem is that the seek time (the time to
get to the first byte) is slow.
Scanning through 30 million records will take measurable time even in
the fastest RAM, so, as Milan pointed out, what you need is an index
structure that assists the queries you want to run against the data.
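A toy sketch of why the index matters more than where the data lives (the records here are hypothetical, and a Python dict stands in for the role a database index plays):

```python
# Finding one record by key, two ways. The data is made up for
# illustration; a dict plays the part of a B-tree index.
records = [{"id": i, "value": i * 2} for i in range(100_000)]

# Full scan: compare against every record until the match turns up.
scan_hit = next(r for r in records if r["id"] == 99_999)

# "Indexed" lookup: build the index once, then each fetch is a single
# step instead of up to 100,000 comparisons.
index = {r["id"]: r for r in records}
index_hit = index[99_999]

assert scan_hit is index_hit  # both strategies find the same record
```

Whether the 30 million rows sit on disk or in cache, an unindexed query still touches every one of them; an index reduces each lookup to a handful of page reads.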
Adam