Subject Re: [firebird-support] sweep performance
Author Aage Johansen
On Tue, 13 Jan 2004 21:07:22 +0000 (UTC), <lnd@...> wrote:

> KnowledgeBase Id: 132 of ibphoenix claims that sweep does a "complete table
> scan of every table in the database".
> Is this true for current versions of FB? How long does a sweep usually take,
> say, for a million-record database - minutes, hours, days?

The sweep is quite fast if there is no 'garbage' (old record versions) left in the database.
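
If you want to run a sweep by hand instead of waiting for the automatic sweep
interval, gfix does it. A rough example, assuming a local database and SYSDBA
credentials (the path, user and password are only placeholders for your setup):

  gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb

You can get a feel for how much work a sweep would have to do by looking at the
gap between the oldest transaction and the next transaction in the header page
(gstat -h, see further down). A small gap means little garbage and a quick sweep.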


> KnowledgeBase Id: 111 lists some interesting cases in which a database has to
> be restarted. Which of those limitations have been removed?

Is this the limit of 255 metadata changes on a table? That one needs a
backup+restore, not a restart. Maybe you're thinking of something else?


> In particular those (or what workaround is there other than backup/restore):
> * With record deletes the data pages will develop holes. The database will
> have several partially filled pages. The only way to fix this is to do a
> backup and restore. The backup and restore will effectively defragment the
> data pages.

There's no need to "fix" this. The empty space will (eventually) be reused.
Some people think a 'compacted' database is nice, so they do a backup+restore
to get rid of the 'empty space'. Firebird will then have to spend time asking
the OS for additional disk space again as the file grows back ...
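
For those who do want a compacted copy anyway, the usual cycle is a gbak backup
followed by a restore into a fresh file. Roughly like this - the paths and
credentials are only placeholders:

  gbak -b -v -user SYSDBA -password masterkey /data/mydb.fdb /data/mydb.fbk
  gbak -c -v -user SYSDBA -password masterkey /data/mydb.fbk /data/mydb.new.fdb

The restore packs the data pages tightly again, which is exactly why the new
file then has to grow as soon as inserts and updates resume.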


> * TIP (Transaction Inventory Page) growth - We will continually allocate new
> TIP pages. As the # of transactions grows, this increases the memory overhead,
> as each new transaction needs a larger buffer to store TIP bits. The # of
> pages on disk grows. A gbak backup/restore is required to bring the number of
> TIP pages back down to a smaller, more efficient value.

Don't know.
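
What I can say is that the transaction counters behind the TIP growth are easy
to watch in the database header, e.g. (local path shown; add -user/-password if
your setup needs them):

  gstat -h /data/mydb.fdb

Look at "Next transaction" there. After a gbak backup+restore the counters
start over, and the transaction inventory shrinks along with them.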


> In general, what is the largest more or less 24x7 FB database known?

I've heard about databases of 200GB, and one close to 1TB (maybe not 24x7).


--
Aage J.