Subject: sweep performance
Author: lnd@hnit.is
KnowledgeBase Id 132 at ibphoenix claims that a sweep does a "complete table
scan of every table in the database".
Is this still true for current versions of FB?
How long does a sweep usually take for, say, a million-record database:
minutes, hours, days?
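For reference, a sweep can be triggered manually (and the automatic sweep
tuned) with the gfix utility. A rough sketch, assuming a local database file
and SYSDBA credentials (the paths and password here are illustrative):

```shell
# Run an immediate manual sweep; it blocks until finished, and its duration
# depends mostly on how much record-version garbage has accumulated,
# not just on raw table size.
gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb

# Set the automatic sweep interval (default 20000 transactions),
# or 0 to disable it so sweeps can instead be scheduled off-peak.
gfix -housekeeping 0 -user SYSDBA -password masterkey /data/mydb.fdb
```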

KnowledgeBase Id 111 lists some interesting cases where a database has to be
backed up and restored.
Which of these limitations have been removed?
In particular these (or is there a workaround other than backup/restore?):

* With record deletes, the data pages develop holes: the database ends up
with several partially filled pages. The only way to fix this is a backup
and restore, which effectively defragments the data pages.

* TIP (Transaction Inventory Page) growth - New TIP pages are continually
allocated. As the number of transactions grows, this increases memory
overhead, since each new transaction needs a larger buffer to store the TIP
bits, and the number of pages on disk grows. A gbak backup/restore is
required to reset the number of TIP pages to a smaller, more efficient
value.


In general, what is the largest Firebird database known to run more or less
24 hours a day?
