Subject: Re: [firebird-support] Slow Sweep speed on large database - Long post
Author: Alexandre Benson Smith
Post date: 2005-08-04T19:14:42Z
Hi Eduardo,
Eduardo A. Salgado wrote:
>Longish question that deals with speed, of course.
>
>Firebird 1.5.0.4306 running in an 8 processor machine with 2 Gig
>RAM. Affinity, after reading here that it was academic to use all
>the processors, has been set to the first processor using
>SuperServer. Again, can we gain anything from letting SS use all the
>processors?
>
In the statistics I see d:\data, which indicates a Windows install. SS on
Windows with multiple processors tends to slow down; that's the reason to
use CPU Affinity. On Linux SS runs much better on SMP machines than on
Windows, but it is far from optimal; the definitive resolution for SMP
machines will be Vulcan/FB 3.0.
On SMP machines CS performs better; did you try it?
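For reference, the pinning is done with the CpuAffinityMask parameter in firebird.conf; a minimal fragment (the mask value 1 here simply mirrors your "first processor" setting):

```ini
# firebird.conf (Firebird 1.5): bind SuperServer to a single CPU on Windows.
# The value is a bitmask: 1 = CPU 0, 2 = CPU 1, 3 = CPUs 0 and 1, and so on.
CpuAffinityMask = 1
```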
>Have a large database of 4.5+ Gig. Has 5 tables. One has 4.1
>million records, the others are very small. Max Index Depth: 3 with
>some 8 Indexes (see header stats below).
>
Assuming that you have FKs between the larger table and the very small
tables, that will answer a lot of your problems (sweep taking too long).
The main reason why sweep takes long is long duplicate chains of values
in indices; FB 2.0 will solve this issue. Another approach is to not use
a declarative FK, and instead enforce the FK relationship using triggers,
with no index on those columns.
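A minimal sketch of that trigger approach, assuming hypothetical tables big_table (child, column small_id) and small_table (parent, column id), not your actual names:

```sql
/* Sketch only: replaces a declarative FK (and its auto-created,
   low-selectivity index on the child column) with a lookup trigger.
   Firebird 1.5 syntax (multi-action triggers). */
CREATE EXCEPTION e_fk_violation 'small_id not found in small_table';

SET TERM ^ ;

CREATE TRIGGER big_table_fk_check FOR big_table
ACTIVE BEFORE INSERT OR UPDATE POSITION 0
AS
BEGIN
  IF (NOT EXISTS (SELECT 1 FROM small_table s
                  WHERE s.id = NEW.small_id)) THEN
    EXCEPTION e_fk_violation;
END^

SET TERM ; ^
```

Remember that the declarative FK also protected the parent side; with triggers you must likewise guard DELETEs and key UPDATEs on small_table if rows there can disappear.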
>Doing a "Select Count(*)" for the one large table, takes over 10
>minutes!!! Reports 4.1 million records. The records consist of a
>few fields and a blob each.
>
Select count(*) will never be fast on a 4.1 million row table: every
row must be checked to see if it is "visible" to your active
transaction, so every page of that table must be read. Take a look in
the list archives for a long and detailed discussion about this topic.
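If the application needs the total often, one workaround frequently mentioned in the archives is to maintain the count yourself with triggers (a sketch; the table and trigger names are invented):

```sql
/* Sketch: a trigger-maintained counter avoids scanning 4.1 million rows.
   Uses the INSERTING/DELETING context variables added in Firebird 1.5. */
CREATE TABLE row_counts (
  table_name VARCHAR(31) NOT NULL PRIMARY KEY,
  cnt        INTEGER NOT NULL
);

SET TERM ^ ;

CREATE TRIGGER big_table_count FOR big_table
ACTIVE AFTER INSERT OR DELETE POSITION 0
AS
BEGIN
  IF (INSERTING) THEN
    UPDATE row_counts SET cnt = cnt + 1 WHERE table_name = 'BIG_TABLE';
  ELSE
    UPDATE row_counts SET cnt = cnt - 1 WHERE table_name = 'BIG_TABLE';
END^

SET TERM ; ^
```

Note that the single counter row serializes concurrent writers, so this fits batch loads like yours better than high-concurrency OLTP.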
>The database gets about 100,000 records (50,000 twice a day - morning
>and around noon) from external data input (not users). The database
>gets some 30,000 records a day deleted.
>
>Validates take about 1 hour to do and report no errors. Just in case
>ran a Mend in about 1 1/2 hour and still no errors.
>
>However, running a Sweep, with no users attached, last night was
>stopped after 5 hours and did not complete! Now, as I write this
>note, I have had a Sweep of a copy of the database going for 3 hours
>and still not completed.
>
Because of long duplicate chains of values in indices. Run the statistics
on the indices and remove those indices that have a lot of duplicates
(> 10000).
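To spot those indices you can run gstat -i, or query the stored selectivity (a sketch against the system tables; selectivity is roughly 1 / number of distinct values, so values near zero mean long duplicate chains; note that RDB$STATISTICS is only refreshed by SET STATISTICS or a backup/restore):

```sql
/* Indices with selectivity closest to zero have the most duplicates. */
SELECT RDB$INDEX_NAME, RDB$RELATION_NAME, RDB$STATISTICS
FROM RDB$INDICES
WHERE COALESCE(RDB$SYSTEM_FLAG, 0) = 0
ORDER BY RDB$STATISTICS;
```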
>What causes that slowness in Sweep? Could there be something damaged
>that is not reported by Validate nor Mend?
>
See above.
>Now doing a back up and restore takes too long. We do this overnight
>and it is time for the users to need access to the database and
>backup alone has not finished.
>
>What do we have to do to speed up the back up and restore? The
>Sweep?
>
See above.
>Are we getting to some limits on FB? (not that we can see in Ms.
>Borrie's book)
>
No, you are not reaching a limit of FB, but you must use it in a
different way to achieve maximum performance. Or rather, you have reached
a limit that I will comment on below (page buffers).
>Would we benefit from going to the latest 1.5.x server?
>
Bug fixes, plus some speed improvements in permission checks for complex
procedures.
>Thanks!
>
>-Eduardo
>
>===
>
>08/03/2005 10:59 PM 4,730,314,752 My.fdb
>08/04/2005 03:02 AM 876,576,768 My.gbk
>08/04/2005 12:40 AM 1,486,667,776 Daily.FDB
>08/04/2005 12:39 AM 409,600 My_sox.fdb
> 4 File(s) 7,093,968,896 bytes
> 3 Dir(s) 486,281,519,104 bytes free
>
>Database "d:\data\My.fdb"
>
>Database header page information:
> Flags 0
> Checksum 12345
> Generation 9674
> Page size 8192
> ODS version 10.1
> Oldest transaction 2069
> Oldest active 9585
> Oldest snapshot 9585
> Next transaction 9667
> Bumped transaction 1
> Sequence number 0
> Next attachment ID 0
> Implementation ID 16
> Shadow count 0
> Page buffers 16000
> Next header page 0
> Database dialect 3
> Creation date Jul 1, 2005 8:04:36
> Attributes force write
>
> Variable header data:
> Sweep interval: 20000
> *END*
>
You have the page buffers defined as 16000. FB 1.5 has a poor algorithm
for managing the pages in the buffer (that is solved in FB 2.0!), so for
FB 1.5 use at most 10000 page buffers; if you use bigger numbers FB will
slow down.
See you!
--
Alexandre Benson Smith
Development
THOR Software e Comercial Ltda.
Santo Andre - Sao Paulo - Brazil
www.thorsoftware.com.br