| Subject   | Re: [firebird-support] Re: Getting IBServer to release memory |
|-----------|----------------------------------------------------------------|
| Author    | Helen Borrie |
| Post date | 2003-07-10T14:17:48Z |

At 01:26 PM 10/07/2003 +0000, you wrote:
>I know that the update of a large number of records is not ideal.
>Unfortunately we have a user who likes to update once a month and so
>this situation arises.
>What I really want to know is if the behaviour of ibserver in terms
>of not releasing memory is 'normal' and whether anything can be done
>to release the memory back to the OS, other than stopping and
>restarting the service.

Alan and Svein have already told you what to do.

Instead of

  start transaction
  for 1 million rows do
  {
    update a row
    move forward
  }
  commit transaction

Do this in a procedure:

  declare counter
  while not end of table do
  {
    set counter to 0
    start transaction
    while counter < 10000 do
    {
      update a row
      move forward
      set counter = counter + 1
    }
    retrieve key of next row
    commit the transaction
  }

This way, PROVIDED you don't use Commit with Retain, the obsolete back
versions will become uninteresting and they will get deleted from disk and
db cache next time the garbage collector goes around. This will save up to
90% of cache growth, as well as free up page space on disk.
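
If it helps to see that loop in real code, here is a minimal client-side sketch
in Python against a DB-API driver for Firebird. The fdb module, the table TAB,
its key column ID, the AMOUNT update and the connection details are all
illustrative assumptions, not part of the advice above.

```python
# Sketch of the batched-update loop: walk the table by key, update at most
# 10000 rows per transaction, and hard-commit between batches so the obsolete
# back versions can be garbage-collected.  All names here are placeholders.
import fdb  # any DB-API 2.0 Firebird driver with the same calls would do

BATCH = 10000

con = fdb.connect(dsn='server:/data/mydb.fdb', user='SYSDBA', password='masterkey')
read_cur = con.cursor()
write_cur = con.cursor()

last_id = 0  # key of the last row already committed
while True:
    # Fetch the keys for the next batch, starting just past the last one done.
    read_cur.execute(
        "SELECT FIRST %d ID FROM TAB WHERE ID > ? ORDER BY ID" % BATCH,
        (last_id,))
    ids = [row[0] for row in read_cur.fetchall()]
    if not ids:
        break  # end of table

    for row_id in ids:
        write_cur.execute(
            "UPDATE TAB SET AMOUNT = AMOUNT * 1.1 WHERE ID = ?", (row_id,))

    last_id = ids[-1]   # remember where to resume after the commit
    con.commit()        # a hard commit -- NOT commit retaining

con.close()
```

Re-selecting each batch of keys rather than holding one open cursor across
commits is what lets every batch's transaction end cleanly.
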
>Is this perhaps a Windows issue and might be
>solved by moving the database onto a Linux server?

Inhibiting garbage collection is a "multi-generational architecture" issue
that affects performance on all platforms.

However, if you are using asynchronous writes, it IS a Windoze issue and,
pre-1.5, you have no option but to shut down the server service in order to
make Windows flush the filesystem cache.
From 1.5 onward, you will be able to make a config setting that forces
Windoze to flush the f/s cache while the server is running, and also
controls the frequency of the flushes...the default is to let it continue
misbehaving the way it always did.
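
For reference, the config setting being described corresponds to a pair of
firebird.conf parameters; the fragment below is only an illustrative sketch,
and the values shown are examples rather than recommendations.

```
# firebird.conf (Firebird 1.5 onward) -- only has an effect on Windows when
# the database runs with Forced Writes off (asynchronous writes).
# Setting a parameter to -1 disables that trigger, i.e. the old behaviour.
MaxUnflushedWrites = 100       # force a flush after this many unflushed writes
MaxUnflushedWriteTime = 5      # ...or after unflushed writes have waited this many seconds
```
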
If you MUST use asynch writes NOW, then moving to Linux will cure this
particular problem. Mind you, there are lots of other good reasons for
running your dbserver on Linux.
Could you please ensure you edit your replies. Top-posting with a
mile-long trail of alphabetti is not fashionable in our forums. Thanx.
^heLen