Subject | Re: Disable garbage collection per connection |
---|---|
Author | Alexander V. Nevsky |
Post date | 2003-11-07T14:12:20Z |
--- In firebird-support@yahoogroups.com, Thomas <thomas@l...> wrote:
> Hi all,
>
> While browsing the knowledge base on ibphoenix I came across a
> code sample in C on how to connect to Firebird/InterBase with
> garbage collection disabled.
> The code sample was for InterBase 5.
> Does it still make sense for FB 1.5?
>
> I'm asking because I experienced performance problems (occasional
> hangs of 8-24 seconds) with Firebird and PHP under heavy load.
> I suspect Firebird is garbage collecting during these hangs, and
> thus not answering any queries.
>
> Would it help to use connections with garbage collection disabled?

Thomas, AFAIK, yes. At least I can't remember any mention of
disabling this feature, and I follow FB evolution very carefully.
application really produces much garbage (lots of updates and deletes),
you will first see a linear slowdown (the server has to check all
versions of each row on every access) and then catastrophic degradation
once the amount of garbage reaches some magic value (I can't determine
it exactly). And note that if this happens, the only thing you can do
with the database is back it up without GC and restore it into a new
file. If your problem is really related to mass updates/deletes and
long-running transactions, I recommend re-designing the application.

But I'm not sure it is really a GC problem. I encountered similar
behaviour when I migrated my production database from FB1 on Red Hat
7.1 to FB 1.5 and changed the OS to RH 8 with kernel 2.4.20 to support
it. I'm still investigating this, disabling some (not really needed
for me) RH system daemons step by step. If that doesn't help, I'll try
downgrading the kernel to 2.4.18 to get rid of kscand; I suspect the
new OS cache management strategy is the reason.
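The backup-without-GC-and-restore step above is gbak's standard recovery path; a sketch of the commands (database names and credentials are placeholders, `-g` is gbak's "do not run garbage collection during backup" switch):

```shell
# Back up without collecting garbage (-g), then restore into a fresh file.
gbak -b -g -user SYSDBA -password masterkey employee.fdb employee.fbk
gbak -c -user SYSDBA -password masterkey employee.fbk employee_new.fdb
```

Without `-g`, the backup itself would try to collect the accumulated garbage, which is exactly what takes forever on a badly degraded database.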
Best regards,
Alexander