| Subject | Re: internal gds software consistency check (cannot start thread) |
|---|---|
| Author | Adam |
| Post date | 2006-03-02T03:42:36Z |
> No. It's exactly the same for Firebird Superserver. A 32-bit
> process is a 32-bit process. However, the cure for Firebird has
> another option - use Classic.

I must admit I had not considered this option. Having little
experience with Classic (just installed it now on my laptop), what are
the gotchas I need to know about? I know I may need to disable
Guardian. I have read the release notes and nothing stands out. The
absence of local connections is fine; they don't work under terminal
server anyway.

Will I need to adjust the default conf file settings, or will the
default values be sensible?
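From my first read of the conf file, the obvious candidate is the page
cache, since Classic allocates one cache per connection rather than one
shared cache (parameter name as in the Firebird 1.5 firebird.conf; the
value is an illustrative guess, not a recommendation):

```
# Classic spawns one server process per connection, each with its own
# page cache, so the shared-cache default (2048 pages) multiplies per
# user. Illustrative value only - tune to connection count and RAM.
DefaultDbCachePages = 256
```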
> > I know
> > there is a 2GB limit but I don't think I am near it at this stage on
> > these servers (and no reason why they would suddenly become near it
> > within a week of each other either).

More information on the first server: I discovered it only had a tiny
amount of VM (300MB). Given that the server only has 1.25GB RAM, I
have advised them to increase it.

The second server has about twice that much RAM, enough VM and fewer
active users, so I am a little confused about that one.
> > There are 30 (relatively
> > small) databases hosted on that machine, each has the default 4K page
> > size, so by my math this is 8MB RAM, a long way from 2GB.
>
> There is one page cache maintained for each database. 30 * 8Mb =
> 240Mb. (And, note, if you have even one database in the mix that has
> a larger cache configured, then all databases will use that size of
> cache.) Then, each thread uses ~ 2 Mb. (total connections * 30 * 2)
> will give an approximate (probably low) assessment of memory being
> used for connections. Add the lock table, which is dynamic, and
> roughly proportionate to #databases * #users * #operations. Then,
> count in sort memory for each select using orderings/groupings and
> for undo logs for multiple inserts per transaction, TIP copies
> maintained for Read Committed transactions, et al. On top of all
> that comes memory-intensive ops - optimization, building bitmaps and
> streams for joins, etc...

This makes sense and supports the usual amount of memory we see these
processes occupying. But it does not explain why the process must have
been using significantly more than normal.
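To sanity-check that, here is the quoted arithmetic as a quick script.
The constants are the ones from the reply above (4K pages, the
2048-page default cache implied by the quoted 8MB per database, ~2MB
per thread), not official Firebird figures:

```
# Back-of-the-envelope Superserver memory estimate, using only the
# figures quoted above.
PAGE_SIZE = 4 * 1024            # default 4K page size, in bytes
CACHE_PAGES = 2048              # default cache pages per database
PER_THREAD = 2 * 1024 * 1024    # ~2 MB per connection thread (quoted)

def estimate_mb(databases, connections):
    page_cache = databases * CACHE_PAGES * PAGE_SIZE  # one cache per database
    threads = connections * PER_THREAD
    # Lock table, sort space, undo logs and TIP copies come on top.
    return (page_cache + threads) / 2**20

print(estimate_mb(30, 30))      # -> 300.0 MB for 30 databases, 30 connections
```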
> > If this is unavoidable, is it possible to have the service close to
> > be restarted by guardian rather than remain open but refuse
> > connections?
>
> Throw the baby out with the bathwater? If you shut down the server,
> you abort all uncommitted work in 30 databases. You could build a
> middle tier that queues connection requests.

Not really. Most of the processes that were occurring at the time the
process crashed were automated processes rather than user-driven
items. The connections tend to be open for only short periods of time
(seconds).

But after all of these connections had been closed, the server was
stuck in its connection-refusal loop. No user or service could connect
to the database after this error occurred, and after a few seconds none
was connected either. Restarting the service after this point would
not abort anything.
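For what it's worth, the suggested middle tier could start as little
more than a semaphore in front of the driver. A minimal sketch, all of
it illustrative; connect_to_firebird() stands in for the real driver
call (e.g. kinterbasdb.connect):

```
import threading

MAX_CONNECTIONS = 20                 # illustrative cap
_slots = threading.BoundedSemaphore(MAX_CONNECTIONS)

def connect_to_firebird():
    """Placeholder for the real driver call (e.g. kinterbasdb.connect)."""
    raise NotImplementedError

def run_with_connection(work):
    # Callers beyond the cap wait here, rather than being refused by a
    # server that has stopped accepting connections.
    with _slots:
        conn = connect_to_firebird()
        try:
            return work(conn)
        finally:
            conn.close()
```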
> > Also, which settings should I be changing. [..] I should
> > mention that all settings in the conf file are default.
>
> I doubt that you could go lower with page cache. You could look at
> reducing the amount of RAM used by sorts.

Our application does not tend to require the sorting of large datasets
by its nature.
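For reference, these are the sort knobs I understand are meant
(parameter names as in the Firebird 1.5 firebird.conf; the values are
illustrative, not recommendations):

```
# Upper limit on memory used for in-memory sorting before spilling
# to temporary files. Applied per process under Classic.
SortMemUpperLimit = 8388608

# Size of each memory block allocated for sorting.
SortMemBlockSize = 1048576
```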
> Try to reduce (or
> eliminate) application workflows that eat too much memory, e.g. don't
> use ReadCommitted in read-write transactions,

We never do this :)

> don't keep SELECTS
> alive in read-write transactions for long periods, etc.

Possibly a bit guilty of this. Not too bad, but I can think of a few
places where the transaction waits for a user to click something. The
terminal server session will eventually close on them, though, which
will exit the application and abort their changes.
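The fix for those spots would presumably look like this sketch: commit
before waiting on the user, and open a fresh transaction only on save
(table and column names are made up; driver calls follow the Python
DB-API):

```
def show_record_for_editing(conn, record_id):
    # Read and commit *before* the UI wait, so no read-write
    # transaction stays open while the user decides what to click.
    cur = conn.cursor()
    cur.execute("SELECT name, qty FROM stock WHERE id = ?", (record_id,))
    row = cur.fetchone()
    conn.commit()
    return row

def save_edit(conn, record_id, new_qty):
    # Open a fresh, short transaction only when the user actually saves.
    cur = conn.cursor()
    cur.execute("UPDATE stock SET qty = ? WHERE id = ?", (new_qty, record_id))
    conn.commit()
```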
> 30 is a lot of databases for one Superserver if most are doing writes
> most of the time.

They are small databases, mostly between 10 and 150MB.

> Add another server?

It's in the pipeline, but the load has not increased for a few weeks,
and both servers have handled it up to this week without the slightest
hint of a problem.
> (A 2 Gb Linux box with
> nothing else to do?)

This is a goal of ours; however, it currently uses a UDF that I would
need to port to Kylix, which may not be as easy as it sounds.

> Or throw more RAM at the machine and run
> another Superserver process on it, or go to Classic?

Hence my first question about gotchas.
Thanks
Adam