Subject: questions on firebird, mainly obscure
Author: Carrell Alex
Post date: 2007-11-08T11:48:58Z
Hi,
Apologies in advance: several questions, some of which partly answer themselves.
Before going any further, thanks.
Why ask / background?
I have a system that runs all year round, middling in size: normally 10-100
users logged on, and 70-80+ users for half of a 14-hour period. Currently the
system works well for about 3 months and then hits the buffers for lack of
housekeeping.
Last week things went slow: the oldest transaction was 660000-odd, the
oldest active 790000-odd and the current 980000-odd (the database began to
stumble).
Below is today's gstat -h; I swept the database early last Friday.
As for maintenance:
I have tried forcing everyone off with gfix -sh -at 100 (for example) and
then running scripts on Sunday mornings. This fails to terminate all
connections, so the sweep and restore never run. Some applications are
permanently connected, which I have no control over, and several logins use
SYSDBA.
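For concreteness, the Sunday job I am aiming at is roughly this (a minimal
sketch; the paths, password and timeout are illustrative, not my real ones):

    #!/bin/sh
    # Sketch of the Sunday housekeeping job; paths and password illustrative.
    DB=/data/main.fdb
    export ISC_USER=SYSDBA ISC_PASSWORD=masterkey

    # Shut down, giving attachments up to 100 seconds to finish
    gfix -shut -attach 100 "$DB" || exit 1

    # Garbage-collect old record versions
    gfix -sweep "$DB"

    # Backup/restore cycle: rebuilds every index and resets the counters
    gbak -b "$DB" /data/main.fbk
    gbak -c /data/main.fbk /data/main.new.fdb
    mv /data/main.new.fdb "$DB"    # the restored file comes back online

This is exactly where the permanently attached applications bite: if the gfix
shutdown times out, nothing after it runs.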
We have 780MB of data in this database, increasing at 2-3MB per week (large
files and data live in a separate database or on the file system).
I have pushed the lock hash slots to the maximum and moved the caching up and
down to see the effects on 4GB of RAM. The server is bound only by memory and
file I/O; I just make sure it never uses swap and keep unnecessary paging
down. Each user averages 50-60MB of virtual memory on default settings, and
only two dozen users hammer the system at any one time.
I am not worried about the index definitions themselves; the data creation is
primarily in those areas where queries go slow.
The database is on a Linux server with a 2.4 kernel; I plan to move to a 2.6
kernel by the new year. Firebird and sshd are the only services/software on a
bare-bones Linux install.
The server is Firebird 1.5 RC12 Classic (don't ask why RC12, but I am
definitely moving to 1.5.x soon).
So, questions:
What does good transaction handling look like, judging from gstat?
- I am wondering about the rate of transaction churn: the gaps between
the oldest, oldest active and next transactions. What are reasonable gaps, as
a percentage or for heavy/middling/light usage? (A worked example against my
own header follows.)
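By way of a worked example against the header below: next transaction 1005019
minus oldest transaction 981528 is a gap of 23491 after six days, and next
minus oldest active is 8496. The gaps can be watched with a small awk sketch
over gstat (database path illustrative):

    # Print the oldest / oldest-active gaps from the database header
    gstat -h /data/main.fdb | awk '
        /Oldest transaction/ { oit = $3 }
        /Oldest active/      { oat = $3 }
        /Next transaction/   { nxt = $3 }
        END { print "next - oldest:", nxt - oit,
                    " next - oldest active:", nxt - oat }'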
Does using usernames/passwords other than SYSDBA mean that connections
actually terminate when SYSDBA asks them to?
What are the benefits of moving to Firebird 2.0?
- Asked mainly to help build a business case for the change.
Any ideas for forcing users off?
- Currently I use gfix, but I may take further steps.
- I do use an xinetd restart/stop as a further step, but this leaves
existing connections open (transactions that should not be open, in my
opinion); see the sketch after this list.
- What about being nasty and switching runlevels up and down, kicking
users off indiscriminately after running both of the above? It would be like
a reboot, but quicker and easier from a script.
- I am tempted to rewrite all the various software to log off at
midnight on a Sunday.
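One blunter fallback I have considered, given this is Classic, where every
connection is its own fb_inet_server process (init-script path illustrative):

    /etc/init.d/xinetd stop    # stop new connections being spawned
    killall fb_inet_server     # kill the existing per-connection processes;
                               # their uncommitted work is eventually rolled back

It is indiscriminate, but it reaches the same end state as the runlevel trick
without touching anything else on the box.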
Is it a strange thought to change the page size from 4096 to 8192? (It slows
the system down but extends the period between restores.)
- I have a good many users with large queries and 4GB of RAM, and the
server swaps at 8192 even with default and lower cache settings. I use
restores to refresh indexes.
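Back-of-envelope for why it swaps, since Classic gives every connection its
own cache: 15000 page buffers x 4096 bytes is roughly 59MB per process, which
matches the 50-60MB I see per user; at 8192 the same buffer count is roughly
117MB per process, and two dozen busy users would approach 3GB before the OS
gets a look at the 4GB.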
Does anyone drop and recreate indexes ad hoc at quiet periods to refresh
them? (At certain times I know the users are not using the areas that perform
badly / deteriorate the most.)
- Would it be a strange idea to add a second, workable index, so that
while the best index is being rebuilt queries still work, just slowly? (A
rebuild-in-place alternative is sketched below.)
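Rather than a full drop and recreate, my understanding is that an ordinary
index can be rebuilt in place by deactivating and reactivating it (the index
name, path and password here are illustrative; this does not apply to indexes
that back primary/foreign key constraints):

    isql -user SYSDBA -password masterkey /data/main.fdb
    SQL> ALTER INDEX IDX_ORDERS_DATE INACTIVE;
    SQL> COMMIT;
    SQL> ALTER INDEX IDX_ORDERS_DATE ACTIVE; /* reactivation rebuilds it */
    SQL> COMMIT;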
Also, can anyone define good transaction handling?
- I'd like opinions, with reasons, from anyone who differs from my view,
which is: transactions should be committed immediately after fetching data,
and should stay uncommitted only for the duration of a create/update/delete,
which should be as succinct and swift as possible.
Is it worthwhile recompiling Firebird to allow more lock hash slots?
- 2081 is not enough; the database still has hash chains up to 50 long
at times (better than the 1000+ before I started looking, and the speed
increase was noticeable at the user end). How much effort is involved? (I
have written beginner/journeyman C and C++ in my life.)
What about LockSemCount in firebird.conf? What effect does it have?
Thanks, alex
System after 6 days of use (gstat -h):
Database header page information:
Flags 0
Checksum 12345
Generation 1005046
Page size 4096
ODS version 10.1
Oldest transaction 981528
Oldest active 996523
Oldest snapshot 996507
Next transaction 1005019
Bumped transaction 1
Sequence number 0
Next attachment ID 0
Implementation ID 19
Shadow count 0
Page buffers 15000
Next header page 0
Database dialect 1
Creation date Aug 4, 2007 14:32:56
Variable header data:
Sweep interval: 0
*END*