Subject | Re: Write into a .fdb file without the overhead of API's
---|---
Author | Adam
Post date | 2005-07-02T01:49:35Z
Hello Revathy,
I would like to add my concern to what has already been stated. Unless
you have insert triggers doing nasty things and a whole host of
indices, insert performance is likely to be limited by the speed of
the hard disk(s) in the server. Writing your own raw format has a
couple of big risks.
Firstly, the format WILL change over time. It is referred to as the
on-disk structure (or ODS), and as new features are added or existing
features are improved, the ODS will change. Even if you keep strictly
to the ODS of a given version and it behaves normally with that
version, the smallest mistake may turn into a costly data recovery
operation.
Secondly, if the system is I/O bound, then bypassing the API will not
yield a shred of improvement.
You mention there is real-time data; how much data are you talking
about? Before attempting to re-invent the wheel, it will pay to check
whether Firebird is already capable of sustaining adequate
performance. Remember that the database Firebird is based on has 20
years of evolution and refinement behind it, so things in the API that
may seem inefficient may well be the very features that save your butt
if there is a hardware failure.
There are a couple of things you can try.
Firstly, tweak the system a bit for performance. Firebird ships with
ForcedWrites on. This is essential if you care about recoverability
after power failures etc., but if you need to squeeze out a bit more
performance, data safety is not a prime concern, and you have a UPS
installed, you can gain a bit by switching it off.
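To give an idea, forced writes can be toggled per database with the
gfix command-line utility. This is only a sketch, and the database
path here is made up:

    gfix -write async /data/mydb.fdb    (disable forced writes: faster, less safe)
    gfix -write sync  /data/mydb.fdb    (re-enable forced writes)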
Secondly, Firebird supports external tables. You should read about
them, because they sound to me like precisely what you are after. You
can dump the real-time data into a file and simply insert it into your
database at regular intervals, but it really depends on what you need.
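As a rough sketch of how that could look (the table names, column
names and file path here are made up, and firebird.conf has to permit
the location via ExternalFileAccess):

    CREATE TABLE READINGS_EXT EXTERNAL FILE '/data/readings.dat' (
        TS  CHAR(24),
        VAL CHAR(16)
    );

    /* at regular intervals, pump the accumulated rows into the real table */
    INSERT INTO READINGS (TS, VAL)
    SELECT CAST(TS AS TIMESTAMP), CAST(VAL AS NUMERIC(15,4))
    FROM READINGS_EXT;

Bear in mind that external files use fixed-length records, which is
why CHAR columns are the usual choice, and that external tables cannot
be indexed or updated in place.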
Thirdly, learn what really matters for performance. Use short
transactions with hard commits (not commit retaining); this minimises
back versions and allows garbage collection to run. Learn about the
differences between Classic, Superserver and Embedded, and see which
ones you can use and which will give you the best performance with
your hardware / software configuration. Stored procedures minimise
over-the-wire traffic. Sensible indices really help update and delete
operations.
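To illustrate the commit point, a minimal sketch (the table and values
are made up):

    INSERT INTO READINGS (TS, VAL) VALUES ('2005-07-01 12:00:00', 1.23);
    INSERT INTO READINGS (TS, VAL) VALUES ('2005-07-01 12:00:01', 1.24);
    COMMIT;  /* hard commit: the transaction ends, so garbage collection can proceed */
    /* COMMIT RETAIN would commit the work but keep the transaction open,
       which holds up garbage collection if it stays open for a long time */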
Perhaps you could post your intended hardware / OS configuration and
your inserts / updates / deletes per second (average and peak), and
some folks on this list could save you a lot of time.
Adam