| Subject | Re: [IBO] Re: Drop off support... My plans for the future. |
|---|---|
| Author | Jason Wharton |
| Post date | 2010-12-21T21:00:50Z |
Perhaps some of you have worried that EDO will have the same problems and
deficiencies that the BDE had.
First of all, the BDE was a local database engine that tried to virtualize a
remote client/server database. It was terribly inefficient because rather
than work out the precise SQL statement to keep the work taking place on the
server, it would often pull the server's data to its own internal buffers
and then carry out those operations locally, because that is the only way it
knew how to do most of what it did. Thus, people had to learn all the ways
to carefully tip-toe around the BDE's deficiencies to avoid having
substantial workloads pulled locally. Its ability to work efficiently with a remote n-tier data server was very close to NIL. Its problems were due to its core architecture.
The architecture of IBO is written from the ground up to work efficiently
with a remote database server. It does not have any propensity to pull large
quantities of data locally to perform data access operations upon them. It
has extensive parsing capabilities such that it can always put together the
precise SQL statement that keeps all of the work taking place on the server
and brings only the resultant data of interest to the client. Thus, all of
the capabilities you currently enjoy with InterBase and Firebird via the IBO
architecture will simply become available to other servers that have a good
implementation of SQL. There will be no loss of functionality. Most of the
complexity of the drivers will be to handle the underlying buffering and the
differences in their SQL implementation.
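To make that concrete, here is a rough sketch of the idea (not actual IBO or EDO code; the table, column, and routine names are all made up): rather than fetching rows to the client, the client side puts together one precise, parameterized statement and sends only the parameter values, so the work stays on the server.

```pascal
{ A toy sketch, not the actual IBO parser: given the key field and the list
  of fields the user changed, build the one precise UPDATE statement that
  lets the server do all the work.  Only parameter values travel over the
  wire; no rows are pulled to the client. }
program PreciseSqlSketch;

{$APPTYPE CONSOLE}

uses
  SysUtils, Classes;

function BuildUpdateSql(const TableName, KeyField: string;
  ChangedFields: TStrings): string;
var
  i: Integer;
  SetList: string;
begin
  SetList := '';
  for i := 0 to ChangedFields.Count - 1 do
  begin
    if SetList <> '' then
      SetList := SetList + ', ';
    SetList := SetList + ChangedFields[i] + ' = :' + ChangedFields[i];
  end;
  Result := 'UPDATE ' + TableName + ' SET ' + SetList +
    ' WHERE ' + KeyField + ' = :' + KeyField;
end;

var
  Changed: TStringList;
begin
  Changed := TStringList.Create;
  try
    Changed.Add('FIRST_NAME');
    Changed.Add('PHONE');
    { Prints: UPDATE CUSTOMER SET FIRST_NAME = :FIRST_NAME, PHONE = :PHONE
      WHERE CUSTOMER_ID = :CUSTOMER_ID }
    WriteLn(BuildUpdateSql('CUSTOMER', 'CUSTOMER_ID', Changed));
  finally
    Changed.Free;
  end;
end.
```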
The only downside I can see that will impact performance is the extra driver layer that calls have to go through. If I work hard and do the job right, this should impact things minimally. My intention is to pass data "by reference" as much as I possibly can. This way, there will be no overhead from copying data out of your native dataset buffers into the buffers the database's API uses for its calls. It is already this way with IBO and I plan to keep it that way as much as possible. My affinity
will be for InterBase/Firebird and so they will likely never need to have
data copied during the API call. Other database systems may have this
requirement but I don't yet foresee any problems.
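Here is a minimal sketch of what I mean by "by reference" passage (again, hypothetical names, not actual IBO or EDO code): the driver hands the native API the address of the data already sitting in the dataset's buffer instead of copying it into a second buffer first.

```pascal
{ A minimal sketch of "by reference" passage, with invented names: the driver
  forwards the address of the data already in the dataset's buffer, so no
  copy is made on the way into the API call. }
program ByRefSketch;

{$APPTYPE CONSOLE}

uses
  SysUtils;

type
  { Stand-in for one field's slot in a native dataset row buffer. }
  TFieldBuffer = array[0..31] of Char;

{ Stand-in for a native client API entry point: it only needs the address
  and length of the caller's data, so nothing has to be duplicated. }
procedure FakeApiBindParam(Data: Pointer; Len: Integer);
begin
  WriteLn(Format('API sees %d characters at %p', [Len, Data]));
end;

var
  RowBuffer: TFieldBuffer;  { the dataset's own buffer }
begin
  FillChar(RowBuffer, SizeOf(RowBuffer), 0);
  StrPCopy(RowBuffer, 'WHARTON');
  { The same address goes straight through the driver to the API. }
  FakeApiBindParam(@RowBuffer, StrLen(RowBuffer));
end.
```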
Adding one additional function call to get from the driver to the native client API should have virtually no impact on performance. It will be a virtual/dynamic function call, which isn't absolutely ideal, but the tax here has more to do with the amount of memory used for the virtual/dynamic function's reference tables than with the actual cost of
calling the function. I have even contemplated using an additional bank of
static function references, which is what I am currently doing with the API,
to handle the extra driver layer I'll be using.
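For those curious, here is a hypothetical sketch of the two dispatch styles I'm weighing (none of these names come from IBO or EDO): a virtual method call through a driver class, and a bank of plain function references of the kind I already use for the API entry points. Either way it is a single indirect call.

```pascal
{ A hypothetical sketch of the two dispatch styles: an extra virtual method
  call through a driver class, versus a bank of plain function references
  (procedural variables) bound once when the client library is loaded.
  All names are invented for illustration. }
program DispatchSketch;

{$APPTYPE CONSOLE}

uses
  SysUtils;

type
  { Style 1: virtual/dynamic dispatch through a driver class.  The cost is
    mostly the per-class method table in memory, not the call itself. }
  TCustomDriver = class
  public
    function ExecuteStatement(const Sql: string): Integer; virtual; abstract;
  end;

  TFirebirdDriver = class(TCustomDriver)
  public
    function ExecuteStatement(const Sql: string): Integer; override;
  end;

  { Style 2: one slot in a bank of static function references. }
  TExecuteFunc = function(const Sql: string): Integer;

function TFirebirdDriver.ExecuteStatement(const Sql: string): Integer;
begin
  WriteLn('virtual dispatch: ', Sql);
  Result := 0;
end;

function DirectExecute(const Sql: string): Integer;
begin
  WriteLn('function reference: ', Sql);
  Result := 0;
end;

var
  Driver: TCustomDriver;
  Execute: TExecuteFunc;
begin
  Driver := TFirebirdDriver.Create;
  try
    Driver.ExecuteStatement('SELECT 1 FROM RDB$DATABASE');  { one indirect call }
  finally
    Driver.Free;
  end;

  Execute := DirectExecute;                                 { bound once }
  Execute('SELECT 1 FROM RDB$DATABASE');                    { plain indirect call }
end.
```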
Anyway, hopefully this will dispel any concerns you may have about EDO not
having the IBO performance edge.
Thanks,
Jason Wharton