Subject: Re: [Firebird-Architect] RFC: Cross database queries
Author: Jim Starkey
Post date: 2007-08-06T10:46:19Z
Vlad Horsun wrote:
>> How do you propose to handle metadata and system tables? Existing tools
>> are going to expect that RDB$RELATIONS, RDB$FIELDS, and
>> RDB$RELATION_FIELDS are appropriately populated?

> At my understanding, existing tools expect correctly filled SQLDA after
> isc_dsql_prepare call. I see no reason for tool makers to expect stored
> metadata describing external objects.

The preparation of a query, yes. But intelligent tools need metadata like
table and field names. Either they go through ODBC or JDBC and get
pseudo system result sets (you should take a look at the ODBC driver to
see how this is done) or go directly to the OSRI system tables. In
either case, you will need to merge remote metadata into the local metadata.
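For instance, a tool that goes directly to the system tables typically runs something like the following against RDB$RELATION_FIELDS and RDB$FIELDS (the table name EMPLOYEE is only a placeholder); any remote table whose metadata has not been merged into these tables is simply invisible to such a query:

```sql
-- Illustrative only: how a tool reads column metadata straight from the
-- Firebird system tables. 'EMPLOYEE' is a placeholder relation name.
SELECT rf.RDB$FIELD_NAME, rf.RDB$FIELD_POSITION, f.RDB$FIELD_TYPE
FROM RDB$RELATION_FIELDS rf
JOIN RDB$FIELDS f ON f.RDB$FIELD_NAME = rf.RDB$FIELD_SOURCE
WHERE rf.RDB$RELATION_NAME = 'EMPLOYEE'
ORDER BY rf.RDB$FIELD_POSITION;
```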
There are at least two ways to handle this. One is to populate the local
system tables at the time of the remote CREATE TABLE command. This has a
potential problem of version skew. Another way is to fetch remote
metadata on reference to local system tables. This will have a
noticeable performance hit. A third scheme is to fetch on first
reference and cache locally. This is less of a performance hit, but
still has a version skew problem. Whatever the scheme, however, you
have to solve the problem somehow or break most tools and the ODBC and
JDBC drivers.
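The third scheme can be sketched as a small fetch-on-first-reference cache. Everything below (class and method names, the TTL policy) is a hypothetical illustration, not Firebird code; the TTL bounds the version-skew window but does not eliminate it:

```python
import time

class RemoteMetadataCache:
    """Sketch of the third scheme: fetch remote metadata on first
    reference, cache it locally, and cap version skew with a TTL.
    All names here are hypothetical, not part of any Firebird API."""

    def __init__(self, fetch_fn, ttl_seconds=300.0, clock=time.monotonic):
        self._fetch = fetch_fn      # callable: table name -> column dict
        self._ttl = ttl_seconds     # how long a cached entry may be trusted
        self._clock = clock
        self._cache = {}            # table name -> (fetched_at, columns)
        self.remote_fetches = 0     # round trips to the remote source

    def columns(self, table):
        now = self._clock()
        entry = self._cache.get(table)
        if entry is not None and now - entry[0] < self._ttl:
            return entry[1]         # cache hit: no network round trip
        cols = self._fetch(table)   # miss or stale entry: go remote
        self.remote_fetches += 1
        self._cache[table] = (now, cols)
        return cols
```

Setting `ttl_seconds=0.0` degenerates into the second scheme (every reference goes remote); the first scheme corresponds to writing the metadata through to the local system tables at CREATE TABLE time instead of caching at all.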
> In-memory metadata for remote relations will be created at query prepare time
> when engine will able to extract query string and prepare it at remote data source

Too little too late. You have to solve the general metadata problem to
be standard conforming.
>> This may be more
>> difficult than you expect since there can be name collisions between the
>> remote and local RDB$RELATION_FIELDS.

> I hope we will avoid it ;)

How?
>> The last dozen or two times this has come up, the architecture has been
>> to make a provider (the "mega-database manager") that handles cross
>> database stuff rather than trying to integrate the mega-database manager
>> into the existing Firebird database engine. Starting over with a clean
>> implementation designed to support the specific problem at hand would
>> probably be easier and better than trying to get a 23 year old database
>> engine to handle something it was never intended to do.

> I think not about "mega-database manager" but about more or less smart
> client (or gateway) built in to the core engine and parser\optimizator\executor
> extensions allowed to deal with this gateways.

There isn't any difference except that you are co-mingling the code into
the engine rather than making it a component in its own right.
If you think about it a bit more, I think you will realize that a) you
can borrow the Vulcan encapsulated parser, b) the optimizer for network
wide queries is radically different than the local table-only optimizer,
and c) the common part of the execution engine is too trivial to worry
about.
> Perhaps i not understand yours meaning of "mega-database manager".
> I've read about Ingress's one and found it ugly, hard to use and failed

Now that's a brilliant argument! Could you elaborate? What did they
do? Why was it hard to use? What didn't work? Was it slow? Got the
wrong answer?
Have you investigated the MySQL federated storage engine? It does
approximately what you are suggesting, though in a radically different
architecture (MySQL, for example, doesn't have system tables).
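For reference, a FEDERATED table declaration looks roughly like this (host, credentials, and table names below are placeholders): the local table stores no rows, and every access is forwarded to the remote MySQL server named in the CONNECTION string.

```sql
-- MySQL FEDERATED engine sketch: the local definition mirrors the remote
-- table; all data lives on the remote server. Placeholders throughout.
CREATE TABLE remote_orders (
    id    INT NOT NULL,
    total DECIMAL(10,2),
    PRIMARY KEY (id)
)
ENGINE=FEDERATED
CONNECTION='mysql://user:pass@remote-host:3306/sales/orders';
```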