Subject | RE: [IB-Architect] Trigger Templates
---|---
Author | Claudio Valderrama C.
Post date | 2000-07-08T05:11:54Z
> -----Original Message-----
> From: Joseph Alba [mailto:jalba@...]
> Sent: Thursday, July 6, 2000 21:04
>
> With regards to your question of what happens if the templates change, I
> think the "instantiated" triggers of all tables that use this template
> should reflect the change (without tree based parsing please). I think it
> can easily be implemented using late-binding (lookups instead of parsing).
>
> -----------------------
> I would really hope that this community can start an object-oriented
> analysis of the things involved in Interbase, so that we can see
> opportunities for object-orientation, and move IB toward the
> object-oriented
> world - which would be a very smart move Coding-wise and Marketing-wise.

Joseph: can you narrow your expectations and proposed solutions to a degree
that doesn't demand 15 years of future effort? I have nothing special
against your ideas, but again, let's go back to the horse-before-cart level:
what are your immediate and most tiresome activities, the ones that drive
you nuts and that could be addressed in maybe a few months of development?
For example, suppose you have a task spread over 40 triggers that all do the
same thing, and you were able to write code like this in every trigger:
execute procedure generic(current_relation, old, new)
Maybe you would have part of the work done. In this example, current_relation
would be a system-defined variable that takes the value of the relation or
view that fired the trigger, and you would be able to pass the "old" and
"new" vectors as parameters. Okay, you still have a mess of 40 triggers, but
a really small mess that defers all the other mess to a generic procedure.
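To make the idea concrete, here is a sketch of what two such triggers might
look like. The current_relation variable and the generic procedure's
vector-style parameters are hypothetical; neither exists in IB today:

```sql
/* Hypothetical syntax: current_relation and passing OLD/NEW as vectors
   are inventions for the sake of the example. */
create trigger audit_customers for customers after update as
begin
  execute procedure generic(current_relation, old, new);
end

create trigger audit_orders for orders after update as
begin
  execute procedure generic(current_relation, old, new);
end
```

Each of the 40 triggers shrinks to a single call, and any change in the
shared logic happens in one procedure.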
As Fabricio Araujo wrote in IB-Conversions, MsSql procedures are already
somewhat unreadable. IB procedures, in contrast, have a clean and easy
syntax. Newbies can learn them in a short time. Please don't destroy their
beauty with a lot of radical changes to the syntax. One key point is to
remain backwards compatible. I'm not against progress, and I'm not calling
for freezing the engine as it is. If I were in that mood, I wouldn't be
subscribed to this list.
Each time a product receives a ton of new features in a row, disaster
happens. Borland had to learn that lesson the hard way when they compromised
the stability and credibility of Delphi (gained with D3) with a really
horrible D4 release. Thanks to that, I lost two good client companies; they
migrated to VB because D4 was virtually unusable when it was released, bug
upon bug. We don't need to repeat that story. Being cautious is not
necessarily the same as being conservative and obsolete.
In another post, you wrote:
> Lastly, other database servers are going object-oriented to eliminate the
> impedance between object-oriented analysis (like UML) and database
> implementations like IB. (Oracle already has object-oriented features).

The last time I checked Oracle's OO features (v7.X), they were more
marketing than reality. I can't comment on v8.X, however.
It seems like you haven't worked with OO databases. In early 1996, I
abandoned IB to pursue the mythical aim of developing and making my living
selling turnkey systems with an OODBMS. Since ObjectStore was for large
corporations (mainly because of its high price), I chose the shrunken
commercial version known as PSE Pro for C++. It was limited to a single user
or multiple readers. No problem, I was targeting very special needs. I
worked for two years on that model, not counting the three previous years I
had spent partly reading theory on the same subject. Since it was C++, I was
more than happy. C++ templates were supported, bravo! The result was awesome in
speed and power. Since the same model applied to both the stored objects and
the in-memory objects, I was working with the same data structures on disk
and in RAM. However, let me warn you that the data impedance in RDBMS
engines is what saves you from having to learn more than you would want in
order to write your applications. Tough problems stem from the fact that in
an OO database you are dealing with a model that must remain consistent in
RAM and on disk: you need to be in charge of many implementation details,
know the layout of your objects, make sure certain pages are in memory
before you attempt some operations, understand a good deal of C++, and be
prepared to deal with VTBLs directly in some cases, mainly when you write
dynamic libraries. I won't bore you with more details, but since your model
is the same on disk and in RAM, a change in your program's definitions
requires a rebuild of the stored information, sometimes by writing your own
upgrade routine with the aid of some basic low-level API calls. Since your
objects can be anything you want, a generic upgrade tool won't work in most
cases. No doubt I enjoyed those tasks, and I went further, providing two
small patches for the modified STL library that PSE's maker supplied, plus a
small PSE-aware string library.
After writing C++ code for my OO database in 1996 and 1997, I returned to
IB in May 1998. The reasons were that the company providing the OO engine
changed to a royalty model, and that for multiuser access I was left either
writing the concurrency control myself or buying the full OO database, which
is very expensive. Ultimately, ObjectDesign changed its name to ExcelonCorp,
maybe because the company couldn't make enough money from its OO engine for
C++ and Java alone, so they are now also targeting XML with a tool simply
called Excelon.
The same data impedance in IB and other relational engines is what makes it
possible to recompile your structures in the program without changing the
database, and vice versa: a change in a domain's definition in your schema
doesn't necessarily force a rewrite of your application. In the most
"cohesive" OODBMS engines, like the one I worked with, adding a field to an
object requires upgrading the stored data even if you don't use that field,
because the engine brings the object layout from disk into memory directly,
fixing only pointers and VTBLs, so the layout must be the same. If you want
to be able to define arbitrarily complex persistent objects in IB, you would
need a tool to extract XDR definitions of such objects and pass that
information to gbak so it knows how to back up and restore those objects.
Otherwise, you would have to implement the equivalent of Delphi's streaming
system, with all of typinfo's overhead. If your objects are only volatile,
as local variables, then you can probably avoid most of the changes to the
engine and the tools.
To conclude: if a group of evolutionary changes in the sproc language is not
enough, then for those tasks a separate, alternative language should be
defined and connected to the engine, so that it gets context information and
routines written in it can be invoked from procedures or directly as
triggers' code. That language would probably be Java (I would prefer C++,
but...) unless Jim wants to write BLR in XML. In that case, Bill could push
Perl, someone else would prefer Python, another person would call for an
extended version of PHP, and so on.
Disclaimer: I'm not saying Joseph's ideas are useless or totally out of
context. But I want to draw a line between features in the engine and
features in external tools.
C.