Subject | Re: [IBO] Yet Another BDE emulation issue. Transactional Isolation issue I think.
---|---
Author | Randal W. Carpenter
Post date | 2007-07-11T04:04:48Z
On Sat, 30 Jun 2007, Helen Borrie wrote:
> At 12:35 AM 30/06/2007, you wrote:
>> Are these getting to the list?
>
> Yes, but I have to confess it's rather like trying to swat a fly in a
> room with a high ceiling. There are problems within problems here
> (as often occurs when an old BDE app has been converted: the BDE is
> really NOT a good environment for learning how datasets from
> client/server databases work....)
>
>
>> At any rate, I changed the start trans, try to commit or rollbacks to
>> start trans, try commitretaining or rollbacks, then did a if in trans
>> rollback when I closed out the list. I did this several places and it
>> seems to better emulate the bde.
>
> Can't make any sense out of this...but, basically, you should regard
> "emulating the BDE" as something you do initially to get your app
> functioning, not an objective in itself.
>
Yup, that's the plan...but it's a major application. All my others are
written in IBX as of late.
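To put the change I was describing earlier into code, it boils down to
roughly this (just a sketch; the form, component and table names are
invented, and I'm assuming the TDataset-compatible IBO components):

    // Old BDE-style flow: StartTransaction ... Commit or Rollback per edit.
    // New flow: CommitRetaining per edit, plus a catch-all rollback of anything
    // still pending when the list/form is closed out.
    void __fastcall TDispatchForm::SaveCurrentRow()
    {
        if (!dbMain->InTransaction)
            dbMain->StartTransaction();
        try
        {
            qryTrips->Post();              // post the edit
            dbMain->CommitRetaining();     // commit but keep the transaction context
        }
        catch (...)
        {
            dbMain->Rollback();
            throw;
        }
    }

    void __fastcall TDispatchForm::FormClose(TObject *Sender, TCloseAction &Action)
    {
        // "if in trans, rollback" when closing out the list
        if (dbMain->InTransaction)
            dbMain->Rollback();
    }

That pattern is what seemed to better emulate what the BDE was doing for me.
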
>> I think what ibo does is put everything
>> under the same transaction once you start a new one on a cached
>> update...the bde left the data before you starting the transaction in the
>> old transaction state somehow...that is ibo is refreshing the data too
>> well :).
>
> This indicates confusion. What Cached Updates does is to separate
> the dataset from its "live" connection to the underlying cursor set -
> regardless of whether you are using the BDE, IBO, IBX or
> whatever. When you call ApplyUpdates, you're actually asking the
> interface to post any changes. The Commit happens when the
> *transaction* is committed. And then, confusingly, CommitUpdates
> clears the cache. Where IBO behaves differently to the others is
> that it retains the cache after the commit. You'll get a fresh cache
> when you refresh.
Yes, I agree that IBO is retaining the cache after the commit. I have
learned to avoid the situation, sorta. Also, I did find one genuine bug in
the way IBO works in that area.
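For anyone following along, the sequence Helen describes looks roughly like
this with cached updates switched on (sketch only; the component and field
names are invented):

    // Cached updates: edits pile up on the client until you apply them.
    qryOrders->CachedUpdates = true;

    qryOrders->Edit();
    qryOrders->FieldByName("STATUS")->AsString = "DISPATCHED";
    qryOrders->Post();                 // lands in the client-side cache, not the server

    dbMain->StartTransaction();
    qryOrders->ApplyUpdates();         // ask the interface to post the cached changes
    dbMain->Commit();                  // the *transaction* commit makes them permanent
    qryOrders->CommitUpdates();        // clears the cache
    // IBO differs from the BDE/IBX here in that it hangs on to the cache after
    // the commit; a fresh cache comes with the next Refresh.
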
>
> You don't really have much use for cached updates with IBO. It's
> there for compatibility and, in the longer term, in case you have a
> special need for it. From what you (seem to) have described, it's
> just extra overhead and confusion...but you might like to explain why
> you're needing to use it.
>
Sure, I have a rather large codebase I have written in C++ Builder (1-6)
with the BDE as the database engine, started on commercial IB 5 (then 5.6,
then ... to Firebird 1.0 so far). Some is in modules, some is thankfully
in sub-applications written in IBX. But most is in one big application
that was written for the BDE. Now I am re-writing it as I can, but I am a
lone programmer supporting a rather large single-source application
supporting driver dispatch, payroll, customer billing...you name it, even
integrated document imaging. I started in 1998 for Year 2K compatibility
reasons...it was all in a DOS database engine called Teamup way back then.
I need at least a fair emulation of the BDE for now...and while IBO
provides that, I have, as you said, had to make it jump through hoops to
actually do it.
>> Both the bde and ibo were set to read commit so I am at a loss
>> as to why they act different...maybe because bde has more middleware in
>> there or something?
>
> You should understand that Read Committed is a transaction
> isolation. What it does is to make the latest committed state of the
> database available to the transaction. It is setting a parameter
> that the *server* responds to with the appropriate behaviour....it
> doesn't somehow tell your datasets to refresh themselves if database
> state changes after the transaction starts! You have to ask for that
> - by calling the Refresh method of the dataset. You can implement a
> technique known as DML Caching to make this happen automatically, but
> I'd recommend getting your head around what the various transaction
> isolation settings mean, before attempting that.
>
I understand, but they are acting differently despite this. I use IBX just
fine, and the BDE; it's IBO I am having trouble getting my head around :).
At any rate, I have the application working in IBO now, and it's running
fairly well live, despite IBO not really doing such a good emulation of
the BDE...or perhaps I should say, of the way I used the BDE.
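Just to spell out the behaviour Helen describes (sketch; the component and
field names are invented): Read Committed only controls what the server will
let the transaction see, and an open dataset still has to be asked to reread:

    qryDrivers->Open();                // rows fetched as of now

    // ...meanwhile another user commits a change to one of those rows...

    ShowMessage(qryDrivers->FieldByName("PHONE")->AsString);  // still the old value
    qryDrivers->Refresh();                                    // reread from the server
    ShowMessage(qryDrivers->FieldByName("PHONE")->AsString);  // now the committed value
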
>
>> At any rate this also sped up my application somewhat too on the remote
>> sites, now updates that took 8-10 seconds are back down to about a 1/2
>> second. Once I go through and make sure ibo is using the best plan (which
>> it does not do so well where the bde guess good...
>
> Neither the BDE nor IBO (nor Java, nor PHP, nor Ruby, nor whatever)
> has anything to do with the plan. It is created by the server's
> optimizer, on the basis of the SQL statement you give it. And what
> the optimizer generates by way of a plan is governed a lot (although
> not by any means completely) by the indexes available.
>
I always thought that too, but my testing has proved otherwise. I did
profiles of both the BDE and IBO using debug versions of the database, and
they are using different plans...my question is why? I know the BDE tends
to re-write some SQL before it sends it to the server...so perhaps that's
helping with the plan somehow...but at any rate I am having to provide
plans to my IBO queries to get the now-acceptable performance.
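By "providing plans" I mean putting an explicit PLAN clause into the
statement text, roughly like this (sketch only; the table, column and index
names are made up for illustration):

    // Explicit plan spoon-fed to the server along with the statement.
    qryTrips->SQL->Text =
        "SELECT T.TRIP_ID, T.TRIP_DATE, D.DRIVER_NAME "
        "  FROM TRIPS T "
        "  JOIN DRIVERS D ON D.DRIVER_ID = T.DRIVER_ID "
        " WHERE T.TRIP_DATE >= :FROM_DATE "
        "  PLAN JOIN (T INDEX (IDX_TRIPS_DATE), D INDEX (PK_DRIVERS))";
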
>> probably because I am
>> using firebird 1.0 on dialect 1, and its optimized for the 2.0 engine on
>> dialect 3 maybe), this thing should really fly.
>
> More wrong assumptions....however, I'm pretty certain that your SQL
> will need some repair work to give correct output and handling with
> IBO 4.7. The fact that your improper syntax seems to work (or has
> done so in the past) has everything to do with the fact that
> InterBase and that very old Firebird version you are using were more
> tolerant of improper syntax.
>
Sorry to lead you to that conclusion; actually, 100% of my SQL is friendly
to dialect 3...the database is just dialect 1. I have followed dialect 3
writing conventions since the IB 6 beta and Firebird pre-1.0 days (back
when I had to bootstrap Firebird myself).
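(For anyone reading along: the usual dialect-3-friendly habits that a
dialect 1 database still accepts look like this. Sketch only; the names are
made up.)

    //  - single quotes for string literals (double quotes mean identifiers in dialect 3)
    //  - dates passed as parameters rather than relying on dialect 1 literal quirks
    qryCustomers->SQL->Text =
        "SELECT C.CUST_ID, C.CUST_NAME "
        "  FROM CUSTOMERS C "
        " WHERE C.STATE = 'OH' "
        "   AND C.LAST_BILLED >= :SINCE";
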
> IBO 4.7 has had extensive refactoring to take into account the
> increasing restrictions on syntax anomalies that have occurred with
> each successive release of Firebird, particularly affecting relation
> aliases. So, there's probably some potential for your existing app
> to go pear-shaped with IBO 4.7 unless you set about finding and
> fixing SQL. Myself, I'd want to do that, so that I could be certain
> that I'm getting the results I expect. The "GIGO" principle applies
> to SQL just as much to any logical process.
>
Agreed, but as I said, the BDE was holding my hand on that at times.
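I take it the alias issue is the classic one: once a table has an alias,
stricter parsers want every column qualified by that alias rather than by
the table name. Something like this (made-up names):

    // Sloppy form that very old InterBase/Firebird let slide:
    //   SELECT TRIPS.TRIP_ID FROM TRIPS T WHERE TRIPS.DRIVER_ID = :ID
    // Stricter form expected once an alias is declared:
    qryTrips->SQL->Text =
        "SELECT T.TRIP_ID, T.TRIP_DATE "
        "  FROM TRIPS T "
        " WHERE T.DRIVER_ID = :DRIVER_ID";
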
>> See my other post "My IBO
>> port is too slow...need performance advice" as this seems to help with
>> that too. I would still appreciate some general performance advice
>> though.
>
> From one of your postings, I noticed you thought that your view
> should be using an index. It won't, because the supposed "key" is
> not associated with an index, due to the fact that SELECT DISTINCT is
> an aggregating operation. You would get far better performance from
> that query by doing the join directly, table-to-table, across those keys.
>
Yes, thanks, I already re-wrote it that way...and I get BDE-like speed
now...I thought I said that somewhere...maybe not.
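The rewrite was along these lines (sketch; the table and component names
are made up): skip the SELECT DISTINCT view and join the base tables
directly across their keys.

    // Before: selecting through a view built with SELECT DISTINCT --
    // the view's "key" column has no index behind it, so no indexed lookup:
    //   SELECT * FROM V_CUSTOMER_INVOICES WHERE CUST_ID = :ID
    //
    // After: join the base tables directly across their keys:
    qryInvoices->SQL->Text =
        "SELECT C.CUST_ID, C.CUST_NAME, I.INV_NO, I.INV_TOTAL "
        "  FROM CUSTOMERS C "
        "  JOIN INVOICES I ON I.CUST_ID = C.CUST_ID "
        " WHERE C.CUST_ID = :ID";
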
> DO supply output names for your derived fields - always. Different
> versions of IB and Firebird handle unidentified derived fields in
> different ways and assigning identifiers to them specifically is not
> just good practice, it ensures that your client code is stable and
> unequivocal when managing these fields.
>
Umm, thanks...I know that now, but you know, in my defense, laziness is the
root of innovation :P
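Duly noted. For the archive, the difference is simply this (sketch; the
names are made up):

    // Anonymous derived fields: different IB/FB versions label these differently,
    // so client code that fetches them by name is on shaky ground:
    //   SELECT CUST_ID, COUNT(*), SUM(INV_TOTAL) FROM INVOICES GROUP BY CUST_ID
    //
    // Named derived fields: stable identifiers the client can rely on:
    qryTotals->SQL->Text =
        "SELECT CUST_ID, "
        "       COUNT(*)       AS INV_COUNT, "
        "       SUM(INV_TOTAL) AS INV_SUM "
        "  FROM INVOICES "
        " GROUP BY CUST_ID";
    // ...and then FieldByName("INV_COUNT") etc. is unambiguous on the client side.
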
> Another misapprehension I noted in your postings was your belief that
> the client side can somehow do something with indexes to make queries
> run faster...it can't...that's for Paradox and some other datastorage
> systems that store indexes in files that the client application can
> read (via the driver layer for that system). You *can* have
> client-side caching of metadata with IBO, which has a high cost in
> terms of the initial connection but which IBO can refer to for some
> operations. It can be handy for very slow connections though,
> especially if the metadata doesn't change often.
>
I use that too now...that helped a little. The connection is pretty
quick though, actually, once I passed sensible plans to the server in the
query.
> IBO *does* manage a number of data arrays for datasets, though. A
> highly important one, for both performance and data integrity, is the
> KeyFields array, which is built from the KeyLinks of the set. IBO
> can do this automatically (if KeyLinksAutoDefine is set) with a query
> over a single table that has a primary key. With joined sets, you
> need to turn off KeyLinksAutoDefine and define the KeyLinks yourself,
> because the client side has no way to determine the uniqueness of
> such sets. When IBO has nothing to go on, it will use the DB_Key
> (rdb$db_key, which is present on all SELECT output, and always unique
> within a transaction context).
>
I already did that a while back...that was a must before I could even
get anything favorable out of it...it was like problem #1, I think.
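For the archive, the joined-set case ends up looking roughly like this
(sketch only; the component, table and column names are invented):

    // For a joined set the client can't work out uniqueness on its own,
    // so turn off the automatic behaviour and spell the KeyLinks out.
    qryTripDetail->KeyLinksAutoDefine = false;
    qryTripDetail->KeyLinks->Clear();
    qryTripDetail->KeyLinks->Add("T.TRIP_ID");   // PK of the driving table, via its alias
    qryTripDetail->SQL->Text =
        "SELECT T.TRIP_ID, T.TRIP_DATE, D.DRIVER_NAME "
        "  FROM TRIPS T "
        "  JOIN DRIVERS D ON D.DRIVER_ID = T.DRIVER_ID";
    // With no KeyLinks at all, IBO falls back to RDB$DB_KEY,
    // which is unique only within the transaction context.
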
> When you have a sound query and your KeyLinks are correct, IBO does
> miracles with fetching data from sets that are being treated as
> bi-directional, through a magical technique with smoke, mirrors,
> KeyLinks and up to three live cursors on the row buffer at the
> server. It's nothing like the stumpy, lumpy stuff that the BDE does
> with Paradox indexes, either, so throw away all those old Paradox assumptions.
>
I never used Paradox...I don't know anything about it, though I have seen it
around. I have used dBase though...anything similar?
> I could go on and on if I wasn't up-to-me-ears in stuff but, really,
> you'll do yourself a lot of favours just by taking yourself through
> the various tech info sheets at the IBO website. You are far from
> being the first person who ever needed to get past old BDE warts in
> order to make optimal use of IBO...if you were, those tech info
> sheets would not exist.
>
Already did...but they really didn't tell me what I needed to know...they
did get me past KeyLinks and the like, though.
> If you bump into some insoluble problems with the new-generation
> parsing in IBO 4.7, contact me privately and I'll try to set you up
> with an older, more tolerant version of IBO.
>
> Helen
>
That will defeat my purpose of phasing out the BDE over time, will it not?
It's already working favorably now...I have already worked around IBO's
quirks in cached update support...I just wanted to know if my emails were
getting out. I already have some old versions of IBO...3.x and 4.x
ones...this is just the first time I bit the bullet and actually got rid of
the BDE. I am just running everything through my separate profiler now
and spoon-feeding it to the server in IBO...seems to work OK that way...I
just wasn't thrilled about going through hundreds of queries...but hey,
whatever works, right?
thanks
Randal