Subject: RE: [IB-Java] all java driver has autocommit and standalone DataSource.
Author: Paulo Gaspar
Hi David,

> -----Original Message-----
> From: David Jencks [mailto:davidjencks@...]
> Sent: Friday, January 04, 2002 12:34 AM
> On 2002.01.03 18:12:57 -0500 Paulo Gaspar wrote:
> > Answer inline:
> >
> > > -----Original Message-----
> > > From: David Jencks [mailto:davidjencks@...]
> > > Sent: Thursday, January 03, 2002 5:51 AM
> > >
> > > ...
> > >
> > Since autocommit is typically active by default, and we are talking
> > about reading data, I have to disagree.
> autocommit = true by default is required by the jdbc spec. The jca spec
> relies on explicitly started transactions (either local or xa), so when
> using them it is unnecessary to unset autocommit.

It is safe to assume that you know the spec better than me.

I have only read parts of it, and most of what I say is based on my
experience with other drivers (mostly with Oracle).
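For what it is worth, the explicit transaction control David mentions looks roughly like the sketch below. The connection handling, table and column names are invented for illustration only:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ExplicitTx {
    static void transfer(Connection con, int from, int to, int amount)
            throws SQLException {
        // The JDBC spec says a fresh connection has autocommit = true,
        // so explicit transaction control starts by switching it off.
        con.setAutoCommit(false);
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            ps.setInt(1, -amount);
            ps.setInt(2, from);
            ps.executeUpdate();
            ps.setInt(1, amount);
            ps.setInt(2, to);
            ps.executeUpdate();
            con.commit();   // both updates become visible together
        } catch (SQLException e) {
            con.rollback(); // undo the partial work on failure
            throw e;
        }
    }
}
```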

> > I will never expect that the simplest possible code I can use to read a
> > dataset will provide me with the "inconvenience" of loading a
> > million row ResultSet into the memory of my app.
> >
> > And I do not expect such crazy behavior whatever the autocommit
> > setting is, whether it was set by default or manually.

I still think this is an issue.

From your final remarks I assume you also agree that there is something
here to be solved.

> > > ...
> >
> > It is not simpler. You also have the option of refusing to have
> > more than one ResultSet open for the same Connection. I am 90% sure
> > JDBC allowed that to ease concurrency control problems.
> I don't think the jca spec allows this restriction, and I don't see the
> need.

I know nothing about the JCA spec. I was just trying to provide a simple
alternative that I find more predictable.

> > If I have an exception blowing in my face immediately when I try to open
> > the 2nd ResultSet (with a nice error message) I immediately know I
> > can not do it.
> >
> > But if the driver remains silent and seems to comply when I test my app
> > with a couple of hundred or even a few thousand records and then,
> > one day, when the data grows, the app suddenly has "not enough
> > memory" problems,
> > I could spend a load of time finding out that it is the f*%$ing
> > DB driver that messed the whole thing up!

I still think that predictability is very important.
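Until a driver fails fast, the only predictable style I can see is to never have two ResultSets open on the same Connection at once. A sketch of that discipline, with a made-up table name:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class OneCursorAtATime {
    static void readTwice(Connection con) throws SQLException {
        // Consume and close the first ResultSet completely before opening
        // the second, so a driver that only supports one open ResultSet
        // per connection never has to cache rows silently.
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id FROM orders")) {
            while (rs.next()) {
                // ... process rs.getInt(1) ...
            }
        } // first cursor closed here
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM orders")) {
            rs.next();
            // ... process rs.getInt(1) ...
        }
    }
}
```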

> > ...
> >
> > I am sorry about the rough answer, but there is nothing worse (in IT)
> > for me than unpredictable software, whether it is just buggy or has
> > something fancy that can blow up in my face.
> Well, maybe you won't mind a rough response ;-).

No problem, of course!

> Why are you contemplating
> putting an app into production where you don't have explicit transaction
> control ...

I always care about transaction control when UPDATING the database, but not
when READING. I have never had to.

Also keep in mind that a lot of people just rely on autocommit=true when
they are performing one update at a time, and the spec allows for that.
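To make that concrete: a single-statement update under the default autocommit = true needs no commit() at all. A sketch, with invented table and column names:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class AutoCommitUpdate {
    static int touch(Connection con, int id) throws SQLException {
        // autocommit is true by default, so this single statement is its
        // own transaction and is committed by the driver on completion.
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE customers SET visits = visits + 1 WHERE id = ?")) {
            ps.setInt(1, id);
            return ps.executeUpdate();
        }
    }
}
```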

> ...and cannot guarantee that all result sets are small enough to be
> worked with in a reasonable time and memory frame?

The reason I cannot guarantee that my result sets are small enough is
because I do a lot of data exports, some of them HUGE.

I cannot have those monsters in memory. I just open the ResultSet in
CONCUR_READ_ONLY / TYPE_FORWARD_ONLY mode - the defaults for a Statement
created by Connection.createStatement() - and fetch one row at a time.

I do not expect a CONCUR_READ_ONLY / TYPE_FORWARD_ONLY ResultSet to cache
all its data in memory!!!
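The export loop I have in mind is nothing more than the sketch below (huge_table is a placeholder). The whole point is that only one row needs to be in memory at a time:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamingExport {
    static void export(Connection con) throws SQLException {
        // These are already the defaults for Connection.createStatement(),
        // but spelling them out documents that rows should be streamed.
        try (Statement st = con.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            st.setFetchSize(500); // a hint to fetch rows in batches
            try (ResultSet rs = st.executeQuery("SELECT * FROM huge_table")) {
                while (rs.next()) {
                    // write one row to the export file, then let it go
                }
            }
        }
    }
}
```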

I am sure I am not the only person having to do this kind of thing.

> > ...
> Well, if I get some free time (unlikely I fear) I will look into a switch
> between throwing an exception or the current behavior.

I will try to get more familiar with the code and see if I can help, but I
am afraid I have a similar (lack of) free time problem.

Have fun,
Paulo Gaspar