| Subject   | Re: [Firebird-Java] Various transactions for one connection |
|-----------|--------------------------------------------------------------|
| Author    | Jim Starkey |
| Post date | 2004-10-19T16:37:55Z |
Roman Rokytskyy wrote:
> My experience (that can be confirmed with AS3AP tests) shows that connection
> pooling brings much less than prepared statement pooling. Prepared statement
> pooling itself does not bring a lot when connection pooling is not enabled
> (i.e. caching between getConnection/close is useful only if people execute
> multiple prepared statements before returning connection to the pool).

I agree. But I don't agree that prepared statement pooling should be
done at the driver level. It is better for everyone involved if it is
done at the database level. While this is infeasible in Firebird 2 and
earlier, it is the basis for the Vulcan request architecture.
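For concreteness, here is a minimal sketch of what driver-level prepared
statement pooling amounts to in JDBC terms. The class is illustrative, not
Jaybird's actual implementation:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a client-side statement cache keyed by SQL text.
// The server knows nothing about it; the compiled request is simply
// kept alive between uses by never closing the PreparedStatement.
public class StatementCachingConnection {
    private final Connection connection;
    private final Map<String, PreparedStatement> cache = new HashMap<>();

    public StatementCachingConnection(Connection connection) {
        this.connection = connection;
    }

    // Returns a cached statement when one exists; prepares and caches
    // it otherwise. Callers must not close the returned statement.
    public PreparedStatement prepare(String sql) throws SQLException {
        PreparedStatement stmt = cache.get(sql);
        if (stmt == null) {
            stmt = connection.prepareStatement(sql);
            cache.put(sql, stmt);
        }
        return stmt;
    }

    // Tearing down the cache closes every server-side request at once;
    // a database-level cache would make this bookkeeping unnecessary.
    public void close() throws SQLException {
        for (PreparedStatement stmt : cache.values()) {
            stmt.close();
        }
        cache.clear();
        connection.close();
    }
}
```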
I am not a fan of connection pooling. Again, I think it's a solution at
the wrong level. Connection pooling is important only when the cost of
database attachment is high. Reducing the connection cost gives more
bang for the buck, particularly given a trend towards increased
connection state. Current SQL, as you know, uses a pathetically weak
single-role model for security. If anybody on the SQL committee ever
learns about three-tier applications or the web, it will be discovered
that a flexible role model is a better solution. Supporting a set of
roles, some active and some latent, increases the complexity of
connection pooling. So does the programmable namespace required to
support inheritance through an application hierarchy. History is against
connection pooling. But of course, it is now the accepted workaround
for database performance problems. The Netfrastructure web modules
don't use connection pooling but still outperform JSP solutions by at
least an order of magnitude.
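To make the state problem concrete: a Firebird role has historically been
fixed when the attachment is made, so a pool cannot hand an arbitrary cached
attachment to an arbitrary caller; it has to segregate connections by
everything that counts as attachment state. A hypothetical sketch (the
roleName URL property is an assumption about the driver):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: because attachment state (user, role, and any
// future latent roles or programmable namespace) is fixed at connect
// time, a pool degenerates into one sub-pool per distinct state vector.
// The more connection state there is, the smaller and less useful each
// sub-pool becomes.
public class RoleAwarePool {
    private final String baseUrl;
    private final Map<String, Deque<Connection>> pools = new HashMap<>();

    public RoleAwarePool(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public synchronized Connection get(String user, String password, String role)
            throws SQLException {
        String key = user + "/" + role;  // the attachment state vector
        Connection cached = pools
                .computeIfAbsent(key, k -> new ArrayDeque<>())
                .poll();
        if (cached != null) {
            return cached;
        }
        // Assumed driver behavior: role passed as a URL property.
        return DriverManager.getConnection(
                baseUrl + "?roleName=" + role, user, password);
    }

    public synchronized void release(String user, String role, Connection c) {
        pools.computeIfAbsent(user + "/" + role, k -> new ArrayDeque<>()).push(c);
    }
}
```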
All that said, fewer connections are better than more connections,
which discourages programmers from using multiple independent
transactions/connections when program logic would otherwise call for
them. I've always had a lot of sympathy for application programmers,
which is why I built multiple transactions per attachment into
DSRI/OSRI. At that time, blobs, triggers, UDFs, and even dates were
non-standard, so we've had some progress.

But there is no doubt in my mind that the SQL committee blew it on
transactions and the JDBC design amplified the blunder. If it can be
fixed within the context of the standard, I think it makes sense to do so.
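The blunder is easy to see in code. JDBC ties the transaction to the
Connection object, so two logically independent transactions force two
physical attachments; a sketch (URL, table names, and credentials are made
up):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch of the model being criticized: in JDBC the transaction is a
// property of the Connection, so two independent units of work need
// two full (and expensive) physical attachments.
public class TwoTransactions {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:firebirdsql:localhost:employee";  // illustrative

        try (Connection orders = DriverManager.getConnection(url, "sysdba", "masterkey");
             Connection audit = DriverManager.getConnection(url, "sysdba", "masterkey")) {
            orders.setAutoCommit(false);
            audit.setAutoCommit(false);

            try (Statement s = orders.createStatement()) {
                s.executeUpdate("UPDATE orders SET status = 'SHIPPED' WHERE id = 1");
            }
            try (Statement s = audit.createStatement()) {
                s.executeUpdate("INSERT INTO audit_log (note) VALUES ('order 1 shipped')");
            }

            audit.commit();    // the audit row survives...
            orders.rollback(); // ...even though the order update does not
        }
    }
}
```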
>> I don't see the problem at all. The behavior of the cloned
>> connection would be the same as the parent connection. Clearly you
>> couldn't clone a connection before the attachment is complete, and a
>> failure after attachment is an error to be reported.

> Do you mean something like "auxiliary connection", the lifetime of which
> is not greater than the lifetime of the "main" connection (also in terms
> of pool)? It would be relatively easy (compared to the next case) to
> implement. Does such a case make sense?

A cloned connection could outlive the original connection, but perhaps
the physical attachment should not be reused until the parent
connection and all clones have been closed. I have cloneConnection in
Netfrastructure but, as I said, I don't reuse connections, so I'm not
going to claim a complete solution, and in any case, I have no intention
of implementing it. But I don't see any unsolvable problems.
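A sketch of what that contract might look like on the driver side; the names
are hypothetical, not an existing Jaybird API, and per-clone transaction
machinery is omitted. The essential piece is the reference count that keeps
the physical attachment out of circulation until the parent and every clone
are closed:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the clone contract described above: parent
// and clones share one physical attachment, and the attachment is
// released (or returned to a pool) only when the last handle closes.
public class CloneableConnection {

    // Stand-in for the real wire-level attachment.
    public interface PhysicalAttachment {
        void release();
    }

    private final PhysicalAttachment attachment;
    private final AtomicInteger liveHandles;

    public CloneableConnection(PhysicalAttachment attachment) {
        this(attachment, new AtomicInteger(1));
    }

    private CloneableConnection(PhysicalAttachment attachment, AtomicInteger liveHandles) {
        this.attachment = attachment;
        this.liveHandles = liveHandles;
    }

    // No database name or credentials needed: the clone inherits the
    // parent's attachment, which is what makes this usable from inside
    // a trigger, where application context is minimal.
    public CloneableConnection cloneConnection() {
        liveHandles.incrementAndGet();
        return new CloneableConnection(attachment, liveHandles);
    }

    public void close() {
        if (liveHandles.decrementAndGet() == 0) {
            attachment.release();  // only now may the attachment be reused
        }
    }
}
```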
The primary usage of cloneConnection in Netfrastructure is in a
replication trigger to kick off a thread to perform synchronization. One
of the benefits of cloneConnection is that database properties --
dbname, user credentials, and the like -- are not necessary, which is
good because application context within a trigger is extremely limited.
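Under the same hypothetical API as the previous sketch, that trigger pattern
might look like this; the trigger entry point and the synchronization work
are stand-ins:

```java
// Hypothetical usage, mirroring the Netfrastructure pattern described
// above: a replication trigger clones its connection and hands the
// clone to a worker thread so synchronization runs outside the
// triggering transaction.
public class ReplicationTrigger {

    public void fired(CloneableConnection triggerConnection) {
        CloneableConnection clone = triggerConnection.cloneConnection();
        new Thread(() -> {
            try {
                synchronizeReplica(clone);  // stand-in for the real sync work
            } finally {
                clone.close();  // drops the shared attachment reference
            }
        }).start();
    }

    private void synchronizeReplica(CloneableConnection connection) {
        // push pending changes to the replica (omitted)
    }
}
```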