| Subject | Re: Can we, can we, can we????... |
|---|---|
| Author | johnson_dave2003 |
| Post date | 2005-06-15T17:21:02Z |
A co-worker of mine suggested this:
If the kill is implemented as Jim suggested, where a broken
connection terminates the query, then timeout can be implemented on
the client side as a socket timeout. This puts timeout under the
control of the application.
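As a minimal sketch of the client side, assuming the server really does cancel a running query when its connection drops: the client installs a read timeout on the socket and closes the connection when no reply arrives in time. The host name and the wire exchange are placeholders, not the real gds remote protocol; 3050 is just Firebird's default port.

```java
// Sketch only: client-side query timeout as a socket timeout.
// Assumes the server terminates a query whose connection breaks.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimedQuery {
    public static void main(String[] args) throws IOException {
        Socket s = new Socket();
        s.connect(new InetSocketAddress("dbhost", 3050), 5000); // connect timeout: 5s
        s.setSoTimeout(30000); // query timeout: 30s on any blocking read
        try {
            // ... send the query, then block waiting for the reply ...
            int firstByte = s.getInputStream().read();
        } catch (SocketTimeoutException e) {
            // No reply in time: drop the connection. Under Jim's
            // proposal the server sees the broken connection and
            // kills the query; no server-side timeout code is needed.
            s.close();
        }
    }
}
```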
A relatively minor extension to the user definition on the server,
and to the connection on the client, would allow the gds layer to
implement a per-user-ID timeout as a blocked-socket timeout plus
disconnect.
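The per-user variant is the same mechanism with the timeout resolved at attach time. A hypothetical sketch, in which the lookup table stands in for whatever the extended user definition would actually store:

```java
// Sketch only: per-user timeouts installed at connect time.
// USER_TIMEOUTS is hypothetical; in practice the gds layer would
// fetch the value from the extended user definition on the server.
import java.io.IOException;
import java.net.Socket;
import java.util.Map;

public class PerUserTimeout {
    // Milliseconds per user ID; 0 means no timeout (block forever).
    static final Map<String, Integer> USER_TIMEOUTS =
            Map.of("REPORTS", 600000, "WEBAPP", 15000);

    static void applyTimeout(Socket s, String userId) throws IOException {
        s.setSoTimeout(USER_TIMEOUTS.getOrDefault(userId, 0));
    }
}
```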
This way, Jim's suggestion buys four levels of protection - abandoned
session, application-controlled user kill, application-controlled
timeout, and optional timeout by user ID - with a minimum of coding
changes.
The final level of protection is much more invasive. Before a DBA
kill can be factored in, you need a way to monitor running queries.
A DBA monitor/kill is a prime example of a use for a memory-only
table: relatively few queries are likely to run concurrently (less
than a few thousand), the overhead of writing to DASD is definitely
not wanted, and if the server dies the data is garbage anyway.
At the start of a user query, insert and commit a row into a
wipTransaction table that holds the query token, start time, and PID.
At the end of the query, delete and commit the row for that
transaction. The DBA monitor app then becomes a wrapper around a
select statement against the new (memory-only) system table.
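In code, the bookkeeping is a couple of one-row statements. A sketch, assuming auto-commit is off and that the wipTransaction table and its columns exist as described (all names are illustrative):

```java
// Sketch only: start/end bookkeeping against a hypothetical
// memory-only wipTransaction table. Connection auto-commit is
// assumed to be off, so each step commits explicitly.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class WipTracker {
    static void queryStarted(Connection c, String token, long pid) throws SQLException {
        try (PreparedStatement ps = c.prepareStatement(
                "INSERT INTO wipTransaction (query_token, start_time, pid) " +
                "VALUES (?, CURRENT_TIMESTAMP, ?)")) {
            ps.setString(1, token);
            ps.setLong(2, pid);
            ps.executeUpdate();
        }
        c.commit();
    }

    static void queryEnded(Connection c, String token) throws SQLException {
        try (PreparedStatement ps = c.prepareStatement(
                "DELETE FROM wipTransaction WHERE query_token = ?")) {
            ps.setString(1, token);
            ps.executeUpdate();
        }
        c.commit();
    }

    // The DBA monitor reduces to:
    //   SELECT query_token, start_time, pid
    //   FROM wipTransaction ORDER BY start_time
}
```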
Kill then has the option of signalling the wait loop to terminate, or
signalling the user's connection to terminate.
I can think of some other uses for memory-only tables - as the back
end for a message server, for example (IBM's MQ is a prime example -
a DBMS with record-level generational versioning should outperform
DB2 for this application).
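The queue pattern on such a table is short enough to sketch. Everything here is hypothetical - the table, its columns, and the assumption that msg_id is generated by the server - but it shows why a record-versioning engine suits the workload: producers and consumers touch different rows and need not block each other.

```java
// Sketch only: a message queue backed by a hypothetical memory-only
// table. Producers insert; a consumer reads the oldest row and
// deletes it in the same transaction. Auto-commit assumed off.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MemQueue {
    static void enqueue(Connection c, String body) throws SQLException {
        try (PreparedStatement ps = c.prepareStatement(
                "INSERT INTO msg_queue (enqueued_at, body) " +
                "VALUES (CURRENT_TIMESTAMP, ?)")) {
            ps.setString(1, body);
            ps.executeUpdate();
        }
        c.commit();
    }

    static String dequeue(Connection c) throws SQLException {
        try (PreparedStatement ps = c.prepareStatement(
                "SELECT msg_id, body FROM msg_queue ORDER BY enqueued_at");
             ResultSet rs = ps.executeQuery()) {
            if (!rs.next()) {
                return null; // queue empty
            }
            long id = rs.getLong(1);
            String body = rs.getString(2);
            // A real consumer would also handle two readers racing
            // for the same row; omitted here for brevity.
            try (PreparedStatement del = c.prepareStatement(
                    "DELETE FROM msg_queue WHERE msg_id = ?")) {
                del.setLong(1, id);
                del.executeUpdate();
            }
            c.commit();
            return body;
        }
    }
}
```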