Subject Re: [Firebird-Architect] Re: Can we, can we, can we????...
Author Jim Starkey
Aleksey Karyakin wrote:

>Nothing has to be redesigned, really. In theory, a new message type
>should appear in the remote protocol, but the existing one for
>isc_free_statement() could fit. The overall client interface change
>is just adding a new constant in ibase.h, as Borland did. BTW, as far
>as I understand, IB 6.5 still opens a separate connection to cancel
>queries.
>
>
>
A socket can be shared on the transmitting side with a mutex, which is
no big deal. The problems are on the receive side.

Under the current implementation of the remote interface, there is a read
outstanding on the primary socket only when the remote interface is
expecting a reply to a remote request. To multiplex independent data
streams across a single socket, there would need to be a thread
dedicated to listening on the socket. As you are probably aware, the
primary protocol is stream, not message, based, depending on XDR to
serialize and deserialize structures. Since there is no way to find the
end of a primary protocol message other than executing the XDR receive
procedure, the remote interface listener would have to receive a message,
sniff the contents to determine whether it was a primary protocol
message or an event, then do an XDR receive or handle an event packet as
required. While this could be done by refactoring the code and
sprinkling synchronization objects where required, it does induce a
thread switch for every XDR packet received, which, on Linux at least, is
a full context switch.

Is the cost of the second socket sufficient to justify the additional
context switch per message? Or, even more importantly, is the ability
to share a socket worth the context switch even if events are not being
used?

Something that I've always tried to be sensitive to is the cost of a
feature's existence to a user who isn't using that feature. I think you
need to think about the costs and benefits of your suggestion for people
who will never use a multiplexed socket.

>>precludes extending the interruption mechanism to let a dba kill
>>selected requests.
>>
>>
>
>The interruption mechanism in the engine is the same
>(gds_cancel_operation) and it has been operational for a few years,
>however, only for SuperServer. The missing part is the means of
>delivering the cancel request from a client to the server. Cancelling
>users' own queries is easy and can be done over a weekend.
>
>In short:
>
>-- it solves the problem of killing long-running queries by users
>themselves
>-- it is easy to implement in SS code
>-- it has no security implications
>-- it does not require a protocol to exchange a token
>-- it can be easily integrated with third-party tools and interfaces
>
>
As far as I can tell from the code, isc_cancel_operation was never
implemented for client use in any mode of operation. There is also no
indication that it was ever turned on or tested. Furthermore, the design
of the remote interface, remote server, and wire protocol makes it
impossible to implement with the current design.

>The main problem I see is the lack of Classic (single-threaded
>server) support. Isn't Classic doomed to extinction? Anyway, a Classic
>process can't simply be stopped properly while it is executing a query.
>
>
Yes, it could. I outlined the implementation yesterday.

>The case of an admin killing arbitrary queries is quite different, and
>it requires a different approach. The main problems are identification
>of those queries and security. Suppose there is a token for every
>running query (there certainly is one). How does the admin get to know
>what its value is?
>
A database info item could be used by a suitably privileged user to
return a list of running requests, including:

* SQL string
* User account
* Elapsed time
* Fetches, marks, etc.
* Request token for subsequent kill

Like the rest of my proposal, this could be implemented without change
to the plumbing.

>The real problem is not the cost but the difficulty of consistently
>managing two different communication channels between a single client
>and a single server. The original events implementation clearly shows
>the problems - it works badly with firewalls, NAT, and multiple
>network interfaces, is subject to blocking, and so on.
>
>
I think you are overstating your case. The client uses the same
connect string, albeit with a different port, that it used to make the
primary connection. The client is perfectly happy behind a NAT,
firewall, or whatever. As long as the server is able to listen on a
dynamically created port, there is no problem. If the server, or the
network path between the server and client, does not support TCP/IP
semantics, there will be a problem, but the problem isn't with the
Firebird code.

>>Handles don't cross layers.
>>
>>
>
>Hmm, their values don't. Object identifiers represented by them do.
>The idea was that a client can identify its own request by providing
>statement handle to the client interface.
>
>
>
Only by going backwards through the plumbing to the server, which would
require additional support in a dozen different components, and then
only on the original connection. A handle could never be returned
across an info call, for example, or even safely used in a different
thread (handles are reused FIFO in the Vulcan Y-valve).

A request token can't be forged, can be passed safely across thread,
process, and network boundaries, and becomes harmless if the
originating object is destroyed.

--

Jim Starkey
Netfrastructure, Inc.
978 526-1376