Subject: Re: [IB-Architect] Re: License Question
Author Jim Starkey
At 05:38 PM 3/24/00 -0500, Emil Briggs wrote:
>From: Emil Briggs <emil@...>
>
>
>Thanks for the info -- not sure I understand you completely here but
>when I make multiple connections to a Linux box running classic
>I see one instance of the lock manager and multiple instances of
>gds_inet_server. (One for each connection). So I am assuming that
>each instance of gds_inet_server communicates with the lock manager
>via Sys 5 IPC (Assumption based on the output of strace
>on gds_lock_mgr which shows a bunch of IPC calls).
>

As it turns out, bad assumption.

The Interbase generic lock manager for Unix is based on Sys-V
shared memory (ugh) and semaphores (double ugh). However, it
still needs to signal processes that hold a lock blocking
somebody else, in hopes they will downgrade it. Unix
protection limits which processes a given process may signal.
If a process finds it is unable to signal somebody, it asks
the "gds_lock_mgr", running as root, to send the signal
instead. A bit clunky, but the best that can be done on an
operating system designed in the 1970s.
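The fallback described above can be sketched in a few lines.
This is a toy Unix sketch, not InterBase code: the names
wakeup_blocking_owner and ask_lock_mgr_to_signal are mine, and
the helper is stubbed out rather than doing real IPC to the
daemon.

```python
import os
import signal

relayed = []  # records requests handed off to the root helper

def ask_lock_mgr_to_signal(pid, sig):
    # Hypothetical stand-in for the request to the root-owned
    # gds_lock_mgr daemon; the real mechanism is internal to
    # InterBase.
    relayed.append((pid, sig))

def wakeup_blocking_owner(owner_pid, sig=signal.SIGUSR1):
    # Try to signal the lock-holding process directly.  If Unix
    # permissions forbid it (the target runs under a different
    # uid), route the request through the root-privileged
    # gds_lock_mgr instead.
    try:
        os.kill(owner_pid, sig)
    except PermissionError:
        ask_lock_mgr_to_signal(owner_pid, sig)
```

The point is only the two-step shape: direct signal first, root
helper only on a permission failure.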

In other words, a single-process application, or an
application running as part of the same group, never
communicates with gds_lock_mgr.

A much better way to implement this would be to have a
separate thread watching the shared memory/semaphores in the
lock manager for a blocking event and doing the right thing*.
When the code was written, Sun's thread examples didn't even
compile. Now that Unix has moved into the '80s, maybe it's
time to use pthreads. My own personal opinion, and reasonable
people can and do differ, is that the superserver code base
should be freed from the constraints of the past and allowed
to fly. Can't get that Harrison person to agree (yet).
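The threaded alternative might look something like this
sketch, with threading.Semaphore standing in for the Sys-V
semaphore and a callback standing in for "the right thing"
(downgrading the lock); the class and method names are mine:

```python
import threading

class LockTableWatcher:
    # A dedicated thread blocks on a semaphore that is posted
    # whenever a held lock is blocking somebody, then runs the
    # downgrade callback -- no signals, no root helper needed.
    def __init__(self, on_blocking_event):
        self._sem = threading.Semaphore(0)
        self._on_blocking_event = on_blocking_event
        self._stop = False
        self._thread = threading.Thread(target=self._watch,
                                        daemon=True)
        self._thread.start()

    def post_blocking_event(self):
        # Conceptually called from another process via the
        # shared semaphore when our lock blocks it.
        self._sem.release()

    def _watch(self):
        while True:
            self._sem.acquire()
            if self._stop:
                return
            self._on_blocking_event()  # e.g. downgrade the lock

    def shutdown(self):
        # Simplified teardown; assumes no concurrent posts.
        self._stop = True
        self._sem.release()
        self._thread.join()
```

The design win is that the blocked-on process wakes itself up
from inside its own address space, so Unix signal permissions
never enter the picture.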

>My interest comes from wanting to implement a read mostly database
>that would be scalable by adding nodes to a cluster. Writes (and
>the need to do any locking) would be infrequent.
>

Interbase classic tries to get an exclusive lock on the
database, and if successful, doesn't express page locks at
all, whether for read or write. If there is another process
sharing the database, then page locks must always be
expressed. A cluster environment means locks would always be
required, not for concurrency control, but to maintain cache
consistency across nodes.
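The rule above reduces to a one-liner; this is just a restatement
of the text, with names of my own choosing, not InterBase code:

```python
def page_locks_required(other_sharers, clustered):
    # Page locks can be skipped only when one process holds the
    # database exclusively on a single node.  Any sharing
    # process -- or any cluster, where each node has its own
    # page cache to keep consistent -- forces page locking on
    # every access.
    return clustered or other_sharers > 0
```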

I think you would be better off with a super-server (which never,
ever needs to lock) and fast IPC than the architectural overhead
of a clustered environment. Remember that the main reason that
DEC implemented clusters is that they couldn't design a fast
machine. Change the mix of cpu speed to disk bandwidth and
clusters (for databases) lose their appeal. At least to me.
Probably.


* - Eric Raymond, Mr. Open Source, would not agree.

Jim Starkey