Subject: Re: [IB-Architect] Re: License Question
Author: Emil Briggs
Jim Starkey wrote:
> From: Jim Starkey <jas@...>
> The Interbase generic lock manager for Unix is based on Sys-V
> shared memory (ugh) and semaphores (double ugh). However, it
> still needs to signal processes that are holding a lock blocking
> somebody in hopes they will downgrade it. Unix protection
> limits to whom a process can signal. If a process finds it
> is unable to signal somebody, it asks the "gds_lock_mgr",
> running as root, to send the signal instead. A bit clunky,
> but the best that can be done on an operating system designed
> in the 1970s.
> In other words, a single process application or an application
> running as part of the same group never communicates with
> gds_lock_mgr.
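
The fallback Jim describes can be sketched roughly as follows. This is an illustrative sketch, not the actual InterBase code: the function name `notify_holder` and its return convention are invented here, and the real hand-off to the root `gds_lock_mgr` process is only indicated by a comment.

```c
#include <errno.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>

/* Try to signal the process holding a blocking lock. Unix protection
 * only lets us signal processes with a matching uid, so kill() can
 * fail with EPERM; in that case the real code would ask the root
 * gds_lock_mgr daemon to deliver the signal on our behalf.
 *
 * Returns 0 if signalled directly, 1 if the root helper would be
 * needed, -1 for any other failure (e.g. holder no longer exists). */
static int notify_holder(pid_t holder, int sig)
{
    if (kill(holder, sig) == 0)
        return 0;            /* signalled directly */
    if (errno == EPERM)
        return 1;            /* protection failure: hand off to gds_lock_mgr */
    return -1;               /* ESRCH etc.: holder is gone */
}
```

Passing `sig = 0` performs only the permission check without delivering a signal, which is the cheap way to discover up front whether the root helper will be needed.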

OK. Now things are starting to make sense. I would argue, though,
that a process holding a lock it no longer needs is a bug that
should be fixed. Or is there some other reason for doing it
that way?

> A much better way to implement this would be to have a separate
> thread watching the shared memory/semaphores in the lock manager
> for a blocking event and do the right thing*. When the code
> was written, Sun's thread examples didn't even compile. Now
> that Unix has moved into the 80's, maybe it's time to use
> pthreads. My own personal opinion, and reasonable people
> can and do differ, is that the superserver code base should
> be freed from the constraints of the past and allowed to
> fly. Can't get that Harrison person to agree (yet).

My vote (not that we're actually voting) is for pthreads.
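
The thread-based alternative Jim sketches could look something like this: a dedicated watcher thread blocks on a synchronization object until the lock manager posts a blocking event, then handles the downgrade in-process, so no cross-process signals (and no root helper) are needed. This is a minimal illustration, not InterBase code; a POSIX semaphore stands in for the shared-memory event, and `downgraded` stands in for the real downgrade logic.

```c
#include <pthread.h>
#include <semaphore.h>

/* Stand-in for the lock manager's blocking-event notification.
 * In a real superserver this would live in the lock table itself. */
static sem_t blocking_event;
static volatile int downgraded = 0;

/* Watcher thread: sleeps until a blocking event is posted, then
 * "does the right thing" -- here, just records that the downgrade
 * would have happened. */
static void *lock_watcher(void *arg)
{
    (void)arg;
    sem_wait(&blocking_event);  /* block until someone needs our lock */
    downgraded = 1;             /* stand-in for the real downgrade */
    return NULL;
}
```

The appeal of this design is that the waiting happens in a thread of the lock-holding process itself, so there is no signal delivery, no protection problem, and hence no need for a privileged daemon.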

> I think you would be better off with a super-server (which never,
> ever needs to lock) and fast IPC than the architectural overhead
> of a clustered environment. Remember that the main reason that
> DEC implemented clusters is that they couldn't design a fast
> machine. Change the mix of cpu speed to disk bandwidth and
> clusters (for databases) lose their appeal. At least to me.
> Probably.
> * - Eric Raymond, Mr. Open Source, would not agree.

It's more a question of price/performance. If I can use 6 dual-CPU
machines in a cluster, that's a lot more cost-effective than a
single 8-CPU machine. That's not always achievable, even in HPC
applications, but it's nice when you can get it.