Subject: Classic vs. superserver (long, very very long)
Author: Ann W. Harrison
Post date: 2002-10-08T15:40:19Z
Here's my take on the issue of classic vs. superserver.
First, I think that maintaining two architectures is a
gross waste of everyone's time and a serious risk to the
reliability of Firebird. There are over 500 ifdefs that
separate the code paths for the two. But the amount of
conditional code is much less important than the nature
of that code.
In the classic architecture, each connection is a process
that includes the database software as a shared library, and
different connections communicate in one of two ways: through
the lock manager or through the disk. In superserver,
connections are threads and can communicate through shared
data structures. Handling the shared data structures requires
interlocking, but that is much less expensive than signalling
from one process to another or reading and writing pages.
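To make the contrast concrete, here is a minimal sketch (not Firebird code; all names are invented) of the superserver style of sharing: connection threads update one in-memory structure under a mutex. The interlock costs a few instructions per access, where classic would need a lock-manager round trip or a page write and signal to make the same state visible to another process.

```cpp
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical shared state that every connection thread can see directly.
struct SharedDatabaseState {
    std::mutex latch;        // in-process interlock guarding the structure
    long activeRequests = 0; // example of state shared by all connections
};

// Each "connection" is just a thread touching the shared structure.
void connectionWork(SharedDatabaseState& db)
{
    for (int i = 0; i < 1000; ++i) {
        std::lock_guard<std::mutex> guard(db.latch); // a few instructions, no IPC
        ++db.activeRequests;
        --db.activeRequests;
    }
}

int main()
{
    SharedDatabaseState db;
    std::vector<std::thread> connections;
    for (int i = 0; i < 4; ++i)
        connections.emplace_back(connectionWork, std::ref(db));
    for (auto& t : connections)
        t.join();
    std::cout << "final state: " << db.activeRequests << "\n";
    return 0;
}
```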
To give one example:
Currently, Firebird starts a transaction for each connection
that requires stable db_keys. That transaction inhibits all
garbage collection for the database for the duration of the
connection, which is gross overkill. All that's required is
that deleted record stubs (12 bytes?) be left behind while
getting rid of all the large debris, index entries, etc.
The catch is that as soon as one connection requests db_key
stability, all concurrent connections have to honor the
request.
To implement this change in superserver, one would just
add a field to the database block that keeps a list of
the connections that want db_key stability. If the list
is empty, deleted stubs go away; otherwise, they stay.
When the server runs down a dead connection, it cleans up
that connection's request for stability.
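Roughly, the idea could be sketched like this (a hypothetical illustration only; DatabaseBlock, stableDbkeyUsers and the rest are invented names, not Firebird identifiers):

```cpp
#include <mutex>
#include <set>

using ConnectionId = unsigned;

// Hypothetical "database block" holding the list of connections that
// asked for db_key stability.
struct DatabaseBlock {
    std::mutex latch;
    std::set<ConnectionId> stableDbkeyUsers; // connections wanting stable db_keys

    void requestDbkeyStability(ConnectionId id) {
        std::lock_guard<std::mutex> g(latch);
        stableDbkeyUsers.insert(id);
    }

    // Called when a connection detaches, or when the server runs down
    // a dead connection on its behalf.
    void releaseDbkeyStability(ConnectionId id) {
        std::lock_guard<std::mutex> g(latch);
        stableDbkeyUsers.erase(id);
    }

    // Garbage collection asks this before discarding a deleted record stub;
    // index entries and other large debris can always be removed.
    bool mayRemoveDeletedStub() {
        std::lock_guard<std::mutex> g(latch);
        return stableDbkeyUsers.empty();
    }
};

int main()
{
    DatabaseBlock dbb;
    dbb.requestDbkeyStability(1);         // one connection wants stable db_keys
    bool ok = dbb.mayRemoveDeletedStub(); // false: stubs must be kept
    dbb.releaseDbkeyStability(1);         // connection goes away (or is run down)
    ok = dbb.mayRemoveDeletedStub();      // true: stubs can be cleaned up
    (void)ok;
    return 0;
}
```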
In classic, every process that accesses the database has
to be notified and keep track of the state independently.
Changes to the state need to be recorded in the lock table
and signalled to all processes. It's a cumbersome and
expensive process - complicated code and a performance
hit. That's why the change has never been made.
Another case is foreign key creation. At the moment, it
requires single-user access to the database because there's
no mechanism in classic to notify separate processes of the
existence of a new constraint. Many of the other metadata
update conflicts ("object is in use") stem from the complexity
of notifying separate processes and could be eliminated if
we abandoned the classic architecture.
Certainly classic has three well-known advantages: SMP,
failure safety, and the ability to kill a runaway process.
It's a cheap way to get SMP - but it doesn't scale. In
particular, classic puts more load on the I/O channels -
there comes a point where the database is simply I/O bound
and adding more processors isn't going to help.
In classic, a catastrophic error in one connection doesn't
affect other connections, while a superserver crash brings
down all connections. The server shouldn't crash. Ever.
Earlier versions of InterBase did not isolate UDFs and
allowed user code to crash the server. That was fixed
nearly four years ago. Now, it's up to us to find and
fix the places in the server where bad inputs can cause
crashes.
Borland has found a way to kill runaway queries in SuperServer -
we can too.
Yes, we could add shared page and metadata caching to classic,
but that's redoing work that's already been done in superserver,
and it still requires fine-granularity locking if it's going
to run well on an SMP system. If we tried to build from the
classic base, we'd be redoing all the work that's already
gone into superserver.
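For what "fine-granularity locking" would have to look like in a shared page cache, here is a rough sketch (assumed names, not Firebird's cache code): one latch per buffer instead of one latch over the whole cache, so connections working on different pages don't serialize on a single lock.

```cpp
#include <cstddef>
#include <memory>
#include <mutex>
#include <unordered_map>
#include <vector>

using PageNumber = unsigned long;

// One shared buffer per page; the per-page latch is the "fine" granularity.
struct BufferDesc {
    std::mutex latch;
    std::vector<unsigned char> data;
};

class PageCache {
public:
    explicit PageCache(std::size_t pageSize) : pageSize_(pageSize) {}

    // Only the hash lookup is serialized on the cache-wide latch; page
    // contents are protected by each buffer's own latch, so work on
    // different pages proceeds in parallel on an SMP machine.
    BufferDesc& fetch(PageNumber page) {
        std::lock_guard<std::mutex> g(cacheLatch_);
        auto& slot = buffers_[page];
        if (!slot) {
            slot = std::make_unique<BufferDesc>();
            slot->data.resize(pageSize_);
        }
        return *slot;
    }

private:
    std::size_t pageSize_;
    std::mutex cacheLatch_; // coarse latch, held only for the lookup
    std::unordered_map<PageNumber, std::unique_ptr<BufferDesc>> buffers_;
};

int main()
{
    PageCache cache(4096);
    BufferDesc& buf = cache.fetch(42);        // find or create the buffer
    std::lock_guard<std::mutex> g(buf.latch); // hold only this page's latch
    buf.data[0] = 1;                          // touch the shared page image
    return 0;
}
```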
Regards,
Ann
www.ibphoenix.com
We have answers.