Subject: Re: [Firebird-Architect] Re: Why did Interbase lose out to Oracle?
Author: Jim Starkey
Post date: 2010-04-23T15:00:57Z
mckinnemon77 wrote:
> Why didn't you fork Vulcan? Were you planning on using memory pools
> in Vulcan? In your opinion, do you think PostgreSQL would be a better
> choice for implementing a Linux database server?

The issue wasn't memory pools. Both Vulcan and the main branch had
memory pools, though the implementation was different. The issue was
how objects were deleted.
My original Interbase implementation used request-specific memory
pools. When the request was finished, the pool was deleted and all
blocks in it disappeared in one fell swoop. That was just fine for C --
small code, very fast, and a guarantee (well, sort of) against memory
leaks.
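For readers who haven't seen the pattern, here is a minimal C++ sketch of
a request-scoped pool along those lines. It is illustrative only, not the
actual Interbase code (the real thing carved allocations out of larger
blocks rather than calling malloc per object):

    // Minimal request-scoped pool: the pool owns every allocation, and
    // destroying the pool releases them all in one sweep with no
    // per-object bookkeeping.
    #include <cstdlib>
    #include <vector>

    class MemoryPool {
    public:
        ~MemoryPool() {
            for (void* block : blocks)      // one fell swoop
                std::free(block);
        }
        void* allocate(std::size_t size) {
            void* block = std::malloc(size);
            blocks.push_back(block);
            return block;
        }
    private:
        std::vector<void*> blocks;
    };

    void handleRequest() {
        MemoryPool pool;                    // one pool per request
        char* buffer = static_cast<char*>(pool.allocate(256));
        // ... use buffer for the life of the request ...
    }                                       // everything reclaimed here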
Vulcan, however, refactored the code base into encapsulated C++
classes. The C++ object paradigm requires that object destructors be
called when the object is deleted to release any resources held by the
object, unlink it from other data structures, etc. This did not mesh at
all well with the delete-everything-at-once pool model. It could have
been implemented by requiring that all objects inherit from a single
ur-class with a virtual destructor, but this was cumbersome and it
negated the performance benefit of just disappearing the pool and
everything in it. Vulcan adopted C++ semantics that required individual
objects to be explicitly deleted and made responsible for their own
cleanup, while the Firebird branch continued with C semantics.
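For contrast, a sketch of the ur-class alternative mentioned above, with
hypothetical names (not Vulcan's). It shows where the cost comes from:
the pool has to keep a list of its objects and run each destructor at
teardown, instead of simply dropping its blocks:

    #include <vector>

    class PoolObject {
    public:
        virtual ~PoolObject() = default;    // the single ur-class
    };

    class CleanupPool {
    public:
        ~CleanupPool() {
            for (PoolObject* object : objects)
                delete object;              // per-object destructor calls
        }
        template <typename T>
        T* adopt(T* object) {               // every allocation registers itself
            objects.push_back(object);
            return object;
        }
    private:
        std::vector<PoolObject*> objects;
    };

    class Statement : public PoolObject {
    public:
        ~Statement() override {
            // release resources, unlink from other structures, etc.
        }
    };

Every class in the engine would have had to derive from PoolObject,
which is the cumbersome part, and the teardown loop is what gives away
the performance win of dropping the pool wholesale.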
A particular frustration with Firebird is being the upstart
radical arguing against technology created in a different era by an old
fogey (also me). C++ encapsulation is one area, fine-grained
multi-threading is another, the security model a third, a switch from GDML
to SQL for internal system table management a fourth, and a SQL-based
engine a fifth. If I rack my brain, I can come up with a dozen.
Interbase was created in the era of the MC68010 and MicroVAXen, machines
that maxed out physical memory at 3 or 4 MB and had processors almost
as slow as disks. Even servers maxed out at 7 MB. The whole classic
architecture was pretty much an artifact of the ponderously slow VMS
context switch time, which gave a huge gain to putting code in process
rather than in a separate server process. And let's not forget that
nobody expected to see an SMP machine on the desktop.
Memory pools are very useful to avoid fragmentation when managing
objects of radically different lifetimes. The Vulcan pool manager came
from Netfrastructure. It continued to evolve in Falcon and now
NimbusDB. It now uses an AVL tree to manage large free objects and has a
per-thread memory cache that is a *huge* win, making memory allocations
almost free.
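A rough sketch of the per-thread cache idea, assuming a simple
size-class scheme (this is not the Falcon or NimbusDB code). The point
is that a cache hit is just a pointer pop on a thread-local list, with
no locking and no search, which is why allocation becomes almost free:

    #include <cstdlib>

    struct FreeBlock { FreeBlock* next; };

    const std::size_t GRANULE = 16;          // size classes: 16, 32, 48, ...
    const std::size_t SIZE_CLASSES = 32;

    thread_local FreeBlock* freeLists[SIZE_CLASSES] = {};

    // size must be > 0; the same size must be passed back on release
    void* cachedAllocate(std::size_t size) {
        std::size_t cls = (size + GRANULE - 1) / GRANULE;
        if (cls < SIZE_CLASSES && freeLists[cls]) {
            FreeBlock* block = freeLists[cls];   // fast path: no locks
            freeLists[cls] = block->next;
            return block;
        }
        return std::malloc(cls * GRANULE);       // slow path: shared allocator
    }

    void cachedRelease(void* memory, std::size_t size) {
        std::size_t cls = (size + GRANULE - 1) / GRANULE;
        if (cls < SIZE_CLASSES) {                // keep it for this thread
            FreeBlock* block = static_cast<FreeBlock*>(memory);
            block->next = freeLists[cls];
            freeLists[cls] = block;
        } else {
            std::free(memory);                   // too big to cache
        }
    }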
As for a Linux database system, I think NimbusDB is the best solution
(which shouldn't be a surprise). A modern database must be
mostly-memory, using the disk for archival storage and backfill.
Netfrastructure / Falcon did this, but at the record level, which wasn't
quite right.
--
Jim Starkey
Founder, NimbusDB, Inc.
978 526-1376