Subject | Re: [Firebird-Architect] Table structure in Firebird? |
---|---|
Author | Jim Starkey |
Post date | 2005-03-02T15:18:43Z |
Dimitry Sibiryakov wrote:
>On 1 Mar 2005 at 5:35, Fabricio Araujo wrote:
>
>>Silly question to the collection:
>>Why did other dbs wait almost twenty years to implement
>>record versioning?
>
> IMHO, a locking architecture is simpler to implement and faster for
>most operations. BTW, Oracle didn't wait twenty years, but their
>versioning is (was?) not complete - they keep only two versions.
>
The performance advantage of multi-generational concurrency control is
that a separate transaction journal is not necessary. When a record and
back versions are on the same page (the normal case), no additional I/O
is required to support transaction backout.
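A minimal sketch of the idea (illustrative only, not Firebird's actual on-page record layout): each record head keeps a chain of back versions alongside it, so aborting a transaction just restores the most recent prior version in place, with no separate journal to replay.

```python
# Sketch: a record with an in-place back-version chain. Rolling back an
# aborted transaction pops versions off the chain -- no external journal.
class Record:
    def __init__(self, value, txn):
        self.value = value          # current (possibly uncommitted) version
        self.txn = txn              # transaction that wrote it
        self.back_versions = []     # older versions, newest first

    def update(self, value, txn):
        # push the current version onto the back-version chain
        self.back_versions.insert(0, (self.value, self.txn))
        self.value, self.txn = value, txn

    def rollback(self, txn):
        # undo every version written by the aborted transaction
        while self.txn == txn and self.back_versions:
            self.value, self.txn = self.back_versions.pop(0)

rec = Record("v1", txn=1)
rec.update("v2", txn=2)   # txn 2 updates the record
rec.rollback(txn=2)       # txn 2 aborts; the back version is restored
print(rec.value)          # -> v1
```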
In a peer-to-peer file sharing architecture like classic, record locks
must be expressed in an external lock table implemented with shared
memory and controlled by semaphores. Furthermore, the lock table itself
must be managed with offset addressing to handle the likely case that
the lock table will be mapped in different addresses in different
processes. Also, because the lock table requires mapped memory,
extending it is problematical, further stressed by a need to support an
arbitrary number of record locks.
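The offset-addressing point can be sketched as follows (a toy model, not Firebird's lock manager): because each process may map the shared region at a different virtual address, entries link to one another by byte offset from the region's base rather than by raw pointer, so the links stay valid in every mapping.

```python
import struct

# Sketch: a chain of lock-table entries linked by byte offsets into a
# shared region. A bytearray stands in for the shared-memory mapping.
REGION = bytearray(1024)
ENTRY = struct.Struct("iq")     # next-entry offset (int), lock key (long)

def put_entry(offset, next_offset, key):
    ENTRY.pack_into(REGION, offset, next_offset, key)

def walk(head_offset):
    # follow the offset chain from the region base; -1 ends the chain
    keys, off = [], head_offset
    while off != -1:
        off, key = ENTRY.unpack_from(REGION, off)
        keys.append(key)
    return keys

put_entry(0, 16, 100)           # entry at offset 0 -> next entry at 16
put_entry(16, -1, 200)          # entry at offset 16 -> end of chain
print(walk(0))                  # -> [100, 200]
```

Because every link is relative to the region base, the same bytes produce the same chain no matter where each process maps the region.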
The advantage of record locking is that it can prevent (unfortunately,
not reliably) deadlocks, where multi-generational concurrency control
can only report them.
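Deadlock reporting in this sense amounts to finding a cycle in a waits-for graph and aborting a victim; the sketch below (an assumption about the general technique, not any engine's actual code) shows the detection step.

```python
# Sketch: deadlock *detection* as cycle-finding in a waits-for graph.
# An engine that can only detect deadlocks discovers the cycle after it
# forms and must abort one participant.
def has_deadlock(waits_for):
    """waits_for maps each blocked transaction to the one it waits on."""
    for start in waits_for:
        seen, txn = set(), start
        while txn in waits_for:
            if txn in seen:
                return True     # revisited a transaction: a cycle exists
            seen.add(txn)
            txn = waits_for[txn]
    return False

print(has_deadlock({"T1": "T2", "T2": "T1"}))  # -> True  (T1 <-> T2)
print(has_deadlock({"T1": "T2", "T2": "T3"}))  # -> False (a simple chain)
```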
A problem that record locking handles poorly is phantoms -- records
created by a concurrent, committed process. To handle phantoms
correctly, it is necessary to lock a mythical "end of table" marker,
which blocks all sequential scans. The telling aspect of most record
locking systems is that they almost never operate in full consistency
mode, but do most work with autocommit.
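The phantom anomaly can be shown in a few lines (a simulation, not any real engine's locking): locks taken on existing rows say nothing about rows that do not yet exist, so a committed concurrent insert shows up in a repeated scan.

```python
# Sketch: T1 record-locks every row it scans, but T2's committed insert
# was never covered by any of those locks -- a phantom appears in T1's
# second scan of the same table.
table = [{"id": 1}, {"id": 2}]
locks = set()

def scan(txn):
    for row in table:
        locks.add((txn, row["id"]))   # record lock on each existing row
    return [row["id"] for row in table]

first = scan("T1")                    # -> [1, 2]
table.append({"id": 3})               # T2 inserts a row and commits
second = scan("T1")                   # -> [1, 2, 3]  (row 3 is the phantom)
print(sorted(set(second) - set(first)))  # -> [3]
```

Only a lock on the "end of table" marker (or a range/predicate lock) would have blocked T2's insert, at the cost of serializing all sequential scans.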
So, like almost everything else, neither record locking nor
multi-generational concurrency control is faster or simpler or better.
The two schemes just choose different tradeoffs.
--
Jim Starkey
Netfrastructure, Inc.
978 526-1376