| Subject | Re: [Firebird-Architect] WAL and JOURNAL |
| --- | --- |
| Author | Jim Starkey |
| Post date | 2004-01-19T12:55:58Z |
Marius Popa wrote:
>What was the idea of the WAL (and what went wrong
>with it)
>can it be rewritten?
>or is there a need for something like that now?
>
>ps: just curious (have seen the cvs wal/journal
>cleanup...)

The original journalling worked like this:
1. There was a separate journalling process (journal server) that
could journal for an arbitrary number of databases
2. When journalling was enabled for a database, the database dumped
itself to its journal server
3. The engine logged page changes (up to the size of the page)
4. When a page was flushed, the page changes (or page itself) were
sent to the journal server
5. There was a synchronization dance at commit time
The philosophy was that journalling meant every bit of a transaction
resided on two pieces of oxide before the transaction was reported
committed, so a single point of failure could not result in loss of
data.
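To make the shape of that flow concrete, here is a minimal sketch of
steps 2 through 5. Every name in it (JournalServerLink,
JournalledEngine, PageDelta) is invented for illustration; the real
hooks lived in the page-level modules and the cache manager, and the
wire protocol to the journal server is not shown.

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// One logged change to a database page (step 3); up to a full page image.
struct PageDelta {
    uint32_t pageNumber;
    uint16_t offset;                // byte offset of the change within the page
    std::vector<uint8_t> bytes;     // the changed bytes
};

// Stand-in for the connection to the separate journal server process (step 1).
class JournalServerLink {
public:
    void sendDatabaseDump(const std::string& dbName) {
        std::cout << "dump of " << dbName << " sent to journal server\n";
    }
    void sendPageRecords(uint64_t txnId, const std::vector<PageDelta>& deltas) {
        std::cout << "txn " << txnId << ": " << deltas.size()
                  << " page record(s) journalled\n";
    }
    // The commit-time "synchronization dance" (step 5): block until the
    // server says the records are on its own disk.
    bool waitForDurableAck(uint64_t txnId) {
        std::cout << "txn " << txnId << ": journal server acknowledged\n";
        return true;
    }
};

class JournalledEngine {
public:
    explicit JournalledEngine(JournalServerLink& link) : link_(link) {
        link_.sendDatabaseDump("employee.gdb");     // step 2: initial dump
    }
    void logPageChange(PageDelta delta) {           // step 3: record the change
        pending_.push_back(std::move(delta));
    }
    void flushPage(uint64_t txnId) {                // step 4: ship records on flush
        link_.sendPageRecords(txnId, pending_);
        pending_.clear();
    }
    bool commit(uint64_t txnId) {                   // step 5: report commit only
        flushPage(txnId);                           // after the journal has it
        return link_.waitForDurableAck(txnId);
    }
private:
    JournalServerLink& link_;
    std::vector<PageDelta> pending_;
};

int main() {
    JournalServerLink link;
    JournalledEngine engine(link);
    engine.logPageChange({42, 0, {0xDE, 0xAD}});
    return engine.commit(1) ? 0 : 1;
}
```

The point of the final handshake is that a failure on either machine
after commit still leaves one copy of every change on disk.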
The WAL was based on the idea that transactional changes could be
flushed to a linear journal with very fast disk operations at commit
time, obviating the need to flush actual database pages back to the disk
before reporting commit. Many database systems (particularly Sybase and
presumably SQL Server) do this. Done well, it is excellent. But the
Borland guys forgot that they had introduced a single point of failure,
the journal itself, canceling the benefit of the original journalling
system, which they had written off. They thought they were getting
something for free: the benefit of journalling plus relief from the
requirement for a cache flush before commit. When it was recognized
that the WAL was no longer an optional redundancy feature but the
gating factor in database reliability (a WAL bug made recovery of
committed transactions impossible), and that the redundancy of a
separate journal had been lost, they lost interest and dropped the
feature.
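The essential WAL commit path can be sketched generically in a few
lines. This is an illustration of the technique, assuming POSIX I/O;
the record format and names are invented and have nothing to do with
the actual Borland WAL code.

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// One redo record: enough to reapply a page change during recovery.
struct WalRecord {
    uint64_t txnId;
    uint32_t pageNumber;
    std::vector<uint8_t> afterImage;
};

class WriteAheadLog {
public:
    explicit WriteAheadLog(const std::string& path)
        : fd_(::open(path.c_str(), O_WRONLY | O_CREAT | O_APPEND, 0644)) {}
    ~WriteAheadLog() { if (fd_ >= 0) ::close(fd_); }

    // Appends are cheap: the log is written strictly sequentially.
    bool append(const WalRecord& rec) {
        std::vector<uint8_t> buf(sizeof rec.txnId + sizeof rec.pageNumber
                                 + rec.afterImage.size());
        std::memcpy(buf.data(), &rec.txnId, sizeof rec.txnId);
        std::memcpy(buf.data() + sizeof rec.txnId, &rec.pageNumber,
                    sizeof rec.pageNumber);
        if (!rec.afterImage.empty())
            std::memcpy(buf.data() + sizeof rec.txnId + sizeof rec.pageNumber,
                        rec.afterImage.data(), rec.afterImage.size());
        return ::write(fd_, buf.data(), buf.size())
               == static_cast<ssize_t>(buf.size());
    }

    // The commit rule: the log, not the data pages, must be on disk before
    // success is reported; dirty pages can be written back lazily later.
    // This one file is also the single point of failure described above --
    // lose or corrupt it and committed work may be unrecoverable.
    bool commit(uint64_t txnId) {
        return append({txnId, 0, {}}) && ::fsync(fd_) == 0;
    }

private:
    int fd_;
};

int main() {
    WriteAheadLog wal("firebird.wal");
    wal.append({1, 42, {0xDE, 0xAD}});
    return wal.commit(1) ? 0 : 1;   // one sequential write + fsync, no page flush
}
```

A real implementation also needs the recovery side that replays this
log after a crash; that is where the reliability of the whole scheme
lives.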
For reasons that I don't understand, the WAL had tentacles all over
the system, whereas the original journalling code was limited to the
modules that managed pages (DPM, BTR, PAG) and the cache manager
itself.
There were at least two problems with journalling. One was a
measurable performance hit from sending page changes to the journal
server. The other, more serious, was that journal tapes fill up, so
using journalling pretty much dictated hiring someone to watch the
tape spin, waiting for it to fill up. The availability of a 2.8 GHz
superscalar processor with 100 Mb Ethernet, 512 MB of memory, and a
120 GB disk (yesterday's Sunday flyer's bargain) might have changed
the tradeoffs.
The other factor was the implementation of shadowing, which, while
not as robust as journalling to a different server, was almost as
good and a lot easier to use.
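For comparison, the idea of shadowing fits in a short sketch: every
page written to the main database file is also written, at the same
offset, to a shadow file, ideally on a different drive. The file
names and page size below are illustrative only, not Firebird's
actual shadow implementation.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>

constexpr std::size_t PAGE_SIZE = 4096;   // illustrative page size

// Every page write goes to both the primary file and the shadow file, so the
// loss of a single disk cannot lose committed data.
class ShadowedPageFile {
public:
    ShadowedPageFile(const char* dbPath, const char* shadowPath)
        : db_(std::fopen(dbPath, "wb")), shadow_(std::fopen(shadowPath, "wb")) {}
    ~ShadowedPageFile() {
        if (db_) std::fclose(db_);
        if (shadow_) std::fclose(shadow_);
    }
    bool writePage(std::size_t pageNumber,
                   const std::array<uint8_t, PAGE_SIZE>& page) {
        return writeTo(db_, pageNumber, page) && writeTo(shadow_, pageNumber, page);
    }
private:
    static bool writeTo(std::FILE* f, std::size_t pageNumber,
                        const std::array<uint8_t, PAGE_SIZE>& page) {
        if (!f) return false;
        if (std::fseek(f, static_cast<long>(pageNumber * PAGE_SIZE), SEEK_SET) != 0)
            return false;
        return std::fwrite(page.data(), 1, PAGE_SIZE, f) == PAGE_SIZE;
    }
    std::FILE* db_;
    std::FILE* shadow_;
};

int main() {
    ShadowedPageFile file("employee.gdb", "employee.shadow");
    std::array<uint8_t, PAGE_SIZE> page{};   // a blank page for illustration
    return file.writePage(0, page) ? 0 : 1;
}
```

No server, no tape, no operator to hire -- which is most of what made
it easier to live with.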