| Subject | Re: [firebird-support] Re: Corruption of firebird db |
| --- | --- |
| Author | Helen Borrie |
| Post date | 2006-03-29T02:08:38Z |
At 11:39 PM 28/03/2006, you wrote:
> > With databases, doesn't really matter which
> > database software you're using, if you run out of disk space, then all
> > bets are off.
> >
>
>Of course, a full hdd is bad, but should that blow up the database?

No, it shouldn't. Something else is playing here. We notice, for
example, that your system apparently never has enough RAM to allocate
even the specified amount of buffer space. At some point you
configured your database cache to be 4000 pages, i.e. twice the
default. Possibly you also ramped up the page size from the default
4 KB to something absurd for a resource-deficient system, e.g. 16
KB? You're running ordered queries, without configuring any sort
space on another disk? Your pagefile.sys is on another disk but
something else is eating all the virtual RAM?
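If you want to see what the engine actually believes it has been
given, the standard tools will show you. A minimal sketch - the path,
user and numbers below are examples only, not taken from your system:

    # Read the header page: "Page size" and "Page buffers" are the values at issue.
    # (Add -user/-password, or set ISC_USER/ISC_PASSWORD, if the server asks for them.)
    gstat -h C:\data\mydb.fdb

    # 4000 buffers x 16 KB pages = roughly 62 MB of cache per attachment (Classic)
    # or per database (Superserver). Dial it back to something the machine can supply:
    gfix -buffers 2048 -user SYSDBA -password masterkey C:\data\mydb.fdb

    # Page size can only be changed by a backup/restore cycle:
    gbak -b -user SYSDBA -password masterkey C:\data\mydb.fdb C:\backups\mydb.fbk
    gbak -c -p 4096 -user SYSDBA -password masterkey C:\backups\mydb.fbk C:\data\mydb.new.fdb

    # Sort space on a different disk is configured in firebird.conf (Firebird 1.5):
    #   TempDirectories = D:\fbtemp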
You haven't provided a lot of the essential detail but, if you are
also storing your database on a FAT32 or ext2 filesystem, there is an
absolute limit of 2 GB for any shared-access file. A database is a
shared-access file. In these conditions, it is a fairly fatal
assumption to ignore database growth. You should have planned
against hitting this limit well in advance of the point where the DB
file size hit the wall, since the database engine has no way of
knowing the limits of your filesystem.
The facility to cater for this eventuality is splitting the database
into multiple files. The cost of ignoring it is that the database
engine will reach EOF and then start overwriting the file from the
beginning....until it needs something that it overwrote...and *then*,
if it can, it will deliver an exception. By that time, it's too
late: you have already corrupted the database header and, probably, some data.
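To put numbers on it (the names and figures here are made up, purely
for illustration): at a 4 KB page size, a 2 GB ceiling arrives at
2,147,483,648 / 4096 = 524,288 pages, so the secondary file has to be
declared comfortably before that point. The DDL is a single statement:

    # File name and page number are examples only.
    isql -user SYSDBA -password masterkey C:\data\mydb.fdb
    SQL> ALTER DATABASE ADD FILE 'D:\data\mydb.fd2' STARTING AT PAGE 500000;
    SQL> COMMIT;
    SQL> QUIT;

Once the primary file fills to around 500,000 pages (a little under
the 2 GB wall at 4 KB pages), new pages go into mydb.fd2 and the
primary stops growing.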
>The Server should shutdown the database and return some error messages
>to the clients, but not corrupt the DB file!

The server has no way of knowing that it is corrupting the DB file
until after it has done it. The operating system manages the
filesystem; the DB server operates on chunks of disk (known as
pages) that it manages itself. The DB server requests more space
from the OS whenever it needs it and only *if* it needs it.
It keeps an inventory of its pages as transactions come and go and as
garbage collection frees up unused pages. Until the moment of the
actual write, it won't know whether it needs to request more space
from the OS, or whether the required number of free pages is already
available, or a mix of both.
>Why are there transactions etc.?

Transactions isolate *database* tasks from one another. Transaction
accounting keeps a record of what has happened and what is expected
to happen next - for hundreds, sometimes thousands, of operations that
the engine must treat as "interesting".
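That accounting is visible from the outside, which helps when you are
trying to work out what the engine thinks is going on. A sketch, with
an example path:

    # "Oldest transaction", "Oldest active" and "Next transaction" in the header
    # statistics are those counters; a wide gap between them means long-running
    # ("interesting") transactions are holding garbage collection back.
    gstat -h C:\data\mydb.fdb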
It is precisely because of the constantly shifting state of page
writes and page availability that the engine does not (cannot) take
upon itself the job of monitoring the external filesystem and the
OS's memory management system. It doesn't know or care how the OS
manages things: its job is to request disk space and RAM when it
needs them and to assume that it has those resources unless the OS
refuses to deliver them. It has to trust that the OS and the
filesystem are capable of delivering what it has been configured to expect.
For updates, a read-write transaction will hit the disk in
pairs: copying the delta of the currently committed row elsewhere
and then overwriting that space with the updated row. If multiple
rows are being updated by a transaction and/or if blobs are involved,
it will be hitting multiple pages. In the meantime, garbage
collection may be freeing up page space.
>Not to throw away what failed and restart at the last good point?

The engine *does* throw away what failed and revert to the last
committed state of the database - *each* time any transaction rolls
back. But the engine does not itself roll back transactions. If you
request it to write, it will try to do so and throw an exception if
it cannot carry out the request. But you will have big problems if
you allow external conditions to compromise what the db engine knows
about its state - such as misconfiguring the page cache, starving
memory, failing to configure sort space, allowing database files to
grow past filesystem limits, and so on.
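To make the division of labour concrete - the table and figures here
are hypothetical, only to show the shape of it:

    # An illustrative isql session against a made-up ACCOUNTS table.
    isql -user SYSDBA -password masterkey C:\data\mydb.fdb
    SQL> SET TRANSACTION;
    SQL> UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    SQL> UPDATE accounts SET balance = balance + 100 WHERE id = 2;
    SQL> ROLLBACK;
    SQL> QUIT;

If the second UPDATE fails, the engine raises an exception for that
statement but does not undo the first one; it is the client's ROLLBACK
that discards the lot and leaves readers seeing only the last
committed versions of those rows.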
The DB server is an application, not an operating system. It knows
what to do with the resources that it has control over. This control
is largely dependent on its having been configured with realistic
expectations of the resource capabilities of the host system.
Therefore, if you put your databases into an environment where
external conditions are capable of compromising the resources that
the DB engine is configured to assume it controls, then read "danger,
volatile gases". It's not going to be a question of *whether* the
database explodes, but *when*. If you include asynchronous writes
(Forced Writes off) in this explosive soup then the bomb is already ticking.
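And the Forced Writes half of that soup is a one-line repair - the
path and credentials below are examples:

    # sync = Forced Writes on, async = off. On Windows, leave it on.
    gfix -write sync -user SYSDBA -password masterkey C:\data\mydb.fdb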
./heLen