Subject Re: [firebird-support] Blob write to Embedded on Linux performance
Author Mike Ro
On 09/06/14 17:42, Olivier Mascia om@... [firebird-support] wrote:
> gfix -write async test.fdb -user sysdba -password secret

It, of course, has a tremendous impact on such tests. Still, everybody should carefully consider whether the risks of data corruption are acceptable (or not) before turning sync writes off. I know most people here do know this, but I wouldn't like you to be misled by this recent topic. There are a large number of past discussions in this list archive about this very subject; may I recommend reading them, even if not all of them apply as severely to recent Firebird versions as to older InterBase(R) editions.
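
For reference, forced writes can be switched back on and verified with the standard tools (the database name and credentials below are just the ones from the example above):

    # re-enable synchronous (forced) writes
    gfix -write sync test.fdb -user sysdba -password secret
    # verify: the header page attributes should list "force write"
    gstat -h test.fdb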

At the very least, to make it short, Firebird uses an ordered-writes technique to help preserve its on-disk structure in case of unexpected interruption. As soon as the OS is allowed to delay writes as it sees fit, that integrity guarantee is lost. And the OS can go quite far in not following the initial order of the writes: plain old scatter-gather techniques can optimize the writes in exactly such an unwanted way. So it is not just a matter of how much data may be lost by losing power right in the middle of some transactions, but of how far the whole database is still meaningful after such an event, knowing that the ACID rules may have been broken by the OS re-ordering individual writes belonging to various transactions.
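
To illustrate what "sync writes" mean at the OS level, here is a minimal C sketch (not Firebird's actual code; demo.dat is just a scratch file). With O_SYNC each write() returns only once the data has reached stable storage, so the first page is durable before the second is written; without it, the kernel may flush the pages in any order:

    /* sync_write_demo.c - ordered, synchronous writes via O_SYNC. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char page_a[4096], page_b[4096];
        memset(page_a, 'A', sizeof page_a);
        memset(page_b, 'B', sizeof page_b);

        /* O_SYNC: the OS may not delay or re-order these writes. */
        int fd = open("demo.dat", O_WRONLY | O_CREAT | O_SYNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        if (write(fd, page_a, sizeof page_a) < 0) perror("write A");
        /* page A is on stable storage before page B is even issued */
        if (write(fd, page_b, sizeof page_b) < 0) perror("write B");

        close(fd);
        return 0;
    }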

Careful backup plans or more advanced storage architectures (an async DB on synced volumes exposed from heavily caching SAN subsystems, for instance) are required to overcome the risks of integrity loss. The path to follow depends on how much unavailability of data (during restoration) or data loss is acceptable, and of course on the amount of money available for the implementation.
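
For the simple-backup side of that tradeoff, a nightly gbak cycle is usually enough (paths and credentials here are only placeholders):

    # back up the live database to a portable archive
    gbak -b -user sysdba -password secret /data/test.fdb /backups/test.fbk
    # restore it to a new file if the original is ever damaged
    gbak -c -user sysdba -password secret /backups/test.fbk /data/test_restored.fdb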

On simple setups, using FB sync writes on an SSD might be a decent tradeoff between integrity assurance and speed.



Thank you for pointing out the perils of disabling forced writes. I took your advice and spent some hours reading up on the subject, but found as many new questions as answers!

Another solution (for ext4 on Linux) is to set barrier=0 in /etc/fstab, but this exposes the whole partition to the peril rather than just the database.
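
For completeness, that would look something like this (the device and mount point are made up):

    # /etc/fstab - disables ext4 write barriers for everything on this partition
    /dev/sda3  /var/db  ext4  defaults,barrier=0  0  2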

We are only using the database for internal purposes (i.e. it's not widely deployed), but the data is important. Unfortunately I don't have control over the hardware, which is generally workstations and some old laptops, so I am not sure an SSD would be possible, but I agree it is very desirable.

As I mentioned earlier, I am curious why the UDF is so much quicker. This is still a possible solution for me, but I haven't worked out how to show progress whilst the file is 'uploading' yet. The usual way seems to be to push the blob in fixed-size segments and report after each one; see the sketch below.
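
A minimal C sketch of that loop, in case it helps anyone else. The do_put_segment() function is a placeholder for whatever actually writes a segment (e.g. isc_put_segment() in the ISC API); the file name comes from the command line:

    /* blob_progress.c - upload a file in segments, reporting progress. */
    #include <stdio.h>

    #define SEG_SIZE 32768  /* 32 KB per segment */

    /* Placeholder: in a real client this would be isc_put_segment() or the
       equivalent call of your access layer. Here it just discards the data. */
    static int do_put_segment(const char *buf, size_t len)
    {
        (void)buf; (void)len;
        return 0; /* 0 = success */
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        /* total size, for the percentage computation */
        fseek(f, 0, SEEK_END);
        long total = ftell(f);
        fseek(f, 0, SEEK_SET);

        char buf[SEG_SIZE];
        long done = 0;
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0) {
            if (do_put_segment(buf, n) != 0) {
                fprintf(stderr, "write failed\n");
                break;
            }
            done += (long)n;
            /* report after every segment */
            fprintf(stderr, "\r%3.0f%%",
                    total ? 100.0 * (double)done / (double)total : 100.0);
        }
        fprintf(stderr, "\n");
        fclose(f);
        return 0;
    }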