Subject Re: [firebird-support] Firebird in automation
Author Alexander Gräf
> -----Original Message-----
> From: Adomas Urbanavicius [mailto:adomas@...]
> Sent: Wednesday, December 1, 2004 15:07
> To: firebird-support@yahoogroups.com
> Subject: Re: [firebird-support] Firebird in automation
>
> If it is 24hrs work, then make, for example, 20 threads to do
> inserts of 50 records each (or any other desired number).
> If it is not 24hrs work and you don't like many threads :))),
> it should be OK to make 2 threads: one to write to a local queue,
> the other to deploy data from the queue to the database (FIFO).
> When the system stops filling data into the queue, the "reader"
> thread will empty the queue (for example during the night).
> If it is 24h work, then this queue would become bigger and
> bigger... :), never seeing the end.
> Adomas
>

Yes, the problem is that after several hours the queue could fill up the whole RAM -> the program fails to allocate more memory, it crashes, and data from several hours is lost. I think what Michelle wants is a way to get the records to disk fast enough. SQLite, for example, is able to insert thousands of records per second when done in one big transaction with cached writes. However, if the server fails while inserting the records, all data will be gone, because the transaction would be rolled back when the server restarts.
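Just to make that comparison concrete, here is a rough sketch of such single-transaction batching with the SQLite C API; the samples table and the bulk_insert() helper are invented for illustration, not taken from any real schema:

#include <sqlite3.h>

/* Insert `count` dummy samples inside a single transaction. Without the
   BEGIN/COMMIT wrapper every INSERT becomes its own transaction and the
   insert rate drops by orders of magnitude. */
int bulk_insert(sqlite3 *db, int count)
{
    sqlite3_stmt *stmt;

    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    if (sqlite3_prepare_v2(db, "INSERT INTO samples(value) VALUES (?)",
                           -1, &stmt, NULL) != SQLITE_OK)
        return -1;

    for (int i = 0; i < count; i++) {
        sqlite3_bind_double(stmt, 1, i * 0.5);  /* dummy sensor value */
        sqlite3_step(stmt);
        sqlite3_reset(stmt);
    }
    sqlite3_finalize(stmt);

    /* Everything becomes durable (or, after a crash, is lost) at this
       single point. */
    return sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
}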

Another option:
Open a file and simply append one struct after another as data arrives. A second thread can then read the records and insert them into the database in big chunks. If the application fails, MAX(Table.PK) can be used to seek to the first record in the file that has not yet been inserted into the database, and insertion continues up to the last complete record. A minimal sketch of this follows.
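This is only a sketch, assuming fixed-size records whose sequence number doubles as the table's primary key; the Record layout, the file name sensor.journal, and the get_max_pk_from_db() stub are made up for illustration, and the real version would run the SELECT MAX() and the INSERTs against Firebird:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical fixed-size record; the sequence number doubles as the
   table's primary key, so MAX(PK) tells us how far the file has already
   been drained into the database. */
typedef struct {
    uint64_t seq;        /* primary key value in the database       */
    int64_t  timestamp;  /* acquisition time (e.g. ms since epoch)  */
    double   value;      /* sensor reading                          */
} Record;

/* Writer thread: append one record and flush it towards the disk. */
static int append_record(FILE *journal, const Record *rec)
{
    if (fwrite(rec, sizeof *rec, 1, journal) != 1)
        return -1;
    return fflush(journal);  /* for real durability also fsync(fileno(journal)) */
}

/* Stub standing in for "SELECT MAX(pk) FROM Table" against Firebird. */
static uint64_t get_max_pk_from_db(void)
{
    return 0;  /* pretend the database is still empty */
}

/* Database thread after a crash: skip the records that already made it
   into the database, re-insert the rest, ignore a trailing partial record. */
static void recover(FILE *journal)
{
    uint64_t done = get_max_pk_from_db();
    Record rec;

    /* Fixed-size records => the first missing one sits at a known offset
       (seq values are assumed to start at 1 and be written in order). */
    fseek(journal, (long)(done * sizeof rec), SEEK_SET);

    while (fread(&rec, sizeof rec, 1, journal) == 1) {
        /* insert_into_db(&rec);  -- batched INSERTs, one big transaction */
        printf("re-inserting record %llu\n", (unsigned long long)rec.seq);
    }
    /* A short read here means the last record was cut off by the crash. */
}

int main(void)
{
    FILE *journal = fopen("sensor.journal", "w+b");
    if (!journal)
        return 1;

    for (uint64_t i = 1; i <= 3; i++) {
        Record rec = { i, 1101900000000LL + (int64_t)i, 23.5 };
        append_record(journal, &rec);
    }
    recover(journal);   /* normally only run at startup after a failure */
    fclose(journal);
    return 0;
}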

A memory queue would be necessary in any case, because enlarging the file could stall the writer for several milliseconds and thus prevent values from the sensor from being read. The main problem with Windows and Firebird is the lack of real-time capability: there is no guarantee of how long a write or read will take. Losing several values per second would be normal without a dedicated queue to cache the entries, and even with such a queue there can be losses.
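To sketch that decoupling, here is a rough bounded queue using POSIX threads; the capacity and the policy of counting dropped samples instead of blocking the sensor thread are arbitrary choices of mine, not anything prescribed by Firebird:

#include <pthread.h>
#include <stdint.h>
#include <string.h>

#define QUEUE_CAPACITY 4096   /* arbitrary; size it for the worst expected stall */

typedef struct {
    double   samples[QUEUE_CAPACITY];
    size_t   head, tail, count;
    uint64_t dropped;          /* samples lost because the queue was full */
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
} SampleQueue;

void queue_init(SampleQueue *q)
{
    memset(q, 0, sizeof *q);
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

/* Called from the sensor thread: must return quickly and never wait on
   disk. If the writer has stalled long enough to fill the queue, the
   sample is counted as dropped instead of blocking the reader. */
void queue_push(SampleQueue *q, double sample)
{
    pthread_mutex_lock(&q->lock);
    if (q->count == QUEUE_CAPACITY) {
        q->dropped++;
    } else {
        q->samples[q->tail] = sample;
        q->tail = (q->tail + 1) % QUEUE_CAPACITY;
        q->count++;
        pthread_cond_signal(&q->not_empty);
    }
    pthread_mutex_unlock(&q->lock);
}

/* Called from the file/database thread: blocks until a sample is
   available, so slow writes only delay this thread, never the sensor. */
double queue_pop(SampleQueue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    double sample = q->samples[q->head];
    q->head = (q->head + 1) % QUEUE_CAPACITY;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return sample;
}

The sensor thread would call queue_push() from its acquisition loop, while the writer thread loops on queue_pop() and appends to the journal file sketched above.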

Hope that helps, Alex