Subject: RE: [firebird-support] RE: At my wits end
Author: Alan McDonald
> As for my use of tps, I meant transactions per second. I guess I should
> really be saying inserts per second. I personally have tried commit
> batches from around 10 to 10,000 records. The C++ test code was
> committing every 1000 records.
>

OK Ed, I still have to make some assumptions:
1. The data is inserted from one client only (e.g. an administrator's
desktop) and not from any of the other clients.
2. The data has a relatively short life-cycle (it will possibly be deleted
after only a short while and new data will take its place).
3. The data being inserted undergoes relatively minor modification
during/after insertion, and its life is spent being dispensed to other
clients to report on or view against specified criteria, rather than being
endlessly edited and transformed.
Correct me if I'm wrong.

If indeed some of these assumptions are correct, then I would have to
wonder why you have chosen a fully transaction-based SQL engine. I don't
mean to belittle your choice, but if there is little in the way of
permanent data manipulation and/or permanence, I would probably have
chosen MySQL's classic ISAM tables for this job: no transaction overhead,
rapid insert rates, and plenty of VB support.

If, on the other hand, you do need long-term storage with continuing data
manipulation, then transactions may be necessary, and you may well need
plenty of disk space after 12 months of insertions.
Dan has already given you some stats which hold true. Might I also suggest
the CleverComponents DataPump, which can use OLEDB as one type of source;
I would think an OLEDB provider can read a CSV file. It is configurable,
the configuration can be saved, and you can adjust the number of inserts
per commit. It has always been fast for me, though I haven't timed it. It
is third-party, but I see you are already using third-party components
anyway. If even this does not run at the desired speed for you, then you
will have to re-think your model.
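
As an aside, here is a minimal sketch of the batched-commit insert loop we
are talking about (committing every 1000 records, as in your C++ test). It
assumes a Python DB-API driver for Firebird such as kinterbasdb; the table
name, column names and file name are hypothetical:

    import csv
    import kinterbasdb  # Firebird DB-API driver; assumed available

    BATCH = 1000  # records per commit, as in the C++ test

    con = kinterbasdb.connect(dsn='server:/data/mydb.fdb',
                              user='SYSDBA', password='masterkey')
    cur = con.cursor()

    with open('input.csv') as f:  # hypothetical CSV source
        for n, row in enumerate(csv.reader(f), 1):
            # LOG_DATA (TS, VALUE) is a hypothetical two-column table
            cur.execute("INSERT INTO LOG_DATA (TS, VALUE) VALUES (?, ?)",
                        row)
            if n % BATCH == 0:
                con.commit()  # end the transaction every BATCH rows

    con.commit()  # commit the final partial batch
    con.close()

The point is only that each commit ends one transaction, so fewer, larger
batches mean less transaction overhead per insert.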
Alan