Subject: Re: [firebird-support] Embedded Firebird : Transaction
Author: Nando Dessena
Post date: 2005-09-20T07:08:24Z
Boris,
BHS> I was looking for the SQL syntax, not API calls (similar to BEGIN/COMMIT
BHS> in Oracle, SQL Server, MySQL, ...).
no way. In Firebird you start transactions with API calls.
BHS> Alternatively, is there a way to enable write caching (what I'm
BHS> basically trying to achieve, is faster bulk insert), through
BHS> CachedUpdates or RequestLive ?
Aren't CachedUpdates and RequestLive API calls? :-)
BTW, if you need to achieve great performance for bulk inserts you
need to:
- use a prepared, parameterized INSERT statement.
- fill the parameter values by index/direct reference rather than by
name (whether this applies depends on the tools you are using to get
to the metal).
- commit every "n" inserts, where "n" is usually in the thousands.
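The three steps above can be sketched as follows. This is a minimal illustration using Python's DB-API with sqlite3 as a stand-in (a Firebird driver such as firebird-driver exposes the same cursor/execute/commit shape); the LOGS table and Id/Timekey columns are taken from the post, while the batch size and function name are illustrative choices.

```python
# Bulk-insert pattern: one prepared, parameterized statement,
# positional parameter binding, commit every few thousand rows.
# sqlite3 is used here only as a stand-in for a Firebird driver.
import sqlite3

BATCH_SIZE = 5000  # "n": commit every few thousand inserts

def bulk_insert(con, rows):
    cur = con.cursor()
    # One parameterized statement reused for every row; parameters
    # are bound positionally (by index), not by name.
    sql = "INSERT INTO LOGS (Id, Timekey) VALUES (?, ?)"
    for i, row in enumerate(rows, start=1):
        cur.execute(sql, row)
        if i % BATCH_SIZE == 0:
            con.commit()      # commit every BATCH_SIZE inserts
    con.commit()              # commit the final partial batch

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE LOGS (Id INTEGER PRIMARY KEY, Timekey INTEGER)")
bulk_insert(con, ((i, i * 10) for i in range(12000)))
```

With a real Firebird driver the commit boundary also bounds transaction size, which is the point of committing in batches rather than per row or once at the end.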
BHS> Also, I ran into a problem when creating indexes.
BHS> CREATE INDEX myindex ON LOGS (Id, Timekey);
BHS> Id is primary key, and timekey is a unique integer.
I'm not sure what gain you expect from such an index.
BHS> When inserting, I
BHS> get no errors so I think there's no duplicate. However, when creating
BHS> the index, I get :
BHS> ** IBPP::SQLException ***
BHS> Context: Statement::Execute( CREATE INDEX myindex2 ON LOGS (Id, Timekey) )
BHS> Message: isc_dsql_execute2 failed
BHS> SQL Message : -607
BHS> This operation is not defined for system tables.
BHS> Engine Code : 335544351
BHS> Engine Message :
BHS> unsuccessful metadata update
BHS> STORE RDB$INDICES failed
BHS> attempt to store duplicate value (visible to active transactions) in
BHS> unique index "RDB$INDEX_5"
BHS> Any ideas ?
Yes. The index is already defined.
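The failure mode is generic: issuing CREATE INDEX with a name that already exists is rejected by the engine. Firebird reports it indirectly, as the quoted duplicate-value error on the unique system index over RDB$INDICES; the sketch below reproduces the same situation with sqlite3 as a stand-in (which reports it more directly), reusing the index name from the post.

```python
# A second CREATE INDEX with the same name fails: the index is
# already defined. sqlite3 stands in for Firebird here; Firebird
# surfaces this as a duplicate row in the RDB$INDICES system table.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE LOGS (Id INTEGER PRIMARY KEY, Timekey INTEGER)")
con.execute("CREATE INDEX myindex2 ON LOGS (Id, Timekey)")
try:
    con.execute("CREATE INDEX myindex2 ON LOGS (Id, Timekey)")
    duplicate_rejected = False
except sqlite3.OperationalError:
    duplicate_rejected = True   # "index myindex2 already exists"
```

In Firebird you can check what is already defined by querying the RDB$INDICES system table before creating, or simply pick an unused index name.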
Ciao
--
Nando Dessena
http://www.flamerobin.org