Subject | Firebird/Internet performance questions
---|---
Author | David Johnson
Post date | 2004-01-14T03:17:28Z
Hello all. Thanks in advance for your help.
As an experienced IT person in a large company, I am accustomed to having an entire department whose sole concern is the performance of the corporate DBMS. However, I am working on a project of my own that will use InterBase/Firebird as the back-end persistence layer for an enterprise-scale distributed system.
Current environment: Delphi 6 build 6.163, SQLExpress, Firebird 1.5 RC8 with the default configuration.
My first operational persistence class for the framework uses Borland's SQLExpress. The test case code follows:
procedure TTestContact_001_Persistor.Persist;
var
  aTransDesc: TTransactionDesc;
begin
  aTransDesc := TContact_001_Persistor.getInstance.getTransactionDesc;
  TTestBed.getInstance.SQLConnection.StartTransaction(aTransDesc);
  try
    TContact_001_Persistor.getInstance.Persist(FTestMemento);
    TTestBed.getInstance.SQLConnection.Commit(aTransDesc);
  except
    TTestBed.getInstance.SQLConnection.Rollback(aTransDesc);
    raise;
  end;
  // Test is presumed to have succeeded if no exception is raised
end;
procedure TTestContact_001_Persistor.SimplePerformance;
const
  TEST_ROW_COUNT = 4000000;
var
  i: Integer;
  StartTime: TDateTime;
  FinishTime: TDateTime;
begin
  // Clear the void stamp; only the OID and create stamp vary per row
  FTestMemento.FVoid_Stamp.User_OID := EmptyGUID;
  FTestMemento.FVoid_Stamp.User_SID := 0;
  FTestMemento.FVoid_Stamp.Intermediary_OID := EmptyGUID;
  FTestMemento.FVoid_Stamp.Intermediary_SID := 0;
  FillChar(FTestMemento.FVoid_Stamp.Time, SizeOf(TSQLTimeStamp), 0);
  FTestMemento.FVoid_Stamp.MethodName := '';
  FTestMemento.FVoid_Stamp.ProgramName := '';

  StartTime := Now;
  for i := 1 to TEST_ROW_COUNT do   // exactly TEST_ROW_COUNT inserts
  begin
    CreateGUID(FTestMemento.FOID);
    FTestMemento.FCreate_Stamp.Time := DateTimeToSQLTimeStamp(Now);
    Persist;                        // one transaction and commit per row
  end;
  FinishTime := Now;

  // Deliberate "failure" so the elapsed time shows up in the test results
  Check(False,
    Format('Time to insert %d rows: %d days %s',
      [TEST_ROW_COUNT,
       Trunc(FinishTime - StartTime),
       FormatDateTime('hh:mm:ss.zzz', FinishTime - StartTime)]));
end;
Observations:
1. Completion of the insert performance test for 4,000,000 rows into a clean test database required 16:16:12 (roughly 68 rows per second) on an 800 MHz Win98 box with 512 MB RAM. Every row insert was committed. This is slower than I expected. I initially thought my disappointment was because I am used to my DBMS running on big iron (DB2 on large MVS systems), but even with M$ Access I have seen far more complex operations perform row inserts at twice that rate. I am certain that performance tuning would narrow the gap. How much improvement can I expect to see? (A prepared-statement, batched-commit variant I plan to compare against is sketched after this list.)
2. Even though every row insert was committed, when the test case was completed the DBMS maintains that the table is still "in use" as long as the connection is held. Is this a "bug" or a side effect of the optimization? I am curious.
It is a very low impact issue, since it is rare that you insert rows into a table and then drop the same table within milliseconds. The table drop is actually part of the test bed maintenance, to clean up the test database between runs of the test suite, so disconnecting and then reconnecting for the table drop is an acceptable solution.
3. I am accustomed to a "select count (*) from table" query taking virtually no time, regardless of the table size. Firebird appears to do a table space scan when you run this query. Is this a configurable option in Firebird? (A trigger-maintained counter table I am considering as a workaround is sketched after this list.)
4. I applied an index to the create stamp time field so I could readily identify the first and last records from the test run. After indexing, the first record was found rapidly, but the last record took several minutes to identify. With 4,000,000 records in a B+ tree on 8 KB pages, I believe I should have seen a maximum of about 40 milliseconds (3 page reads on the index, then 1 page read on the table space). Does anyone have any comments or suggestions? (A descending index I plan to try is sketched after this list.)
5. What sort of performance improvement can I expect to see if I switch to embedded SQL? I am targeting this system to support 8,000 concurrent users in an active environment. Much of the framework acts to hide the DBMS performance limits, but there are limits to how much it can hide. :o)
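For observation 1, here is a rough sketch of the variant I intend to compare against: a single prepared, parameterized INSERT reused for every row, with a commit only every few thousand rows instead of a commit per row. This is not the framework code above; the CONTACT table, its OID/CREATE_TIME columns, and the COMMIT_INTERVAL value are made-up placeholders.

// Sketch only (uses SysUtils, SqlExpr, DBXpress); names are placeholders.
procedure InsertManyContacts(Connection: TSQLConnection; RowCount: Integer);
const
  COMMIT_INTERVAL = 10000;                       // rows per transaction
var
  Query: TSQLQuery;
  TransDesc: TTransactionDesc;
  Guid: TGUID;
  i: Integer;
begin
  Query := TSQLQuery.Create(nil);
  try
    Query.SQLConnection := Connection;
    Query.SQL.Text :=
      'insert into CONTACT (OID, CREATE_TIME) values (:OID, :CREATE_TIME)';
    Query.Prepared := True;                      // prepare once, execute many times

    TransDesc.TransactionID := 1;
    TransDesc.GlobalID := 0;
    TransDesc.IsolationLevel := xilREADCOMMITTED;
    Connection.StartTransaction(TransDesc);
    try
      for i := 1 to RowCount do
      begin
        CreateGUID(Guid);
        Query.ParamByName('OID').AsString := GUIDToString(Guid);
        Query.ParamByName('CREATE_TIME').AsDateTime := Now;
        Query.ExecSQL;
        if i mod COMMIT_INTERVAL = 0 then
        begin
          Connection.Commit(TransDesc);            // commit a batch of rows
          Connection.StartTransaction(TransDesc);  // and open the next batch
        end;
      end;
      Connection.Commit(TransDesc);                // commit any final partial batch
    except
      Connection.Rollback(TransDesc);
      raise;
    end;
  finally
    Query.Free;
  end;
end;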
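For observation 3, my understanding so far is that this is not configurable: with Firebird's multi-version concurrency, every transaction may legitimately see a different row count, so count(*) has to visit the record versions rather than read a stored cardinality. The workaround I keep seeing suggested is a counter table maintained by triggers, which I may add to the test bed. A rough, untested sketch follows; the table, trigger, and column names are placeholders.

// Sketch only (uses SqlExpr); assumes no explicit transaction is open, so
// each DDL statement runs in its own implicit transaction.
procedure CreateRowCounter(Connection: TSQLConnection);
var
  Query: TSQLQuery;

  procedure Run(const Stmt: string);
  begin
    Query.SQL.Text := Stmt;
    Query.ExecSQL;
  end;

begin
  Query := TSQLQuery.Create(nil);
  try
    Query.SQLConnection := Connection;
    Run('create table CONTACT_COUNT (DELTA integer not null)');
    // Each insert/delete appends a +1/-1 row instead of updating a single
    // counter row, so concurrent transactions do not conflict with each other.
    Run('create trigger CONTACT_COUNT_AI for CONTACT after insert as ' +
        'begin insert into CONTACT_COUNT (DELTA) values (1); end');
    Run('create trigger CONTACT_COUNT_AD for CONTACT after delete as ' +
        'begin insert into CONTACT_COUNT (DELTA) values (-1); end');
  finally
    Query.Free;
  end;
end;

// The cheap count is then:  select sum(DELTA) from CONTACT_COUNT
// (with a periodic maintenance job to collapse the delta rows into one).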
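For observation 4, my working theory is that Firebird indexes are one-directional: the ascending index serves the "first record" (MIN) end quickly, but the "last record" (MAX) end falls back to a scan. The sketch below adds a descending index on the same column, which should make the MAX lookup cheap as well; the index, table, and column names are placeholders.

// Sketch only (uses SqlExpr); names are placeholders.
procedure AddDescendingIndex(Connection: TSQLConnection);
var
  Query: TSQLQuery;
begin
  Query := TSQLQuery.Create(nil);
  try
    Query.SQLConnection := Connection;
    Query.SQL.Text :=
      'create descending index IDX_CONTACT_CREATE_DESC on CONTACT (CREATE_TIME)';
    Query.ExecSQL;               // DDL; assumes no explicit transaction is open
  finally
    Query.Free;
  end;
end;

// select min(CREATE_TIME) from CONTACT  -- served by the ascending index
// select max(CREATE_TIME) from CONTACT  -- can now walk the descending index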
Thanks