Subject: Very fast read-only transactions
Author: Jeff Dunlap
Last year I wrote my first FB application (a web application), and it suffers a short delay each time it selects data to display.

The bottleneck was that I would connect to the database, select my data, then disconnect.

Yesterday, I began writing a test application that obtains data the very same way as described above. I noticed that the connect process is very expensive: too expensive to use in a web application where near-instant results are desired.
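To put a number on that overhead, a generic timing helper can wrap any callable; this is a sketch of my own (the helper name and structure are not from the test program), where in the real measurement each run would be the full DatabaseFactory/Connect/select/Disconnect cycle:

    #include <chrono>
    #include <functional>

    // Measure the average wall-clock cost of `op` over `n` runs, in
    // milliseconds. Hypothetical helper: pass the connect-select-disconnect
    // sequence as `op` to see the per-request connect cost.
    double average_ms(int n, const std::function<void()>& op)
    {
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < n; ++i)
            op();
        auto stop = std::chrono::steady_clock::now();
        std::chrono::duration<double, std::milli> elapsed = stop - start;
        return elapsed.count() / n;
    }

Comparing the average for connect-per-request against the average for a reused connection makes the difference concrete.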

Today I wrote a second test which, in order:
  - is executed and creates a single instance of the database
  - connects (and stays connected indefinitely)
  - creates a single transaction instance
  - starts the transaction
  - creates a statement instance
  - prepares the SQL statement

    Then, in a continuous loop of 1 million iterations:
  - sets the 'PRIMARYKEY=?' parameter to the next record to select
  - executes the prepared SQL statement
  - fetches the data and assigns it to application variables

The loop above periodically spits out verification data and the results are good.  What's impressive is that the system accurately performs about 3000 unique selects per second on my Pentium D workstation, meaning that a web application should have instant access to data and no longer have to wait for the connect process.
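At 3000 unique selects per second, each select costs about a third of a millisecond. Two small helpers (the function names are mine, not part of the test program) show how those figures fall out of a timed run:

    #include <chrono>

    // Throughput given a select count and elapsed wall-clock time.
    double selects_per_second(long long count, std::chrono::duration<double> elapsed)
    {
        return count / elapsed.count();
    }

    // Average latency per select, in milliseconds.
    double ms_per_select(long long count, std::chrono::duration<double> elapsed)
    {
        return 1000.0 * elapsed.count() / count;
    }

For example, 3000 selects in one second gives 3000 selects/second, i.e. roughly 0.33 ms per select, which is why the page no longer feels delayed.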

Here is the C++ code that demonstrates the test:

#include <iostream>
#include <string>

#include "ibpp.h"   // IBPP: the C++ client interface to Firebird

using namespace std;

class ro_db_server
{
public:
  ro_db_server(const string& server_name, const string& database_name,
               const string& user_name, const string& user_password)
  {
    db = IBPP::DatabaseFactory(server_name, database_name, user_name, user_password);
    db->Connect();
    tr = IBPP::TransactionFactory(db, IBPP::amRead, IBPP::ilReadCommitted);
    tr->Start();
    st = IBPP::StatementFactory(db, tr);
  }

  IBPP::Database db;
  IBPP::Transaction tr;
  IBPP::Statement st;
};

// UserName and Password are defined elsewhere in the application.
void class_test(int iterations, bool display_records = false)
{
  string sFirstName, sLastName;

  ro_db_server roServer("CORE-DUO", "test", UserName, Password);

  // The first part of the SQL string was lost in transit;
  // only the WHERE clause survived.
  string sSQL = "... "
                "WHERE PRIMARYKEY=?";
  roServer.st->Prepare(sSQL);

  int n1 = 0;
  int n2 = 0;

  for (int i = 1; i < iterations; i++)
  {
    sFirstName = "";
    sLastName  = "";

    roServer.st->Set(1, ++n2);
    roServer.st->Execute();

    if (roServer.st->Fetch())
    {
      roServer.st->Get(1, sFirstName);
      roServer.st->Get(2, sLastName);
    }

    if (display_records)
      cout << n2 << ". " << sFirstName << " " << sLastName << endl;

    // display the very last 257 recs to make sure results are accurate
    if (i > iterations - 257)
      cout << n2 << ". " << sFirstName << " " << sLastName << endl;

    if (++n1 > 1000)
    {
      cout << "Processed: " << i << endl;
      n1 = 0; // reset
    }

    if (n2 == 257)
      n2 = 0; // reset
  }
}

Please share your thoughts about this. Although the routine above is very processor-friendly and does not leak memory or cause FB to leak memory, I would like to know whether any problems may occur with read-only connections that are kept open indefinitely.

Also, I'd like to know whether the steps above (pseudo-code, for those not familiar with C++) might cause unforeseen problems.
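One risk with an indefinitely open connection is that the server drops it (server restart, network hiccup), after which every Execute fails. A hedged sketch of a generic retry wrapper; the wrapper itself is my own, and the IBPP recovery steps appear only as comments because the exact sequence depends on the application:

    #include <functional>
    #include <stdexcept>

    // Run `op`; if it throws, call `reconnect` once and retry.
    // With IBPP, `reconnect` would roughly be: db->Disconnect();
    // db->Connect(); tr->Start(); st->Prepare(sSQL); (assumption,
    // sketched from the class above, not tested here).
    template <typename T>
    T with_retry(const std::function<T()>& op, const std::function<void()>& reconnect)
    {
        try {
            return op();
        } catch (const std::exception&) {
            reconnect();
            return op();   // a second failure propagates to the caller
        }
    }

Something along these lines keeps the fast path unchanged while giving the long-lived connection a way to recover instead of failing every subsequent request.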
