Subject: Very fast read-only transactions
Author: Jeff Dunlap
I apologize for sending this message to the wrong group earlier; I meant to post it here.

Last year I wrote my first FB application (a web application), and it suffers a short delay every time it selects data to display.

The bottleneck was that I would connect to the database, select my data, and then disconnect on every request.

Yesterday I began writing a test application that obtains data in the very same way as described above, and I noticed that the connect process is very expensive: too expensive for a web application where nearly instant results are desired.
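
To make the problem concrete, the per-request pattern looked roughly like this (just a sketch, written with the same IBPP calls as the test code further down to illustrate the pattern; it is not the actual web application code):

#include <string>
#include "ibpp.h"

// Every request pays for a full connect, transaction start and disconnect.
std::string slow_select(const std::string& user, const std::string& pass, int key)
{
  std::string first, last;

  IBPP::Database db = IBPP::DatabaseFactory("CORE-DUO", "test", user, pass);
  db->Connect();                                     // expensive, every time

  IBPP::Transaction tr = IBPP::TransactionFactory(db, IBPP::amRead, IBPP::ilReadCommitted);
  tr->Start();

  IBPP::Statement st = IBPP::StatementFactory(db, tr);
  st->Prepare("SELECT FIRSTNAME, LASTNAME FROM LEADS WHERE PRIMARYKEY=?");
  st->Set(1, key);
  st->Execute();
  if (st->Fetch())
  {
    st->Get(1, first);
    st->Get(2, last);
  }

  tr->Commit();
  db->Disconnect();                                  // connection thrown away
  return first + " " + last;
}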

Today I wrote a second test which (in order):
  - the application starts and
  - creates a single database instance, then
  - connects (and stays connected indefinitely)
  - creates a single transaction instance
  - starts the transaction
  - creates a statement instance
  - prepares the SQL statement:
    SELECT FIRSTNAME, LASTNAME FROM LEADS WHERE PRIMARYKEY=?

    Then, in a continuous loop of 1 million iterations, it:
  - sets the 'PRIMARYKEY=?' parameter to the next record to select
  - executes the already prepared SQL statement
  - fetches the data and assigns it to application variables

The loop above periodically prints verification data, and the results are correct.  What's impressive is that the system performs about 3,000 unique selects per second on my Pentium D workstation, which means a web application should have nearly instant access to data and no longer has to wait for the connect process.
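
If anyone wants to check the rate themselves, a timing wrapper along these lines is enough (just a sketch around the same Set/Execute/Fetch sequence as in the full code below; it is not the measurement code I actually used):

#include <chrono>
#include <iostream>
#include <string>
#include "ibpp.h"

// Sketch: time 'iterations' selects through an already prepared statement.
// 'st' is assumed to be the shared statement with the SELECT prepared.
void time_selects(IBPP::Statement st, int iterations)
{
  std::string first, last;
  auto t0 = std::chrono::steady_clock::now();

  for (int i = 1; i <= iterations; i++)
  {
    st->Set(1, (i % 257) + 1);   // cycle through the same 257 keys
    st->Execute();
    if (st->Fetch())
    {
      st->Get(1, first);
      st->Get(2, last);
    }
  }

  double seconds = std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
  std::cout << iterations / seconds << " selects per second" << std::endl;
}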

Here is the C++ code that demonstrates the test:

#include <iostream>
#include <string>

#include "ibpp.h"

using namespace std;

// UserName and Password are assumed to be defined elsewhere in the application.

// Holds one database connection, one read-only read-committed transaction and
// one statement, all created once and reused for every select.
class ro_db_server
{

public:

  ro_db_server(const string& server_name, const string& database_name,
               const string& user_name, const string& user_password)
  {
    db = IBPP::DatabaseFactory(server_name, database_name, user_name, user_password);
    db->Connect();
    tr = IBPP::TransactionFactory(db, IBPP::amRead, IBPP::ilReadCommitted);
    tr->Start();
    st = IBPP::StatementFactory(db, tr);
  }

  ~ro_db_server()
  {
    // End the transaction and drop the connection when the object goes away.
    // IBPP calls can throw, so don't let an exception escape the destructor.
    try
    {
      tr->Commit();
      db->Disconnect();
    }
    catch (...)
    {
    }
  }

  // The statement is public so callers can prepare and execute through it;
  // the connection and transaction are owned by this class.
  IBPP::Statement st;

private:
  IBPP::Database db;
  IBPP::Transaction tr;

};

// Runs 'iterations' single-row selects through one shared connection,
// transaction and prepared statement.
void class_test(int iterations, bool display_records = false)
{
  string sFirstName, sLastName;

  // "CORE-DUO" is the server name and "test" the database.
  ro_db_server roServer("CORE-DUO", "test", UserName, Password);

  std::string sSQL = "SELECT FIRSTNAME, LASTNAME FROM LEADS "
                     "WHERE PRIMARYKEY=?";

  // Prepare once; only the parameter value changes inside the loop.
  roServer.st->Prepare(sSQL);

  int n1 = 0; // progress counter
  int n2 = 0; // primary key value, cycles through 1..257

  for (int i = 1; i <= iterations; i++)
  {
    sFirstName = "";
    sLastName  = "";

    // Bind the next key and run the already prepared statement.
    roServer.st->Set(1, ++n2);
    roServer.st->Execute();

    if (roServer.st->Fetch())
    {
      roServer.st->Get(1, sFirstName);
      roServer.st->Get(2, sLastName);
    }
    if (display_records)
      cout << n2 << ". " << sFirstName << " " << sLastName << endl;


    // display the very last 257 recs to make sure results are accurate
    if (i>iterations-257)
      cout << n2 << ". " << sFirstName << " " << sLastName << endl;

    // progress message every 1000 selects
    if (++n1 >= 1000)
    {
      cout << "Processed: " << i << endl;
      n1 = 0; // reset
    }

    // cycle back to the first key after 257 rows
    if (n2 == 257)
    {
      n2 = 0; // reset
    }

  }
}
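
For completeness, something as simple as the following main(), appended to the code above, is enough to drive the test (pick whatever iteration count you like):

// Minimal driver for class_test above.
int main()
{
  try
  {
    class_test(1000000);            // one million single-row selects
  }
  catch (IBPP::Exception& e)
  {
    cout << "Error: " << e.ErrorMessage() << endl;
    return 1;
  }
  return 0;
}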

Please share your thoughts about this. 

Although the routine above is very processor-friendly and does not leak memory or cause FB to leak memory, I would like to know whether any problems can occur with a read-only connection that is kept open indefinitely.
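
If a transaction that never ends does turn out to be a problem, I assume it could be committed and restarted every so often without dropping the connection, along these lines (untested sketch, hypothetical helper name):

#include <string>
#include "ibpp.h"

// Untested sketch: end the current read-only transaction and start a fresh
// one on the same connection, e.g. every N selects or every few minutes.
void refresh_transaction(IBPP::Transaction tr, IBPP::Statement st, const std::string& sql)
{
  tr->Commit();        // finish the old transaction
  tr->Start();         // start a new one; the connection stays open
  st->Prepare(sql);    // re-prepare defensively; I am not sure this is even needed
}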

Since I don't know a whole lot about SQL and FB, I would sincerely appreciate someone telling me whether the Firebird steps above (the pseudo-code for those not familiar with C++) are valid and will not cause unforeseen problems.

In a nutshell, can I go ahead and do what I am doing, that is, create a single database, a single transaction, and a single statement and reuse them indefinitely?  Like I said, I did not experience any adverse effects in my test.




