Subject: Performance expectations and interpreting statistics
Author: Bernard Devlin
I'm basically a novice when it comes to SQL. However, I've
pre-populated the db behind my web app with test data (the largest
table has about 31,000,000 records). When I do a fairly complex
analysis (involving aggregation by day) using this table and several
others, I get response times in isql of about 0.6 seconds; a
simplified sketch of the query follows the stats below. Typically, I
see stats like these:

Records affected: 181
Current memory = 84896768
Delta memory = 0
Max memory = 85038184
Elapsed time= 0.70 sec
Cpu = 0.01 sec
Buffers = 10000
Reads = 0
Writes = 0
Fetches = 238910
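
To give an idea of the shape of that analysis, here is a simplified
sketch of the kind of day-by-day aggregation I mean (the column names
and the second table are made up for illustration; the real query
joins several tables):

SQL> select e.event_date, t.type_name, count(*)
CON> from events e
CON> join event_types t on t.type_id = e.type_id
CON> where e.event_date between '2003-01-01' and '2003-03-31'
CON> group by e.event_date, t.type_name
CON> order by e.event_date;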

SQL> select count(*) from events;

       COUNT
============
    31182225

Records affected: 1
Current memory = 84905984
Delta memory = 9216
Max memory = 85041240
Elapsed time= 116.81 sec
Cpu = 6.88 sec
Buffers = 10000
Reads = 152925
Writes = 0
Fetches = 62670274

I don't do any count(*) in my code - this is just here to show what
happens when I do. Am I right in thinking that, in the first set of
stats above, "Fetches" is the number of rows found in the cache?
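
(For reference, the statistics above are what isql prints when SET
STATS is turned on; turning on SET PLAN as well shows the access plan
the optimizer chooses for each query:)

SQL> set stats on;
SQL> set plan on;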

This is on a Celeron 2.0 GHz machine running Red Hat 7.1; the db
cache is about 80 MB. The db size (over 2 files) is approximately
2.5 GB, and forced writes are on.
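
(I assume the cache figure follows from Buffers = 10000: if the page
size is 8 KB, that's 10,000 x 8,192 bytes, i.e. roughly 80 MB, which
matches the "Current memory" figures above.)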

Getting a result set such as the one described in my first paragraph
takes only about 15% of the time spent constructing the page that
presents these results to the user. So I don't see this as an area
where I'm going to get much improvement. But I'm wondering whether I
should be expecting more performance from my queries, or whether this
kind of query is already performing fairly optimally.

Strangely enough, I'm using ODBC to connect to the db, and the
connection time is basically insignificant (the db and web server are
on the same box). This flies in the face of everything I've heard
about how expensive connecting to relational databases is.

Regards,
Bernard