Subject: Optimizing table size, is it worth it?
Author: car.toper
I am new to both my employer and the database they use, InterBase. I know this is a Firebird forum, but I tried posting to the Borland newsgroup yesterday and then realized this morning that it is not very active, no posts in something like two months! So I found you all. It is my understanding that Firebird is the open-source descendant of InterBase, so I hope I am not out of line asking this question:

We are using InterBase 7, and I have heard from co-workers that as the database grows in size (300 MB to 400 MB), performance really takes a hit. I am currently adding a whole new feature to the application, so I will be adding a bunch of new tables.

I am wondering... is it worthwhile to optimize tables to fit into pages? My thought is that if you had a simple table:

id integer
name char(10)

Rather than leaving the name at 10 characters, round it up to 12 so the row is 16 bytes and fits nicely into a sector. The only catch is that most of my tables will have varchars with lengths over 50. Would that nullify such an optimization or not? Here is an example:

id integer
name char(40)
desc varchar(60)

My thought is that by rounding desc up to 84, the row will max out at 128 bytes and, again, always fit nicely into a page.
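
In DDL terms, the padded version of that second table would look something like this (just a sketch to show what I mean; the table name is made up, and I renamed desc to descr since DESC is a reserved word):

CREATE TABLE padded_example (
    id    INTEGER,
    name  CHAR(40),
    descr VARCHAR(84)  /* rounded up from 60 so the row tops out around 128 bytes */
);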

Also, does the placement of varchars have a large impact on performance? All my tables have updatedby, updatedon, insertedby, and insertedon fields. Should these fixed-length fields always come before the variable-length fields, or will the engine handle that automatically? The sketch below shows the kind of ordering I mean.
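
Here the fixed-length audit columns are grouped ahead of the varchars (the types and sizes are just guesses for illustration):

CREATE TABLE sample_audit (
    id         INTEGER,
    insertedon TIMESTAMP,   /* fixed-length columns first... */
    updatedon  TIMESTAMP,
    insertedby CHAR(20),
    updatedby  CHAR(20),
    name       CHAR(40),
    descr      VARCHAR(84)  /* ...variable-length columns last */
);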

Are there things I should be keeping in mind as I create new tables to help performance?

Cartoper