Subject | Re: [ib-support] Sizing the Server |
---|---|
Author | Kaputnik |
Post date | 2001-12-31T15:11:38Z |
Well,
I just ran a test-data generator, and the table below with 2 million records takes
up almost exactly 157 MB on my disks...
Keep in mind that file size grows pretty linearly for simple workloads, but open
transactions and other things can make the database grow far faster in the worst
case.
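As a rough sanity check on those figures (a back-of-the-envelope sketch in Python; the extrapolation only holds for the simple, linear case described above):

```python
# Per-row cost derived from the measured figures above:
# 2,000,000 records took up about 157 MB on disk.
records = 2_000_000
size_bytes = 157 * 1024 * 1024
bytes_per_row = size_bytes / records  # about 82 bytes per row

# Linear extrapolation (simple workloads only, as noted above):
for n in (50_000, 500_000, 10_000_000):
    print(f"{n:>10,} rows -> ~{n * bytes_per_row / 1024**2:,.0f} MB")
```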
Running updates in a DB with other open transactions will create new versions of
records, keeping the old ones in the DB until garbage collection can finally
delete them. Unused disk space is not released, so a 2 GB database will still be
2 GB in size even after you delete all records from all tables.
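To put numbers on that worst case (a toy calculation; the row size and update count here are assumptions, not measurements): every update made while an older transaction is still open leaves another record version in the file, so the file keeps growing even though the live data stays the same size.

```python
# Worst-case version accumulation: each update made while an older
# transaction stays open leaves the previous record version in the file
# until garbage collection can remove it.
row_bytes = 80        # assumed average row size
rows = 2_000_000
updates_per_row = 5   # assumed updates while an old transaction stays open

live_mb = rows * row_bytes / 1024**2
worst_case_mb = live_mb * (1 + updates_per_row)  # old versions pile up
print(f"live data ~{live_mb:.0f} MB, worst case on disk ~{worst_case_mb:.0f} MB")
```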
At the uni, our main FB server is a Pentium III 500 with 1 GB RAM and 8
LVD-SCSI disks. One disk holds the system, one the pagefile, one is for FB
temporary files, and the rest form a RAID 5 for the database (although the DB
itself is only 1.2 GB in size).
It works pretty well for up to 25 concurrent users at our site, and the
response times are pretty good. The DB itself runs quite a few SPs and
triggers, and the 5-10 concurrent users we typically have do not load the
machine up to its max...
We will switch the FB server to a dual Xeon machine next year, but not out of
need. The old server is simply old and the leasing contract is paid off, so we
are shutting it down and another chair at the uni with less money will get it.
I tend to say that sizing an FB server is not about raw CPU power, and not
even about raw RAM, but about the speed and number of your disks.
As IB does not really use much RAM, you will be fine with a 512 MB machine in
most cases. Only FB Classic will make use of more than one CPU, so raw
processing power is not the biggest factor. Of course a Xeon will run better
than a Pentium, and the Athlon's architecture will also make it faster than a
Pentium 4 at a lower clock speed, so a nice Athlon XP or a nice Xeon 1000 will
be a good choice for SuperServer.
The biggest issue for IB is the speed and number of your disks. Make them RAID
and make them plentiful. The faster your disk throughput, the faster your DB.
Oracle or DB/2 can cache query plans and can even share result sets of queries
between users. If, e.g., a user requests a result with several joins, ORDER BY
clauses and so on, DB/2 will do the reading only once, and the next user with
the same query will simply get the shared result set. This needs extensive
caching on the server. FB cannot do such things, so running the same queries
over and over will indeed use the few pages of cache it has, but joins and
other operations are always rebuilt. So disks are the main factor for a fast
server.
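A minimal sketch of what such result-set sharing amounts to (the names here are hypothetical illustration, not any real Oracle/DB2 API): the server keys finished result sets by query text, so a second identical query is answered from memory instead of redoing the reads and joins.

```python
# Hypothetical illustration of server-side result-set sharing: identical
# query texts are answered from a cache instead of being re-executed
# (re-read, re-joined, re-sorted) each time.
class ResultCache:
    def __init__(self):
        self._cache = {}
        self.misses = 0  # counts how often the expensive work actually ran

    def run(self, sql, execute):
        # 'execute' stands in for the expensive read/join/sort work.
        if sql not in self._cache:
            self.misses += 1
            self._cache[sql] = execute(sql)
        return self._cache[sql]

cache = ResultCache()
query = "SELECT name FROM tablex ORDER BY name"
first = cache.run(query, lambda q: ["Alice", "Bob"])   # does the real work
second = cache.run(query, lambda q: ["Alice", "Bob"])  # served from cache
print(cache.misses)  # the join/sort ran only once
```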
These are only hints. There are far more knowledgeable people on this list who
are more experienced with sizing servers, and they will surely give you more
hints :-)
CU, Nick
"Paul Schmidt" <paul@...> wrote in message
news:3C301FC3.23301.585EE@localhost...
> Hello List:
>
> I trust everyone had a good Christmas, and is prepared for Y2002,
> man 2002 already, time flies...
>
> Does anyone have detailed info on sizing a server for FB? For
> example for so many connections you should have so much ram
> available, etc. How about disk space, if I have a table with the
> following structure:
>
> CREATE TABLE TABLEX (
> TABLE_ID INTEGER,
> NAME VARCHAR(50),
> AMOUNT DOUBLE PRECISION DEFAULT 0.0 NOT NULL,
> CONSTRAINT TABLEX_PK PRIMARY KEY (TABLE_ID));
>
> Is there any way to determine roughly how much 50,000 records
> will take up, how about 500,000 or 10,000,000 will take up on disk?
> Ball park is okay, within 10% say? How about indexes and the
> like?
>
> Thanks
>
> Paul