Subject | 64 bit I/O- Was: Test results: JayBird vs Interclient on blob select on Linux and Windows |
---|---|
Author | Rick Fincher |
Post date | 2003-03-04T18:08:59Z |
Hi Sergei,
The 64 bit version will be slower on 32-bit CPUs because the system
can't do the 64-bit integer math associated with block manipulations in a
single CPU register. That might be mitigated somewhat by a math coprocessor
that can do 64-bit integer operations, assuming the compiler the code is
built with supports that.
The 64 bit I/O refers to the file sizes being manipulated. With 32-bit
file I/O the maximum size of any one database file was 2 gigabytes. Most
big databases could be broken up into multiple files, and probably should be
broken up for other reasons, but some people these days have huge databases
that work better if a single database file can be bigger than 2 GB.
This is a presumption on my part, because the 64-bit Sparc processors in the
newer Suns gave a big boost to databases like Oracle.
Firebird on a 64-bit CPU with 64-bit file I/O, like the Sparc II or III,
should suffer no speed penalty.
Maybe Helen or Ann can tell us if my presumption is correct?
A note on Hyper-Threading: We run some floating-point-intensive simulations
on our P4 HT machines. Running the simulations individually without HT
takes about 4 minutes, so two sequential runs took 8 minutes. With HT
turned on, two runs started simultaneously completed in 5 minutes. Single
runs still completed in slightly more than 4 minutes. This was almost pure
floating point, so I'm sure other types of apps will vary.
Rick
----- Original Message -----
> Hi Rick,
>
> > It looks like they may have isolated the problem with your stuff. It
> > also looks like you are using the 64 bit file I/O Firebird on Red Hat
> > and 32 bit on the others, so that will slow Red Hat down too.
> Why so? I thought that it would be faster with 64 bit file I/O than
> with 32 bit.
> It was my intuitive decision to install the 64 bit I/O version of FB CS.
> Well, I should have dug into this problem before installing it. I'll
> think it over again.
>
> > Firebird (either SS or CS) should run faster for multiple users with
> > Hyper-Threading turned on because the threads or processes will
> > essentially have more CPUs to run on.
> This is valuable. I expected that and I'm glad to hear confirmation of it.
>