Subject: Re: [firebird-support] Scalability principles: Can someone confirm or correct ...
Author: Dalton Calford
Post date: 2005-02-09T01:38:51Z
David,
Have you read the "Some Solutions to Old Problems" document? Helen
Borrie was kind enough to edit a bunch of my ponderings on the subject of
scalability.
It covers some hardware and software design criteria depending upon the
needs of the user. That document is getting dated, but it does cover
some of your requirements.
I did respond to your earlier email. I stated what sort of environment
I work and develop in. I have real-world experience managing terabytes
of data in intensive work environments with many concurrent users - all
with InterBase 5.6, Firebird 1.0 and Firebird 1.5.x.
I know the capabilities of the system and its limitations, and I can say
that your estimates are out to lunch. The CPU and memory estimates are
totally wrong, and the underlying storage subsystem and its design are
more important than the CPU or RAM. SAN storage is too slow for
high-performance systems, while RAID has both benefits and drawbacks.
You are asking for information about hardware scalability when I have
seen the simple elimination of auto-generated domains make a significant
performance difference.
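To illustrate what I mean (the names here are only examples, not from any
real schema): when you declare column types inline, the engine quietly
creates an anonymous RDB$nn system domain for each such column; declaring
explicit, reusable domains avoids that.

    -- Inline declaration: the engine creates an RDB$nn domain per column.
    -- CREATE TABLE ORDERS (ORDER_TOTAL NUMERIC(18,2) NOT NULL);

    -- Explicit, reusable domain instead:
    CREATE DOMAIN D_MONEY AS NUMERIC(18,2) NOT NULL;

    CREATE TABLE ORDERS (
        ORDER_TOTAL D_MONEY
    );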
Scalability has very little to do with hardware. It has to do with
system design. A proper design will allow a system to scale into uses
not envisioned by the original design team. A simple design rule that
specifies that all primary keys are to be surrogate keys, and that no
links are to be based upon user-viewed or user-modifiable data, will give
amazing growth capabilities. The rule that no end-user application may
touch a table directly, and that all data is to be modified via stored
procedures, allows new functionality to be added to the database without
any fear of having an impact on legacy client applications.
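As a rough sketch of those two rules in Firebird SQL (the generator, table
and procedure names below are hypothetical, only there to show the shape
of it):

    -- Surrogate key: a generator-driven ID that no user ever sees or edits.
    CREATE GENERATOR GEN_CUSTOMER_ID;

    CREATE TABLE CUSTOMER (
        CUSTOMER_ID  BIGINT NOT NULL PRIMARY KEY,  -- surrogate key, used for all links
        CUST_CODE    VARCHAR(20),                  -- user-visible, never used for links
        CUST_NAME    VARCHAR(60)
    );

    -- Clients never touch CUSTOMER directly; they call a procedure instead,
    -- so the table can later be split, renamed or audited without breaking them.
    SET TERM ^ ;
    CREATE PROCEDURE SP_INSERT_CUSTOMER (
        P_CUST_CODE VARCHAR(20),
        P_CUST_NAME VARCHAR(60))
    RETURNS (NEW_ID BIGINT)
    AS
    BEGIN
        NEW_ID = GEN_ID(GEN_CUSTOMER_ID, 1);
        INSERT INTO CUSTOMER (CUSTOMER_ID, CUST_CODE, CUST_NAME)
            VALUES (:NEW_ID, :P_CUST_CODE, :P_CUST_NAME);
    END^
    SET TERM ; ^

A client then calls EXECUTE PROCEDURE SP_INSERT_CUSTOMER ('C-1001', 'Acme Ltd')
and gets the new surrogate key back, while the table itself stays private
to the database.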
Please specify exactly what you need to know. If you want to know what
hardware is needed, give me the current needs and I can specify the
hardware. When the needs change, you can easily buy different hardware
and migrate a good database design onto the new platform without a
problem. If you have a bad design, no amount of hardware is going to
make it sing.
Now, as to multi-threading and symmetrical multi-processing, well, that
is as flexible as the hardware is. You decide upon the environment that
meets the current needs.
If the needs change, a good design will migrate without any changes - it
will simply work.
Now, do you begin to understand why your questions about scalability are
meaningless? Any answer given will only apply to a small, very select
set of environments, and that will be misleading to anyone who is not in
that small group.
Whereas a question about how to design scalable systems that can move
between different architectures without a problem - that sort of
information is useful.
best regards
Dalton
PS: our largest system has 4 processors and 6 GB of RAM, and that
system processes more records in an hour than most systems process in a
year...
David Johnson wrote:
>Although no one has responded directly to my original question about
>scalability, there have been a number of answers posted to other
>questions that pertain to my question, and articles on the web that
>indirectly touch on it. Can someone confirm or correct what I think I
>am learning?
>
>The "classic" model defines the heavy weight vertically scalable
>configuration for firebird. It is most appropriate for scaling for
>environments that are architecturally similar to the original VAX
>cluster, such as some recent developments in the Opteron world. It is
>also more appropriate where: higher performance is required, the number
>of concurrent connections is on the same order as the number of CPU's,
>and there is lots of available memory to run the heavyweight connection
>processes.
>
>The "superserver" model defines a light weight scaling configuration
>that is most appropriate for environments that are architecturally more
>similar to the Intel hyperthreading model, or at least dis-similar to
>the VAX cluster architecture. Superserver will allow a well built
>application to function against a lighter weight server instance, with
>only a limited performance penalty. Superserver is less robust, and
>demands that application code pay more attention to thread safety since
>connections and connection resources tend to be pooled.
>
>Based on these principles, recent notes in the mailing list that a
>low-end server-class machine can safely handle 50 users with 2 CPUs and
>2 GB of memory, and empirical knowledge of some relative platform
>scales, I have some preliminary "guesstimates" for scaling the classic
>model:
>
>user count     CPU count/arch   RAM size     OS              Storage
>50 users       2 / Intel        2 GB RAM     Linux, Windows  RAID
>100 users      2 / IBM PPC4     4 GB RAM     Linux, AIX      RAID
>500 users      6 / IBM PPC4     6 GB RAM     Linux, AIX      SAN
>1000 users     16 / IBM PPC4    8 GB RAM     Linux, AIX      SAN
>10000 users    ? / ES9000*?     128 GB RAM   Linux, AIX      SAN
>
>* Note that scaling to the ES/9000 may not be at all straightforward