Subject | RE: [firebird-support] Re: Transaction ID
---|---
Author | David Johnson
Post date | 2006-03-14T03:16:54Z
This type of usage requires heavy transaction processing power and fast
buses. You don't find it on Wintel-class hardware, even clustered. You
won't even find it on midrange hardware. Each "CPU" on these boxes is
actually an array of CPUs that is comparable in actual throughput to an
array of ten 2 GHz P4s. Boxes may run with anywhere from 1 to 64 of
these CPU arrays.
The DBMS splits the database across many files on many volumes.
Typically, data and indexes are configured to reside on different
physical DASD units to minimize I/O thrashing (remember how often this
question comes up in the architecture list?).
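Firebird can do a modest version of this with multi-file databases,
though it can't put indexes in separate files the way that DBMS can. A
minimal sketch, assuming the Python "fdb" driver; the paths,
credentials, and page numbers are all made up:

```python
# A minimal sketch, assuming the Python "fdb" Firebird driver; paths,
# credentials, and page numbers here are hypothetical.
import fdb

con = fdb.connect(dsn="localhost:/vol1/big.fdb",
                  user="SYSDBA", password="masterkey")
cur = con.cursor()

# Firebird's analogue of spreading a database over many volumes:
# secondary files, each of which can live on a different physical disk.
cur.execute("ALTER DATABASE ADD FILE '/vol2/big.fd2' STARTING AT PAGE 500000")
cur.execute("ALTER DATABASE ADD FILE '/vol3/big.fd3' STARTING AT PAGE 1000000")
con.commit()
con.close()
```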
There are only a few hundred distributed connections. Processing within
the hosting hardware boundary has an arbitrary number of connections
that are pooled by the transaction-processing system, which is in turn
shared by terminals and internal processes.
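The pooling arrangement is easy to picture in miniature: a fixed set of
connections inside the host boundary, shared by however many callers
show up. A generic sketch, not any particular product's API:

```python
# A minimal sketch of the pooling idea: a fixed set of connections
# shared by an arbitrary number of callers. The Connection class is a
# stand-in, not a real driver.
import queue
import threading

class Connection:
    def execute(self, sql):
        print("executing:", sql)

class ConnectionPool:
    def __init__(self, size):
        self._free = queue.Queue()
        for _ in range(size):
            self._free.put(Connection())

    def acquire(self):
        # Blocks until a connection is free, so callers can outnumber
        # pooled connections by an arbitrary factor.
        return self._free.get()

    def release(self, con):
        self._free.put(con)

pool = ConnectionPool(size=4)

def worker(n):
    con = pool.acquire()
    try:
        con.execute("SELECT %d FROM RDB$DATABASE" % n)
    finally:
        pool.release(con)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```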
The heaviest-usage systems have midrange feeder systems at a few
thousand local collection points. The minis funnel compressed data
streams through dedicated connections to the national headquarters as
fast as they can, saturating the telco's fiber-optic connection at peak
times of the year.
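In miniature, each feeder is just a loop that compresses records and
pushes them up the dedicated link. A rough sketch; the host, port, and
record format are made up:

```python
# A minimal sketch of a feeder funneling a compressed record stream
# upstream; host, port, and record format are hypothetical.
import socket
import zlib

def feed(records, host="hq.example.com", port=9000):
    compressor = zlib.compressobj()
    with socket.create_connection((host, port)) as sock:
        for rec in records:
            chunk = compressor.compress(rec.encode("utf-8") + b"\n")
            if chunk:  # compressobj buffers; send only when it emits data
                sock.sendall(chunk)
        sock.sendall(compressor.flush())  # drain whatever is still buffered

# feed(line.rstrip() for line in open("collected.log"))  # hypothetical usage
```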
You can run Linux as a guest OS on these boxes. Vertical scalability of
Linux is accomplished by allowing multiple Linux instances to run
concurrently as guest OSes on separate partitions in the machine.
DASD has gigabytes of cache and is connected by multi-gigabit
fiber-optic links.
Can we tweak Vulcan to run on this platform? Please? :o)
On Tue, 2006-03-14 at 13:46 +1100, Alan McDonald wrote:
> > A real transaction count for a non-hypothetical large system (not
> > firebird): 1.2 million database transactions per hour (8000 per second),
> > 24x7.
> >
> > I am aware of several systems that are 4 to 8 times as heavily used as
> > this particular one.
> >
> > That's 149 days!
> >
> and that's working thru one database file? not a farm?
> I don't have any single piece of hardware able to keep going at that pace.
> How many concurrent (unique) connections are there supporting 8000 Ts per
> second?
> Alan
>
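For what it's worth, the quoted 149-day figure matches a 32-bit
transaction ID space being consumed at 1.2 million transactions per
hour; given the thread subject, my assumption is that's where the
number comes from. (The parenthetical "8000 per second" doesn't fit
that rate; 1.2 million per hour is roughly 333 per second.)

```python
# Back-of-envelope check, assuming a 32-bit transaction ID space
# consumed at the quoted 1.2 million transactions per hour.
ids_available = 2 ** 32
per_hour = 1_200_000
days = ids_available / per_hour / 24
print(f"{days:.1f} days")  # ~149.1 days until the ID space wraps
```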