| Subject | Re: Can we, can we, can we????... |
|---|---|
| Author | David Johnson |
| Post date | 2005-06-18T01:33:33Z |
On Fri, 2005-06-17 at 19:13 +0000, Firebird-Architect@yahoogroups.com
wrote:
> > 1) Quotas are NOT a replacement for on-demand request cancelling,
> > they are different, although related, features appropriate for
> > different use cases.
>
> Granted - but I'd need to hear from somebody with a real application
> that could be improved by quotas. I can certainly imagine situations
> where quotas make managing an application more difficult, so I need a
> very concrete reason to go there - not just because we can.

In a typical (for me) system, you have about 150 developers writing
code to interface with legacy back-end systems. You need sub-second
responsiveness for 8,000 users whose work lives revolve around system
capabilities, plus automated processes tracking some 50,000 pieces of
equipment and a similar number of concurrent business transactions in
real time in a multi-terabyte database.
The weekly spike load is roughly 2.8 million transactions per hour; the
yearly spike is almost double that. System responsiveness is not
permitted to degrade noticeably during spikes, because any slowdown
translates directly into dollars per minute of lost business.
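For scale, those figures work out to close to 800 transactions per second sustained. A back-of-the-envelope sketch (treating "almost double" as exactly 2x is my assumption, not a figure from the post):

```python
# Convert the quoted spike loads into per-second rates.
# Assumption: "almost double" is taken as exactly 2x for illustration.
weekly_spike_per_hour = 2_800_000
weekly_spike_per_sec = weekly_spike_per_hour / 3600   # roughly 778 tx/s
yearly_spike_per_sec = 2 * weekly_spike_per_sec       # roughly 1,556 tx/s
```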
It is much better to have a query break in development than to hang
users up for a minute in production. A "hard" break must be fixed in
development (the governor killed it, it's too long, figure out what's
wrong and make it work), while a long-running query can simply be
ignored and go on causing delays in transacting business.
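The "governor" idea above can be sketched with any engine that lets you cancel an in-flight statement. A minimal, hypothetical illustration in Python, using sqlite3 only because its `Connection.interrupt()` makes the cancellation visible in a few lines (the post itself concerns Firebird, whose mechanism would differ; the function and table names are invented for the example):

```python
import sqlite3
import threading

def run_with_budget(conn, sql, budget_seconds):
    """Execute sql; hard-cancel it if it exceeds budget_seconds.

    Returns (rows, killed). A killed statement is a development-time
    failure the developer must fix, not a slow query users can ignore.
    """
    # Fire conn.interrupt() from another thread once the budget expires;
    # interrupt() is documented as safe to call from a different thread.
    timer = threading.Timer(budget_seconds, conn.interrupt)
    timer.start()
    try:
        return conn.execute(sql).fetchall(), False
    except sqlite3.OperationalError:
        # The governor killed it: report a hard break.
        return None, True
    finally:
        timer.cancel()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(400)])

# A cheap query finishes well within its budget...
rows, killed = run_with_budget(conn, "SELECT COUNT(*) FROM t", 5.0)

# ...while a pathological cross join (64 million combinations) blows a
# tight budget and is cancelled instead of hanging the caller.
_, killed_slow = run_with_budget(
    conn, "SELECT COUNT(*) FROM t a, t b, t c", 0.05)
```

The design point mirrors the post: the budget is enforced by cancellation, not by advisory logging, so an over-budget query cannot quietly ship to production.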
My system is small potatoes compared to one of our customers' systems.
They actually paid Southwestern Bell to run a fiber-optic line roughly
200 miles through the mountains to support their real-time transaction
processing needs. During their spike times (4 weeks of the year) they
completely flood the bandwidth of that line and virtually shut out the
local DSL users who share the backbone they paid for.
I guess that this explains my nonsensical questions about scalability
and performance on other mailing lists :o)