Subject | Re: [IB-Architect] Interbase Culture and Open Source |
---|---|
Author | jerzy.tomasik@ti.com |
Post date | 2000-03-28T19:16:16Z |
--- In IB-Architect@onelist.com, "Markus Kemper" <mkemper@i...> wrote:
> This is a direct challenge to me and the rest of the InterBase
> community. We are not known for or respected for our sophistication
> (power via simplicity), feature rich ability, scalability and
> reliability. Standard marketing can only take us so far; we
> need case studies that prove that we are a contender in this
> space. When we can change this mindset in the industry, companies
> will be wondering why they employ a full time DBA instead of just
> using InterBase.
>
> Markus

This is a personal opinion, since an official statement would have to
be sanitized by the lawyers, but we've been using InterBase for
fairly large databases (200 GB) for years. Our database actually
consists of hundreds of stand-alone databases (not a multiple-file
database), each one from several hundred MB up to 2 GB in size. We
have deliberately configured our system this way because it made
maintenance and archiving trivial. We use InterBase for tracking
test measurements on individual chips in semiconductor manufacturing.
Some of the features that were extremely attractive to us are tables
with more than 1000 columns, the ability to modify tables without
locking out readers, and easy connectivity. One may argue that
hundreds of small files are not the same as one large database, and I
would agree. But the point is that other databases would lock us
into one massive database, which would not be optimal for our usage.
Ultimately, our users have online access to hundreds of GB of
measurement data with good response time -- they don't even know or
care what database works behind the scenes.
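
The split into many small stand-alone databases means the application,
not the engine, decides which .gdb file to open for a given lot.
Purely as an illustration -- this is not code from the original post --
here is a minimal Python sketch of that routing, assuming the
kinterbasdb DB-API driver; the host name, directory layout, lot
numbering, and MEASUREMENTS table are all invented.

    import os
    import kinterbasdb

    DB_ROOT = "/data/measurements"   # hypothetical directory of per-lot .gdb files
    DB_HOST = "testserver"           # hypothetical InterBase server host

    def database_for_lot(lot_id):
        """Map a manufacturing lot to its own stand-alone .gdb file."""
        return "%s:%s" % (DB_HOST, os.path.join(DB_ROOT, "lot_%s.gdb" % lot_id))

    def fetch_measurements(lot_id, wafer_no):
        """Open only the small database that holds this lot's test data."""
        con = kinterbasdb.connect(dsn=database_for_lot(lot_id),
                                  user="SYSDBA", password="masterkey")
        try:
            cur = con.cursor()
            # MEASUREMENTS and WAFER_NO are hypothetical; the real schema
            # reportedly uses tables with more than 1000 columns.
            cur.execute("SELECT * FROM MEASUREMENTS WHERE WAFER_NO = ?",
                        (wafer_no,))
            return cur.fetchall()
        finally:
            con.close()

Archiving a finished lot is then just a matter of backing up or moving
its single small file rather than reorganizing one huge database.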
Our entire implementation consists of four standalone servers in
different geographical locations, with about 200 users and 20 or so
concurrent. Each server holds on the order of 150-200 GB of gdb files.
This entire system is supported by about 0.1 of a DBA, who helps the
sysadmins of the servers once in a while. Really, the only time we do
any DBA work is when we change HW or SW, or when we troubleshoot
obscure problems. For the most part the system is self-maintaining.
BTW, this has been implemented at Silicon Systems, now part of Texas
Instruments.
Jerzy