Subject RE: IB Clustering (random thoughts and questions)
Author Leyne, Sean

This really is a support/usage type question, not an architectural
issue. Accordingly, I've posted the report to the IB-Support list for
further follow up.

WRT your first problem and Option A, I find it very difficult to believe
that this approach produces such poor performance (you should be seeing
performance on the order of at least 500 rows per second). I'd review
your code and check that:

- you're not committing the changes with each insert operation;
instead, commit every 1000 rows or so

- you are preparing the insert statement once, outside your main
loop, and then simply assigning the parameters for each insert inside
the main loop

- you are using the correct commit mode
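
A minimal sketch of that batched-insert pattern (using Python's
sqlite3 module as a stand-in for InterBase; with IBO you would
prepare the parameterized statement once outside the loop in the same
way, and the table and column names below are made up for
illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE contributions (donor TEXT, amount REAL)")

# Pretend this is the client's imported dataset.
rows = [("donor%d" % i, float(i)) for i in range(5000)]

BATCH = 1000  # commit every 1000 rows, not on every insert
insert_sql = "INSERT INTO contributions (donor, amount) VALUES (?, ?)"

cur = conn.cursor()
for i, row in enumerate(rows, start=1):
    # The statement text never changes, so the driver can reuse the
    # prepared statement; only the parameter values vary per row.
    cur.execute(insert_sql, row)
    if i % BATCH == 0:
        conn.commit()
conn.commit()  # flush the final partial batch

count = conn.execute("SELECT COUNT(*) FROM contributions").fetchone()[0]
print(count)
```

The point is that both the prepare step and the commit are hoisted out
of the per-row loop; committing per row forces a transaction round-trip
for every record, which is usually where "incredibly slow" imports come
from.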


> -----Original Message-----
> From: montgomery@... [mailto:montgomery@...]
> Sent: Wednesday, August 08, 2001 3:01 PM
> To:
> Subject: [IB-Architect] IB Clustering (random thoughts and questions)
> Hi,
> I've had Interbase in my production environment (NT/2000 running an
> internet-based political accounting product) since v4.2, and I have
> been extremely happy with IB's stability, performance, and near-zero
> administration requirements. However, I do have two problems with IB
> which seem to be somewhat insurmountable that may force me to move my
> production data to MS SQL Server (sigh), if they can't be resolved.
> Based on a first glance at various SQL Server user testimonials, it
> seems SQL Server 2000 with clustering will solve both of my problems
> outlined below.
> The First Problem:
> Most of my clients have very small datasets on our server, with
> perhaps 10,000 records (at most) within their data space. There are
> some prospective clients we would really like to get (and who really
> want to be able to use our services) who have databases containing 2-
> 3 million records of equivalent data in their possession that would
> need to be imported into our system. Getting this type of client
> will more than pay for the transition to SQL Server and its
> additional administration costs.
> To import data, I can: a) use an external application using IBO to
> perform a series of parameterized inserts, b) use a script file
> containing a whole bunch of insert statements created from the
> client's data, or c) use the 'External Files' approach (which is
> somewhat of an arcane process).
> Option 'C' can be safely ruled out because it uses every last bit of
> CPU power on the IB server until it completes (which takes many
> hours - leaving our other clients with an unacceptable level of
> system performance until the import is completed).
> Option 'A' is INCREDIBLY slow. It might take a week to import a
> client's data using that option.
> Option 'B' seems to fall in the middle of 'A' and 'C', as long as the
> script has the 'Yield' property set to True, otherwise it behaves
> much like Option 'C'.
> Overall, I don't believe the long-term management of such a large
> number of records will be difficult, but just getting those records
> into the database to begin with seems nearly impossible.
> The Second Problem:
> I really, really want some type of automatic fail-over redundancy if
> my primary IB server crashes. IB really doesn't seem to make this
> easy to accomplish.
> The only (seemingly) viable theory I've worked out to accomplish this
> is:
> 1) Put all data files onto a shared disk array, connected to a
> Primary and Backup IB server
> 2) Replicate the ISC4 database from the Primary server to the Backup
> server in real time.
> 3) Use a hardware load balancer which will route all client requests
> to the Primary server unless the Primary server is COMPLETELY
> unavailable, in which case requests will be directed to the Backup
> server.
> Does this theory seem like it is worth pursuing? Am I missing some
> fundamental aspect that I should be considering?
> Let me be very clear: I really want to continue using IB! Any
> thoughts, suggestions, comments, or rants would be greatly
> appreciated.
> Best Regards,
> David Montgomery
> montgomery@...