Subject: Stored Procedures, Isolation Levels, and Transactions
Author: Epstein, Ed
I am very new to programming stored procedures. I have already created some
that do what I need them to do and have gained a little proficiency at
writing them. However, I am running up against a formidable problem.

I have an SP that is updating a field in about 3 million rows of an 11
million row table. It does this with about 7 different UPDATE statements in
the SP. I have waited about 48 hours at a time for it to finish, and it never
does. I can see the GDB file growing, which I assume is the uncommitted pages
accumulating. It starts at a very good rate, then slows down and eventually
stops completely. I never get to the point of committing the SP. My own
interpretation is that my particular server can only handle so many
uncommitted pages/transactions before slowing to a standstill.
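
Roughly, the SP has this shape (the table and column names here are invented
for illustration; the real updates have more involved WHERE clauses):

  SET TERM ^ ;

  CREATE PROCEDURE UPDATE_STATUS_FLAGS
  AS
  BEGIN
    /* seven updates along these lines, each one touching several
       hundred thousand rows of the 11 million row table */
    UPDATE CUSTOMERS SET STATUS_FLAG = 1 WHERE REGION_ID = 1;
    UPDATE CUSTOMERS SET STATUS_FLAG = 2 WHERE REGION_ID = 2;
    /* ... five more UPDATE statements like these ... */
  END ^

  SET TERM ; ^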

The machine, btw, is a 3 GHz Xeon with 1 MB cache and 2 GB of memory, and the
drive subsystem is a 64-bit RAID-0 with a 128 MB cache, running Ultra320,
15K RPM, 3 ms seek time HDs. I cannot get a faster system for quite some
time. I thought this would at least be adequate to run the database.

What I am curious about is whether you can specify the isolation level in the
SP, to prevent other users from accessing the data until the SP is done, and
perform commits within the SP itself. That way I could break the updates down
into manageable chunks (see the sketch below for the kind of thing I mean).
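
This is the sort of thing I could do from ISQL or the front end, chunking by
primary key range and committing each chunk in its own transaction (again,
the names are invented):

  /* chunk 1 */
  UPDATE CUSTOMERS SET STATUS_FLAG = 1
    WHERE REGION_ID = 1 AND CUSTOMER_ID BETWEEN 1 AND 500000;
  COMMIT;

  /* chunk 2 */
  UPDATE CUSTOMERS SET STATUS_FLAG = 1
    WHERE REGION_ID = 1 AND CUSTOMER_ID BETWEEN 500001 AND 1000000;
  COMMIT;

  /* ... and so on through the key range ... */

What I want is the equivalent of those COMMITs inside the SP itself.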

I would like to refrain from doing this in the front-end since I am urged to
keep all logic server-side.