Subject: Re: Stored Procedures, Isolation Levels, ...
Author: Alexander V. Nevsky
--- In firebird-support@yahoogroups.com, "Epstein, Ed" <eepstein@c...>
wrote:
> The server itself does not crash. It's the programs being used to
> manipulate it. Also, crash may not be the best word. "Hung" may be
> more appropriate. I have used IBWorkbench, M$ Access, and VB6 to
> activate and deactivate the indices.

Ed, for heavy operations it is better to use a tool that involves as
few intermediate layers as possible. Among those mentioned I believe
that is IBWorkbench. But personally, in such circumstances, I would
use isql on a local exclusive connection.
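As a sketch of what I mean (the database path, user and password here
are placeholders; adjust for your installation, and note that gfix
flag syntax varies slightly between versions):

```shell
# Shut the database down to get exclusive access, forcing off
# remaining attachments after 0 seconds:
gfix -shut -force 0 /data/mydb.gdb -user SYSDBA -password masterkey

# Connect locally with isql, bypassing the network layer:
isql /data/mydb.gdb -user SYSDBA -password masterkey

# When finished, bring the database back online:
gfix -online /data/mydb.gdb -user SYSDBA -password masterkey
```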

> At
> the same time I am pulling a select statement on RDB$INDICES for that
> particular index to check its inactive flag with a separate connection.

Metadata changes, and operations involving mass data changes, should
be performed in exclusive access mode.

> statement "ALTER INDEX BLAH_BLAH_BLAH ACTIVE", and then commit the
> transaction. The execute obviously comes back very quickly. The commit
> however is where all the programs seem "hung".

System tables are not the same as user tables; they are an interface,
a container of parameters for the internal non-SQL engine machinery,
which takes these parameters and performs the real actions only on
commit. This is the reason why it is dangerous to use uncommitted
metadata changes, even in the same transaction that makes those
changes.
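This is why the usual pattern for bulk loads is to deactivate the
indices, commit, do the work, then reactivate and commit again. A
sketch, with a hypothetical index name:

```sql
/* Run in isql, ideally on an exclusive connection. */
ALTER INDEX IDX_ORDERS_CUSTOMER INACTIVE;
COMMIT;  /* the engine only acts on the system-table change here */

/* ... perform the mass inserts / updates / deletes ... */

ALTER INDEX IDX_ORDERS_CUSTOMER ACTIVE;
COMMIT;  /* the index is physically rebuilt during this commit,
            which is why the commit, not the ALTER, appears to hang */
```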

> They never receive a commitcomplete event. I noticed with the server
> itself, that even though I stopped the program after a period of
> time, it was still rebuilding the index in an attempt to carry out
> the commit.

Before reactivating indices, try disconnecting all connections and
performing a sweep.
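For example (placeholder path and credentials, run after all clients
have disconnected):

```shell
# Garbage-collect old record versions before the index rebuild:
gfix -sweep /data/mydb.gdb -user SYSDBA -password masterkey
```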

> I was able to see this with
> the select statement on the RDB$INDICES. The rebuild is taking up to 18
> hours to complete.

Wow. This means your indices contain very many duplicates and are bad
indices, which can decrease performance even on regular selects. I
just built a 7-segment index on a 7-million-record table, where some
segments have ~3.5 million duplicates but the whole index is unique:
4 minutes, on an old PII-350.
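You can estimate how bad a candidate index column is before building
it by counting its longest duplicate chains (table and column names
here are hypothetical):

```sql
/* The worst-case duplicate chains for a candidate index column: */
SELECT CUSTOMER_ID, COUNT(*)
FROM ORDERS
GROUP BY CUSTOMER_ID
ORDER BY 2 DESC;
```

After mass changes you can also refresh the optimizer's selectivity
estimate for an existing index with `SET STATISTICS INDEX <name>;`.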

> This is a very big problem for me. I will routinely be inserting
> millions of rows into the database and be updating and deleting
> them. With no indices it would of course be faster now that I
> understand its rebuilding the indices after every update. However,
> at many times I will be performing reports on this data and indices
> are crucial to getting them in a viable time period. Even with
> indices a single query took 82 minutes on 11 million records.

Ed, a natural scan of the mentioned table on the mentioned computer
takes 64 seconds. You should redesign your index structure and
perhaps your queries. One approach is to create composite indices,
adding the ID to a bad index to make it unique. But this isn't a
panacea; everything should be well considered, and in some cases a
natural scan is better than using a bad index. If you make many
indices with the same segments, the optimizer has more chances to be
confused and you will be forced to pay more attention to the plans
for your queries.
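A sketch of the composite-index idea and of forcing a plan by hand
(all table, column and index names are hypothetical):

```sql
/* Append the primary key to a low-selectivity column so the
   composite index becomes unique: */
CREATE UNIQUE INDEX IDX_ORDERS_CUST_ID ON ORDERS (CUSTOMER_ID, ORDER_ID);

/* If the optimizer gets confused, state the plan explicitly: */
SELECT COUNT(*)
FROM ORDERS
WHERE CUSTOMER_ID = 42
PLAN (ORDERS INDEX (IDX_ORDERS_CUST_ID));
```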

Best regards,
Alexander.