Subject | Re: Stored Procedures, Isolation Levels, ...
---|---
Author | Alexander V. Nevsky
Post date | 2004-04-24T12:05:53Z
--- In firebird-support@yahoogroups.com, "Epstein, Ed" <eepstein@c...>
wrote:
> The server itself does not crash. It's the programs being used to manipulate
> it. Also, "crashed" may not be the best word. "Hung" may be more
> appropriate. I have used
> IBWorkbench, M$ Access, and VB6 to activate and deactivate the indices.

Ed, to operate in heavy circumstances it is better to use a tool which
involves as few intermediate levels as possible. Among those mentioned I
believe it is IBWorkbench. But personally, in such circumstances, I
would use isql on a local exclusive connection.
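For example, such an isql session might look like this (the database path, credentials and index name here are placeholders, not from the original thread):

```sql
/* Connect locally, avoiding intermediate client layers. */
CONNECT 'C:\data\mydb.fdb' USER 'SYSDBA' PASSWORD 'masterkey';

/* Deactivate, commit, then reactivate; the rebuild happens on commit. */
ALTER INDEX IDX_ORDERS_CUSTOMER INACTIVE;
COMMIT;
ALTER INDEX IDX_ORDERS_CUSTOMER ACTIVE;
COMMIT;
```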
> At the same time I am pulling a select statement on RDB$INDICES for that
> particular index to check its inactive flag with a separate connection.

Metadata changes and actions related to mass changes are
recommended to be made in exclusive access mode.
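The inactive flag Ed is checking can be read from the system table like this (index name is a placeholder):

```sql
/* RDB$INDEX_INACTIVE is 0 for an active index, 1 for an inactive one. */
SELECT RDB$INDEX_NAME, RDB$INDEX_INACTIVE
FROM RDB$INDICES
WHERE RDB$INDEX_NAME = 'IDX_ORDERS_CUSTOMER';
```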
> statement "ALTER INDEX BLAH_BLAH_BLAH ACTIVE", and then commit the
> transaction. The execute obviously comes back very quickly. The commit
> however is where all the programs seem "hung".

System tables are not the same as user tables; they are an interface, a
container of parameters for the internal non-SQL engine machinery, which
takes these parameters and performs the real actions on commit only. This is
the reason why it is dangerous to use uncommitted metadata changes, even
in the same transaction which makes those changes.
> They never receive a commit complete event. I noticed with the server
> itself that, even though I stopped the program after a period of time,
> it was still rebuilding the index in an attempt to carry out the commit.

Before reactivating indices, try to disconnect all connections and
perform a sweep.
> I was able to see this with the select statement on RDB$INDICES. The
> rebuild is taking up to 18 hours to complete.

Wow. This means your indices contain very many duplicates and are
bad indices, which can decrease performance even on regular selects. I
just built a 7-segment index on a 7-million-record table; some of the
segments have ~3.5 million duplicates, but the whole index is unique - 4
minutes. On an old PII-350.
> This is a very big problem for me. I will routinely be inserting millions
> of rows into the database and be updating and deleting them. With no
> indices it would of course be faster, now that I understand it's rebuilding
> the indices after every update. However, at many times I will be performing
> reports on this data and indices are crucial to getting them in a viable
> time period. Even with indices a single query took 82 minutes on 11 million
> records.

Ed, a natural scan of the mentioned table on the mentioned computer takes 64
seconds. You should re-design your index structure and perhaps your
queries. One way is to create composite indices, adding an ID to a bad
index to make it unique. But this isn't a panacea; everything should be well
considered - in some cases a natural scan is better than using a bad
index. If you make many indices with the same segments, the optimizer
has more chances to be confused and you will be forced to pay more
attention to the plans for your queries.
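A sketch of the composite-index idea (the table and column names are invented for illustration):

```sql
/* STATUS alone has millions of duplicates; appending the primary key
   column makes every index entry unique, so the index stays selective. */
CREATE UNIQUE INDEX IDX_ORDERS_STATUS_ID ON ORDERS (STATUS, ORDER_ID);
```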
Best regards,
Alexander.