Subject | Re: [firebird-support] Re: Merging two textual blob fields
---|---
Author | Ann W. Harrison
Post date | 2005-01-24T17:08:19Z
Hi Paul,
> Do you know the best way to step through each of 1.5 million records
> without using up a massive amount of resources? If I do a "SELECT *
> FROM TABLE", then step through each record in turn, I'm going to run
> into problems with memory (I think!).

No, you won't. Sorting the records will use memory, but just reading
and updating them shouldn't be a problem. It will create a significant
undo log - you can avoid that by choosing a "no auto undo" transaction
option, if that's surfaced by your application. Firebird does not
materialize the full set of records to be updated before updating them.
Firebird does (must?) materialize all records before sorting them.
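The point above - that reading does not require holding the whole result set client-side - can be sketched by fetching in fixed-size batches rather than all at once. This is a minimal illustration using Python's built-in sqlite3 module purely as a stand-in for a Firebird connection (with Firebird you would use a driver such as firebird-driver, whose cursors expose the same `fetchmany` pattern); table name `t` and the row contents are invented for the example.

```python
import sqlite3

# sqlite3 is only a stand-in here; the fetchmany pattern is the same
# across DB-API drivers, including Firebird drivers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO t (id, body) VALUES (?, ?)",
                 [(i, "blob %d" % i) for i in range(1, 1001)])

# Stream through the table in batches instead of fetchall(): the client
# holds at most `batch` rows in memory at any one time.
cur = conn.execute("SELECT id, body FROM t")
seen = 0
while True:
    rows = cur.fetchmany(100)
    if not rows:
        break
    seen += len(rows)   # process each batch here, e.g. update blobs

print(seen)  # 1000
```

Whether the *server* buffers the result is a separate question from client memory, but as noted above, Firebird streams records for reads and updates rather than materializing the set first.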
> If I use FIRST and SKIP, am I right in assuming that the queries will
> get slower the further into the table I get (I'm assuming that "SELECT
> FIRST 100 SKIP 1000000 * FROM TABLE" will still step through the first
> 1000000 records, but only return the next 100)?

Right. That's not a good idea. If you really, really want to
handle the records in groups, pick some primary key value ranges that give
you a reasonably sized subset and select the records based on those
ranges.
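The range-based approach can be sketched as follows. Again sqlite3 stands in for a Firebird connection, and `LIMIT` stands in for Firebird's `SELECT FIRST n`; the table and batch size are invented for the example. The key difference from SKIP is that each query seeks directly past the last primary key seen, so cost does not grow with how deep into the table you are.

```python
import sqlite3

# sqlite3 as a stand-in; in Firebird the query would be written as
# "SELECT FIRST ? id, body FROM t WHERE id > ? ORDER BY id".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO t (id, body) VALUES (?, ?)",
                 [(i, "row %d" % i) for i in range(1, 1001)])

def batches_by_key(conn, batch=250):
    """Yield batches selected by primary-key range, not by SKIP/OFFSET.

    Each query resumes from the last key of the previous batch, so the
    server never has to step over already-processed records."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, body FROM t WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch)).fetchall()
        if not rows:
            return
        yield rows
        last_id = rows[-1][0]   # resume after the last key seen

total = sum(len(b) for b in batches_by_key(conn))
print(total)  # 1000
```

This assumes an indexed, monotonically comparable primary key; with a composite key the `WHERE` clause compares the full key tuple instead.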
Regards,
Ann