| Subject | Re: [IBDI] Re: Firebird 1 |
|---|---|
| Author | David Jencks |
| Post date | 2001-06-06T02:51:56Z |
Hi,
We've had a long discussion here, featuring:
-limit m, n
-bookmarks
-rownums
-Ann's arbitrarily complicated query to fetch the next records using a
bookmark
and we found that people want to do this with arbitrarily complex joins.
My question is: is there any reason to think the rownum or complicated-query
solutions won't force the server to fetch all the rows satisfying the query
into memory, skip the first n (or skip the rows not satisfying the "nested
after" condition), and only then start returning the rest?
In other words, don't these all boil down to asking the server to do all
the extra work it would have to do with limit m, n anyway? The syntax for
limit is certainly simpler and easier to use, doesn't return a fake column
(although that might be ok anyway), and might be easier for the engine to
understand.
In regard to limit m, n, are there any cases when the server could avoid
loading all rows into memory, sorting them, then skipping the first m-1,
and returning those requested (for instance by using indexes)? (Answers
for the current optimizer and a hypothetical perfect one both welcome).
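For concreteness, here is a minimal sketch of offset-style pagination. The
syntax and the toy table are invented for illustration (SQLite is used here,
since `limit m, n` was MySQL's spelling; nothing below is InterBase/Firebird
syntax):

```python
import sqlite3

# Toy table standing in for any query result; names are made up for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
conn.executemany("INSERT INTO t (v) VALUES (?)",
                 [("row%d" % i,) for i in range(1, 101)])

# LIMIT n OFFSET m: skip the first m sorted rows, return the next n.
# Even when the sort key is indexed, the engine typically still has to walk
# past the m skipped rows; without an index it sorts the whole set first.
page = conn.execute(
    "SELECT id, v FROM t ORDER BY id LIMIT 10 OFFSET 20").fetchall()
# page holds the rows numbered 21 through 30
```

The point stands either way: the skipped rows still get produced by the
engine, they just never cross the wire.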
I still think the best answer to this question is to figure out how to make
the user ask a question that will give her only the 7 rows that perfectly
answer her needs, but this is not really a database question.
david jencks
ps.
Ann's query needs to include enough columns in the order by to give each
row a unique "bookmark":
suppose the last row in a batch is the 5th in a set of 10 with identical
sort column values. Depending on the version of the query, the "next" set
will include all 10 or none of these rows.
To generate a working query automatically, I suppose you need to include
pk's from all tables in the join in the order by clause, after the ones you
are interested in.
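To make the point concrete, here is a minimal sketch of the bookmark query
with a pk appended to the order by (SQLite, with invented table and column
names; the row-value comparison does the lexicographic "after the bookmark"
test in one step):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE schedules (id INTEGER PRIMARY KEY, movie TEXT, show_time TEXT)")
# Ten rows with identical sort-column values, plus two distinct ones,
# reproducing the "5th in a set of 10" scenario above.
rows = [("Movie A", "20:00")] * 10 + [("Movie B", "21:00"), ("Movie C", "22:00")]
conn.executemany("INSERT INTO schedules (movie, show_time) VALUES (?, ?)", rows)

def next_page(last_movie, last_time, last_id, n):
    # The pk 'id' is appended to both the ORDER BY and the row-value
    # comparison, so the bookmark identifies exactly one row even among
    # rows with duplicate sort-column values.
    return conn.execute(
        """SELECT id, movie, show_time FROM schedules
           WHERE (movie, show_time, id) > (?, ?, ?)
           ORDER BY movie, show_time, id
           LIMIT ?""",
        (last_movie, last_time, last_id, n)).fetchall()

# First batch: five rows, ending at the 5th of the 10 duplicates.
first = conn.execute(
    "SELECT id, movie, show_time FROM schedules "
    "ORDER BY movie, show_time, id LIMIT 5").fetchall()
bookmark = first[-1]
# Second batch: picks up at the 6th duplicate, none skipped or repeated.
second = next_page(bookmark[1], bookmark[2], bookmark[0], 5)
```

Without the id tie-breaker, a condition like `movie >= ? AND show_time >= ?`
would either re-fetch or skip the remaining five duplicates, which is exactly
the failure described above.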
On 2001.06.05 14:45:00 -0400 ft@... wrote:
> Oops!
>
> This type of query won't work. Suppose the last movie in the first
> list is a late night screening, the next query will exclude almost
> all the movies that the full dataset of the first query would return.
>
> In general, in a dataset sorted on multiple keys, you need a unique
> key sorted according to the combined aggregate sorting order in order
> to be able to select consecutive parts. ROWNUM is such a key with the
> added property of being a counter.
>
> Fred Toussi
>
> --- In IBDI@y..., "Ann W. Harrison" <aharrison@i...> wrote:
> > At 04:48 PM 6/5/2001 +0200, Paulo Gaspar wrote:
> >
>
>
> > My inclination would be to construct an initial query like this:
> >
> > select <whatever> from movies m
> > join schedules s on s.movie = m.movie
> > join cinemas c on c.cinema = s.cinema
> > join towns t on t.town = c.town
> > where 50 <= distance (mycity, city)
> > and s.date >= CURRENT_DATE and s.date < CURRENT_DATE + 7
> > order by m.movie, t.town, c.cinema, s.room, s.date, s.time
> >
> > I'd get the first 20, noticing the values of the last row retrieved.
> > When the user hits next, I'd issue this query:
> >
> > select <whatever> from movies m
> > join schedules s on s.movie = m.movie
> > join cinemas c on c.cinema = s.cinema
> > join towns t on t.town = c.town
> > where 50 <= distance (mycity, city)
> > and s.date >= CURRENT_DATE and s.date < CURRENT_DATE + 7
> > and m.movie >= :saved_movie
> > and t.town >= :saved_town
> > and c.cinema >= :saved_cinema
> > and s.room >= :saved_room
> > and s.date >= :saved_date
> > and s.time >= :saved_time
> > order by m.movie, t.town, c.cinema, s.room, s.date, s.time
> >
> > I'd get 21 of these, throw out the first one as a duplicate, and
> > return those values. Prior is left as an exercise for the student.
> >
> > If I offered the user the ability to click on an arbitrary page
> > rather than just go through hitting next and prior, I'd retrieve
> > enough data to fill the range of pages I offered (as google
> > appears to do) and get more if the user showed signs of moving
> > out of that range.
> >
> > Actually, and in fact, I'd write an intermediate application
> > designed to cache significant quantities of data. This is a
> > place where cheap memory is a real win.