Subject | Re: [firebird-support] trying to avoid large datasets
---|---
Author | Helen Borrie
Post date | 2006-07-07T05:32:56Z
At 03:03 PM 7/07/2006, you wrote:
>i'm in the process of porting a music catalogue maintenance program
>from clarion (which natively handles large datasets well) to c++.
>
>entering new title into database revolves around a screen that lists
>*all* the titles in the catalogue (about 120000) which takes 10
>seconds or so to load in order (unordered takes almost no time). the
>data entry operators like this because it allows them to locate where
>an item might be and then quickly scroll up and down to see if it's
>already been entered but with a spelling mistake.
>
>i realise this is more of ye olde paradox way of doing things but
>they've said it's a compulsory feature so there's not much i can do. i
>was wondering if i could fudge some similar kind of functionality by
>doing something like...
>
>SELECT FIRST 10 ...
>FROM ...
>WHERE param = 'whatever they enter'
>ORDER BY title
>
>but also selecting items preceding whatever they're searching for?
>kinda like
>
>SELECT PREVIOUS 10 ...
>FROM ...
>WHERE param = 'whatever they enter'
>ORDER BY title
>
>so it might return
>
>whatever they eeter
>whatever they enter
>whatever they etner
You're still thinking in spreadsheet mode and the FIRST n idea will
be slower, not faster.
Think "minimum set", i.e. look for 0 or more rows where the
near-duplication might occur. That means querying the table WHERE
something, never mind how many or few there are. You might get 200
or you might get none....
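As a minimal sketch of such a probe (TITLES, TITLE_ID and TITLE are
hypothetical names, not from your schema):

  SELECT TITLE_ID, TITLE
  FROM TITLES
  WHERE TITLE STARTING WITH 'whatever th'
  ORDER BY TITLE

That brings back only the rows in the neighbourhood of the new entry
(maybe 200, maybe none) instead of all 120000.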
Get smart about your search params with equivalence, LIKE, STARTING
WITH and CONTAINING, and "OR" the conditions. Only equivalence and
STARTING WITH will use an index; LIKE is a two-edged sword (a
pattern with a literal prefix can be optimized like STARTING WITH,
but a leading wildcard forces a full scan); and CONTAINING provides
case-insensitivity if your character set supports upper/lower case
mapping.
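A sketch of the OR'd version (same hypothetical names; :input stands
for whatever the operator typed):

  SELECT TITLE_ID, TITLE
  FROM TITLES
  WHERE TITLE = :input              /* exact: indexable */
     OR TITLE STARTING WITH :input  /* prefix: indexable */
     OR TITLE CONTAINING :input     /* substring: natural scan */
  ORDER BY TITLE

Bear in mind that OR-ing an indexable condition with a non-indexable
one generally costs you the index anyway, so it can pay to run the
cheap conditions first and fall back to CONTAINING only when they
find nothing.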
Get even smarter by having the user store keywords, and then make
your under-the-hood existence search work on one or more keywords.
You can ask the user to define the keywords, or you can write your
own keyword parser routine that you can call at both ends of the
process (on input as well as in existence searches). You can write
one or more SPs to take the input from the parser and act
appropriately on the result of the search test...
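One hypothetical shape for that (every name here is invented for
illustration): a keyword table plus a selectable SP that the parser
feeds.

  CREATE TABLE TITLE_KEYWORDS (
    TITLE_ID INTEGER NOT NULL,
    KEYWORD  VARCHAR(30) NOT NULL,
    PRIMARY KEY (TITLE_ID, KEYWORD)
  );

  SET TERM ^ ;
  CREATE PROCEDURE FIND_BY_KEYWORD (IN_KEYWORD VARCHAR(30))
  RETURNS (TITLE_ID INTEGER, TITLE VARCHAR(100))
  AS
  BEGIN
    /* return every title carrying the keyword: at worst a
       handful of rows, never the whole catalogue */
    FOR SELECT t.TITLE_ID, t.TITLE
          FROM TITLES t
          JOIN TITLE_KEYWORDS k ON k.TITLE_ID = t.TITLE_ID
          WHERE k.KEYWORD = :IN_KEYWORD
          ORDER BY t.TITLE
          INTO :TITLE_ID, :TITLE
    DO
      SUSPEND;
  END ^
  SET TERM ; ^

The input routine calls the parser and inserts the keywords; the
existence check then becomes
SELECT * FROM FIND_BY_KEYWORD('whatever').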
If you can abandon the old spreadsheet thinking, you can make good
use of relational principles to store, retrieve and match up rows
with keywords.
./heLen