Subject Re: Max length of SQL statement
Author Adam
> > That's not the right answer. Could you review the question and
> > answer that question? I want to understand what your "fuzzy
> > search" is actually supposed to achieve. Otherwise, I give up.


The inability of Firebird to handle more than 1499 terms in an IN
statement is surely a limitation (from my understanding it is an
implementation artifact that was quite hard to change, not a design
choice).

This is evidenced by the fact that Firebird is quite happy for you to do:

where ID in (1,2,3,.....,1499)
OR ID in (1500,1501,1502,....etc)

If it were a design choice, then it should stop you from doing that.
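The chunking described above is easy to do on the client side. Here is a
minimal sketch (my own, not from any Firebird tooling; the function and
constant names are hypothetical) that splits a long list of integer IDs
into OR'd IN lists, each under the limit:

```python
# Hypothetical helper: split a long IN list into OR'd chunks so each
# IN list stays at or under Firebird's ~1499-term limit.
MAX_IN_TERMS = 1499  # per-IN-list limit discussed above

def build_in_chunks(column, values, chunk_size=MAX_IN_TERMS):
    """Return a WHERE fragment like "ID IN (1, 2) OR ID IN (3)".

    Assumes integer values; string values would need proper quoting
    or, better, parameter markers."""
    chunks = [values[i:i + chunk_size]
              for i in range(0, len(values), chunk_size)]
    parts = ["{} IN ({})".format(column, ", ".join(str(v) for v in chunk))
             for chunk in chunks]
    return " OR ".join(parts)
```

For example, build_in_chunks("ID", list(range(1, 3001))) yields three
IN lists OR'd together, each within the limit.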

I can't answer exactly what Stephen is running up against, but I can
confirm that we have had to write a workaround for this limitation
within our applications. Fortunately, we are usually dealing with
integer datatypes, so our client-side logic can transform the list
into a bunch of BETWEEN statements OR'd together, but if you have
non-ordinal datatypes, this is not really possible.

If ID is an integer,

where ID in (1,2,...,1499)
can be transformed to
where ID between 1 and 1499
(which has a better plan too)
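The general form of that transformation is to collapse the sorted IDs
into runs of consecutive integers and emit one BETWEEN per run. A sketch
(my own; the function name is hypothetical):

```python
def to_between_ranges(column, values):
    """Collapse integer values into consecutive runs and emit
    "col BETWEEN a AND b" (or "col = a" for single values) OR'd
    together. Assumes integer values."""
    vals = sorted(set(values))
    if not vals:
        return "1 = 0"  # empty set: match nothing
    runs = []
    start = prev = vals[0]
    for v in vals[1:]:
        if v == prev + 1:
            prev = v          # extend the current run
        else:
            runs.append((start, prev))
            start = prev = v  # begin a new run
    runs.append((start, prev))
    parts = ["{} BETWEEN {} AND {}".format(column, a, b) if a != b
             else "{} = {}".format(column, a)
             for a, b in runs]
    return " OR ".join(parts)
```

A fully consecutive list like 1..1499 collapses to a single BETWEEN,
which is the best case; the worst case (no consecutive values at all)
degenerates back to one predicate per value.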

But there is nothing you can do about datatypes whose values don't
form consecutive runs (strings, for example):

where Name in ('Fred', 'Barney', 'Wilma')

cannot be transformed into BETWEEN statements no matter what you try.

The usual argument that comes up is: why on earth would someone type in
1500 arguments? It is a valid question to ask, because you should be
asking yourself whether there is a better way to achieve this (which is
exactly what Stephen is doing). If you are able to (if the members of
the IN condition are records in a table, for example), it is far
superior to use joins, but sometimes the data is external to the
database.

Of course no human is going to check or uncheck thousands of boxes,
but the presumption that a human is generating the data for the IN
clause is not necessarily true. When a machine or some other
instrument is measuring information from outside the database, you
have to accept that it is interested in specific records, and not
limited to 1500 or however many fit on a screen at once.

Of course a nice workaround would be to store the interesting values
in a table and join to that table, but there are a number of problems
that need to be addressed.

Inserting is pretty much limited by hardware speed; about 10000 values
per second is what you can expect, mainly because the rows need to be
written to disk. If you are interested in summing 40000 records, it
may take you longer to provide the IDs of all the interesting records
than it takes Firebird to calculate the result. Furthermore, the
nature of these queries means you are doing bulk deletes, which
introduces a lot of work for garbage collection. You need to come up
with a design that allows multiple programs to run these sorts of
queries simultaneously (which means tagging rows with a context
variable of some sort - eg CURRENT_TRANSACTION), and you need to make
sure you don't leave any crud behind once you finish.

Temporary tables nicely fix most of these problems, as the engine
does not have to worry so much about durability (it can write to disk
if it needs to, but there is no requirement) or garbage collection.
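As a sketch of that approach, assuming Firebird 2.1 or later (which
added GLOBAL TEMPORARY TABLE with ON COMMIT DELETE ROWS), the SQL could
look like the following; the table and column names are hypothetical,
and the statements are shown here as Python string constants ready for
a DB driver:

```python
# Hypothetical temporary-table workaround for a long IN list,
# assuming Firebird 2.1+ GLOBAL TEMPORARY TABLE support.
# ON COMMIT DELETE ROWS means each transaction sees only its own rows
# and they vanish automatically, so no CURRENT_TRANSACTION tagging or
# manual cleanup is needed.

DDL = """
CREATE GLOBAL TEMPORARY TABLE interesting_ids (
    id INTEGER NOT NULL
) ON COMMIT DELETE ROWS
"""

# Run once per interesting value, e.g. via the driver's executemany().
INSERT_SQL = "INSERT INTO interesting_ids (id) VALUES (?)"

# The join replaces the oversized IN list entirely.
QUERY_SQL = """
SELECT t.*
FROM my_table t
JOIN interesting_ids i ON i.id = t.id
"""
```

The DDL is run once at database-creation time; at runtime each client
just inserts its IDs and runs the join inside one transaction.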