Subject | RE: [firebird-support] Performing LIKE queries on a BLOB |
---|---|
Author | Robert DiFalco |
Post date | 2003-10-04T17:22:48Z |
Hmmmm... maybe one day, when the crazy days of my current project have
ended, I can take a look at the FB source. It seems like I should be
able to buffer the data segment by segment, or use a streaming blob, to
prevent the wild-card matching from failing when it crosses segment
boundaries. For now, there really is no solution. Probably the best I
can do for the user is to parse each expression into a separate search
before sending it to the database. Of course, they will lose the
ordering, so I will have to make it explicit that wild-card matches are
unordered. For example:

DOC LIKE '%line1%line2?l%line3%';

would become:

DOC LIKE '%line1%' AND DOC LIKE '%line2?l%' AND DOC LIKE '%line3%';

The problem with this is that the search loses the user's intent that
line1 must come before line2. But that's okay; as I said, I'll just
make it clear in the help text.
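The splitting step above can be sketched in a few lines of Python. This is a hypothetical helper (not part of any Firebird driver), and it assumes fully unanchored patterns like the example, i.e. patterns that begin and end with '%'; it also does no quote-escaping:

```python
def split_wildcard_pattern(pattern, field="DOC"):
    """Split a multi-'%' LIKE pattern into ANDed single-chunk LIKEs.

    Each chunk is matched independently, so ordering between chunks
    is lost, as noted above. The '?l' chunk is kept verbatim from
    the example. No escaping of quotes is attempted.
    """
    # Split on '%' and drop the empty strings from leading/trailing '%'.
    chunks = [c for c in pattern.split("%") if c]
    return " AND ".join(f"{field} LIKE '%{c}%'" for c in chunks)

print(split_wildcard_pattern("%line1%line2?l%line3%"))
# DOC LIKE '%line1%' AND DOC LIKE '%line2?l%' AND DOC LIKE '%line3%'
```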
R.
-----Original Message-----
From: Helen Borrie [mailto:helebor@...]
Sent: Friday, October 03, 2003 4:13 PM
To: firebird-support@yahoogroups.com
Subject: Re: [firebird-support] Performing LIKE queries on a BLOB
At 02:21 PM 3/10/2003 -0700, you wrote:
>Are there size limitations on the size of a pattern in Firebird? For
>this example, if I have a BLOB with say 500 lines in it, I can search
>for blob_field LIKE "line1%line70" and it works. But I can NOT search
>for "line1%line200". It seems like it has to be a bug with the string
>matching algorithm. I tried different segment sizes on the blob but
>that had no impact. I haven't had a chance to check if there is the
>same limitation when searching within large VARCHAR fields.

I remember Claudio explaining this as occurring when the matched string
spans the boundary of a segment. That is, if the pattern is completely
enclosed in a segment, it will be found; but if it is partly in one
segment and partly in another, it won't.
The engine itself doesn't know or care what's in a blob: at storage
time, it's just a parking attendant. At retrieval time it works
segment by segment.
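The failure mode described here can be illustrated in plain Python (no Firebird involved); the segment layout and the `naive_contains` helper are invented purely for illustration:

```python
# Segment-at-a-time matching misses a pattern that straddles a boundary.
DATA = "line1 line2 line3"
SEGMENT_SIZE = 8
segments = [DATA[i:i + SEGMENT_SIZE] for i in range(0, len(DATA), SEGMENT_SIZE)]
# segments == ['line1 li', 'ne2 line', '3'] -- "line2" is split in two

def naive_contains(segs, needle):
    # Checks each segment in isolation, as a segment-at-a-time reader would.
    return any(needle in s for s in segs)

print(naive_contains(segments, "line1"))   # True  - fits inside segment 0
print(naive_contains(segments, "line2"))   # False - split across segments 0 and 1
```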
It might be a solution to match your segment size exactly to your line
size, and to make sure that the client application was actually sending
segments in such a way that they were stored at that size. Even so, it
would only work if all lines were the same length and your client
application wasn't messing about with segment sizes. I believe most
DAOs optimise the sizes of blob segments for the wire, which makes the
database-declared segment size irrelevant.
And no, it doesn't affect large VARCHARs, as they aren't stored in
segments.
If you decide to experiment, I'd be interested to hear what you come up
with.
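One such experiment, sketched in plain Python rather than engine code: carry the tail of each segment forward into the next read, so a match can span the boundary. The `overlapped_contains` helper and the segment list are assumptions for illustration; the carry only ever needs `len(needle) - 1` characters:

```python
def overlapped_contains(segs, needle):
    """Substring search across segments, keeping a boundary overlap."""
    carry = ""
    for seg in segs:
        if needle in carry + seg:
            return True
        # Keep just enough trailing context to bridge the next boundary.
        carry = (carry + seg)[-(len(needle) - 1):] if len(needle) > 1 else ""
    return False

segments = ["line1 li", "ne2 line", "3"]   # "line2" straddles a boundary
print(overlapped_contains(segments, "line2"))  # True, despite the split
```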
heLen