Subject: Re: Question about limits in fb 2.5 or 3.0
Author: woodsmailbox
> Please tell us what you are trying to do.

I'll try to make a quick list of examples when it bites me again.

> Most people find that normalized databases don't come close to
> bumping into restrictions (you'll find the 64K record limit very,
> very common among databases).

My dbs are normalized, that's not the problem. I can live with the
64 KB record size in _tables_; that's predictable and can be decided at
design time. But then you use LIST(), GROUP BY, and ORDER BY, and the
limit only bites you at runtime.
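As a concrete illustration (hypothetical table and query, not from the
original thread): the DDL below is accepted, and the query prepares
fine, but it may still fail at runtime with an implementation-limit
error once the engine has to build an intermediate sort/group record
wide enough to hold the full UTF8 columns.

```sql
-- Hypothetical schema: each column is legal on its own.
CREATE TABLE notes (
    id    INTEGER NOT NULL PRIMARY KEY,
    title VARCHAR(4096) CHARACTER SET UTF8,
    body  VARCHAR(4096) CHARACTER SET UTF8
);

-- Accepted at prepare time, but the GROUP BY / ORDER BY may hit a
-- runtime limit: the sort record must carry the full-width UTF8 key
-- (4096 chars x 4 bytes) plus the aggregated columns.
SELECT title, LIST(body, '; ')
FROM notes
GROUP BY title
ORDER BY title;
```

The point being made: nothing at design time warns you; the error
depends on what the optimizer has to materialize for the sort.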

Views are supposed to denormalize, so it's only natural for them to
have big row sizes, right? Same for stored procedure parameters. Well,
you can practically fit only three of those 4K UTF8 strings in a view
or in SP parameters. That's pretty arbitrary from the user's point of
view. I'm aware that it's very hard to remove those limitations, so
this rant doesn't have much value for that cause, but at least it will
make some people aware, I guess. I wish I had found this in some FAQ
before.
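The arithmetic behind "only three fit" (my numbers, assuming the usual
64 KB = 65,535-byte record/message limit and 4 bytes per UTF8
character): one VARCHAR(4096) UTF8 column needs 4096 x 4 = 16,384 bytes
plus a 2-byte length word, i.e. 16,386 bytes. Three of them take 49,158
bytes and fit; a fourth would push the total to 65,544 bytes, past the
limit. So a hypothetical procedure like this can be rejected as soon as
you add the fourth parameter:

```sql
-- Hypothetical SP: 3 x 16,386 = 49,158 bytes of parameters -> OK.
-- Uncommenting the fourth (65,544 bytes total) exceeds the
-- 65,535-byte limit, so creation can fail with a record-size error.
CREATE PROCEDURE concat_parts (
    a VARCHAR(4096) CHARACTER SET UTF8,
    b VARCHAR(4096) CHARACTER SET UTF8,
    c VARCHAR(4096) CHARACTER SET UTF8
    -- , d VARCHAR(4096) CHARACTER SET UTF8  -- this one pushes it over
) RETURNS (whole BLOB SUB_TYPE TEXT)
AS
BEGIN
    whole = a || b || c;
    SUSPEND;
END
```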


>
> Blobs exist for two reasons. One is that some data items have no
> upper bound whatsoever. A Word document, for example, can be
> arbitrarily long. The other is that fetching large objects is
> expensive, and if large fields are not required for a statement, not
> fetching them is a huge win.
>
> In 1983, it made sense to make blobs separate types. In 2009, it
> makes more sense to bifurcate at runtime based on a length threshold,
> but with blobs and non-blobs sharing common semantics.
>
> That said, I think the problem may be in the way you are designing
> your application. Tell us about it. Maybe we can make some
> suggestions...
>