Subject: pumping very large records (30 megs!)
Author: spou
Hi to everyone.

I have FB 1.0.3 installed, and I save color images to it that can take
up to 30 megs per record.

At first, the system was designed for single images of about 45K - 65K,
and it worked well (albeit slowly) over a 64K datalink. Users of course
had "great ideas" and now they save multi-page color TIFFs...

The software is made with Delphi 6 + IBO, and it has been working
flawlessly for about 2 years, even with the "great ideas".

The database is now about 69 gigs. Needless to say, backups are
becoming an issue.

We have a secondary site with a 0.5 Mbit datalink. I've decided to
create a backup copy of the database on the server at this site,
which will eventually be kept up to date via replication.

But first, I need to make a copy of the original database to the
secondary site.
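One fallback I've been sketching (the gbak line is from memory, and the chunk size is arbitrary) is to back up locally with gbak, split the backup into pieces small enough to resend if the link drops, and reassemble them at the other end. The split/reassemble part would look something like this, shown here with a 1-meg dummy file standing in for the real backup:

```shell
# On the main server (gbak paths and password are made up):
# gbak -b -user SYSDBA -password masterkey images.gdb images.gbk

# Demonstration of the split/reassemble step with a dummy file:
head -c 1048576 /dev/zero > images.gbk   # stand-in for the real backup
split -b 262144 images.gbk chunk_        # 256K pieces survive a flaky link
cat chunk_* > rebuilt.gbk                # reassemble at the far end
cmp images.gbk rebuilt.gbk && echo "OK"
```

Each 256K piece could then be resent individually after a dropped connection, instead of restarting a 30-meg record from scratch.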

The database also has text tables, which were pumped very quickly.
But when I try to send a record containing an image (remember, 30 or
so megs), I end up with error 10054 (connection reset by peer),
followed by 10061 (connection refused).

Now, I think it is a network-related problem, because I have never had
such errors between the client and the server as long as both were on
the same LAN (10 and 100 Mbits).

What happens goes like this:
- the pump starts sending the record.

- on the remote server, I can see the memory used by ibserver climb
from 3 megs, slowly, in 64K increments, until it reaches about 20 to
30 megs after 10 to 15 minutes (it varies for unknown reasons, maybe
net traffic).

- at this point the memory "eating" stops, and back on the main
server I get the errors I mentioned earlier (10054/10061), and the
transaction stops without having sent the whole record.

I have tried setting a longer connection timeout, changing the dummy
packet interval, and giving more memory cache in the ibconfig file,
but nothing seems to change the end result, and no modification made a
significant difference to the point at which the transfer crashes. I
have tried during night time (when traffic is low to nil) and the same
thing happens, although it seems to last longer before the crash.
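For reference, the ibconfig entries I've been playing with look like this (parameter names as in the stock FB 1.0 ibconfig; the values are just what I tried, not recommendations):

```
# ibconfig excerpts (FB 1.0.x Classic)
CONNECTION_TIMEOUT      600     # seconds before a connect attempt is dropped (default 180)
DUMMY_PACKET_INTERVAL   60      # keep-alive packet interval in seconds
DATABASE_CACHE_PAGES    2048    # page cache per connection under Classic
```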

As I said, the text (small records, maybe 1K each) was pumped very
quickly with no problems at all. I would say it has to do with pushing
a very large record over a (relatively) slow network connection.
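If IBPump itself is the problem, I suppose I could write my own pump that reads and writes the blob in segments, committing per record. The chunking logic itself is trivial; here is a sketch in Python just to show what I mean (no Firebird calls — the actual segment I/O would go through whatever blob-stream interface the components expose):

```python
def chunk(data, size=65536):
    """Split a large blob into fixed-size segments (last one may be shorter)."""
    for offset in range(0, len(data), size):
        yield data[offset:offset + size]

def reassemble(segments):
    """Rebuild the original blob from its segments on the receiving side."""
    return b"".join(segments)

# A 30-meg record would travel as 480 segments of 64K each.
blob = bytes(30 * 1024 * 1024)
segments = list(chunk(blob))
assert reassemble(segments) == blob
print(len(segments))  # 480
```

The point would be that a failure mid-record only costs the segments not yet sent, instead of the whole 30 megs.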

I have tried a test run pumping from the main server to another local
database on the same server, and it works well, transferring a record
in about 12 seconds.

Both servers are W2K, using IBPump (by Clever Components). The
datalink is 0.5 Mbit, FB is 1.0.3 Classic.

If anyone has an idea, some pointers, a URL, anything, it is welcome.
Thanks in advance.

Stephane