Subject Re: [IBO] extreme Blob handling
Author Jason Wharton
The default blob handling IBO performs for you is all done in contiguous
memory. There is a way to get down to the segment level if you want to; I
need to make this more accessible. In short, you would have your own
routine that writes the segments, gets back the BLOB_ID, and puts it into
the column value.
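
Roughly, such a routine would use the isc_create_blob2, isc_put_segment
and isc_close_blob entry points of the InterBase client API. Here is a
minimal sketch of the idea; the exact Delphi declarations depend on the
header unit you link against, and status-vector error checking is left
out, so treat it as an outline rather than finished code:

  function WriteBlobBySegments(var db: TISC_DB_HANDLE;
    var tr: TISC_TR_HANDLE; Buf: PChar; Len: Integer): TISC_QUAD;
  var
    Status: array[0..19] of ISC_STATUS;
    hBlob: TISC_BLOB_HANDLE;
    SegLen: Word;
  begin
    hBlob := nil;
    // create the blob; the server returns the new BLOB_ID into Result
    isc_create_blob2(@Status, @db, @tr, @hBlob, @Result, 0, nil);
    while Len > 0 do
    begin
      // a segment may not exceed 64KB; 32KB is a safe chunk size
      if Len > 32768 then SegLen := 32768 else SegLen := Len;
      isc_put_segment(@Status, @hBlob, SegLen, Buf);
      Inc(Buf, SegLen);
      Dec(Len, SegLen);
    end;
    isc_close_blob(@Status, @hBlob);
  end;

The BLOB_ID it returns is what you would then put into the column value
before executing the statement.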

Let me know if you need some more assistance there. Is your BLOB content
available through a TStream wrapper? If that were the case, what I see as
useful is a blob routine that receives a TStream descendant and gives back
a BLOB_ID. It could also just take the column/parameter and write the
BLOB_ID into it.
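
Something along these lines is what I have in mind. This sketch reuses
the CreateBlobStream call from your function below; the helper name and
the 64KB chunk size are only placeholders:

  procedure StreamToBlob(Source: TStream; Col: TIB_Column);
  var
    BlobStream: TStream;
    Buf: array[0..65535] of Byte;
    BytesRead: Integer;
  begin
    BlobStream := Col.Statement.CreateBlobStream(Col, bsmReadWrite);
    try
      // feed the blob from the stream in bounded chunks rather than
      // one contiguous buffer
      Source.Position := 0;
      repeat
        BytesRead := Source.Read(Buf, SizeOf(Buf));
        if BytesRead > 0 then
          BlobStream.WriteBuffer(Buf, BytesRead);
      until BytesRead = 0;
    finally
      BlobStream.Free;
    end;
  end;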

Regards,
Jason Wharton
CPS - Mesa AZ
http://www.ibobjects.com

-- We may not have it all together --
-- But together we have it all --


----- Original Message -----
From: "Maik Wojcieszak" <mw@...>
To: <IBObjects@yahoogroups.com>
Sent: Thursday, December 05, 2002 2:47 AM
Subject: [IBO] extreme Blob handling


> Hi,
>
> I have a question regarding some effects I am seeing with extreme
> blob handling - large files.
>
> Writing 100MB into the database in 100KB pieces (1000 records)
> takes 40 sec. Writing 100MB in 10MB pieces (10 records) takes
> 75 sec. Why is it significantly slower to write large blobs?
>
> Trying to write a single 100MB blob seems to fail or takes too long,
> and my memory gets allocated completely by IBO.
> Is there a limit on writing such blobs, and/or why can't I write
> the file without using that much memory?
>
> Is there a way to optimize my writing function (below) ?
>
> If somebody is interested, I have written a little tool which measures
> the writing/reading time for the database/filesystem. I don't know if
> I can attach it in this mailing list, but I can send it directly to
> anyone who wants to use it.
>
> Here is my writing function :
> No real file is written; the same buffer is written
> into the blob again and again.
>
>
> // requires Windows (GlobalAlloc etc.), Classes and the IBO units in uses
> function TForm1.CreateDBFile(bufsize, FileSize: Integer): Extended;
> var
>   toWrite : Integer;
>   hMem : Integer;
>   lpMem : PByte;
>   DatStream : TStream;
>   ImageBlob : TIB_Column;
>   c, n1, n2 : TLargeInteger;
> begin
>   Result := 0;
>   hMem := GlobalAlloc(GMEM_MOVEABLE, bufsize);
>   if hMem = 0 then begin
>     ShowMessage('cannot allocate buffer');
>     exit;
>   end;
>
>   lpMem := GlobalLock(hMem);
>   if lpMem = nil then begin
>     GlobalFree(hMem);
>     ShowMessage('cannot lock memory');
>     exit;
>   end;
>
>   QueryPerformanceFrequency(c);
>   QueryPerformanceCounter(n1);
>
>   dsql.SQL.Clear;
>   dsql.SQL.Add('INSERT INTO FILE_BENCHMARK_TAB (FILE_DATA)');
>   dsql.SQL.Add('VALUES (:data)');
>   dsql.Prepare; // prepare before touching the parameters
>   ImageBlob := dsql.ParamByName('data');
>   DatStream := ImageBlob.Statement.CreateBlobStream(ImageBlob,
>     bsmReadWrite);
>
>   // now write the file; note the lpMem^ dereference - passing
>   // lpMem itself would write the pointer, not the buffer
>   toWrite := FileSize;
>   while toWrite >= bufsize do begin
>     DatStream.WriteBuffer(lpMem^, bufsize);
>     toWrite := toWrite - bufsize;
>     ProgressBar2.Position :=
>       Round(((FileSize - toWrite) / FileSize) * 100);
>   end;
>   if toWrite > 0 then
>     DatStream.WriteBuffer(lpMem^, toWrite);
>
>   GlobalUnlock(hMem);
>   GlobalFree(hMem);
>
>   try
>     dsql.ExecSQL;
>     IB_Transaction.Commit;
>   except
>     IB_Transaction.Rollback;
>     ShowMessage('Error writing to database');
>   end;
>
>   DatStream.Free;
>   QueryPerformanceCounter(n2);
>   ProgressBar2.Position := 0;
>   Result := (n2 - n1) / c;
> end;
>
> thanks for any hint
> maik