Subject | extreme Blob handling
---|---
Author | Maik Wojcieszak
Post date | 2002-12-05T09:47:33Z
Hi,

I have a question about some effects I see with extreme Blob
handling, i.e. large files.

Writing 100 MB into the database in 100 KB pieces (1000 records)
takes 40 sec, about 2.5 MB/s. Writing 100 MB in 10 MB pieces
(10 records) takes 75 sec, about 1.3 MB/s. Why is it significantly
slower to write large Blobs? Trying to write a single 100 MB Blob
seems to fail, or at least takes far too long, and all my memory
ends up allocated by IBO.

Is there a limit on writing Blobs of this size, and/or why can't I
write the file without using that much memory? Is there a way to
optimize my writing function (below)?

If somebody is interested, I have written a little tool which
measures the writing/reading time for the database and the
filesystem. I don't know if I can attach it to this mailing list,
but I can send it directly to anyone who wants to use it.
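To give an idea of what the tool measures, the filesystem half of
the comparison could look roughly like this. This is only a sketch,
not the actual tool; the function name, the TFileStream usage and
the chunking details are illustrative:

// Illustrative sketch only - times writing FileSize bytes to a
// plain file in bufsize pieces, mirroring the database test below.
// Assumes "uses Windows, Classes" for QueryPerformanceCounter and
// TFileStream.
function BenchmarkFileWrite(const FileName: string;
  bufsize, FileSize: Integer): Extended;
var
  Buf: PByte;
  FS: TFileStream;
  toWrite, chunk: Integer;
  c, n1, n2: Int64;
begin
  GetMem(Buf, bufsize);               // contents are irrelevant here
  FS := TFileStream.Create(FileName, fmCreate);
  try
    QueryPerformanceFrequency(c);
    QueryPerformanceCounter(n1);
    toWrite := FileSize;
    while toWrite > 0 do begin
      if toWrite < bufsize then
        chunk := toWrite              // last, short piece
      else
        chunk := bufsize;
      FS.WriteBuffer(Buf^, chunk);    // Buf^ passes the data, not the pointer
      Dec(toWrite, chunk);
    end;
    QueryPerformanceCounter(n2);
    Result := (n2 - n1) / c;          // elapsed time in seconds
  finally
    FS.Free;
    FreeMem(Buf);
  end;
end;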
Here is my writing function. Note that no real file is written;
the same buffer is written into the blob again and again.
function TForm1.CreateDBFile(bufsize, FileSize: Integer): Extended;
var
  toWrite: Integer;
  hMem: HGLOBAL;
  lpMem: PByte;
  DatStream: TStream;
  ImageBlob: TIB_Column;
  c, n1, n2: TLargeInteger;
begin
  Result := 0;
  // allocate and lock a reusable buffer; its contents don't matter
  hMem := GlobalAlloc(GMEM_MOVEABLE, bufsize);
  if hMem = 0 then begin
    ShowMessage('cannot allocate buffer');
    Exit;
  end;
  lpMem := GlobalLock(hMem);
  if lpMem = nil then begin
    GlobalFree(hMem);
    ShowMessage('cannot lock memory');
    Exit;
  end;
  QueryPerformanceFrequency(c);
  QueryPerformanceCounter(n1);
  dsql.SQL.Clear;
  dsql.SQL.Add('INSERT INTO FILE_BENCHMARK_TAB (FILE_DATA)');
  dsql.SQL.Add('VALUES (:data)');
  ImageBlob := dsql.ParamByName('data');
  DatStream := ImageBlob.Statement.CreateBlobStream(ImageBlob, bsmReadWrite);
  // now write the "file": the same buffer over and over
  toWrite := FileSize;
  while toWrite >= bufsize do begin
    // note the dereference: WriteBuffer copies from the buffer
    // itself, not from the pointer variable
    DatStream.WriteBuffer(lpMem^, bufsize);
    toWrite := toWrite - bufsize;
    ProgressBar2.Position := Round(((FileSize - toWrite) / FileSize) * 100);
  end;
  if toWrite > 0 then
    DatStream.WriteBuffer(lpMem^, toWrite);
  GlobalUnlock(hMem);
  GlobalFree(hMem);
  try
    dsql.Prepare;
    dsql.ExecSQL;
    IB_Transaction.Commit;
  except
    IB_Transaction.Rollback;
    ShowMessage('Error writing to database');
  end;
  DatStream.Free;
  QueryPerformanceCounter(n2);
  ProgressBar2.Position := 0;
  Result := (n2 - n1) / c;
end;
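One caveat about the function above: if CreateBlobStream or one of
the WriteBuffer calls raises an exception, the GlobalUnlock and
GlobalFree lines are never reached and the buffer leaks. A
try..finally around the buffer's lifetime avoids that. A minimal
sketch of just that pattern, where GetMem/FreeMem replacing
GlobalAlloc/GlobalFree is my own substitution and nothing
IBO-specific:

// Minimal sketch of exception-safe buffer handling; the blob
// stream, write loop, ExecSQL and Commit stay exactly as in the
// original function.
GetMem(lpMem, bufsize);
try
  // ... create the blob stream, run the write loop, ExecSQL,
  // Commit/Rollback as above ...
finally
  FreeMem(lpMem);  // reached even if a write or ExecSQL raises
end;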
Thanks for any hint,
maik