Subject: Re: [IBO] extreme Blob handling
Author: Maik Wojcieszak
Post date: 2002-12-05T16:41:07Z
Jason,
in my current project I have a lot of large files that are available as
streams, so in the real case I do write a stream.
Anything that helps me optimize performance and memory usage will
be helpful.
I think I need some more assistance, because I have no idea where
to get the information I need to write this function.
thanks
maik
On Thu, 5 Dec 2002 08:09:09 -0700, Jason Wharton wrote:
>The default blob handling IBO performs for you is all done in contiguous
>memory. There is a way that you can get down to the segment level if you
>want to. I need to make this more accessible. In short, what you will do is
>have your own routine to write segments and then get back the BLOB_ID and
>put it into the column value.
>
>Let me know if you need some more assistance there. Are your BLOB contents
>available with a TStream wrapper? If that were the case, what I see that
>could be useful is a blob routine that receives a TStream descendant and
>gives back a BLOB_ID. It could also just pass in the column/parameter to
>write the BLOB_ID into.
>
>Regards,
>Jason Wharton
>CPS - Mesa AZ
>http://www.ibobjects.com
>
>-- We may not have it all together --
>-- But together we have it all --
>
>
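Jason's suggestion above (a routine that receives a TStream descendant and writes the blob for a given column) could be sketched roughly as follows. This is only a sketch built from the IBO calls already used in this thread, namely CreateBlobStream with bsmReadWrite and WriteBuffer; the helper name CopyStreamToBlob and the 64 KB chunk size are assumptions, not part of the IBO API:

```pascal
// Hypothetical sketch: copy an arbitrary TStream into a blob column
// in fixed-size chunks, so the whole file never has to sit in one
// contiguous memory block. Names and chunk size are assumptions.
procedure CopyStreamToBlob(Source: TStream; Col: TIB_Column);
const
  ChunkSize = 64 * 1024; // assumed chunk size
var
  Blob: TStream;
  Buf: array[0..ChunkSize - 1] of Byte;
  BytesRead: Integer;
begin
  Blob := Col.Statement.CreateBlobStream(Col, bsmReadWrite);
  try
    Source.Position := 0;
    repeat
      BytesRead := Source.Read(Buf, ChunkSize);
      if BytesRead > 0 then
        Blob.WriteBuffer(Buf, BytesRead); // untyped const: pass the buffer, not a pointer
    until BytesRead < ChunkSize;
  finally
    Blob.Free;
  end;
end;
```

The point of the chunked loop is that only ChunkSize bytes are ever buffered in application memory at once, regardless of the size of the source stream.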
>----- Original Message -----
>From: "Maik Wojcieszak" <mw@...>
>To: <IBObjects@yahoogroups.com>
>Sent: Thursday, December 05, 2002 2:47 AM
>Subject: [IBO] extreme Blob handling
>
>
>> Hi,
>>
>> I have a question regarding some effects of extreme
>> Blob handling with large files.
>>
>> Writing 100MB into the database in 100KB pieces (1000 records)
>> takes 40 sec. Writing 100MB in 10MB pieces (10 records) takes
>> 75 sec. Why is it significantly slower to write large Blobs?
>>
>> Trying to write a 100MB Blob seems to fail or takes too long.
>> Also, my memory gets completely allocated by IBO.
>> Is there a limit on writing such blobs, and/or why can't I write
>> the file without using that much memory?
>>
>> Is there a way to optimize my writing function (below) ?
>>
>> If somebody is interested, I have written a little tool which measures
>> the writing/reading time for the database/filesystem. I don't know if
>> I can attach it to this mailing list, but I can send it directly to
>> anyone who wants to use it.
>>
>> Here is my writing function.
>> No real file is written; the same buffer is written
>> into the blob again and again.
>>
>>
>> function TForm1.CreateDBFile(bufsize, FileSize: Integer): Extended;
>> var
>>   toWrite: Integer;
>>   hMem: Integer;
>>   lpMem: PByte;
>>   DatStream: TStream;
>>   ImageBlob: TIB_Column;
>>   c, n1, n2: TLargeInteger;
>> begin
>>   Result := 0;
>>   hMem := GlobalAlloc(GMEM_MOVEABLE, bufsize);
>>   if hMem = 0 then begin
>>     ShowMessage('cannot allocate buffer');
>>     Exit;
>>   end;
>>
>>   lpMem := GlobalLock(hMem);
>>   if lpMem = nil then begin
>>     GlobalFree(hMem);
>>     ShowMessage('cannot lock memory');
>>     Exit;
>>   end;
>>
>>   QueryPerformanceFrequency(c);
>>   QueryPerformanceCounter(n1);
>>
>>   dsql.SQL.Clear;
>>   dsql.SQL.Add('INSERT INTO FILE_BENCHMARK_TAB (FILE_DATA)');
>>   dsql.SQL.Add('VALUES (:data)');
>>   ImageBlob := dsql.ParamByName('data');
>>   DatStream := ImageBlob.Statement.CreateBlobStream(ImageBlob, bsmReadWrite);
>>
>>   // now write the "file"; note lpMem^ -- WriteBuffer takes an untyped
>>   // const buffer, so passing lpMem itself would write the pointer
>>   // value over and over instead of the buffer contents
>>   toWrite := FileSize;
>>   while toWrite >= bufsize do begin
>>     DatStream.WriteBuffer(lpMem^, bufsize);
>>     toWrite := toWrite - bufsize;
>>     ProgressBar2.Position := Round(((FileSize - toWrite) / FileSize) * 100);
>>   end;
>>   if toWrite > 0 then
>>     DatStream.WriteBuffer(lpMem^, toWrite);
>>
>>   GlobalUnlock(hMem);
>>   GlobalFree(hMem);
>>
>>   try
>>     dsql.Prepare;
>>     dsql.ExecSQL;
>>     IB_Transaction.Commit;
>>   except
>>     IB_Transaction.Rollback;
>>     ShowMessage('Error writing to database');
>>   end;
>>
>>   DatStream.Free;
>>   QueryPerformanceCounter(n2);
>>   ProgressBar2.Position := 0;
>>   Result := (n2 - n1) / c;
>> end;
>>
>> thanks for any hints
>> maik
>
>
>
>
>___________________________________________________________________________
>IB Objects - direct, complete, custom connectivity to Firebird or InterBase
> without the need for BDE, ODBC or any other layer.
>___________________________________________________________________________
>http://www.ibobjects.com - your IBO community resource for Tech Info papers,
>keyword-searchable FAQ, community code contributions and more !
>