Subject: Re: [firebird-support] Blob write to Embedded on Linux performance
Author: Mike Ro


On 06/06/14 11:04, Frank Schlottmann-Gödde frank@... [firebird-support] wrote:
 

Ok, this is what I get for a 13MB file on an Intel NUC (Celeron);
the database and home directory are on a USB drive, so not really good hardware.

SQL> set stat;
SQL> set time;
SQL> select b_loadfromfile('/home/frank/w.mp3') from rdb$database;

B_LOADFROMFILE
=================
0:1
==============================================================================
B_LOADFROMFILE:
BLOB display set to subtype 1. This BLOB: subtype = 0
==============================================================================

Current memory = 37822224
Delta memory = 416616
Max memory = 37899608
Elapsed time= 9.423 sec
Cpu = 0.000 sec
Buffers = 2048
Reads = 0
Writes = 830
Fetches = 1767

Frank, thank you for taking the time to do this. It confirms that there is definitely something wrong with my setup (are you using Firebird version 2.5.2?).

I am seeing 45 seconds for a 13MB file using C++ / IBPP and 48 seconds using PHP.

Exactly the same code, database structure and hardware on Windows inserts a 13MB BLOB in just 2.4 seconds!

I could send you my UDF if you want to test it.


That would be great, thank you! Mine is basically the example (from examples/udf/udflib.c), but using BLOBCALLBACK from ibase.h instead of BLOB.
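
To give an idea of what I mean, a minimal sketch of that kind of UDF could look something like this (simplified and untested, not my exact code; the segment size, error handling and return value are just placeholders, built against the BLOBCALLBACK struct in ibase.h):

#include <stdio.h>
#include "ibase.h"

/* Entry point matching a declaration like:
     declare external function f_insertblob cstring(20), blob
       returns int by value entry_point 'insertblob' module_name 'test_udf.so';
   Streams the named file into the blob parameter one segment at a time. */
int insertblob(const char* filename, BLOBCALLBACK blob)
{
    ISC_UCHAR buffer[16384];   /* 16 KB per segment, placeholder size */
    size_t n;
    long total = 0;
    FILE* f;

    if (!filename || !blob || !blob->blob_handle)
        return -1;

    f = fopen(filename, "rb");
    if (!f)
        return -1;

    /* append each chunk of the file as a blob segment */
    while ((n = fread(buffer, 1, sizeof(buffer), f)) > 0)
    {
        blob->blob_put_segment(blob->blob_handle, buffer, (ISC_USHORT)n);
        total += (long)n;
    }

    fclose(f);
    return (int)total;         /* bytes written */
}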

Could you also share your table definition, especially any indexes or constraints you have defined, as I wonder whether they might be causing the slowdown? Also, do you have anything special in your firebird.conf?

I am creating a brand new database for each test like this:

create database 'udftest.fdb' user 'sysdba' password 'masterkey';

declare external function f_insertblob cstring(20), blob
    returns int by value entry_point 'insertblob' module_name 'test_udf.so';

create table bloby (id int, data blob); commit;

insert into bloby (id, data) values (0, 'hello');  commit;

select f_insertblob('BIGFILE.MP3', data) from bloby where id = 0;

select * from bloby;

Thanks once again, Mike.