Subject: Windows vs Linux server performance
Author: David Suárez
Hi all,



Maybe somebody has experienced similar behavior and can shed some light.




While doing some tests with BLOBs, we found an interesting performance
difference between Linux and Windows servers.



Given a simple table with a primary key and a blob field:



CREATE TABLE bindata (
  iobjid BIGINT NOT NULL,
  bdata BLOB SUB_TYPE 0 SEGMENT SIZE 1);

ALTER TABLE bindata ADD CONSTRAINT PK_bindata PRIMARY KEY (iobjid);





This table holds approx. 18 MB of blob data, split across 1100 records. Some
records are 1 MB, others just a few hundred bytes. We ran the isql utility to
select this data locally on both the Linux and the Windows server. In both
cases, output.txt contains the expected data.



time /opt/firebird/bin/isql -u SYSDBA -p masterkey /data/testblob/01.fdb -i inputcommands.txt > output.txt
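To see whether the slowdown sits in the TCP layer rather than in the engine
itself, we also plan to compare connection paths (a sketch, assuming the
default port 3050 and the same credentials; "linuxbox" below is a
hypothetical host name for the Linux server):

  # Explicit TCP loopback on the Linux box. As far as we know, a plain
  # local path with SuperServer on Linux already goes over loopback,
  # so times should come out similar:
  time /opt/firebird/bin/isql -u SYSDBA -p masterkey localhost:/data/testblob/01.fdb -i inputcommands.txt > output_tcp.txt

  # Same select run from the Windows machine against the Linux server,
  # to take the Linux loopback out of the picture:
  isql -u SYSDBA -p masterkey linuxbox:/data/testblob/01.fdb -i inputcommands.txt > output_remote.txt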



inputcommands.txt contains:



SET BLOB ALL;
SELECT * FROM bindata;
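For comparison, dropping the SET BLOB line should make isql print only the
blob IDs instead of fetching the contents, which would time the plain row
fetches on their own (a sketch, assuming isql's default blob display
behavior; the file name is hypothetical):

  /* inputcommands_noblob.txt -- variant without SET BLOB ALL, so blob
     contents are not transferred and only their IDs are shown */
  SELECT * FROM bindata;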



On Linux (Fedora Core 4, local SuperServer 1.5.2 + libstdc++5 compat RPMs):




real    1m37.937s
user    0m0.144s
sys     0m0.211s



The same test with a local Windows Firebird server (1.5.2 SuperServer):



elapsed time: 2.651 s



Both machines are Xeon 3200 boxes with 1 GB RAM.



Conclusion: 2.6 s on Windows vs. 97.9 s on Linux. Has anybody found something
similar? I believe there must be some configuration issue on the Linux side,
as the problem also shows up when inserting blob contents.
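One setting we still want to check on the Linux side is TcpNoNagle in
/opt/firebird/firebird.conf. If our local connections really do go over the
TCP loopback, Nagle's algorithm could explain this kind of per-segment stall
when pushing many small blob segments (a guess on our part, not a confirmed
diagnosis):

  # /opt/firebird/firebird.conf
  # Sets TCP_NODELAY on sockets; commented out (off) by default in 1.5
  TcpNoNagle = 1

The server would need a restart for the change to take effect.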



Kind regards



David


