Subject: Re: [firebird-support] What structure would fit better?
Author: Helen Borrie
Post date: 2011-12-18T22:14:46Z
At 12:50 AM 19/12/2011, you wrote:
>I have 3000 symbols and each symbol will have a blob data every day.

Use a relational structure - do not combine meaningful items of data into one column and make spreadsheets in a relational database. Also, though it seems unrelated to the current question, there is no need to have a separate database for every year.
>In one year, there will be 3000*365 blob records.
>And I will insert these records per yearly database. ex)db_2011.fdb
>
>I am thinking of creating a table having (symbol_date, blob) structure.
>So, the records will look like
>(s0001_20110101, b0001_20110101)
>...
>(s0001_20111231, b0001_20111231)
>....
>....
>(s3000_20110101, b0001_20110101)
>...
>(s3000_20111231, b0001_20111231)
>
>And my main most frequent query will be to find a record corresponding to a symbol and a specific date. ex) (s_1004, b1004_20110728)
>
>Also main everyday insert will be to insert today's new record for each symbol. That is 3000 inserts once everyday.
>And everyday I will have to find the record having the most recent date.
>
>Two possible structures.
>
>[1] (Symbol, Date, Blob) => Symbol primary and Date is secondary key
>[2] (Symbol_Date, Blob) => Symbol_Date is single primary key
>
>At the moment I was experimenting with [2] simply because of easiness.
>
>What structure would fit better in terms of performance and ease of use? And why?
For example:
create sequence seq_aTable;
create table aTable (
pKey BigInt not null,
symbol char(10) not null,
s_date date not null,
a_blob blob sub_type 1,
constraint pk_aTable primary key (pKey));
commit;
create unique index ix1u_aTable on aTable (symbol, s_date);
commit;
set term ^;
create trigger bi_aTable for aTable
active before insert position 0
as
begin
  if (new.pKey is null) then
    new.pKey = next value for seq_aTable;
end ^
commit ^
set term ;^
For loading the records for a year, I suggest using a stored procedure. Come back here and ask if you are interested. I can't suggest anything specific without knowing where you get these symbol values from, but it will be a loop involving:
insert into aTable (symbol, s_date)
values (?, ?);
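As a very rough sketch only - assuming, purely for illustration, that the symbol codes live in a table called SYMBOLS with a column SYM_CODE (names your post doesn't give us) - such a procedure could look something like this:

set term ^;
create procedure load_year (
  from_date date,
  to_date date)
as
  declare variable v_symbol char(10);
  declare variable v_date date;
begin
  /* SYMBOLS and SYM_CODE are hypothetical names - substitute
     whatever actually holds your 3000 symbol codes */
  for select sym_code from symbols into :v_symbol do
  begin
    v_date = from_date;
    while (v_date <= to_date) do
    begin
      insert into aTable (symbol, s_date)
      values (:v_symbol, :v_date);
      /* adding an integer to a date advances it by that many days */
      v_date = v_date + 1;
    end
  end
end ^
commit ^
set term ;^

...and call it once per year, e.g. execute procedure load_year ('2011-01-01', '2011-12-31');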
Then, for loading up the blob for the current date:
update aTable
set a_Blob = ?
where s_date = CURRENT_DATE
and symbol = ?
Again, how you actually go about that depends on where the symbols and blob data are coming from.
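As an aside - and only if your server is Firebird 2.1 or higher, which your post doesn't say - the daily insert-then-update could be collapsed into a single UPDATE OR INSERT statement matched on the (symbol, s_date) pair:

update or insert into aTable (symbol, s_date, a_blob)
values (?, CURRENT_DATE, ?)
matching (symbol, s_date);

If no row exists yet for that symbol and date, one is inserted (and the trigger supplies pKey); otherwise the existing row's blob is replaced.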
For retrieving the "latest committed" blob data for a single symbol:
select a.a_Blob from aTable a
where
a.symbol = ?
and a.s_date =
(select max(b.s_date) from aTable b
where b.a_blob is not null
and b.symbol = a.symbol
and (extract (year from b.s_date) = extract(year from CURRENT_DATE)) )
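For the query you describe as your most frequent one - the blob for one symbol on one specific date - a simple parameterised select is all that is needed, and it can use the unique (symbol, s_date) index created above:

select a_Blob from aTable
where symbol = ?
and s_date = ?;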
...and so on.
./heLen