| Subject | Re: [firebird-support] Re: Do FB has any thumbrule for Nume of coloums ??? |
|---|---|
| Author | Svein Erling Tysvaer |
| Post date | 2007-03-12T10:14:57Z |
I have used tables with more than 1000 columns. Not for ordinary use,
but when people request data to be used with a statistical package that
doesn't support relational data, I have transformed relational data into
a 'flat Firebird table' with repetitive groups and lots of nulls and
then done the export. As for data that are for regular use with
Firebird, I guess I'm not used to tables with more than 100 fields.
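Roughly, such a flattening looks like this (the table and column names are just invented for the sake of the example):

  CREATE TABLE FLAT_EXPORT (
    PERSON_ID INTEGER NOT NULL,
    /* one group of columns per repetition - hence the width and all the NULLs */
    VALUE_1   NUMERIC(15,2),
    VALUE_2   NUMERIC(15,2),
    VALUE_3   NUMERIC(15,2)
  );

  INSERT INTO FLAT_EXPORT (PERSON_ID, VALUE_1, VALUE_2, VALUE_3)
  SELECT p.PERSON_ID, d1.VAL, d2.VAL, d3.VAL
  FROM PERSON p
  LEFT JOIN DETAIL d1 ON d1.PERSON_ID = p.PERSON_ID AND d1.SEQ_NO = 1
  LEFT JOIN DETAIL d2 ON d2.PERSON_ID = p.PERSON_ID AND d2.SEQ_NO = 2
  LEFT JOIN DETAIL d3 ON d3.PERSON_ID = p.PERSON_ID AND d3.SEQ_NO = 3;

With hundreds of columns that SELECT naturally gets very long.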
I don't like wide, flat tables, but I haven't had many big problems with
them Firebird-wise. I have had problems with Firebird queries bigger than
32 KB (INSERT INTO <tableA>(<columns>) SELECT <columns> FROM
<tableB> JOIN <tableB>... occasionally gets bigger than that), but I
don't know whether that is a limitation of IB_SQL, DB Workbench or
Firebird itself, since I was near that limit and easily solved it by
shortening the field names. I've also had problems with overly complex
queries, but you haven't indicated any such problems.
If you run into problems with very wide tables, it may be more difficult
to find people that can help you out since most people avoid them.
Firebird is a relational database system, and as a database developer,
you should insist on the relational bit being taken care of. If there
really are 1000 values associated with each record that aren't
determined by other values and aren't repetitive, and those 1000 fields
would be relevant for many records, then I think 1000 fields are OK.
Though that will not normally be the case. From what you wrote, 11
columns in a separate table sounds more appropriate: a primary key for
the table, a foreign key to the salary record and then nine other fields.
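In Firebird DDL that could look something like this (table and column names are invented for the sake of the example, and you will of course also want something saying which component each row holds):

  CREATE TABLE SALARY_DETAIL (
    DETAIL_ID      INTEGER NOT NULL PRIMARY KEY,                     /* the primary key */
    SALARY_ID      INTEGER NOT NULL REFERENCES SALARY (SALARY_ID),   /* FK to the salary record */
    COMPONENT_ID   INTEGER NOT NULL,    /* which user-definable component this row is for */
    ORIGINAL_VALUE NUMERIC(15,2),
    LOSS_OF_PAY    NUMERIC(15,2),
    ARREARS        NUMERIC(15,2),
    NET_VALUE      NUMERIC(15,2)
    /* ...plus the remaining value columns, nine in all */
  );

One row per staff, month and component keeps the table narrow no matter how many components the users define.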
Though JOINing (particularly if OUTER JOINing) 100 times to a table
doesn't sound optimal either - test whether Firebird handles that if it
is a likely scenario for a report (I think I experienced the plan being
reported only partially when selecting from about 20 aliases with a
further 20 NOT EXISTS references, though that was Firebird 1.5).
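For a report that wants the components side by side, that kind of query would look roughly like this (one join per component, names again invented):

  SELECT s.STAFF_ID,
         d1.NET_VALUE AS BASIC_NET,
         d2.NET_VALUE AS HRA_NET
         /* ...one group of columns and one LEFT JOIN per component, up to 100 of them */
  FROM SALARY s
  LEFT JOIN SALARY_DETAIL d1
         ON d1.SALARY_ID = s.SALARY_ID AND d1.COMPONENT_ID = 1
  LEFT JOIN SALARY_DETAIL d2
         ON d2.SALARY_ID = s.SALARY_ID AND d2.COMPONENT_ID = 2;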
It is also possible to settle for something in between.
I think speed problems are better solved by showing the query and the
plan, and mentioning the selectivity of the various indexes, on this
list. Add some additional information about the involved tables and you
are likely to get answers that help you out if possible (sometimes the
demands are impossible to fulfil).
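If it helps, this is roughly how to get at that information in isql (the table name is only an example; other tools have their own ways of showing the plan):

  SET PLAN ON;
  /* run the slow query here and isql prints the plan Firebird chose */

  /* the stored selectivity of each index, as used by the optimizer */
  SELECT RDB$INDEX_NAME, RDB$STATISTICS
  FROM RDB$INDICES
  WHERE RDB$RELATION_NAME = 'SALARY_DETAIL';

  /* SET STATISTICS INDEX <index name> recalculates the selectivity */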
HTH,
Set
niegil_firebirddev wrote:
> Hi All,
>
> Thanks for the replies. We had similar doubts and that's why this was
> posted.
>
> Ours is a payroll application and this table stores the processed
> details of all months. In our design, there are user-definable salary
> components. There can typically be 20 to 100 components (including
> statutory and intermediate components). Now there are 9 values attached
> to each component (original, loss of pay, arrears, net value etc.).
> So, in the worst case, there can be 900 values against each staff
> member (I put 2000 to 3000 on the higher side). The table also contains
> a few other fields.
>
> In our existing design, we store the info staff- and component-wise.
> There is no problem except that certain reports take a lot of time (as
> the info has to be unpacked every time). I feel the most natural way to
> store this info is to store the complete data staff-wise, but then
> you will have more columns.
>
> What we wanted to know is, when a database system supports unlimited
> columns, are there any problems related to that?
>
> With regards,
> Niegil