Subject | Re: Firebird 2.0 Indexing
---|---
Author | Svein Erling Tysvær |
Post date | 2005-05-30T08:00:45Z |
Is the database that you're converting from a desktop database (like
Access or Paradox)? They work nicely on whole tables, as opposed to
the result sets that Firebird and other client/server databases
prefer. Firebird
requires a different mindset than desktop databases, and until it
clicks, you'll be fighting Firebird rather than enjoying it. I even
enjoy the "indexing system" of Firebird as it is today!
For anything but data transfer, I cannot understand why anyone would
want to run a statement without a where clause on a million-row table
(well, possibly if there were some kind of grouping reducing the
returned set to a few rows). Myself, I work with medium-sized tables,
up to about 10 million rows. Occasionally, I request 10,000 rows, but
that is only when someone will actually go through and examine every
one of those rows. Without exception, I only request larger result
sets for people who want to do further processing themselves in other
database or statistical systems.
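To illustrate, this is the kind of statement I have in mind (a sketch
only - store_no and mytable are from your example, while sales_date is
a made-up column for the sake of illustration):

   -- The WHERE clause can use an index, and the GROUP BY returns
   -- one row per store instead of one row per record.
   SELECT store_no, COUNT(*)
   FROM mytable
   WHERE sales_date >= '2005-01-01'  -- hypothetical column
   GROUP BY store_no;

That way only a handful of rows ever cross the wire to the client.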
Besides, three and a half minutes to return rows that people will
spend months examining isn't too long, in my honest opinion. With 2.1
million rows, that means 9900 rows per second! And I doubt the
developers will be tempted to remove the multigenerational
architecture to make it easier for people not wanting to adapt to a
client/server mindset.
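That said, if store_no only takes a handful of distinct values, there
is a well-known trick to get the index to help with your DISTINCT:
probe each candidate value through the index instead of scanning the
table. A sketch, assuming a hypothetical lookup table stores with one
row per store:

   -- Each EXISTS probe is a single lookup in the index on
   -- mytable.store_no, so the query reads a few index pages
   -- instead of visiting all 2.1 million rows.
   SELECT s.store_no
   FROM stores s
   WHERE EXISTS (SELECT 1 FROM mytable m
                 WHERE m.store_no = s.store_no);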
Sorry if the above sounds a bit harsh,
Set
--- In firebird-support@yahoogroups.com, "buppcpp" wrote:
> Were the changes to the indexes in 2.0 supposed to provide more than
> larger indexes?
>
> The reason I'm asking is because some of my queries still run just
> as slow in 2.0 as they did in 1.5.2.
>
> Ex. SELECT distinct store_no from mytable; (Does a table scan)
>
> mytable has 2.1 million records with an index on store_no.
> It takes 3.32 minutes to run.
>
> On the database I want to convert from, it takes 15 secs. (Data is
> exactly the same between the two databases)
>
> SELECT * from mytable where store_no > 2; (Uses store_no index)
>
> It takes 3.27 minutes
> On the database I want to convert from, it takes 1.57 minutes.
>
> This is a freshly restored DB.
>
> Will these types of indexing issues be taken care of in 2.0? If not,
> when?
>
> I for one would prefer to have the indexing system fixed in 2.0,
> because I think we may need to sit on it for a while, while 3.0 gets
> all of its bugs knocked out. Plus, 3.0 will have the same
> problems, if it's not taken care of now, but with the overhead of
> SMP code (for 1 CPU systems).