Subject: Re: hadoopdb parallel database
Author paulruizendaal

Perhaps it is more interesting to have a look at ScimoreDB's approach to scaling out.

The way I see it, it takes a classical RDBMS and automates most of the work for a sharding-type cluster, and it claims to scale pretty well (as long as the application doesn't have hot spots, the claimed scaling sounds credible). The query network is organised as a symmetric binary tree, but the tree is arranged differently over the available machines for each connection.
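To make the idea concrete, here is a minimal sketch (not ScimoreDB's actual code; the function names and hashing scheme are my own assumptions) of the two ingredients: a stable hash mapping a sharding key to a node, and a per-connection shuffle that arranges the same nodes into a different balanced binary aggregation tree each time, so no single machine is always the root.

```python
import hashlib
import random

def shard_for(key, num_shards):
    """Map a sharding key to a shard with a stable hash (illustrative only)."""
    h = int(hashlib.sha256(str(key).encode()).hexdigest(), 16)
    return h % num_shards

def build_tree(nodes, seed):
    """Arrange the nodes into a balanced binary aggregation tree.

    Shuffling with a per-connection seed spreads the root and inner
    aggregation roles across machines from one connection to the next.
    """
    rng = random.Random(seed)
    layer = list(nodes)
    rng.shuffle(layer)
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            nxt.append((layer[i], layer[i + 1]))  # inner node pairing two subtrees
        if len(layer) % 2:
            nxt.append(layer[-1])  # odd node carried up a level
        layer = nxt
    return layer[0]
```

Two connections with different seeds get differently shaped trees over the same machines, which is what balances the aggregation load.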

Now, there is plenty wrong with this design (HA is hard, live reconfiguration is hard, rebalancing the data when a new sharding key is needed is hard, to name a few), but it can serve a whole range of scenarios.

I'm not sure the blr API still exists in FB head, but if it does, the Scimore design could be replicated by taking the SQL compiler and optimiser, breaking them out into a separate binary, enhancing that binary to do the automated sharding, replication, etc., and sending the appropriate blr to each node in the tree.
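The coordinator role described above can be sketched roughly as follows. This is a hypothetical shape, not Firebird code: `node_execute` stands in for whatever call would ship the compiled blr to one node and collect its rows, and the merge step is a plain concatenation (a real optimiser would also push down aggregation along the tree).

```python
from concurrent.futures import ThreadPoolExecutor

class Coordinator:
    """Hypothetical front-end binary: compile/optimise a statement once,
    fan the per-shard plan out to every node, merge the partial results."""

    def __init__(self, nodes, node_execute):
        self.nodes = nodes            # shard node identifiers
        self.node_execute = node_execute  # callback: (node, plan) -> rows

    def query(self, plan):
        # Send the same compiled plan to every shard in parallel.
        with ThreadPoolExecutor(max_workers=len(self.nodes)) as ex:
            partials = list(ex.map(lambda n: self.node_execute(n, plan),
                                   self.nodes))
        # Trivial merge: union the per-shard row sets.
        merged = []
        for rows in partials:
            merged.extend(rows)
        return merged
```

For example, with three nodes and a stub `node_execute` that tags rows with the node name, `query("scan t")` returns one merged list containing every shard's contribution.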

The other weak point in the Scimore approach is that each node runs a classical engine, with fat connections, no distinction between short and long transactions, disk-based durability, etc. Moving to a more specialised engine at each node could make the design 10x faster than the numbers Scimore reports.


--- In, marius adrian popa <mapopa@...> wrote:
> I will try to see if it can be used with firebird nodes