Subject: Re: [IB-Architect] Opening remote databases
Author: Reed Mideke
Post date: 2000-08-26T04:21Z
Maybe I can summarize the options:
1) Server accessing a db file on a local hard disk
- What we have today
2) ONE server accessing a db file on a remote disk
- Currently prohibited, for the user's safety (because we can't
ensure that there is only one server accessing it).
- There may also be problems if the filesystem doesn't support
some operation (locking, memory mapping) that IB expects to work.
I don't expect this to be a problem, but it might be.
* Possible value using a fileserver that is fast but unable to
run a server process, or where a large amount of disk space is
required but only available on a network machine. (You could even
spread your database files over several different machines if
you were really strapped for space.)
* Possibly not too hard to implement, because to be safe, you only
have to know >if< there is another server using the file, not what
it's doing. Easier for super-server than classic. (See the sketch
after this list.)
* Should be both easy and potentially useful for read-only
databases. (But only if the DB is read-only for ALL servers,
including the one to which it is local. I suspect that other
servers would get confused if they thought the DB was read-only
and someone else was changing it out from under them.)
3) Multiple servers accessing a db file on a remote disk
- Requires major work (a network lock manager).
One would expect a network lock manager to be much slower
than one confined to a single machine.
* Possible value emulating a Paradox-type system ;-(
* Possible value in load balancing, but I suspect that
the fileserver or central lock manager would become your
bottleneck.
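
To illustrate the "know >if< there is another server using the file"
check from option 2, here is a minimal sketch using POSIX advisory
record locking (fcntl). This is not how IB actually guards its files;
it is just one way the check could work, and it only helps if every
server cooperates and the (network) filesystem actually supports the
locks:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Try to take an exclusive advisory lock on the whole database
       file. If another cooperating server already holds a lock,
       F_SETLK fails and we refuse to attach. Note this only tells
       us >if< someone else has the file, not what they are doing. */
    int try_exclusive_attach(const char *db_path)
    {
        int fd = open(db_path, O_RDWR);
        if (fd < 0) {
            perror("open");
            return -1;
        }
        struct flock fl = {0};
        fl.l_type   = F_WRLCK;   /* exclusive lock */
        fl.l_whence = SEEK_SET;
        fl.l_start  = 0;
        fl.l_len    = 0;         /* 0 means "the entire file" */
        if (fcntl(fd, F_SETLK, &fl) < 0) {
            fprintf(stderr, "%s appears to be in use\n", db_path);
            close(fd);
            return -1;
        }
        return fd;               /* lock is held until fd is closed */
    }

For the read-only case you could request F_RDLCK instead, so any
number of read-only servers can attach while writers stay excluded;
but as noted above, that is only safe if the server local to the
file treats it as read-only too.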
In the context above, a 'server' is either a single super-server,
running on one machine, or some number of classic processes and
a lock manager, all on one machine.
Another note: the database file's On-Disk Structure (ODS)
is platform-specific, so any database on a remote system
would only be usable from one platform (or possibly from all
platforms with the same byte order, alignment, and word
size).
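
A minimal illustration of the byte-order half of that problem (the
field layout and value here are made up, not the real ODS): a 32-bit
number written by a little-endian server reads back scrambled when a
big-endian machine copies the bytes naively:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Hypothetical 4-byte field as written by a little-endian
           server: the value 0x00000102 is stored as 02 01 00 00. */
        const unsigned char on_disk[4] = {0x02, 0x01, 0x00, 0x00};

        /* Naive read: copy the bytes into a native integer. The
           result depends on the host's byte order. */
        uint32_t naive;
        memcpy(&naive, on_disk, sizeof naive);
        printf("naive:    0x%08x\n", (unsigned)naive);
        /* little-endian host: 0x00000102; big-endian: 0x02010000 */

        /* Portable read: assemble the value byte by byte. */
        uint32_t portable = (uint32_t)on_disk[0]
                          | (uint32_t)on_disk[1] << 8
                          | (uint32_t)on_disk[2] << 16
                          | (uint32_t)on_disk[3] << 24;
        printf("portable: 0x%08x\n", (unsigned)portable); /* 0x00000102 */
        return 0;
    }

Alignment and word size cause the same kind of trouble: structures
laid out by one compiler need not match the padding another platform
expects.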
Phil Shrimpton wrote:
> I can see a use for failover situations, where if an IBServer goes
> down, the client apps can connect to another one 'instantly'. This
> will only require 'single server' access to the database file(s).

It seems to me that this would add another point of failure, and so
would not be of much value. Suppose you have the configuration:
IB SERVER MACHINE 1 ---+
                       +--- FILE SERVER
IB SERVER MACHINE 2 ---+
and your clients normally connect to server 1.
If server 1 dies, then clients can connect to server 2, which will
have the database immediately available. But if the file server dies,
you are still SOL. Instead of having three machines, why not spend
the money on one nice redundant, hot-swappable disk array and two
processor / power supply / RAM packages?
That way, if a disk goes bad, you can replace it, and if another
component goes bad, you just move the disks to the other system.
> The other reason I can see is for load balancing. A lot of people,
> including myself, have a number of servers running for load
> balancing reasons, but currently this requires each server to have
> replicated database file(s) and all the 'issues' that go with that.
> If the database file(s) could be on a separate machine, it would
> make life a lot easier and would not require replication
> functionality. This method would require multiple server access to
> the database file(s).

As I suggested above, I suspect that the overhead of synchronising
all the machines accessing the DB would prevent this from being
very useful in most situations. (I could be wrong, I'm no expert
on this sort of thing, but I do remember that the lock manager
is already often a bottleneck.)
By using replication, you avoid this low-level synchronization,
or rather, make it less time critical and more complex ;-).
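
To make the "slower" claim concrete, here is a hypothetical sketch
(the names and wire format are invented for illustration, not taken
from InterBase) of what acquiring a lock looks like once the lock
manager lives on another machine. Even with zero contention, every
acquisition costs a full network round trip, where a single-machine
lock manager pays roughly one shared-memory operation:

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Invented wire format for a lock request. */
    struct lock_request {
        uint32_t page_no;   /* resource to lock */
        uint8_t  mode;      /* 0 = shared, 1 = exclusive */
    };

    /* Ask a remote lock manager (already-connected TCP socket) for
       a lock. The blocking recv() is the point: the caller stalls
       for a wire round trip on every single acquisition. */
    int acquire_remote_lock(int sock, uint32_t page_no, uint8_t mode)
    {
        struct lock_request req = { page_no, mode };
        uint8_t granted = 0;

        if (send(sock, &req, sizeof req, 0) != (ssize_t)sizeof req)
            return -1;
        if (recv(sock, &granted, sizeof granted, MSG_WAITALL) != 1)
            return -1;
        return granted ? 0 : -1;
    }

(A real protocol would also need byte-order handling, timeouts, and
deadlock detection, all of which make the round trip longer, not
shorter.)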
> Can't think of any more off hand.
>
> Cheers
>
> Phil

--
Reed Mideke rfm(at)cruzers.com
If that doesn't work: rfm(at)portalofevil.com
InterBase build instructions: www.cruzers.com/~rfm