Subject | Re: [IB-Architect] SOME SOLUTIONS TO OLD PROBLEMS |
---|---|
Author | dcalford |
Post date | 2000-06-20T03:21:36Z |
Ok,
Since I started this thread, I gave a small background on hardware selection
for the servers. I also described how I would configure the drive geometry (I
forgot to mention that it is important to have an extra drive or two for data
manipulation during maintenance). All of this is towards the idea of
building a 24x7x365 system.
I then went into some details about the definition of a particular style of
surrogate key to uniquely identify any record across multiple servers and
gdb's. This was to help the developer envision how many different gdb's
on different servers can operate as a single database.
I further went into details about how I break up my database into different
types of gdb's for particular needs (core/log/historical).
Now, we need to go into the next step - how the log that is inside of each
database is moved into the log files, and then put to use.
I must begin to explain the different tools that go into the system to make it
work well.
(You can get away with shortcuts, but every tool I am describing here was
built because of a problem we had to solve.)
First problem, and the tool that solves it...
We have multiple machines, each with their own IP address, each with a
unique path to a particular database.
At any time, a server can be taken down for maintenance, or a new file that
has become ready may be made active via a backup/restore routine. It is very
difficult to hard-code this information into a BDE alias or IB_Connection
style component. You also have to consider that the client machines
connecting to the various gdb's may be at a remote site that is hours away
from anyone who can change their configuration.
The net came up with a tool to solve a similar problem: a DNS server takes a
name-lookup query and returns the corresponding IP address. Since we need
more than a simple IP address, we just extend the model a little and
customize it to our needs.
First, in our network's DNS server we add a few entries that map multiple
IPs to a single host name.
Let's call the host name GDBCONTROL.MYDOMAIN.COM, and let's say it has a
series of IP addresses (192.168.10.2 ... 192.168.10.6) that correspond to
the machines on our network that will be running our new GDBCONTROL daemon.
What exactly is the GDBCONTROL daemon you ask? Well, it is the extended DNS
server I was talking about.
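In BIND-style zone-file syntax, such a setup might look like the sketch below (purely illustrative; the addresses come from the example above, and whether the server rotates the answers depends on its round-robin configuration):

```
; Hypothetical zone-file fragment: one name, several A records.
; A round-robin-capable DNS server will rotate the order of the answers.
gdbcontrol    IN  A  192.168.10.2
gdbcontrol    IN  A  192.168.10.3
gdbcontrol    IN  A  192.168.10.4
gdbcontrol    IN  A  192.168.10.5
gdbcontrol    IN  A  192.168.10.6
```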
When a client machine wants to connect to the system, it first does a DNS
lookup for 'GDBCONTROL.MYDOMAIN.COM' and then sends a specially formatted UDP
packet to the returned address (and to a predetermined port). If a response
does not arrive within a specified time period, another DNS lookup occurs for
'GDBCONTROL.MYDOMAIN.COM'. If you configured your DNS correctly, the DNS
server will rotate through all the IPs you gave for that name, and the
client machine will eventually make contact with one of the GDBCONTROL
daemons. This allows you to add GDBCONTROL daemons to the system as needed,
or have them drop from the loop, without your end users (or automated bots)
really caring.
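The client-side loop described above might be sketched as follows (all names, the port, and the probe payload are hypothetical; the post does not specify the actual wire format of the "specially formatted" packet):

```python
import socket

CONTROL_NAME = "GDBCONTROL.MYDOMAIN.COM"  # hypothetical name from the example
CONTROL_PORT = 3055                       # hypothetical predetermined port
TIMEOUT_SECS = 2.0                        # give each daemon this long to answer

def find_gdbcontrol(name=CONTROL_NAME, port=CONTROL_PORT, attempts=5):
    """Resolve the control name and probe until a daemon answers.

    Each call to gethostbyname() may return a different address when the
    DNS server round-robins the A records, so re-resolving after a timeout
    walks through the daemon machines until a live one responds.
    """
    for _ in range(attempts):
        addr = socket.gethostbyname(name)        # may rotate on each call
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(TIMEOUT_SECS)
        try:
            sock.sendto(b"HELLO", (addr, port))  # stand-in for the real packet
            reply, _ = sock.recvfrom(4096)
            return addr, reply                   # a daemon is alive here
        except socket.timeout:
            continue                             # dead box; re-resolve and retry
        finally:
            sock.close()
    raise RuntimeError("no GDBCONTROL daemon reachable")
```

The point of the design is that failover lives entirely in this little loop: the client never stores any daemon address, only the one well-known name.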
The GDBCONTROL daemon is connected to one of the core databases and runs a
query against the database to find which database the requesting client gets
directed to. I usually prompt the user for their username and password before
starting this process. That information is sent to the GDBCONTROL daemon,
which in turn verifies that the user is allowed to connect from that
particular client machine at that particular time. After all verification
routines have been run, the server:/path, as well as the user's real login
and password for the server they are connecting to, is sent back to the
client so that the client can make the appropriate connections.
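The daemon's verify-then-route step could be sketched like this (in the real system the mapping comes from a query against a core gdb and the checks cover machine and time-of-day rules; here a plain dict and a single rule stand in for both, and every name is hypothetical):

```python
# username -> (allowed client addresses, server:/path, real login, real password)
DIRECTORY = {
    "alice": ({"192.168.10.50"}, "serverA:/data/main.gdb", "ALICE_DB", "s3cret"),
}

def route_client(username, password, client_addr, directory=DIRECTORY):
    """Verify the caller and return the connection details to send back,
    or None to refuse the connection."""
    entry = directory.get(username)
    if entry is None:
        return None                      # unknown user: refuse
    allowed, path, real_login, real_pw = entry
    if client_addr not in allowed:
        return None                      # not permitted from this machine
    # ... password and time-of-day verification would go here ...
    return path, real_login, real_pw
```

Note that the credentials the client typed and the credentials it ultimately uses against the target server are deliberately decoupled, which is what lets a restore land on a different server without touching any client.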
The GDBCONTROL daemon is not only connected to a core gdb; it also keeps in
contact, via UDP broadcasts, with a user interface that can log all
connection activity. The daemon itself has no interface.
The daemon is also in contact with one of the 'ISALIVE' bots (I will
describe the ISALIVE bots later) and so knows whether any of the servers is
down.
At this point, the client connections or any one of the automated bots can
start up and connect to the system, regardless of what machines are up or down
as long as a physical connection is available between the client machine and
an operating core server.
Ok, I have talked a lot about bots, but I have not really defined them. A
bot is short for 'robot' - an automated process that has very limited
preprogrammed responses to limited input. An InterBase bot is a bot that is
totally controlled by database events and values kept in tables inside the
core gdb's.
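The "limited preprogrammed responses to limited input" idea can be sketched as a simple dispatch table (event names and responses are made up, and the actual delivery of InterBase events from the core gdb is stubbed out, since that depends on the client library in use):

```python
# Hypothetical preprogrammed responses, keyed by database event name.
HANDLERS = {
    "LOG_READY":   lambda payload: "move log file %s to the log gdb" % payload,
    "SERVER_DOWN": lambda payload: "reroute clients away from %s" % payload,
}

def handle_event(name, payload):
    """Dispatch one database event to its preprogrammed response.

    Anything outside the table is ignored - by definition a bot has no
    behavior beyond its limited, preprogrammed set.
    """
    handler = HANDLERS.get(name)
    return handler(payload) if handler else None
```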
----------------------------------------------------------
Sorry folks, time for me to go be a dad and take care of my little one
best regards
Dalton