Subject Re: [IB-Architect] SOME SOLUTIONS TO OLD PROBLEMS
Author dcalford
Hi Marcelo,

I want to start by saying thanks! (I love questions, it gets me thinking)



> What other bots are you running?

I am running bots for the following purposes

ISALIVE - this bot simply tests whether the different servers/gdb's are
available and keeps the core file it is connected to informed of which ones are
down or slow to respond.

SECURIT - this bot responds to commands to add users, grant rights, and check
the dependencies between my groups table, my users table, and my rights table,
making sure that a simple procedure ADD_USER_TO_GROUP(username, groupname)
gives the user the rights that the group has. This prevents the client apps from
needing to know the sysdba password to work with the IB 5 API. It also works
with InterBase 4.
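The rule the SECURIT bot enforces can be sketched in a few lines. This is my
own illustration, not dcalford's schema: the table names and the in-memory
dicts stand in for his groups/users/rights tables, and the real bot would issue
GRANTs through the IB API under the sysdba connection the clients never see.

```python
# Hypothetical sketch of the SECURIT bot's core rule: adding a user to a
# group must leave the user holding every right the group holds.
# The GROUPS/RIGHTS names are stand-ins for the actual schema.

groups = {"CLERKS": {"SELECT_INVOICES", "INSERT_INVOICES"}}
user_rights = {}  # username -> set of rights actually granted

def add_user_to_group(username, groupname):
    """Mirror of the ADD_USER_TO_GROUP(username, groupname) procedure:
    grant the user each right the group carries, returning what was missing."""
    missing = groups[groupname] - user_rights.setdefault(username, set())
    for right in missing:
        # In the real bot this would be a GRANT issued via the IB API.
        user_rights[username].add(right)
    return sorted(missing)

print(add_user_to_group("marcelo", "CLERKS"))
# -> ['INSERT_INVOICES', 'SELECT_INVOICES']
```

Running it a second time returns an empty list - the dependency check passes
once the user already holds the group's rights.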

ROLLFWD - explained to death so far.

REPLICAT - this is the bot used in many-to-many replication across
servers.

METABUILD - this is the bot that actually performs metadata updates - I have
triggers on all the system tables preventing anyone but this bot from issuing
DDL statements.

DOCUMENT - these bots create the sequential numbers for invoices etc. There can
be only one bot per numeric range.
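The one-bot-per-range rule for DOCUMENT bots can be sketched like this (my
illustration - the class name and range bounds are invented, but the idea is
the text's: each bot exclusively owns a numeric range, so numbers come out
gap-free and no two bots can issue the same invoice number):

```python
# Hypothetical sketch of a DOCUMENT bot: each bot owns one numeric range
# exclusively, handing out sequential invoice numbers with no duplicates.

class DocumentBot:
    def __init__(self, low, high):
        self.low, self.high = low, high   # this bot's exclusive range
        self.next_number = low

    def next_invoice(self):
        if self.next_number > self.high:
            raise RuntimeError("range exhausted; assign this bot a new range")
        n = self.next_number
        self.next_number += 1
        return n

bot = DocumentBot(1000, 1999)
print(bot.next_invoice())  # -> 1000
print(bot.next_invoice())  # -> 1001
```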

I have a few others but these are the main ones.

> What tools did you use to build them?

Mostly Delphi, sometimes Perl, C, FPK - basically any language that I was trying
to learn at the time. They are very basic applications, so they make a good test
app for learning a new language. (my version of hello world)

> How cross-platform are they?

It depends what they were written in.

> What's the ISALIVE bot about - an IB-ping bot, I presume?

Not quite a ping per se - it actually connects to and queries the different
databases and records whether it was successful (a machine could be up but the
gdb not available).
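That distinction - machine up but gdb down - is why the probe has to run a real
query. A minimal sketch, assuming nothing about the actual IB API (the
`connect` and `query` callables are injected, so any client library could sit
behind them):

```python
# Sketch of an ISALIVE probe. A machine can answer a network ping while
# its gdb is unavailable, so only a successful *query* counts as alive.

import time

def probe(connect, query):
    """Return a status record for one server/gdb pair."""
    started = time.monotonic()
    try:
        conn = connect()   # e.g. attach to server:/path/to/core.gdb
        query(conn)        # e.g. a trivial SELECT against the database
        ok = True
    except Exception:
        ok = False
    return {"alive": ok, "seconds": time.monotonic() - started}

# Stub callables standing in for a live database:
status = probe(lambda: object(), lambda conn: None)
print(status["alive"])  # -> True
```

The elapsed-time field is what lets the bot report "slow to respond" as well as
outright down.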

> How do bots react to unavaliable servers - mails, paging, file
> system logging, logging to local IB, logging to system log on NT?
>

When a bot first starts, it finds out which core file (read: server and gdb) it
is supposed to be a slave to. If that server is not available, it notifies
GDBCONTROL. GDBCONTROL records this in the core file that it is connected
to, and either reassigns the bot to another core file or responds with
the shutdown command. If the bot gets neither the information for a
working core file nor the shutdown command, it shuts down on its own. I am
currently adding a UDP broadcast ability to registered IP
addresses - this way a remote user can signal the bot to broadcast everything it
is doing. I have not finished this because I have not really had a need for
such a detailed log. (There is enough redundancy in the system that I went a
whole week without realizing one of the secondary core files and its associated
bots were down - I checked the GDBCONTROL session log, saw the problem, rebooted NT,
and voila, everything was back up and running.)
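The start-up handshake above can be sketched as a small decision function. This
is my own model, not the wire protocol: GDBCONTROL is reduced to a callable,
and the "reassign" key is an invented label for whatever the real reply looks
like.

```python
# Sketch of the bot start-up handshake: serve the assigned core file if it
# is up; otherwise report to GDBCONTROL and either take a reassignment,
# obey a shutdown command, or shut down on our own if no usable answer comes.

def start_bot(core_available, ask_gdbcontrol):
    """Return the action the bot takes on start-up."""
    if core_available():
        return "serve assigned core file"
    # Core file is down: report it and wait for instructions.
    answer = ask_gdbcontrol("core file unavailable")
    if answer and answer.get("reassign"):
        return "serve %s" % answer["reassign"]
    # Explicit shutdown command, or no usable answer at all:
    # the bot shuts itself down rather than run unattached.
    return "shut down"

print(start_bot(lambda: False, lambda msg: {"reassign": "core2.gdb"}))
# -> serve core2.gdb
```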


> <SNIP>
> "The GDBCONTROL daemon not only is connected to a core gdb, it keeps in
> contact
> with a user interface with via UDP broadcasts that can also log all
> connection
> activity. The actual daemon has no interface."
> </SNIP>
> You lost me here.

Think of an NT service. You start it. You have no user interface; it runs in
the background, and if you want to configure it, you change its configuration
settings in the registry or pass values via the services API.
With GDBCONTROL, all control commands and responses are done with UDP
broadcasts. I am going to detail the interface in my docs.
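A minimal sketch of that command-over-UDP idea, with both ends in one process
talking over loopback rather than a true broadcast address (the STATUS/OK
strings are invented placeholders, not GDBCONTROL's actual commands):

```python
# Sketch of a UDP control channel: the daemon has no user interface, so
# commands and responses travel as datagrams. Loopback stands in for the
# broadcast address to keep the demo self-contained.

import socket

# "Daemon" side: listen on an OS-assigned loopback port.
daemon = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
daemon.bind(("127.0.0.1", 0))
port = daemon.getsockname()[1]

# "Controller" side: fire a command at the daemon.
controller = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
controller.sendto(b"STATUS", ("127.0.0.1", port))

command, addr = daemon.recvfrom(1024)
daemon.sendto(b"OK " + command, addr)   # daemon answers the sender

reply, _ = controller.recvfrom(1024)
print(reply.decode())  # -> OK STATUS

controller.close()
daemon.close()
```

A real broadcast sender would additionally set SO_BROADCAST on the socket and
send to the subnet's broadcast address; the request/response shape is the same.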


> Client-side questions:
> How do your clients manage their connections to the database? Do you keep
> active transactions while they browse data, or do they do this in a
> disconnected fashion?

After a client has connected to a core file, it operates normally except that
all inserts are performed via stored procedures. The client also maintains a
session log of all DML actions. That way, if a core file goes down, the client
requests a different core file to connect to from GDBCONTROL, and then replays
all the user's previous DML actions. When the first core file is brought
back online, the UIDs of each record are used to make sure that no record gets
modified twice. This is how end users do not lose work when a core
file fails in some way.
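The session log and UID check can be sketched as follows. My illustration only:
the log-entry shape and the (uid, action_id) key are assumptions about how the
dedup might work, not the actual client code.

```python
# Sketch of the client-side session log and failover replay. Each DML
# action carries the record UID it touched plus a per-session action id;
# on replay, actions the target core file has already seen are skipped,
# so no record is modified twice.

session_log = []

def log_dml(action_id, uid, action):
    session_log.append({"action_id": action_id, "uid": uid, "action": action})

def replay(log, already_applied):
    """Apply only the actions the target core file has not seen."""
    applied = []
    for entry in log:
        key = (entry["uid"], entry["action_id"])
        if key in already_applied:
            continue          # the record was already modified there
        applied.append(entry["action"])
        already_applied.add(key)
    return applied

log_dml(1, "UID-0007", "UPDATE invoice 7")
log_dml(2, "UID-0009", "INSERT invoice 9")

# The recovered core file already saw action 1 before it went down:
print(replay(session_log, {("UID-0007", 1)}))  # -> ['INSERT invoice 9']
```

A fresh core file (empty applied-set) would get both actions replayed, which is
the failover case; the non-empty set models the original core file coming back.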

> Server-side questions:
> Uhh... I'm too drowsy right now. See you all tomorrow.

:)


> I'd like to have a tutorial before building wizards for this. So people can
> build a sample system themselves, following a step-by-step guide which
> explains the reasons behind the actions, and then use a wizard to build the
> stock stuff quickly.

That is exactly where I am going - I will put forward sample SQL structures for
you to see.

Thanks again

Dalton