| Subject | Re: [Firebird-Architect] Bulk loader. |
|---|---|
| Author | Mauricio Longo |
| Post date | 2007-11-04T14:39:09Z |
Though I'm sure it is just one of the possible scenarios, one of the more
common uses for this facility is importing data from another system or from
some form of off-line archive.
I implemented such a system in MSSQL many years ago. We needed to import
data from a mainframe once a month. Using BCP (Bulk Copy), with triggers and
index updating disabled, the import procedure would take up to 12 hours. A
normal insert operation would have taken longer than a weekend, which was the
maximum window available, so it was not feasible.
Most of the time, in this kind of situation, we can be certain that all the
data being "imported" will be consistent at the end of the insert, so
deferring constraint evaluation until the end of the statement would work
well.
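Just to make that concrete: the small, self-contained C program below is a
toy model (not Firebird code) of a bulk insert into a table with a
self-referencing foreign key. Checking the constraint row by row rejects a
row whose parent only arrives later in the same batch, while checking once at
the end of the statement, when the whole batch is visible, succeeds.

```c
/* Toy model (not Firebird code): why per-row FK checking can fail for a
 * bulk load that is perfectly consistent as a whole. */
#include <stdio.h>
#include <stdbool.h>

typedef struct { int id; int parent_id; } Row;   /* parent_id 0 = no parent */

/* The batch arrives with a child (id 2) before its parent (id 3). */
static const Row batch[] = {
    { 1, 0 },
    { 2, 3 },   /* refers to a row that appears later in the batch */
    { 3, 1 },
};
static const int n = sizeof(batch) / sizeof(batch[0]);

/* Does row k's parent exist among the first 'limit' rows? */
static bool parent_present(int k, int limit)
{
    if (batch[k].parent_id == 0)
        return true;
    for (int i = 0; i < limit; i++)
        if (batch[i].id == batch[k].parent_id)
            return true;
    return false;
}

int main(void)
{
    /* Immediate (per-row) checking: only rows loaded so far are visible. */
    for (int k = 0; k < n; k++)
        if (!parent_present(k, k + 1))
            printf("immediate check: row %d rejected (parent %d not loaded yet)\n",
                   batch[k].id, batch[k].parent_id);

    /* Deferred to end of statement: the whole batch is visible. */
    bool ok = true;
    for (int k = 0; k < n; k++)
        if (!parent_present(k, n))
            ok = false;
    printf("end-of-statement check: %s\n",
           ok ? "all rows consistent" : "violation");
    return 0;
}
```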
Mauricio.
On 11/4/07, Vlad Khorsun <hvlad@...> wrote:
>
> >>> Are we talking about deferred constraint evaluation by any chance? :)
> >>
> >> More or less - yes ;) But this is not "true" deferred constraint - in
> >> my case it's deferred until end of statement, not end of transaction.
> >> It may be easier to implement, I think.
> >
> > Ok, you've convinced me here. :)
>
> Good ;) Hope I'm not mistaken there.
>
> > Now another question: where do we put the files to load? On the file
> > system accessible from the server? Should we support remote bulk load?
>
> I don't like artificial limitations. Why should we limit users by offering
> bulk load from files only? I prefer to implement a generic bulk load API:
> prepare a statement for bulk operation, pass into it a large (or even
> huge ;) set of parameters, possibly a few times, plus a routine to finish
> the load process and check the results. On top of that we can implement
> higher-level bulk loading from a file, a stream, etc.
>
> Regards,
> Vlad
>
>
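To make Vlad's idea concrete, here is a rough sketch of what such a generic
bulk load API might look like from the client's point of view. Every
fb_bulk_* name below is hypothetical (nothing like it exists in the current
Firebird API); the stubs only spell out the call flow he describes: prepare a
statement for bulk operation, feed it parameter batches as many times as
needed, then finish the load and check the results.

```c
/* Hypothetical sketch only: no fb_bulk_* function exists in Firebird.
 * The stubs just illustrate the call flow discussed in this thread. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { long rows_sent; } fb_bulk_stmt;   /* opaque handle (made up) */
typedef struct { long rows_loaded; long rows_rejected; } fb_bulk_result;

/* Prepare an INSERT for bulk execution. */
static fb_bulk_stmt *fb_bulk_prepare(const char *sql)
{
    printf("prepare for bulk load: %s\n", sql);
    return calloc(1, sizeof(fb_bulk_stmt));
}

/* Send one batch of bound parameter rows; callable any number of times.
 * A real API would pass typed parameter buffers; this stub ignores them. */
static int fb_bulk_send(fb_bulk_stmt *stmt, const void *rows, long row_count)
{
    (void)rows;
    stmt->rows_sent += row_count;
    printf("batch of %ld rows queued\n", row_count);
    return 0;
}

/* Finish the statement: this is where constraint and index maintenance,
 * deferred until end of statement, would run over the whole loaded set. */
static int fb_bulk_finish(fb_bulk_stmt *stmt, fb_bulk_result *out)
{
    out->rows_loaded = stmt->rows_sent;   /* stub: pretend everything passed */
    out->rows_rejected = 0;
    free(stmt);
    return 0;
}

int main(void)
{
    fb_bulk_stmt *stmt = fb_bulk_prepare("INSERT INTO archive VALUES (?, ?, ?)");
    fb_bulk_send(stmt, NULL, 100000);
    fb_bulk_send(stmt, NULL, 100000);

    fb_bulk_result res;
    fb_bulk_finish(stmt, &res);
    printf("loaded %ld, rejected %ld\n", res.rows_loaded, res.rows_rejected);
    return 0;
}
```

A higher layer that loads from a file or a stream would then just be a loop
around fb_bulk_send(), exactly as Vlad suggests.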