| | |
|---|---|
| Subject | Re: [firebird-support] Re: Benefit from Sequential Insert of High/Low PK |
| Author | Ann Harrison |
| Post date | 2013-01-28T22:49:27Z |
On Mon, Jan 28, 2013 at 2:21 PM, Jeff <jeffplata@...> wrote:
> High/Low is a technique of generating primary keys client side in contrast
> to generating keys server side with fb generators. Here is a link to a
> discussion that probably could explain it better:
> http://stackoverflow.com/questions/282099/whats-the-hi-lo-algorithm
>
> The application I am trying to build is a distributed system spread
> geographically, most of them on slow connections. Hence the need for
> client-side generated primary keys.

I guess I don't quite follow that logic. One low-to-no-cost option is to
generate your keys on the server using an INSERT ... RETURNING to get the
value back, if that's supported through your interface.
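In Firebird SQL, that suggestion looks roughly like the fragment below (the table, column, and sequence names here are invented for illustration; Firebird 2.0 and later accept this `SEQUENCE` syntax, while `GEN_ID(gen_name, 1)` is the older equivalent):

```sql
-- Illustrative schema; names are made up for the example.
CREATE SEQUENCE seq_customer_id;

CREATE TABLE customer (
    id   BIGINT NOT NULL PRIMARY KEY,  -- 64-bit key, per the BigInt advice
    name VARCHAR(60)
);

-- One round trip: insert the row and get the server-generated key back.
INSERT INTO customer (id, name)
VALUES (NEXT VALUE FOR seq_customer_id, 'Acme Ltd')
RETURNING id;
```

Because `RETURNING` delivers the new key in the same round trip as the insert, a slow connection pays no extra latency for server-side key generation.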
> If the hi/lo technique works as expected and, let's say for example, there
> are a hundred client users doing inserts at random times of the day, then
> there will be instances where each batch of inserts is not in sequence
> with other clients' batches of inserts. Fast forward three years and the
> table now holds hundreds of thousands or even a million records, while most
> or all clients are still working with their initial high values. Woe to the
> client with the first acquired (least) high value; his inserts would fall
> rather towards the beginning of the table/index, while the client with the
> last acquired (highest) high value would fall towards the end of the
> table/index (lucky?).

Basically, the High/Low key scheme appears to be a two-part primary key
where the first part is obtained from the database, probably from a
generator, and the second part is incremented in the client. Assuming it's
handled to avoid the confusion between 13/23 and 132/3, that should be OK.
As long as each client is adding values in approximately the same part of
the index, the performance should be fine, especially if you're expecting
no more than a million records in three years. Index load performance gets
much more interesting when you expect millions of records a week.
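The two-part scheme described above can be sketched as follows. This is a minimal, illustrative allocator, not any particular library's implementation; the generator callback and block size are assumptions. Packing hi and lo as `hi * BLOCK_SIZE + lo` sidesteps the 13/23-versus-132/3 ambiguity, since each hi value reserves a disjoint range of keys:

```python
# Minimal Hi/Lo sketch (illustrative). The "hi" part would come from the
# database, e.g. Firebird's GEN_ID(my_generator, 1); here it is a callback.

BLOCK_SIZE = 1000  # keys per hi block; an assumption, tune as needed


class HiLoAllocator:
    def __init__(self, next_hi):
        """next_hi: callable returning a fresh 'hi' value from the server."""
        self._next_hi = next_hi
        self._hi = next_hi()
        self._lo = 0

    def next_key(self):
        if self._lo >= BLOCK_SIZE:          # block exhausted: fetch a new hi
            self._hi = self._next_hi()
            self._lo = 0
        key = self._hi * BLOCK_SIZE + self._lo  # fits a 64-bit BIGINT
        self._lo += 1
        return key


# Stand-in for the database round trip that fetches the next generator value.
_counter = 0
def fake_generator():
    global _counter
    _counter += 1
    return _counter

alloc = HiLoAllocator(fake_generator)
print([alloc.next_key() for _ in range(3)])   # → [1000, 1001, 1002]
```

Each client holds one hi block at a time and only contacts the server every `BLOCK_SIZE` inserts, which is the whole appeal on a slow connection.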
> If my previous assumptions are correct, then it is worth making an effort
> to ensure that all inserts fall towards the end of the table/index by
> prefixing the generated PKs with a value based on the current time, for
> instance (all clients in the same time zone).

Having the most variable part of a key last leads to better index
compression.
> More work though, and an integer data type might not suffice to accommodate
> the now very large value PK. Maybe should use Numeric(18,0)?

Or - forgive me, I work with several databases and sometimes forget what
64-bit integers are called in each - use a BigInt.
> OTOH, if you say these are a non-issue with fb, then maybe I should just
> consider GUIDs or UUIDs for PKs and forget about the server altogether with
> regards to PK generation?

GUIDs and UUIDs are normally generated in a way that defeats prefix
compression.
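The prefix-compression point can be made concrete with a rough measurement (a sketch of the idea, not Firebird's actual on-disk index format): prefix compression stores only the bytes a key does not share with its predecessor in sorted order, so adjacent sequential keys compress well while random UUIDs share almost nothing:

```python
# Why random UUIDs defeat prefix compression: measure how many leading
# characters adjacent sorted keys share (roughly what compression strips).
import uuid


def avg_shared_prefix(keys):
    """Average common-prefix length between each sorted key and its
    predecessor."""
    keys = sorted(keys)
    total = 0
    for prev, cur in zip(keys, keys[1:]):
        n = 0
        while n < min(len(prev), len(cur)) and prev[n] == cur[n]:
            n += 1
        total += n
    return total / (len(keys) - 1)

sequential = [f"{i:020d}" for i in range(100_000, 101_000)]  # BIGINT-style
random_ids = [uuid.uuid4().hex for _ in range(1_000)]        # UUID-style

print(avg_shared_prefix(sequential))   # ~18 of 20 chars shared
print(avg_shared_prefix(random_ids))   # only ~2-3 of 32 chars shared
```

The sequential keys let the index drop nearly the whole key per entry; the UUIDs leave almost nothing to strip, which is Ann's objection in miniature.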
Good Luck,
Ann