Subject | Re: High "Mutex wait" value, after increase in "Hash Slots" to 90001 |
---|---|
Author | Dmitry Yemanov |
Post date | 2017-03-11T18:39:17Z |
11.03.2017 21:03, 'Leyne, Sean' wrote:
>>> We have a client with 320GB database (running FB CS v2.5)
>>
>> Is it really so? FB does not support LockHashSlots more than 64K, it would
>> truncate your 90001 down to 65521.
>
> Is the engine that smart to trim to an exact prime number?

Nope, 65521 is just hardcoded as the max supported value, everything
bigger gets silently reset to 65521.
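For illustration only (a simplified sketch, not the actual engine source), the described behaviour amounts to a plain clamp against a hardcoded ceiling; 65521 happens to be the largest prime below 2**16, which is why the cap itself is prime even though no prime-searching is done:

# Simplified sketch of the behaviour described above, not Firebird source code.
# Values above the hardcoded maximum are silently reset to 65521
# (the largest prime below 2**16).
MAX_LOCK_HASH_SLOTS = 65521

def effective_lock_hash_slots(configured: int) -> int:
    return min(configured, MAX_LOCK_HASH_SLOTS)

print(effective_lock_hash_slots(90001))   # prints 65521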
> Curious, how would the number of owners (even free owners) impact the lock manager?

It depends on what those owners (read: connections) do inside the database.
> Today's numbers are much worse, again. But the activity load is substantially lower (based on the "Enqs" and "Acquires" values).
>
> LOCK_HEADER BLOCK
> Version: 145, Active owner: 0, Length: 67108864, Used: 5269256
> Flags: 0x0001
> Enqs: 202113295, Converts: 934229, Rejects: 96674, Blocks: 615455
> Deadlock scans: 103, Deadlocks: 0, Scan interval: 10
> Acquires: 326947224, Acquire blocks: 186433223, Spin count: 0
> Mutex wait: 57.0%
Too many deadlock scans means long (> 10 sec) waits for some locks.
Regular (every few minutes) fb_lock_print -w output could probably shed
some light...
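As a side note, the 57% mutex wait figure appears to follow directly from the counters above: Acquire blocks / Acquires = 186433223 / 326947224 ≈ 0.57, i.e. more than half of all lock table acquisitions had to wait on the lock manager mutex.

A minimal sketch of how such periodic snapshots could be collected (assuming fb_lock_print is on the PATH; any extra arguments needed to select the correct lock table on a given installation are omitted, and the output file name is just a placeholder):

# Collects an fb_lock_print -w snapshot every few minutes and appends it
# to a log file, so the counters can be compared over time.
# Assumption: fb_lock_print is on PATH; extra arguments (if your setup
# needs them to pick the right lock table) must be added manually.
import subprocess
import time
from datetime import datetime

INTERVAL_SECONDS = 300                  # one snapshot every 5 minutes
LOG_FILE = "lock_print_history.log"     # placeholder output file name

while True:
    stamp = datetime.now().isoformat(timespec="seconds")
    result = subprocess.run(["fb_lock_print", "-w"],
                            capture_output=True, text=True)
    with open(LOG_FILE, "a") as log:
        log.write(f"===== {stamp} =====\n")
        log.write(result.stdout)
        if result.stderr:
            log.write(result.stderr)
    time.sleep(INTERVAL_SECONDS)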
Dmitry