Subject | High Mutex Wait Figures |
---|---|
Author | Greg Kay |
Post date | 2006-11-30T23:10:50Z |
Does anyone have any ideas on what might be causing high mutex wait
figures? (Examples below.) We are running Firebird Classic 1.5.3 on a
dual-CPU Opteron 252 with 16 GB RAM and SuSE Linux 10.1. Typical usage
is about 1 million transactions a day with 150-200 concurrent connections
(between 8 am and 8 pm), and the server mostly runs at 30-40% of
capacity.
Here is our lock print output with one query running:
LOCK_HEADER BLOCK
Version: 15, Active owner: 0, Length: 17661952, Used: 17629432
Lock manager pid: 2519
Semmask: 0x6904, Flags: 0x0001
Enqs: -1241332649, Converts: 28072483, Rejects: 773292147, Blocks: 448459958
Deadlock scans: 74, Deadlocks: 1, Scan interval: 10
Acquires: 610013459, Acquire blocks: 623534782, Spin count: 0
Mutex wait: 102.2%
Hash slots: 2039, Hash lengths (min/avg/max): 0/ 0/ 2
Remove node: 0, Insert queue: 0, Insert prior: 0
Owners (2): forward: 27568, backward: 5347580
Free owners (289): forward: 843044, backward: 863600
Free locks (108096): forward: 9276608, backward: 2930196
Free requests (212717): forward: 1346756, backward: 6964012
Lock Ordering: Enabled
Here is our lock print output with the system under load:
Version: 15, Active owner: 1072728, Length: 17661952, Used: 17629432
Lock manager pid: 2519
Semmask: 0x6904, Flags: 0x0001
Enqs: -1518127255, Converts: 27439415, Rejects: 773157252, Blocks: 447841694
Deadlock scans: 74, Deadlocks: 1, Scan interval: 10
Acquires: 327067309, Acquire blocks: 618532897, Spin count: 0
Mutex wait: 189.1%
Hash slots: 2039, Hash lengths (min/avg/max): 9/ 19/ 33
Remove node: 0, Insert queue: 0, Insert prior: 0
Owners (191): forward: 27568, backward: 1661872
Free owners (100): forward: 1060976, backward: 3119400
Free locks (68325): forward: 9799984, backward: 8635028
Free requests (128499): forward: 9166572, backward: 1668032
Lock Ordering: Enabled
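For what it's worth, the percentages look consistent with the mutex wait figure simply being acquire blocks expressed as a percentage of acquires (that is just my reading of the counters, so treat it as an assumption). A quick sanity check against the two lock print samples above:

```python
# Assumption: mutex wait % = acquire blocks / acquires * 100.
# Counter values are taken from the two lock print samples above.

def mutex_wait_pct(acquires: int, acquire_blocks: int) -> float:
    """Acquire blocks as a percentage of lock-table acquires."""
    return acquire_blocks / acquires * 100

print(f"one query:  {mutex_wait_pct(610013459, 623534782):.1f}%")  # ~102.2%
print(f"under load: {mutex_wait_pct(327067309, 618532897):.1f}%")  # ~189.1%
```

I assume the negative Enqs figure just means that 32-bit counter has wrapped since the server was last restarted, so some of these statistics may be skewed anyway.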