Subject: Re: [ib-support] Update
Author: Jason Chapman (JAC2)
Post date: 2003-01-10T10:27:47Z
To get to the answer we really need to get a little more solid about what the
usage is between AM and PM.
Slow and gradual performance degradation does imply either load or
increasing inefficiency in doing the work (more users doing more work vs. the
server performing worse due to the transaction gap). Maybe you could try to
let the TX gap grow way past 2000 and see if performance continues to
degrade, e.g. write a dummy app that opens and keeps a TX open, get them to
start it in the morning, monitor the TX gap, leave it on all night, and
compare perf the next morning: is it the same or worse than the morning
before?
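For illustration, that dummy app could look something like this in Python - a
rough sketch only, assuming the 'fdb' Firebird driver and the gstat
command-line tool; the DSN, credentials, database path and sample interval
are all placeholders:

# Hold one transaction open all night and log the oldest/next TX gap.
import re
import subprocess
import time

import fdb

DB_PATH = "/data/pos.fdb"                      # placeholder path
con = fdb.connect(dsn="server:" + DB_PATH, user="SYSDBA", password="masterkey")

cur = con.cursor()
cur.execute("SELECT 1 FROM RDB$DATABASE")      # first statement starts a TX
cur.fetchall()
# Deliberately never commit or roll back: this pins the oldest transaction.

while True:
    # gstat -h prints header-page statistics, including transaction counters.
    out = subprocess.run(["gstat", "-h", DB_PATH],
                         capture_output=True, text=True).stdout
    oldest = int(re.search(r"Oldest transaction\s+(\d+)", out).group(1))
    nxt = int(re.search(r"Next transaction\s+(\d+)", out).group(1))
    print(f"{time.ctime()}  TX gap = {nxt - oldest}")
    time.sleep(600)                            # sample every 10 minutes

If the gap keeps climbing and the next morning's timings are clearly worse,
the transaction gap is your prime suspect.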
I would be keen to build a test bed to perf-monitor the queries your system
runs against the server and the queries the server runs against itself (in
its stored procedures / triggers).
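The test bed can start very small - something like this timing harness, again
just a sketch (the table names and queries below are made up for
illustration):

# Time a fixed set of representative queries so AM and PM runs can be compared.
import time

import fdb

QUERIES = [
    "SELECT COUNT(*) FROM SALES",              # hypothetical tables
    "SELECT SUM(AMOUNT) FROM SALE_LINES",
]

con = fdb.connect(dsn="server:/data/pos.fdb", user="SYSDBA", password="masterkey")
for sql in QUERIES:
    cur = con.cursor()
    t0 = time.perf_counter()
    cur.execute(sql)
    cur.fetchall()                             # force the whole result set
    con.commit()                               # release the transaction each run
    print(f"{time.perf_counter() - t0:8.3f}s  {sql}")

Run it at 9am and again at 4pm and you have hard numbers instead of "it feels
slower".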
I would never add dozens of indices in one go; normally there has to be a
good reason, and I have to do a regression test to ensure a new index doesn't
kill other processes. Add the wrong index and perf gets better in one place,
but the end-of-month n-way join report from hell kills the server and people
start reporting server crashes...
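The discipline I'd follow is one index at a time, with timings before and
after - roughly like this (the index, table and query names are illustrative
only):

# Time a regression suite, add a single index, time it again, then decide.
import time

import fdb

def run_suite(con, queries):
    times = {}
    for sql in queries:
        cur = con.cursor()
        t0 = time.perf_counter()
        cur.execute(sql)
        cur.fetchall()
        con.commit()
        times[sql] = time.perf_counter() - t0
    return times

con = fdb.connect(dsn="server:/data/pos.fdb", user="SYSDBA", password="masterkey")
suite = ["SELECT SUM(QTY) FROM SALE_LINES WHERE ITEM_ID = 42"]

before = run_suite(con, suite)
con.cursor().execute("CREATE INDEX IDX_SALE_LINES_ITEM ON SALE_LINES (ITEM_ID)")
con.commit()
after = run_suite(con, suite)

for sql in suite:
    print(f"{before[sql]:.3f}s -> {after[sql]:.3f}s  {sql}")

If any query in the suite gets materially worse, the new index comes straight
back out.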
JAC.
<Michael.Vilhelmsen@...> wrote in message
news:avm3kh+4hhe@......
> Hi
>
> I have made an appl. that is a point of sale system.
> When a client does a sale (i.e. 3 things are being sold) then I do this:
>
> 1.
> Update a record with the amount sold.
>
> 2.
> This update updates 12 tables (through triggers).
>
> 3.
> Then I do an update on another table with the value of the sold items.
>
> 4.
> This update does an update on 2 other tables (through triggers).
>
> 5.
> Then I insert somewhere between 8 and 40 records in a table with
> somewhere around 40 indexes on it.
>
>
> Now
> My customer complains about speed.
> I have recently created those 40 indexes. Before, there were only 4.
>
> Could those 40 indexes cause the system to slow down significantly?
>
>
> The other thing I have in mind is that my gap between the oldest
> transaction and the next transaction is somewhere around 2000.
>
> I think it's one of the above that causes the slow updates, but how can
> I tell which is more likely to do that?
>
> Besides, I think some of the clients have got into a habit of opening
> some windows without closing them again.
> So now I have started to change my appl. to close them after 10 minutes
> of idle time.
>
> Regards
> Michael