| Subject | execute statement makes SP less expensive to call? |
|---|---|
| Author | unordained |
| Post date | 2010-02-24T23:32:05Z |
I have a stored procedure that basically does the following:
begin
if (get_context(...) = 1) then
begin
execute procedure ...
execute procedure ...
execute procedure ...
end
set_context(..., 0);
end
I've found that even when the context variable is 0, it still takes 200ms to
"prepare" to execute this procedure. It's doing nothing, yet it's sucking up
200ms. (300 index reads, 2000 fetches, etc.)
Now, if I wrap it inside another procedure and move the if/then/end there, I
still have the problem. Seemingly, it's preparing the entire tree of procedures
that might get called, not just those immediately inside the called procedure.
So if you have a lot of procedures in your call tree, it's expensive to prepare
a call to the root procedure, even though many of those procedures will only be
executed conditionally.
If I change the wrapper procedure so it just does
execute statement "execute procedure do_stuff";
then I get a really fast-to-call wrapper procedure most of the time, at the
cost of a slight (?) overhead on the rare occasions the inner procedure
actually has to do something. I'm not fond of using execute statement, and I
hate to use it purely as a speed hack, but ... the difference is significant in
my case.
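A minimal sketch of that wrapper, assuming the standard RDB$GET_CONTEXT / RDB$SET_CONTEXT built-ins and the USER_SESSION namespace; the procedure and variable names (check_rebuild, do_stuff, needs_rebuild) are placeholders for whatever the real schema uses:

```sql
create or alter procedure check_rebuild
as
begin
  -- Context values are stored as strings, hence the comparison to '1'.
  if (rdb$get_context('USER_SESSION', 'needs_rebuild') = '1') then
    -- The inner procedure name is only resolved when EXECUTE STATEMENT
    -- runs, so preparing this wrapper should not pull in do_stuff's
    -- whole call tree -- at the cost of losing the dependency entry.
    execute statement 'execute procedure do_stuff';
  rdb$set_context('USER_SESSION', 'needs_rebuild', '0');
end
```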
Is this normal? Has partial prepare been considered on the devel-list? If an
execute procedure statement sits inside a condition, could its preparation be
delayed until that branch is actually taken? I don't want to lose dependency
tracking if I don't have to; dependencies can be awfully handy. And as I
recall, execute statement has some particular security policies associated
with it that I wouldn't have with a direct execute procedure?
(Background: The context variable gets set by various triggers, so the idea is
to rebuild datasets only if triggers have fired that indicate a rebuild is
needed; I prefix statements that rely on rebuilt data with a quick call to this
procedure "just in case", and let the procedure handle it.)
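For illustration, the trigger side of this pattern might look like the following sketch; the table name (source_data) and context variable are hypothetical, and note that the USER_SESSION namespace is per-connection, so the flag only reflects triggers fired on the same connection:

```sql
create trigger trg_source_data_ai for source_data
active after insert position 0
as
begin
  -- Flag, for this connection, that dependent datasets are stale and
  -- must be rebuilt before the next query that relies on them.
  rdb$set_context('USER_SESSION', 'needs_rebuild', '1');
end
```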
-Philip