Discussion:
SQL performance in prod
V Raman
2018-07-18 22:30:02 UTC
List

We have a SQL statement that performs intermittently badly in our prod env. The
good executions take 2 to 5 mins; the bad ones run for hours until we kill them. They
run fine in the non-prod env. I ran an awrsqrpt report, and based on that I see that
there are a few executions, with the bad ones taking hours. Looking at the
differences in the execution plans, the good ones have lots of nested loops
in them, while the bad ones have lots of hash joins.

I am trying to figure out the cause(s). Assuming there is not much change
in the DB, the first thing that comes to mind is statistics. Can the
listers help with ideas? Thanks.

If anyone is interested in seeing the report, I can provide a link to it
by email.

Venky
Vishnu
2018-07-18 22:49:20 UTC
Hi Venky,

There can be many reasons. Below are a few questions I would ask myself when
troubleshooting.

Are you able to replicate it? Does the good vs. bad plan depend on the data
being queried?

Are you using bind variables in your query? Does using bind variables vs. hard-coded
literals make any difference? Is it selecting data from partitions? What's
your Oracle version? Are you using 12c adaptive features that are causing this
change in behavior?

For fixing it, one option is to get the good plan hash value and fix the plan
using a SQL plan baseline.
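
A minimal sketch of that approach, assuming the good plan is still in the cursor
cache; the SQL_ID and plan hash value are placeholders you would take from
V$SQL or the AWR SQL report:

```sql
-- Load the known-good plan from the cursor cache as an accepted, fixed
-- SQL plan baseline. '&good_sql_id' and &good_plan_hash_value are
-- placeholders; substitute the values observed for the fast executions.
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         sql_id          => '&good_sql_id',
         plan_hash_value => &good_plan_hash_value,
         fixed           => 'YES');   -- mark the plan as fixed/preferred
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || n);
END;
/
```

Afterwards the baseline can be inspected in DBA_SQL_PLAN_BASELINES to confirm
it is ENABLED and ACCEPTED.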

Thanks,
Vishnu
Kellyn Pot'Vin-Gorman
2018-07-18 22:50:33 UTC
I would consider an ADDM comparison report. If you have OEM, you could use
"Search SQL", enter the SQL ID, and pull the ADDM reports for the snapshots
involved.

Kellyn
--

*Kellyn Pot'Vin-Gorman*
DBAKevlar Blog <http://dbakevlar.com>
President Denver SQL Server User Group <http://denversql.org/>
about.me/dbakevlar
Tim Gorman
2018-07-18 22:58:49 UTC
Venky,

"Assuming there is not much change in the DB"

Let's narrow down the things that can change an execution plan...

1. hardware change (e.g. #-cpus, # GB of RAM, storage, etc)
2. application software change (e.g. change to the SQL text)
3. Oracle software change (e.g. patch, upgrade, etc)
4. initialization parameter change
5. gathering system statistics
6. gathering table, index, column statistics


When you state the assumption about "not much change in the DB", I am
assuming that you're discussing items #1-4.

How about item #5?  Can you query the SYS.AUX_STATS$ table and display
the column PVAL1 where PNAME has the value "DSTART" or "DSTOP"?

How about item #6?  Can you display the contents of
DBA_TAB_STATS_HISTORY for the tables involved in the query?  Please
refer to the useful blog posts by Uwe Hesse HERE
<https://uhesse.com/2012/04/23/diff_table_stats_in_history-example/> and
by Marcel-Jan Krijgsman HERE
<https://mjsoracleblog.wordpress.com/2013/02/19/more-statistics-history/>
for more information, if necessary?
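
A sketch of both checks; the owner and table names are placeholders, and note
that depending on version the DSTART/DSTOP date strings may appear in PVAL2
rather than PVAL1:

```sql
-- Item #5: when were system statistics last gathered?
SELECT pname, pval1, pval2
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_INFO'
AND    pname IN ('DSTART', 'DSTOP');

-- Item #6: stats-gathering history for the tables in the query
-- ('&owner', '&table1', '&table2' are placeholders)
SELECT table_name, partition_name, stats_update_time
FROM   dba_tab_stats_history
WHERE  owner = '&owner'
AND    table_name IN ('&table1', '&table2')
ORDER  BY stats_update_time;
```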

Hope this helps?

Thanks!

-Tim
Gus Spier
2018-07-18 23:33:28 UTC
Does the production host server carry the same workload as the dev/test
server? Is there any possibility that "outside" influences are impacting
your performance?
Regards,
Gus
V Raman
2018-07-19 03:21:01 UTC
Thanks a lot Tim and everyone else.

Tim: 1-5, no change. Although I am not 100% sure about #4.

Since this is v12, there is the default stats job every night; adaptive
features are turned off fully. I will have to dig to see if a stats change
had an effect.

Venky.
Sayan Malakshinov
2018-07-19 03:30:00 UTC
Hi Raman,

The easiest way is to get and compare 10053 traces for both plans.
V Raman
2018-07-19 03:33:17 UTC
Thanks Sayan. I can get the 10053 for the good one. We don't know when the
bad plan will hit us next.
V Raman
2018-07-19 03:34:05 UTC
but I will keep 10053 in mind. thanks
Sayan Malakshinov
2018-07-19 03:46:58 UTC
Raman,

You can enable 10053 for this SQL only and just wait for the bad plan to
repeat:
alter system set events 'trace[rdbms.SQL_Optimizer.*][sql:your-sql_id]';
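
For completeness, a sketch of the pair of commands using that event syntax
(11g+ event framework; `your-sql_id` is a placeholder), including switching the
trace off again once the bad plan has been captured:

```sql
-- Enable a 10053 (optimizer) trace scoped to one SQL_ID only; it fires on
-- the next hard parse of that statement, so you can wait for the bad plan.
ALTER SYSTEM SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:your-sql_id]';

-- ...later, once the trace has been captured, switch it off again:
ALTER SYSTEM SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:your-sql_id] off';
```

The trace files land in the diagnostic trace directory of the session that
performed the hard parse.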
Stefan Knecht
2018-07-19 04:03:58 UTC
Tim, you forgot one:

7. Whether or not it rains on Monday morning

The original anecdote referred to the fact that if it rained, a certain
employee who normally arrives first on a sunny day would get to the office
later, which caused a different employee to be the first to trigger execution
plan creation, with different bind variables, leading to a different plan.

So the query would run fast all day on a sunny day, but slow all day when
it rained.

Venky - try looking at the values of the bind variables of a good run vs a
bad run.
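
A sketch of one way to compare them, via the captured bind values per child
cursor (`&sql_id` is a placeholder; bear in mind bind capture is sampled, so
values may be missing for some executions):

```sql
-- Captured bind values per child cursor, alongside the plan each child used;
-- compare the values seen for a good run vs. a bad run.
SELECT s.child_number, s.plan_hash_value,
       b.name, b.position, b.datatype_string, b.value_string, b.last_captured
FROM   v$sql s
JOIN   v$sql_bind_capture b
       ON  b.sql_id       = s.sql_id
       AND b.child_number = s.child_number
WHERE  s.sql_id = '&sql_id'
ORDER  BY s.child_number, b.position;
```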
--
//
zztat - The Next-Gen Oracle Performance Monitoring and Reaction Framework!
Visit us at zztat.net | @zztat_oracle | fb.me/zztat | zztat.net/blog/
Tim Gorman
2018-07-19 13:36:23 UTC
Stefan,

I had one too many:  hardware changes could not affect execution plans. 
I added that to pad the list without much thought just before pressing
SEND, and of course regretted it two seconds later.

There are certainly more items to add, but I started with the
easily-verifiable stuff.

Clearly a 10053 trace is the ultimate, but as the OP noted, it requires
prescience to be set before it is needed, and prescience isn't always
available.

10053 trace output is also not easy to read.  Reading two such traces,
comprehending both, and then comparing and contrasting usually requires
intelligence and attention to detail approaching the level of Wolfgang
Breitling.

Thanks!

-Tim
Mark W. Farnham
2018-07-19 14:08:06 UTC
Well… if you change the number of CPUs, you can get a different number of parallel servers and a different hash distribution. So something prone to skew of row assignments could, in theory, be pretty flat with one number of CPUs getting one number of parallel servers, and slow with a different number of CPUs getting a different number of parallel servers, with a small number of them getting most of the load.



So you were initially right (but for a pretty rare yet interesting case.)



I don’t know whether JL’s “nothing changed, but the performance is different” laundry list has been mentioned in this thread, but as laundry lists go it is better than most.



Of course something like Method-R’s tool can pin down what is taking time in each of the pair. That removes all guessing about WHAT is consuming the time, which sometimes helps diagnose what triggers the different classes of response.

But most likely you are getting an entirely different plan, so you want to focus on that to rule it out first. It is often the case that non-prod has a less robust set of parameter choices in its collection of performance regression tests. As in the story of the rainy day, if in prod an unusual parameter choice produces a plan that is sub-optimal for the majority of re-uses of the plan for that query, then you would like to be able to flush just that one query from the shared pool and let it get reparsed. I don’t know how to do that, despite asking decades ago for that rifle shot in place of the shotgun “flush shared pool” that can cause a parse storm.

Still, if you have the SQL text, you can modify it with a comment and see if a fresh parse is likely to get the good plan and, more importantly, eliminate the prescience required to ask for a Wolfgang trace.
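
As a sketch of that trick (table and column names below are hypothetical), any textual change yields a new SQL_ID, so the modified copy hard-parses independently of the cached cursor:

```sql
-- Same logical query as the original, but the comment changes the SQL text,
-- hence a new SQL_ID and a fresh hard parse with its own optimization pass.
SELECT /* replan-test 2018-07-19 */ col1, col2
FROM   some_table
WHERE  col1 = :b1;
```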



In addition to differences in parameters (in the sense of bind variable predicate choices, as opposed to init stuff), the timing of stats collection versus the running of the query, such that some predicates exceed the recorded high value, is a classic way to get a completely different plan.



Your mileage may vary. In addition to this oracle-l thread, I would certainly also review the Oracle Scratchpad laundry list.



mwf



Larry Elkins
2018-07-19 14:38:02 UTC
I’ve normally used DBMS_SHARED_POOL.PURGE with the 'C' option to purge specific SQLs and get a reparse and the desired plan. It has continued to work OK, for me anyway, in 12c, but Carlos Sierra, and maybe some others, have mentioned potential issues in 12.x. Like the link mentions, I’d normally do this to get a re-parse to pick up a baseline we just implemented, or simply to get a re-parse away from an “odd” value such as is already being discussed.
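
A minimal sketch of that purge, assuming the cursor is still in the shared pool (`&sql_id` is a placeholder; DBMS_SHARED_POOL.PURGE identifies a cursor by its "address,hash_value" pair):

```sql
-- Purge a single cursor from the shared pool so the next execution
-- hard-parses, instead of flushing the whole shared pool.
DECLARE
  v_name VARCHAR2(64);
BEGIN
  SELECT address || ',' || hash_value
  INTO   v_name
  FROM   v$sqlarea
  WHERE  sql_id = '&sql_id';

  DBMS_SHARED_POOL.PURGE(v_name, 'C');  -- 'C' = cursor
END;
/
```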



https://carlos-sierra.net/2017/11/22/purging-a-cursor-in-oracle-revisited/





Regards,



Larry G. Elkins

***@verizon.net

214.695.8605



Cee Pee
2018-07-19 20:49:47 UTC
Nice thread; we get the same kind of problems too. Is there a book or paper on
how to read the 10053 trace file, like a "basics of 10053"?

Reading Tim's response, I think it is a good time for someone to write a book
on that topic.

CP
Post by Larry Elkins
I’ve normally used DBMS_SHARED_POOL.PURGE with the C option to purge
specific SQL’s and get a reparse and desired plan. It continued to work ok,
for me anyway, in 12c, but Carlos Sierra, and maybe some others, had
mentioned potential issues in 12x. Like the link mentions, I’d normally do
this to get a re-parse it to pick up a baseline we just implemented, or
simply to get a re-parse away from an “odd” value such as is already being
discussed.
https://carlos-sierra.net/2017/11/22/purging-a-cursor-in-oracle-revisited/
Regards,
Larry G. Elkins
214.695.8605
freelists.org] *On Behalf Of *Mark W. Farnham
*Sent:* Thursday, July 19, 2018 9:08 AM
*Subject:* RE: SQL performance in prod
well
 if you change the number of cpus, then you can get a different
number of parallel servers and a different hash distribution, so something
prone to skew of row assignments could in theory be pretty flat with one
number of cpus getting one number of parallel servers and slow with a
different number of cpus getting a different number of parallel servers
with a small number of them getting the load.
So you were initially right (but for a pretty rare yet interesting case.)
I don’t know whether JL’s laundry list of “nothing changed, but the
performance is different” laundry list has been mentioned in this thread,
but as laundry lists go it is better than most.
Of course something like Method-R’s tool can pin down what is taking time
in the pair. That removes all guessing about WHAT is consuming the time,
which sometimes helps diagnose what triggers the different classes of
response.
But most likely you are getting entirely different plan, so you want to
focus on that to rule it out first. It is often the case that non-prod has
a less robust parameter choice in the collection of regression tests for
performance. As in the story of the rainy day, if in prod an unusual
parameter choice produces a plan that is sub-optimal for the majority of
re-uses of the plan for that query, then you would like to be able to flush
just that one query from the shared pool and let it get reparsed. I don’t
know how to do that, despite asking for that rifle shot in place of the
shot gun flush shared pool that can cause a parse storm decades ago.
Still, if you have the sql text you can modify it with a comment and see
if a fresh parse is likely to get the good plan and more importantly,
eliminate the prescience required to ask for a Wolfgang trace.
In addition to difference of parameter (in the sense of bind variable
predicate choice as opposed to init stuff), timing of stats collection
versus running of the query such that some predicates exceed the recorded
high value is a classic way to get a completely different plan.
Your mileage may vary. In addition to this oracle-l thread, I would
certainly also review the Oracle Scratchpad laundry list.
mwf
*Sent:* Thursday, July 19, 2018 9:36 AM
*To:* Stefan Knecht
*Subject:* Re: SQL performance in prod
Stefan,
I had one too many: hardware changes could not affect execution plans. I
added that to pad the list without much thought just before pressing SEND,
and of course regretted it two seconds later.
There are certainly more items to add, but I started with the
easily-verifiable stuff.
Clearly a 10053 trace is the ultimate, but as the OP noted, it requires
prescience to be set before it is needed, and prescience isn't always
available.
10053 trace output is also not easy to read. Reading two such traces,
comprehending both, and then comparing and contrasting usually requires
intelligence and attention to detail approaching the level of Wolfgang
Breitling.
Thanks!
-Tim
7. Whether or not it rains on Monday morning
The original anecdote referred to the fact that if it rained, a certain
employee who normally arrives first on a sunny day would get to the
office later - which caused a different employee to be first to trigger
execution plan creation, with different bind variables, leading to a different plan.
So the query would run fast all day on a sunny day, but slow all day when it rained.
Venky - try looking at the values of the bind variables of a good run vs a bad run.
Venky,
"Assuming there is not much change in the DB"
Let's narrow down the things that can change an execution plan...
1. hardware change (i.e. #-cpus, # GB of RAM, storage, etc)
2. application software change (i.e. change to the SQL text)
3. Oracle software change (i.e. patch, upgrade, etc)
4. initialization parameter change
5. gathering system statistics
6. gathering table, index, column statistics
When you state the assumption about "not much change in the DB", I am
assuming that you're discussing items #1-4.
How about item #5? Can you query the SYS.AUX_STATS$ table and display the
column PVAL1 where PNAME has the value "DSTART" or "DSTOP"?
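For reference, a query along the lines Tim describes (note that on many versions the DSTART/DSTOP timestamps actually show up in the character column PVAL2 rather than the numeric PVAL1, so both are selected here):

```sql
-- When were system statistics last gathered?
SELECT pname, pval1, pval2
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_INFO'
  AND  pname IN ('DSTART', 'DSTOP');
```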
How about item #6? Can you display the contents of DBA_TAB_STATS_HISTORY
for the tables involved in the query? Please refer to the useful blog
posts by Uwe Hesse HERE
<https://uhesse.com/2012/04/23/diff_table_stats_in_history-example/> and
by Marcel-Jan Krijgsman HERE
<https://mjsoracleblog.wordpress.com/2013/02/19/more-statistics-history/>
for more information, if necessary?
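A sketch of that stats-history check (the owner and table names are placeholders):

```sql
-- Did a stats gather coincide with the plan flip?
SELECT owner, table_name, stats_update_time
FROM   dba_tab_stats_history
WHERE  owner = 'APP_OWNER'
  AND  table_name IN ('ORDERS', 'ORDER_LINES')
ORDER  BY stats_update_time DESC;
```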
Hope this helps?
Thanks!
-Tim
List
We have a SQL that is performing intermittently bad in our prod env. The
good ones take 2 to 5 mins, the bad ones run for hours we kill them. They
run fine in the non prod env. I ran an awsqrpt and based on that I see that
there are a few executions with the bad ones taking hours. Looking at the
differences in the execution plan, the good ones have lots of nested loops
in them, with the bad ones having lots of hash joins.
I am trying to figure out the cause(s). Assuming there is not much change
in the DB, the first thing that comes to mind is statistics. Can the
listers help with ideas? Thanks.
If anyone is interested is seeing the report, i can provide a link to them by email.
Venky
--
//
zztat - The Next-Gen Oracle Performance Monitoring and Reaction Framework!
Stefan Koehler
2018-07-20 08:09:51 UTC
Permalink
Hey CP,
yes, there are several great sources for this.


Understanding the CBO principles:
---------
* Cost-Based Oracle Fundamentals by Jonathan Lewis: https://www.apress.com/de/book/9781590596364
* Some of the newer CBO behavior by Alberto Dell'Era: http://www.adellera.it/static_html/investigations/

Reading CBO traces:
---------
* Chasing the optimizer by Mauro Pagano: https://de.slideshare.net/MauroPagano3/chasing-the-optimizer-71564184


I guess the latter is the one you are looking for, as it explains an analytical approach to digesting any CBO trace file, decomposing it into smaller pieces and making it easy to analyze.

Best Regards
Stefan Koehler

Independent Oracle performance consultant and researcher
Website: http://www.soocs.de
Nice thread, we get the same kind of problems too. Is there a book or paper on how to read the 10053 trace file, like a "basics of 10053"?
Reading Tim's response, I think it is about time someone wrote a book on that topic.
CP
--
http://www.freelists.org/webpage/oracle-l
l***@bluewin.ch
2018-07-20 08:30:12 UTC
Permalink
Hi Stefan et al,

I do not like the 10053 idea. It means looking into a CBO decision protocol that led to a wrong plan,
so you have wrong conclusions based on wrong estimates.
I would rather want to know what made the optimizer stumble.
The fastest approach would be cardinality feedback: http://www.centrexcc.com/Tuning%20by%20Cardinality%20Feedback.pdf

BTW: http://www.centrexcc.com will also show a good article on how to read 10053.

Regards

Lothar

Mladen Gogala
2018-07-25 23:23:08 UTC
Permalink
There is some material in Jonathan's Core DBA book, allegedly the first
one of a trilogy. Douglas Adams wrote a trilogy consisting of 6
books; Jonathan has written a trilogy consisting of a single book.
Jonathan hasn't written anything about the SEP field, although this
method can be extremely handy in the DBA world. In addition to
Jonathan's excellent work, there is a noteworthy presentation by Mauro
Pagano:

https://de.slideshare.net/MauroPagano3/chasing-the-optimizer-71564184

There is also something in Bob's Furniture Store:
http://www.oaktable.net/contribute/10053-viewer

Regards
Post by Cee Pee
Nice thread, we get same kind of problems too. Is there a book or
paper on how to read the 10053 trace file, like "basics of 10053"?
Reading Tim's response, I think it is a good time someone wrote a book
on that topic.
CP
I’ve normally used DBMS_SHARED_POOL.PURGE with the C option to
purge specific SQL’s and get a reparse and desired plan. It
continued to work ok, for me anyway, in 12c, but Carlos Sierra,
and maybe some others, had mentioned potential issues in 12x. Like
the link mentions, I’d normally do this to get a re-parse it to
pick up a baseline we just implemented, or simply to get a
re-parse away from an “odd” value such as is already being discussed.
https://carlos-sierra.net/2017/11/22/purging-a-cursor-in-oracle-revisited/
<https://carlos-sierra.net/2017/11/22/purging-a-cursor-in-oracle-revisited/>
Regards,
Larry G. Elkins
214.695.8605
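A sketch of the single-cursor purge Larry describes (the sql_id, address and hash_value below are placeholders; look up the real ones in V$SQLAREA first):

```sql
-- Find the cursor's address and hash value...
SELECT address, hash_value
FROM   v$sqlarea
WHERE  sql_id = '7ws837zynp1zv';   -- placeholder sql_id

-- ...then purge just that cursor ('C' = cursor), forcing a re-parse
BEGIN
  DBMS_SHARED_POOL.PURGE('000000008D1C4A10,1514306206', 'C');
END;
/
```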
*Sent:* Thursday, July 19, 2018 9:08 AM
*Subject:* RE: SQL performance in prod
Well… if you change the number of CPUs, then you can get a
different number of parallel servers and a different hash
distribution. So something prone to skew of row assignments could
in theory be pretty flat with one number of CPUs (and its
number of parallel servers) and slow with a different number of
CPUs, where a different number of parallel servers leaves a small
number of them carrying the load.
--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
Andy Sayer
2018-07-26 07:28:21 UTC
Permalink
Mladen,

“My assumption was that the plans for a single execution are different. I
may be wrong.”

I was referring to the execution of a single line in the plan.
eg for a nested loop: there will be multiple executions of the plan lines
that hit the joined table (assuming multiple rows in the driving table),
the e-rows represents the number of rows the CBO expects to find on one
iteration.

Assuming one statement execution. A-rows will be the total number of rows
flowing from that row source from the entire execution of the statement. So
you would have to multiply the E-rows by the number of times the line was
hit

Hopefully that’s clearer now

Andy
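A made-up row-source fragment to illustrate the point (the numbers are not from the OP's system):

```
| Id | Operation                   | Starts | E-Rows | A-Rows |
|  3 | TABLE ACCESS BY INDEX ROWID |    100 |      1 |    100 |
```

Here E-Rows × Starts = 1 × 100 = 100, which matches A-Rows, so this line's estimate is actually fine even though E-Rows and A-Rows look different at first glance.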
l***@bluewin.ch
2018-07-26 07:41:58 UTC
Permalink
Andy,
for a nested loop join the estimates in the outer loop are per execution, but at the join level (the NL operation) the estimates reflect the result of all executions.
Regards
Lothar
Andy Sayer
2018-07-26 08:02:33 UTC
Permalink
Lothar,

That’s because the nested loop line is only executed once.

Unfortunately my phone won’t let me copy and paste to an email with a
decent font but if you go to
https://docs.oracle.com/database/121/TGSQL/tgsql_interp.htm#TGSQL94854
and scroll down to the execution plan with row source statistics. You can
see that the nested loop operation on line 1 is started 1 time even though
it has 9165 rows input into it. The e-rows is based on that one execution
of the whole set of input.

For a nested loop inside another nested loop, it’s still consistent. The
E-Rows will be for one execution of the line; the line might be executed
multiple times though because it is in a nested loop itself.

Andy
Post by l***@bluewin.ch
Andy,
for a nested loop join the estimates in the outer loop are per execution,
but on the join level (the NL Operation) the estimates reflects the result
of all executions.
Regards
Lothar
Harmandeep Singh
2018-07-19 05:15:27 UTC
Permalink
Hi All,

Well, the experts have already given valuable input; below are my 2 cents

It seems to me a case of the following:
- There are many histograms on the underlying table columns
- The tables also have quite a large number of indexes like ind(a), ind(a, b),
ind(b), where a and b are column names. That in itself is a poor strategy, but
yes, it happens over time
- Since there are many histograms and the optimizer finds corresponding indexes
too, it is picking a different index each time, hence the flipping plans and
poor performance

Histogram creation is controlled by the method_opt parameter during stats
gathering.
It is generally left at the default, which in most cases is "FOR ALL COLUMNS
SIZE AUTO", the database default setting.
For Oracle's own product "Oracle E-Business Suite" it is set to "FOR ALL
COLUMNS SIZE 1", meaning no histograms by default, which is the best strategy for
any application

*Solution*

You can try gathering the stats on all the tables as follows (note that the
first argument to gather_table_stats is the schema owner, not the table name):
exec dbms_stats.gather_table_stats(ownname=>'OWNER', tabname=>'TABNAME', method_opt=>'FOR ALL COLUMNS SIZE 1');

I hope it will give you plan stability and better run timings

Warm Regards,
Harmandeep Singh
Post by V Raman
List
We have a SQL that is performing intermittently bad in our prod env. The
good ones take 2 to 5 mins, the bad ones run for hours we kill them. They
run fine in the non prod env. I ran an awsqrpt and based on that I see that
there are a few executions with the bad ones taking hours. Looking at the
differences in the execution plan, the good ones have lots of nested loops
in them, with the bad ones having lots of hash joins.
I am trying to figure out the cause(s). Assuming there is not much change
in the DB, the first thing that comes to mind is statistics. Can the
listers help with ideas? Thanks.
If anyone is interested is seeing the report, i can provide a link to them
by email.
Venky
Andy Sayer
2018-07-19 07:27:32 UTC
Permalink
You’ve had a lot of suggestions, but personally I believe they’re either
overkill or guesswork that will just waste plenty of your time.

I would just do it very simply:

Compare the two plans (by eye)
Find the pivotal line of the plan that “changes everything” - does line 13
have a tiny cardinality in plan A but not in plan B? (Etc)
Identify what went into that decision
Find out how that input changed between A and B (different peeked bind,
different stat, etc)

I don’t see any need to dig into traces, look into system stats, check for
witch doctors nearby, etc.

If you shared the two complete plans, then I’m sure it wouldn’t take long
to just point out where the issue is likely coming from and make some
suggestions. Make sure you include the predicate section, peeked binds and
notes - they are all packed full of important information

Hope this helps,
Andrew
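One way to pull a plan that includes those sections (the sql_id and child number here are placeholders):

```sql
-- TYPICAL includes the predicate section; the modifiers add peeked binds and notes
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id          => '7ws837zynp1zv',   -- placeholder
         cursor_child_no => 0,
         format          => 'TYPICAL +PEEKED_BINDS +NOTE'));
```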
Stefan Koehler
2018-07-19 07:46:22 UTC
Permalink
Hey,
I totally agree with Andy here. However you can make it a little bit more easy for us (and you).

Just create a SQLd360 report (https://github.com/sqldb360/sqldb360) and send it to us.

Best Regards
Stefan Koehler

Independent Oracle performance consultant and researcher
Website: http://www.soocs.de
l***@bluewin.ch
2018-07-19 07:50:06 UTC
Permalink
Andy,
I agree to some extent.
We know:
- the two plans are close in cost, otherwise the plan would not flip back and forth
- in reality the two plans have very different run times, therefore both cost estimates must be incorrect
- since the cost estimates must be wrong, the cardinalities are likely wrong too
Therefore cardinality feedback will lead right to the source of the problem.
Conclusion: all you need are the actual rows, e.g. from runtime stats or from sql monitor.
No witch doctors, no guesses, just facts
Thanks
Lothar
Mladen Gogala
2018-07-23 19:45:57 UTC
Permalink
Use the /*+ GATHER_PLAN_STATISTICS */ hint and you will see E-Rows and
A-Rows in the plan. E-Rows are estimated rows, A-Rows are actual rows.
The problem is wherever there is a significant difference between the two.
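A sketch of that workflow (table and column names are hypothetical):

```sql
-- 1. Run the statement once with row-source statistics enabled
SELECT /*+ GATHER_PLAN_STATISTICS */ o.order_id
FROM   orders o
JOIN   order_lines l ON l.order_id = o.order_id
WHERE  o.order_date > DATE '2018-07-01';

-- 2. Pull the plan with E-Rows and A-Rows side by side
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```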

Regards
Post by V Raman
I am trying to figure out the cause(s). Assuming there is not much
change in the DB, the first thing that comes to mind is statistics.
Can the listers help with ideas? Thanks.
--
Mladen Gogala Database Consultant Tel: (347) 321-1217
Andy Sayer
2018-07-23 20:52:34 UTC
Permalink
Remember that E-Rows is per execution, so you’ll need to multiply it by
executions (or starts) to compare it against A-Rows
Post by Mladen Gogala
Use /*+ GATHER_PLAN_STATISTICS */ hint and you will see E-Rows and
A-Rows in the plan. E-Rows are estimated rows, A-rows are actual rows.
The problem is where there is significant difference between the two.
Regards
Post by V Raman
I am trying to figure out the cause(s). Assuming there is not much
change in the DB, the first thing that comes to mind is statistics.
Can the listers help with ideas? Thanks.
Mladen Gogala
2018-07-25 21:11:26 UTC
Permalink
Hi Andy!

My assumption was that the plans for a single execution are different. I
may be wrong.

Regards
Remember that E-Rows is per execution, so you’ll need to multiply it
by executions (or starts) to compare it against A-Rows
--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
