On Dynamic Sampling (DYNAMIC_SAMPLING): a translation of Tom Kyte's article

Our technologist samples dynamically, considers usage, and sets levels.

My questions are related to dynamic sampling. What does it really do, when would I consider using it, and what are the meanings of all the different levels it can be set to?

Dynamic sampling first became available in Oracle9i Database Release 2. It is the ability of the cost-based optimizer (CBO) to sample the tables a query references during a hard parse, to determine better default statistics for unanalyzed segments and to verify its "guesses." This sampling takes place only at hard parse time and is used to dynamically generate better statistics for the optimizer to use, hence the name dynamic sampling.

The optimizer uses a variety of inputs to come up with a plan. It uses any and all constraints defined on the table; system statistics (information about your server's I/O speeds, CPU speed, and the like); and statistics gathered from the segments involved in the query. The optimizer uses statistics to estimate cardinalities (the number of rows each step in a given plan is expected to return), and those cardinalities are a major variable in computing the cost of a query. When cardinalities are incorrectly estimated, the optimizer may choose an inefficient query plan. The No. 1 (and some might say only) reason for an inefficient plan's being generated by the optimizer is inaccurate cardinality estimation. I like to say "right cardinality equals right plan; wrong cardinality equals wrong plan."

So, "right" is the motivation behind dynamic sampling: to help the optimizer get the right estimated cardinality values. Feeding the optimizer more information, and more-accurate information specific to the query itself, will help it come up with the optimal execution plan.

Dynamic sampling offers 11 setting levels (0 through 10). I'll explain the different levels below, but note that in Oracle9i Database Release 2 the default dynamic sampling level is 1, whereas in Oracle Database 10g Release 1 and above it defaults to 2.

Ways Dynamic Sampling Works
There are two ways to use dynamic sampling:

The OPTIMIZER_DYNAMIC_SAMPLING parameter can be set at the database instance level and can also be overridden at the session level with the ALTER SESSION command.
The DYNAMIC_SAMPLING query hint can be added to specific queries.

In this column, I'm going to use the hinting capability to demonstrate the effect of dynamic sampling, but you may well use the session-level capability to increase the use of dynamic sampling, especially in a reporting or data warehouse situation.
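Both approaches are easy to try in SQL*Plus. Here is a minimal sketch (the level value 4 and the table name EMP are arbitrary, chosen only for illustration):

SQL> -- session level: affects every query hard parsed in this session
SQL> alter session set optimizer_dynamic_sampling = 4;
Session altered.

SQL> -- statement level: affects only this query, via the table's alias
SQL> select /*+ dynamic_sampling(emp 4) */ * from emp;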

As stated before, dynamic sampling is used to gather statistics for unanalyzed segments and to verify "guesses" made by the optimizer. I'll look at each of these uses in turn.

Unanalyzed Segments
The optimizer will use default statistic values if a segment is not analyzed and you do not use dynamic sampling to get a quick estimate. These default cardinality values are documented in Oracle Database Performance Tuning Guide. They are typically not very realistic, because using them is a one-size-fits-all approach: the estimated row counts are based on guesses at the number of blocks of data in the table and an average row width. Without dynamic sampling, these guesses will be off, often by a large amount. Consider:

SQL> create table t
2 as
3 select owner, object_type
4 from all_objects
5 /
Table created.

SQL> select count(*) from t;

COUNT(*)
------------------------
68076

Now I'll look at the estimates for a query that accesses this unanalyzed table. To see the default cardinalities the optimizer would use, I have to disable dynamic sampling (it is enabled by default in Oracle9i Database Release 2 and above). I achieve this via the DYNAMIC_SAMPLING hint with a level of zero, zero being the value that disables dynamic sampling, as shown in Listing 1.

Code Listing 1: Disabling dynamic sampling to see default cardinalities

SQL> set autotrace traceonly explain
SQL> select /*+ dynamic_sampling(t 0) */ * from t;

Execution Plan
------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 16010 | 437K| 55 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T | 16010 | 437K| 55 (0)| 00:00:01 |
--------------------------------------------------------------------------

As you can see, the estimated cardinality is 16,010, which is very far off from the real cardinality of about 68,000. If I permit dynamic sampling, I get a much more realistic cardinality estimate, as shown in Listing 2.

Code Listing 2: More-realistic cardinalities with dynamic sampling enabled

SQL> select * from t;

Execution Plan
------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 77871 | 2129K| 56 (2)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T | 77871 | 2129K| 56 (2)| 00:00:01 |
--------------------------------------------------------------------------

Note
------------------------------------------
- dynamic sampling used for this statement

Now, 77,871 is not exactly 68,000, obviously, but it is much closer to reality than 16,010 was. In general, the optimizer will choose better query plans for queries accessing this unanalyzed table when using dynamic sampling.

An inaccurate cardinality estimate can swing either way, of course. In Listing 1, I showed the optimizer radically underestimating the cardinality, but it can overestimate as well. Consider the estimate in Listing 3.

Code Listing 3: Overestimating cardinalities

SQL> delete from t;
68076 rows deleted.

SQL> commit;
Commit complete.

SQL> set autotrace traceonly explain
SQL> select /*+ dynamic_sampling(t 0) */ * from t;

Execution Plan
------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 16010 | 437K| 55 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T | 16010 | 437K| 55 (0)| 00:00:01 |
--------------------------------------------------------------------------

SQL> select * from t;

Execution Plan
-----------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 28 | 55 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| T | 1 | 28 | 5 (0)| 00:00:01 |
--------------------------------------------------------------------------

Note
---------------------------------------
- dynamic sampling used for this statement

Think about what might happen if the optimizer guessed 16,010 rows instead of 1 row in this case. For queries accessing table T, the optimizer would grossly overestimate the rows that will be returned from T and generate incorrect plans as a result.

So, where would dynamic sampling be useful? First and foremost, it is useful when you are accessing any table that has been created and loaded but not yet analyzed. Starting with Oracle Database 10g Release 1, the CBO is the only supported optimizer, and it needs accurate statistics to perform its job correctly. If a table exists that hasn't had statistics gathered yet, the optimizer will be flying blind. Dynamic sampling gives the CBO the information it needs in order to operate correctly.

The second use for dynamic sampling is with global temporary tables. Often global temporary tables do not have statistics, and dynamic sampling can provide the optimizer with information about them. Your application would load the global temporary table, and the first hard parse of any query that used the temporary table would dynamically sample it to ascertain its correct size.
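As a hedged sketch of that pattern (the table and query here are hypothetical, not from the original article):

SQL> create global temporary table gtt
2 ( x int, y varchar2(30) )
3 on commit preserve rows
4 /
Table created.

SQL> insert into gtt select rownum, object_name from all_objects;

At the default dynamic sampling level of 2 (or higher), the first hard parse of a query such as select * from gtt where x > 100 can sample the freshly loaded rows instead of relying on default statistics.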

When the Optimizer Guesses
In addition to providing the optimizer with necessary statistics when it is accessing unanalyzed segments, dynamic sampling may also help the optimizer validate its guesses. In general, the optimizer has access to statistics about "single things"; when you gather statistics with DBMS_STATS, the optimizer receives information about

The table: the number of rows, average row widths, and the like.
Each individual column: the high value, the low value, the number of distinct values, histograms (maybe), and the like. (Oracle Database 11g can even gather statistics on an expression, but it is still a single expression.) Additionally, Oracle Database 11g can gather statistics on groups of columns, and these statistics can be used in equality comparisons (see the sketch after this list).
Each individual index: the clustering factor, the number of leaf blocks, the index height, and the like.
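As an aside, here is a minimal sketch of creating such column-group (extended) statistics in Oracle Database 11g. The table T2 and columns COL1 and COL2 are hypothetical; DBMS_STATS.CREATE_EXTENDED_STATS returns the name of the extension it creates:

SQL> -- define the column group, then gather statistics as usual
SQL> select dbms_stats.create_extended_stats( user, 'T2', '(COL1,COL2)' ) from dual;

SQL> exec dbms_stats.gather_table_stats( user, 'T2' )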

So, given a table with various columns, the optimizer has lots of information to work with, but it is missing some vital information, including statistics about how the various columns interact with each other and statistics about any correlations in column values. For example, suppose you have a table of census information that includes a record for everyone on the planet. One of the table's attributes, MONTH_BORN_IN, is a character string field containing each person's birth month. Another column, ZODIAC_SIGN, contains character strings with the name of each person's zodiac sign.

After gathering statistics, you ask the optimizer to estimate how many people were born in December, and it would almost certainly come up with a very accurate estimate of 1/12 of the data (assuming a normal distribution of birth dates). If you asked it to estimate how many people are Pisces, it would again likely come up with an accurate estimate of 1/12 of the data.

So far, so good. But now you ask, "How many people born in December are Pisces?" All Pisces were born in either February or March, but the optimizer isn't aware of that. All the optimizer knows is that December will retrieve 1/12 of the data and that Pisces will retrieve 1/12 of the data; it assumes the two columns are independent and, using very simple logic, says, "The number of people born in December who are also Pisces will be 1/12 times 1/12, or 1/144, of the data." The actual number of rows (zero) will be very far off from the optimizer's guess of 1/144 of the rows in the table, and the result is typically a suboptimal plan, due to the poor cardinality estimate.
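To make the arithmetic concrete, here is a hypothetical version of that query (the CENSUS table, its column values, and the 7,000,000,000-row count are invented for illustration):

SQL> select count(*)
2 from census
3 where month_born_in = 'DECEMBER'
4 and zodiac_sign = 'PISCES';

With no correlation information, the optimizer would estimate 7,000,000,000 * 1/12 * 1/12, or roughly 48.6 million rows, when the true answer is zero.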

Dynamic sampling can help solve this. When it is set high enough, to level 3 or above, the optimizer will validate its guesses by using a dynamic sample.

To demonstrate this, I'll create a table with some very specific data. Note that in this table, if the FLAG1 column has a value of Y, the FLAG2 column will have a value of N, and vice versa. All the data is either Y, N or N, Y; there are no Y, Y records and no N, N records. Listing 4 shows the creation of the table and the gathering of statistics.

Code Listing 4: Creating the "FLAG" table and gathering statistics

SQL> create table t
2 as select decode( mod(rownum,2), 0, 'N', 'Y' ) flag1,
3 decode( mod(rownum,2), 0, 'Y', 'N' ) flag2, a.*
4 from all_objects a
5 /
Table created.

SQL> create index t_idx on t(flag1,flag2);
Index created.

SQL> begin
2 dbms_stats.gather_table_stats
3 ( user, 'T',
4 method_opt=>'for all indexed columns size 254' );
5 end;
6 /
PL/SQL procedure successfully completed.

So I have the table and have gathered statistics, including histograms for the FLAG1 and FLAG2 columns. The following shows the number of rows in the table, half that number, and a quarter of that number:

SQL> select num_rows, num_rows/2, num_rows/2/2 from user_tables where table_name = 'T';

NUM_ROWS NUM_ROWS/2 NUM_ROWS/2/2
-------- ---------- ------------
68076 34038 17019

If I look at the data in the table, I can see how many rows would be returned for FLAG1 = 'N' (half of the data, given how I constructed it) and how many would be returned for FLAG2 = 'N' (again, half of the data). I can verify this by using autotrace again, as shown in Listing 5.

Code Listing 5: Good cardinality estimates, looking at half of the table data

SQL> set autotrace traceonly explain
SQL> select * from t where flag1='N';

Execution Plan
------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 33479 | 3432K| 292 (1)| 00:00:04 |
|* 1 | TABLE ACCESS FULL| T | 33479 | 3432K| 292 (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("FLAG1"=‘N‘)

SQL> select * from t where flag2='N';

Execution Plan
----------------------------
Plan hash value: 1601196873

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 34597 | 3547K| 292 (1)| 00:00:04 |
|* 1 | TABLE ACCESS FULL| T | 34597 | 3547K| 292 (1)| 00:00:04 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("FLAG2"=‘N‘)

So far, so good: the estimated cardinalities are accurate, and the optimizer can generate optimal query plans.

Last, I can see the value the optimizer will "guess" by default if I query FLAG1 = 'N' and FLAG2 = 'N': in this case, about a quarter of the rows in the table, as shown in Listing 6.

Code Listing 6: Poor cardinality estimates, looking at a quarter of the table data

SQL> select * from t where flag1 = 'N' and flag2 = 'N';

Execution Plan
----------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 17014 | 1744K| 292 (1)| 00:00:04 |
|* 1 | TABLE ACCESS FULL| T | 17014 | 1744K| 292 (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
----------------------------------------------------

1 - filter("FLAG1" = ‘N‘ AND "FLAG2" = ‘N‘)

Listing 6 shows that when the predicate gets just a little more complex, the optimizer misses the estimated cardinality by a huge amount: it doesn't know about the relationship between FLAG1 and FLAG2. Enter dynamic sampling, shown in Listing 7.

Code Listing 7: Good cardinality estimate, looking at only six rows

SQL> select /*+ dynamic_sampling(t 3) */ * from t where flag1 = 'N' and flag2 = 'N';

Execution Plan
-----------------------------
Plan hash value: 470836197

------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 6 | 630 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| T | 6 | 630 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | T_IDX | 6 | | 1 (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
----------------------------------------------------

2 - access("FLAG1"=‘N‘ AND "FLAG2"=‘N‘)

Note the much better row estimate in Listing 7 (6 instead of more than 17,000) compared to Listing 6, and also note the very different (and now optimal) plan. Instead of a full table scan, the optimizer has decided to use the index, and this execution plan will definitely be better than the original full table scan, because in reality no data will be returned, and the index will discover that instantly.

The Dynamic Sampling Levels
A frequently asked question about dynamic sampling is, "What do all of the levels mean?" The answer is pretty straightforward and is documented in Oracle Database Performance Tuning Guide.

I reproduce that information here:

The sampling levels are as follows if the dynamic sampling level is set from a cursor hint or from the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter:

Level 0: Do not use dynamic sampling.
Level 1: Sample all tables that have not been analyzed if the following criteria are met: (1) there is at least 1 unanalyzed table in the query; (2) this unanalyzed table is joined to another table or appears in a subquery or nonmergeable view; (3) this unanalyzed table has no indexes; (4) this unanalyzed table has more blocks than the number of blocks that would be used for dynamic sampling of this table. The number of blocks sampled is the default number of dynamic sampling blocks (32).
Level 2: Apply dynamic sampling to all unanalyzed tables. The number of blocks sampled is two times the default number of dynamic sampling blocks.
Level 3: Apply dynamic sampling to all tables that meet Level 2 criteria, plus all tables for which standard selectivity estimation used a guess for some predicate that is a potential dynamic sampling predicate. The number of blocks sampled is the default number of dynamic sampling blocks. For unanalyzed tables, the number of blocks sampled is two times the default number of dynamic sampling blocks.
Level 4: Apply dynamic sampling to all tables that meet Level 3 criteria, plus all tables that have single-table predicates that reference 2 or more columns. The number of blocks sampled is the default number of dynamic sampling blocks. For unanalyzed tables, the number of blocks sampled is two times the default number of dynamic sampling blocks.
Levels 5, 6, 7, 8, and 9: Apply dynamic sampling to all tables that meet the previous level's criteria, using 2, 4, 8, 32, or 128 times the default number of dynamic sampling blocks, respectively.
Level 10: Apply dynamic sampling to all tables that meet the Level 9 criteria, using all blocks in the table.

Note that in Oracle9i Database Release 2, the default setting for dynamic sampling is level 1. At this setting, the optimizer will tend to use dynamic sampling only in some cases when a table is unanalyzed; this includes the case of global temporary tables without statistics, but it doesn't ensure that dynamic sampling will be used against all unanalyzed tables. In Oracle Database 10g Release 1 and above, the default setting was raised to 2. This setting ensures that the optimizer will have a good estimate whenever it optimizes a query that accesses a table with no statistics.

I used level 3 in the "FLAG" table example in Listing 7. Level 3 instructs the optimizer to collect a sample to validate a guess it might have made. For example, if I turn on SQL_TRACE and run the example in Listing 7, I will find the SQL in Listing 8 in my trace file.

Code Listing 8: Trace file SQL generated by the Listing 7 query

SELECT /* OPT_DYN_SAMP */ /*+ ALL_ROWS IGNORE_WHERE_CLAUSE
NO_PARALLEL(SAMPLESUB) opt_param('parallel_execution_enabled', 'false')
NO_PARALLEL_INDEX(SAMPLESUB) NO_SQL_TUNE */ NVL(SUM(C1),:"SYS_B_00"),
NVL(SUM(C2),:"SYS_B_01"), NVL(SUM(C3),:"SYS_B_02")
FROM
(SELECT /*+ IGNORE_WHERE_CLAUSE NO_PARALLEL("T") FULL("T")
NO_PARALLEL_INDEX("T") */ :"SYS_B_03" AS C1, CASE WHEN "T"."FLAG1"=
:"SYS_B_04" AND "T"."FLAG2"=:"SYS_B_05" THEN :"SYS_B_06" ELSE :"SYS_B_07"
END AS C2, CASE WHEN "T"."FLAG2"=:"SYS_B_08" AND "T"."FLAG1"=:"SYS_B_09"
THEN :"SYS_B_10" ELSE :"SYS_B_11" END AS C3 FROM "T" SAMPLE BLOCK
(:"SYS_B_12" , :"SYS_B_13") SEED (:"SYS_B_14") "T") SAMPLESUB

The CASE expressions in Listing 8 show the optimizer trying to validate its guess: it is looking for any correlation between the FLAG1 and FLAG2 columns. By executing this sampling query during the hard parse, the optimizer was able to figure out that very few rows would be returned by this WHERE clause, adjust the estimated cardinality, and arrive at the correct plan.
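To reproduce this yourself, here is a minimal sketch of capturing such a trace (in Oracle Database 10g and above you could also use DBMS_MONITOR.SESSION_TRACE_ENABLE; the trace file location depends on your version and diagnostic settings):

SQL> alter session set sql_trace = true;
Session altered.

SQL> select /*+ dynamic_sampling(t 3) */ * from t where flag1 = 'N' and flag2 = 'N';
SQL> alter session set sql_trace = false;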

When to Use Dynamic Sampling
"When should I use dynamic sampling?" is a tricky question. As with any other feature, there are times to use it and times to avoid it. So far I've concentrated on the "goodness" of dynamic sampling, and based on that, it seems you should set the level to 3 or 4 and just let the optimizer always use dynamic sampling to validate its guesses.

That makes sense in an environment in which you spend most of your time executing SQL and very little of your overall time hard-parsing it. That is, the SQL you execute runs for a long time and the parse time is a small fraction of the overall execution time, as in a data warehousing environment. There, dynamic sampling at levels above the default makes complete sense: you are willing to give the optimizer a little more time during a hard parse (when sampling takes place) to arrive at the optimal plan for these complex queries.

That leaves the other classic type of environment: the online transaction processing (OLTP) system. Here, in general, you execute queries thousands of times per second and spend very little time executing any given query; the queries are typically small and fast. Increasing the parse time in an OLTP system might well cause you to spend more time parsing than executing SQL. You do not want to increase parse times here, so higher levels of dynamic sampling are not advisable.

So what happens when you need the benefit of a level 3 or higher dynamic sample in an OLTP system? That is when it is time to look into SQL Profiles, a feature of Oracle Database 10g Release 1 and above (download.oracle.com/docs/cd/B28359_01/server.111/b28274/sql_tune.htm#PFGRF02605).

A SQL profile is roughly equivalent, in some respects, to a really aggressive dynamic sample: it, too, validates any guesses, but it can persist this information in the data dictionary. In some respects, a SQL profile is like a "static sample," as opposed to the dynamic samples we've been discussing. Using a SQL profile is a bit like gathering statistics for a query and storing that information for the optimizer to use at hard parse time; it saves on the cost of dynamic sampling by "sampling" once and persisting the values.
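For reference, here is a hedged sketch of how a SQL profile is typically produced with the SQL Tuning Advisor (the SQL_ID and the task and profile names are hypothetical, and the advisor must actually recommend a profile before ACCEPT_SQL_PROFILE will succeed):

SQL> declare
2 l_task varchar2(64);
3 begin
4 -- create and run a tuning task for one statement in the shared pool
5 l_task := dbms_sqltune.create_tuning_task( sql_id => 'abcd1efgh2ijk', task_name => 'profile_demo' );
6 dbms_sqltune.execute_tuning_task( task_name => 'profile_demo' );
7 -- review dbms_sqltune.report_tuning_task( 'profile_demo' ), then accept:
8 dbms_sqltune.accept_sql_profile( task_name => 'profile_demo', name => 'demo_profile' );
9 end;
10 /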

So, why not just always use SQL Profiles? Well, in a data warehouse and in many reporting systems, you do not use bind variables; you put the literal values into the SQL query itself, so the optimizer has good insight into exactly the data you are interested in. In a data warehouse system, query performance is paramount and the users typically generate SQL in an ad hoc fashion. They do not run the same SQL from day to day, so there is no SQL from which to create a SQL profile! The SQL is generated, executed, and maybe never executed again. Dynamic sampling is perfect for such a data warehouse situation: it quickly validates the optimizer's guesses at hard parse time and gives the optimizer the information it needs right then and there. Because that query will likely not be executed over and over again, the information need not be persisted in the data dictionary.

In the OLTP system, the opposite is true. You do use bind variables; you do use the same SQL over and over again. Avoiding any bit of work you can at runtime is your goal in this kind of system, because you want subsecond response times. Here the SQL profile makes more sense: you gather the statistics for the query once and persist them for all subsequent optimizations of that query.

Original article: http://www.oracle.com/technetwork/testcontent/o19asktom-086775.html

More Information
In addition to the Oracle documentation at oracle.com/technetwork/documentation, Jonathan Lewis’ excellent book Cost-Based Oracle Fundamentals (Apress, 2006) and my book Effective Oracle by Design (Oracle Press, 2003) provide more information on dynamic sampling and other optimizer topics.

Tom Kyte is a database evangelist in Oracle’s Server Technology division and has worked for Oracle since 1993. He is the author of Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions (Apress, 2005) and Effective Oracle by Design (Oracle Press, 2003), among others.

时间: 2024-11-06 15:37:38

动态采样---DYNAMIC_SAMPLING 基于tom文章的翻译的相关文章

Oracle动态采样学习

动态采样(Dynamic Sampling)是在ORACLE 9i Release 2中开始引入的一个技术,引入它的目的是为了应对数据库对象没有分析(统计信息缺失)的情况下,优化器生成更好的执行计划.简单的说,在数据库段(表.索引.分区)对象没有分析的情况下,为了使CBO优化器得到足够多的信息以保证优化器做出正确执行计划而发明的一种技术.它会分析一定数量段对象上的数据块获取CBO需要的统计信息.动态采样技术仅仅是统计信息的一种补充,它不能完全替代统计信息分析. 注意:动态采样在Oracle 11

ORACLE动态采样

原文地址:http://www.oracle.com/technetwork/issue-archive/2009/09-jan/o19asktom-086775.html 动态采样是为了解决当硬解析时对没有统计信息的表抽样收集统计信息,让优化器走出合适的执行计划. 从oracle9.2开始,动态采样共有11个级别(从0到10),oracle9.2的默认采样级别为1,oracle10.1及以上版本默认为2. 1.启用动态采样的方法: 1.1 设置参数OPTIMIZER_DYNAMIC_SAMPL

执行计划基础 动态采样

1. 统计信息默认情况下是每天晚上10点半后收集,如果新建对象还没来得级收集统计信息,就采用动态采样的方式. 2. 具体在set autotrace 跟踪的执行计划中,可以看到类似:- dynamic sampling used for this statement (level=2) 3. 除非你用类似/*+dynamic_sampling(t 0) */的HINT关闭这个动态采样. 4. 在收集过统计信息后,Oracle就不会采用动态采样. 注:建索引过程中,默认会收集索引相关的统计信息.

动态代理是基于什么原理(总结杨晓峰的博客)

动态代理的原理 编程语言分为:动态类型,静态类型 动态类型和静态类型的区别? 一种比较简单的区别方法:是在编译期进行检查,还是在运行时期进行检查. 通常认为:java是一种强类型语言,但是提供类似反射的机制,也具备部分动态语言的能力. 反射机制是java语言提供的一种基础功能,赋予程序在运行时自省(introspect能力),通过反射我们可以直接操作类或者对象,比如获取某个对象的类定义,获取类声明的属性或者方法,调用方法或者构造对象,甚至可以运行时修改类定义. 动态代理是一种方便运行时动态构建代

Java 动态代理是基于什么原理(还没整理完)

1> Java的反射机制在平时的业务开发过程中很少用到,但是在一些基础框架的搭建上应用非常广泛 2>什么是Java反射机制 Java反射机制是在运行状态中,对于任意一个类,都能够知道这个类的所有属性和方法:对于任意一个对象,都能够调用它的任意方法和属性:这种动态获取信息以及动态调用对象方法的功能称为java语言的反射机制. 3>反射机制提供了哪些功能 ----在运行时判定任意一个对象所属的类 ----在运行时构造任意一个类的对象 ----在运行时判定任意一个类所具有的成员变量和方法 --

动态路由协议(基于RIP协议 配置过程)

常见的动态路由协议可以分为"距离矢量路由协议"和"链路状态路由协议". 其中距离矢量路由协议依据的是从源网络到目标网络所经过的路由器的个数来选择路由,典型的协议是"RIP": 链路状态路由协议会综合考虑从源网络到目标网络的各条路径的情况来选择路由,典型的协议是"OSPF". 本篇文章说下RIP协议,运行该协议的路由器都会向邻居路由器学习他们所观察到的路由信息,然后向外通告自己所观察到的路由信息,因为每个路由器在信息上都依赖于邻

.Net 框架实现AOP(动态代理实现AOP,本文为翻译)

在上一节,我们将静态实现AOP,但是对于一个大型项目,要想为每个类,每个方法都去实现AOP ,进行日志记录和权限验证似乎是不可能的. 即使可能对于成百上千个类维护,也是很难维护.所以今天的主题就是如标题所述. (Real Proxy)真正代理和(Dynamic Proxy)动态代理 Real Proxy 类是个抽象类,定义在 System.Runtime.Remoting.Proxies,定义了代理的一些公共的基本的功能方法. Dynamic Proxy 类必须继承Real Proxy,通过重写

Win2D 官方文章系列翻译 - DPI (每英寸点数)和 DIPs(设备独立像素)

本文为个人博客备份文章,原文地址: http://validvoid.net/win2d-dpi-dips/ 本文旨在解释物理像素与设备独立像素(DIPs, device independent pixels)之间的区别,以及 Win2D 如何处理 DPI (dots per inch/每英寸点数). Win2D 被设计为可以自行感知设备环境,以便在各种高低不同 DPI 的设备上呈现正确的视觉表现,故多数应用可以忽略 DIPs 和 DPI 之间的区别.如果你的应用有更多特定需求,或者你希望自行定

Win2D 官方文章系列翻译 - 处理设备丢失

本文为个人博客备份文章,原文地址: http://validvoid.net/win2d-handling-device-lost/ “设备丢失”是指 GPU 设备失效无法继续进行渲染的情况.GPU 硬件故障.驱动程序缺陷.驱动程序软件更新或者从一个 GPU 切换到另一个都可能导致此问题.丢失的设备无法使用,此时任何尝试使用设备的 Win2D 都会引发异常.要解决这一问题,应用必须创建新设备再重新创建所有图形资源. 并非所有应用都需要操心如何解决设备丢失的情况.设备丢失是一种比较少见的情况(但愿