Tuning 1-2

Chapter 1: Tuning Overview

1. Tuning Questions
Application software lifecycle: design, development, testing, go-live.
1) Who does the tuning?
Application architect:
application logic and scalability, and the architecture of the database application (physical storage, logical storage, object design).
Developers: write efficient SQL code.
Database administrators:
adjust database parameters, take part in testing, and work with developers to resolve problems found during testing.
System administrators: adjust operating system parameters (memory, I/O, swap, and so on).
2) Why tune?
To give the application a sound runtime environment after it goes live.

3) Tuning goals

Tuning goals must be quantified ("as fast as possible" is not a goal).

Set a measurable target (for example, reduce query time to under 1 minute) and stop once the target is reached.

2. The Tuning Process

1) Application design and development phase
2) Database configuration phase:
tablespace and data file storage layout, table structures, indexes, and so on
3) Rollout of the new application
4) Troubleshooting and performance tuning after go-live

3. Tuning Targets

1) Reduce and eliminate wait events
2) Access data with as few blocks as possible (indexes, index-organized tables, clusters, larger block sizes, materialized views, avoiding row migration and row chaining, partitioned tables)
3) Keep data blocks in memory so they are accessed there
4) Shorten response time
5) Tune disk I/O
6) Speed up data loads (data migration): parallel, direct path, NOLOGGING
7) Reduce recovery time (adjust the backup schedule)

4. Problems That Call for Tuning
1) Sessions that are not released
2) Cursor design problems
3) Table-relationship design: table design should follow the normal forms to reduce redundant rows; selective denormalization can reduce the resources consumed by table joins.

5. Tuning Steps

1) Use tools to find the application bottleneck (operating system: top, vmstat, iostat, topas, glance; database: EM, statspack, AWR, RDA, Oracle dynamic performance views; a sample query follows this list)
2) Analyze and identify the cause of the bottleneck
3) Set a quantified target and work toward it
4) Check whether the problem is solved; if the target is not met, go back to step 3, and stop once it is reached.
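On the database side, a common first look for step 1 is the v$sql view. The sketch below is illustrative (not part of the original notes): it lists the statements doing the most logical I/O, which is often where the bottleneck hides.

select *
  from (select sql_text, executions, buffer_gets, disk_reads
          from v$sql
         order by buffer_gets desc)
 where rownum <= 10;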

6. Tuning and Database Safety

1) Multiplex the control files (too many control files increases disk I/O; two or three are enough)
2) Multiplex the members of each redo log group (two members per group, placed on different disks)
3) Frequent checkpoints (they improve recoverability but increase disk I/O)
4) Data file backups (a well-designed backup schedule, using incremental backups)
5) Enable archiving (number of archiver processes, I/O of the disks holding the archived logs)
6) Data block checking (see the parameters and the sketch below)

10:40:51 [email protected] test1>show parameter check

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_block_checking                    string      FALSE
db_block_checksum                    string      TRUE
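Both parameters are dynamic, so checking can be strengthened without a restart. A minimal sketch (assuming 10.2-or-later syntax, an spfile, and SYSDBA privileges; db_block_checksum is already TRUE above, so only the logical checks are added here):

alter system set db_block_checking = MEDIUM scope=both;   -- logical block checks, costs some extra CPU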

7) The number of concurrent users and transactions

Chapter 2: Diagnostic and Tuning Tools

1. The Alert Log

11:54:28 [email protected] test1>show parameter dump

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
background_core_dump                 string      partial
background_dump_dest                 string      /u01/app/oracle/admin/test1/bdump
----the alert log is written under background_dump_dest

* Contents of the alert log

* Startup and shutdown times, the startup/shutdown commands, and the operating mode.
* Operations that affect the physical structure, i.e. ALTER DATABASE commands.
* Tablespace operations.
* All log switches and archiving, including the names of the files involved.
* Non-default initialization parameters used to start the instance, plus the ALTER SYSTEM commands that change them.
* The alert log is a continuous history of every operation that affects the database structure and the instance.
* The alert log does not contain SQL statements (DML, DDL).
* Common warnings and errors found in the alert log:
* Checkpoint not complete: the log files are too small, so log switches happen too frequently.
* Cannot open file: raised while the database is starting up.
* Block corruption: caused by a damaged data file; investigate together with the DBA_EXTENTS view.
* Archiving problems: the archive destination is full or unavailable.
* Deadlocks.


----Managing the alert log through an external table

1) Create a directory object and grant access

12:05:40 [email protected] test1>create directory bdump as '/u01/app/oracle/admin/test1/bdump';

12:07:08 [email protected] test1>grant read, write on directory bdump to public;

Grant succeeded.
2) Create the external table
sql >create table alert_log( text varchar2(4000) )
organization external
(type oracle_loader
default directory bdump
access parameters (records delimited by newline)
location('alert_test1.log'))
REJECT LIMIT UNLIMITED;

3) Query it

12:12:05 [email protected] test1>select * from alert_log where text like '%ORA-%';
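A rough follow-up query on the same external table (illustrative only) groups the errors so the most frequent ORA- codes stand out:

select substr(text, instr(text, 'ORA-'), 9) ora_code, count(*) cnt
  from alert_log
 where text like '%ORA-%'
 group by substr(text, instr(text, 'ORA-'), 9)
 order by cnt desc;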

----Alert log backup script
#!/bin/bash
cd $ORACLE_BASE/admin/$ORACLE_SID/bdump
mkdir -p bak
lg=alert_$ORACLE_SID.log
cat $lg | gzip -9c > bak/$lg.bak.`date +%y%m%d`.gz && cat /dev/null > $lg

Oracle 11g:

#!/bin/bash
db_name=test1
cd $ORACLE_BASE/diag/rdbms/$db_name/$ORACLE_SID/trace
mkdir -p bak
lg=alert_$ORACLE_SID.log
cat $lg | gzip -9c > bak/$lg.bak.`date +%y%m%d`.gz && cat /dev/null > $lg
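On 11g the same alert log can also be read through the ADR command-line utility instead of touching the file directly; a short sketch (standard adrci commands, output depends on your ADR home):

$ adrci
adrci> show alert -tail 50
adrci> show alert -p "MESSAGE_TEXT LIKE '%ORA-%'"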

2. Background Process Trace Files

They record log information for the background processes, usually their error messages; they do not contain tuning information.

3. User Process Trace Files

-----Used to record statistics about the SQL statements a user executes; these statistics show how each statement was executed and what resources it consumed.

12:58:09 [email protected] test1>show parameter sql_trace

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql_trace                            boolean     FALSE

-------By default, Oracle does not trace the SQL statements that users execute.

1) Enable tracing instance-wide (traces every session; rarely used)

13:01:55 [email protected] test1>alter system set sql_trace=true;

System altered.

Elapsed: 00:00:00.19
13:03:02 [email protected] test1>show parameter sql_trace

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sql_trace                            boolean     TRUE

2) Trace the current session
13:06:23 [email protected] test1>alter session set sql_trace=true;

Session altered.

---Letting ordinary users enable session tracing
13:07:45 [email protected] test1>grant alter session to public;

Grant succeeded.

Elapsed: 00:00:00.11
13:07:55 [email protected] test1>conn
scott/tiger
Connected.
13:07:59 [email protected] test1>alter session set
sql_trace =true;

Session altered.

Elapsed: 00:00:00.08
13:08:14 [email protected] test1>

Example:

13:07:59 [email protected] test1>alter session set sql_trace =true;

Session altered.

Elapsed: 00:00:00.08
13:08:14 [email protected] test1>analyze table emp compute
statistics;

Table analyzed.

Elapsed: 00:00:00.57
13:09:20 [email protected] test1>select ename,sal,deptno from emp where deptno=10;

ENAME             SAL     DEPTNO
---------- ---------- ----------
CLARK            2450         10
KING             5000         10
MILLER           1300         10

Elapsed: 00:00:00.04
13:09:45 [email protected] test1>select ename,sal,deptno from emp where deptno=20;

ENAME             SAL     DEPTNO
---------- ---------- ----------
SMITH             800         20
JONES            2975         20
SCOTT            3000         20
ADAMS            1100         20
FORD             3000         20

Elapsed: 00:00:00.02
13:09:53 [email protected] test1>alter session set sql_trace
=false;

-----Locate the trace file generated by the session trace

13:10:55 [email protected] test1>show parameter dump

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
user_dump_dest                       string      /u01/app/oracle/admin/test1/udump

[[email protected] ~]$ ls -lt /u01/app/oracle/admin/test1/udump/ | more
total 2856
-rw-r----- 1 oracle oinstall 224513 Nov  5 13:10 test1_ora_4879.trc
-rw-r----- 1 oracle oinstall   3023 Nov  5 13:07 test1_ora_4811.trc
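When several trace files pile up in udump it can be hard to tell which one belongs to the test. An optional trick (standard session parameter, the tag name here is just an example) is to label the file before running the statements:

alter session set tracefile_identifier = 'deptno_test';
-- the trace file is then named like test1_ora_<pid>_DEPTNO_TEST.trc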
---Format the trace file with the tkprof tool

[[email protected] ~]$ tkprof /u01/app/oracle/admin/test1/udump/test1_ora_4879.trc
/home/oracle/scott.txt sys=no sort=fchela

TKPROF: Release 10.2.0.1.0 - Production on Mon Nov 5 13:18:00 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

[[email protected] ~]$

[[email protected] ~]$ vi scott.txt

TKPROF: Release 10.2.0.1.0 - Production on Mon Nov 5 13:18:00 2012

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Trace file: /u01/app/oracle/admin/test1/udump/test1_ora_4879.trc
Sort options: fchela
********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

select ename,sal,deptno
from emp where deptno=10

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          8          0           3
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.01          0          8          0           3

Misses in library cache during parse: 1   ;first execution of this statement, so it was a hard parse
Optimizer mode: CHOOSE
Parsing user id: 26

Rows     Row Source Operation
-------  ---------------------------------------------------
      3  TABLE ACCESS FULL EMP (cr=8 pr=0 pw=0 time=603 us)

********************************************************************************

select ename,sal,deptno
from emp where deptno=20

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          8          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.00          0          8          0           5

Misses in library cache during parse: 1   ;no bind variable was used, so this is another hard parse
Optimizer mode: CHOOSE
Parsing user id: 26

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  TABLE ACCESS FULL EMP (cr=8 pr=0 pw=0 time=458 us)

------Tracing another user's SQL with dbms_system
1) Find the user's SID and SERIAL#

13:42:16 [email protected] test1>select username,sid,serial# from v$session where username is not null;

USERNAME                              SID    SERIAL#
------------------------------ ---------- ----------
SYS                                    39         27
SCOTT                                  38         68

2) Trace the scott user's session

13:42:47 [email protected] test1>exec
dbms_system.set_sql_trace_in_session(38,68,sql_trace=>true);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.10
13:45:14 [email protected] test1>

13:43:13 [email protected] test1>var nn number;

13:46:35 [email protected] test1>exec :nn:=10;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
13:46:44 [email protected] test1>select empno,ename,sal,deptno from emp where deptno=:nn;

     EMPNO ENAME             SAL     DEPTNO
---------- ---------- ---------- ----------
      7782 CLARK            2450         10
      7839 KING             5000         10
      7934 MILLER           1300         10

Elapsed: 00:00:00.04
13:47:04 [email protected] test1>exec :nn:=30;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
13:47:12 [email protected] test1>select empno,ename,sal,deptno from emp where deptno=:nn;

     EMPNO ENAME             SAL     DEPTNO
---------- ---------- ---------- ----------
      7499 ALLEN            1600         30
      7521 WARD             1250         30
      7654 MARTIN           1250         30
      7698 BLAKE            2850         30
      7844 TURNER           1500         30
      7900 JAMES             950         30

6 rows selected.

Elapsed: 00:00:00.02

13:45:14 [email protected] test1>exec
dbms_system.set_sql_trace_in_session(38,68,sql_trace=>false);

PL/SQL procedure successfully completed.

3) Review the formatted SQL

select empno,ename,sal,deptno
from emp where deptno=:nn

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.01          0          0          0           0
Fetch        4      0.00       0.00          0         16          0           9
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        8      0.01       0.02          0         16          0           9

Misses in library cache during parse: 1   ;one hard parse, but the parse count is 2, so the second parse was soft
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: 26

Rows     Row Source Operation
-------  ---------------------------------------------------
      3  TABLE ACCESS FULL EMP (cr=8 pr=0 pw=0 time=1627 us)
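From 10g onward the documented way to trace another session (rather than the undocumented dbms_system) is the dbms_monitor package; a minimal sketch reusing the SID/SERIAL# found above, with wait and bind information written to the trace:

exec dbms_monitor.session_trace_enable(session_id => 38, serial_num => 68, waits => true, binds => true);
-- ... let the session run its workload ...
exec dbms_monitor.session_trace_disable(session_id => 38, serial_num => 68);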

4. Tuning Tools

1) Dynamic performance views (v$xxx): they read from memory and the control file, reflect the current state of the database in real time, and are mainly used for performance monitoring and tuning
2) Data dictionary views (dba_xxx): for example, table and index statistics
3) statspack: database performance statistics reports
4) Wait events
5) Diagnostic and tuning packages supplied by Oracle

6) Analyzing tables and indexes with analyze or dbms_stats; the resulting statistics are written to the data dictionary and serve as the basis for building execution plans

----Gather statistics on emp together with its indexes

14:07:10 [email protected] test1>exec dbms_stats.gather_table_stats('SCOTT','EMP',cascade=>true);

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.36

---Gather statistics on the pk_emp index with a 20% sample
14:07:29 [email protected] test1>exec dbms_stats.gather_index_stats(user,'PK_EMP',estimate_percent=>20);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09

----Analyze all of the data in the emp table
14:09:51 [email protected] test1>analyze table emp compute statistics;

Table analyzed.

Elapsed: 00:00:00.10

-----Analyze the emp table with a 20% sample
14:11:17 [email protected] test1>analyze table emp estimate statistics sample 20 percent;

-----Validate the index structure
14:13:08 [email protected] test1>analyze index pk_emp validate structure;

Index analyzed.

Elapsed: 00:00:00.12
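validate structure populates the session-level INDEX_STATS view (one row, overwritten by the next validate); a quick look at it shows whether the index has grown tall or carries many deleted entries:

select name, height, lf_rows, del_lf_rows, used_space, btree_space
  from index_stats;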

-----Analyzing a single column of a table, used when the data distribution on that column is heavily skewed

Example:
13:33:29 [email protected] test1>create table t1 (id int ,name varchar2(20))
;

Table created.

Elapsed: 00:00:01.21
13:35:27 [email protected] test1>begin
13:35:35   2  for i in 1..100000 loop
13:35:49   3  insert into t1 values (i,'usr'||to_char(i));
13:36:21   4  end loop;
13:36:33   5  end;
13:36:34   6  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:10.62
13:36:47 [email protected] test1>commit;

Commit complete.

Elapsed: 00:00:00.06
13:36:51 [email protected] test1>select * from t1 where rownum <10;

        ID NAME
---------- --------------------
      1354 usr1354
      1355 usr1355
      1356 usr1356
      1357 usr1357
      1358 usr1358
      1359 usr1359
      1360 usr1360
      1361 usr1361
      1362 usr1362

9 rows selected.

Elapsed: 00:00:00.15

------Update the table so that the data in the name column becomes heavily skewed
13:37:04 [email protected] test1>update t1 set name='tom' where id between 100 and 99990;

99891 rows updated.

Elapsed: 00:00:08.19
13:38:45 [email protected] test1>COMMIT;

Commit complete.

Elapsed: 00:00:00.01
13:38:58 [email protected] test1>select count(*) from t1 where name='tom';

COUNT(*)
----------
99891

Elapsed: 00:00:00.34
13:39:17 [email protected] test1>
13:39:31 [email protected]
test1>
13:39:31 [email protected] test1>analyze table t1 compute statistics;

Table analyzed.

Elapsed: 00:00:01.12
13:39:45 [email protected] test1>set autotrace traceonly;
13:40:02 [email protected] test1>select * from t1 where name='usr10';

Elapsed: 00:00:00.15

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   909 |  7272 |    70   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |   909 |  7272 |    70   (2)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("NAME"='usr10')

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        311  consistent gets
          0  physical reads
          0  redo size
        465  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

13:40:33 [email protected] test1>select * from t1 where name='tom';

99891 rows selected.

Elapsed: 00:00:01.08

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   909 |  7272 |    70   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |   909 |  7272 |    70   (2)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("NAME"='tom')

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       6954  consistent gets
          0  physical reads
          0  redo size
    1834181  bytes sent via SQL*Net to client
      73633  bytes received via SQL*Net from client
       6661  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      99891  rows processed

------Create an index on name
13:40:55 [email protected] test1>create index t1_name_ind on t1(name) tablespace indx;

Index created.

Elapsed: 00:00:01.52
13:41:39 [email protected] test1>select * from t1 where name='usr10';

Elapsed: 00:00:00.04

Execution Plan
----------------------------------------------------------
Plan hash value: 2446600014

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |   909 |  7272 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1          |   909 |  7272 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_NAME_IND |   909 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("NAME"='usr10')

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          4  consistent gets
          1  physical reads
          0  redo size
        469  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

------'tom' repeats in more than 90,000 rows, so using the index is not efficient here; a full table scan would be more appropriate
13:41:52 [email protected] test1>select * from t1 where name='tom';

99891 rows selected.

Elapsed: 00:00:01.42

Execution Plan
----------------------------------------------------------
Plan hash value: 2446600014

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |   909 |  7272 |     5   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1          |   909 |  7272 |     5   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_NAME_IND |   909 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("NAME"='tom')

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      13759  consistent gets
        208  physical reads
          0  redo size
    2253725  bytes sent via SQL*Net to client
      73633  bytes received via SQL*Net from client
       6661  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      99891  rows processed

------Analyze the name column on its own (Oracle builds a histogram on it), so the optimizer knows exactly how the data is distributed in that column and can build a better execution plan

13:42:44 [email protected] test1>analyze table t1 compute statistics for columns name;

Table analyzed.

Elapsed: 00:00:01.07
13:45:41 [email protected] test1>select * from t1 where name='tom';

99891 rows selected.

Elapsed: 00:00:01.05

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 98667 |   770K|    70   (2)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   | 98667 |   770K|    70   (2)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("NAME"='tom')

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       6954  consistent gets
          0  physical reads
          0  redo size
    1834181  bytes sent via SQL*Net to client
      73633  bytes received via SQL*Net from client
       6661  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      99891  rows processed
;the full table scan is a reasonable execution plan here

13:45:52 [email protected] test1>select * from t1 where name='usr99';

Elapsed: 00:00:00.04

Execution Plan
----------------------------------------------------------
Plan hash value: 2446600014

-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     1 |     8 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1          |     1 |     8 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_NAME_IND |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("NAME"='usr99')

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          4  consistent gets
          2  physical reads
          0  redo size
        469  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

13:46:14 [email protected] test1>

----The gathered statistics are stored in dba_tables, dba_indexes, index_stats, dba_tab_columns, index_histogram and dba_tab_histograms
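To confirm that the histogram is really there, the dictionary can be queried directly; a small sketch (standard views, bucket count depends on how the column was analyzed; with dbms_stats the comparable option would be method_opt => 'FOR COLUMNS NAME SIZE 254'):

select column_name, endpoint_number, endpoint_value
  from user_tab_histograms
 where table_name = 'T1'
   and column_name = 'NAME';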

----Viewing a statement's execution plan with autotrace

14:17:50 [email protected] test1>set autotrace on        ;turn autotrace on

14:19:45 [email protected] test1>select ename,sal,deptno from scott.emp where
empno=7788;

ENAME             SAL     DEPTNO
---------- ---------- ----------
SCOTT            3000         20

Elapsed: 00:00:00.02

Execution Plan
----------------------------------------------------------
Plan hash value: 2949544139

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |    13 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |     1 |    13 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | PK_EMP |     1 |       |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("EMPNO"=7788)

Statistics
----------------------------------------------------------
          1  recursive calls                        ;recursive calls, queries against the data dictionary
          0  db block gets                          ;blocks read in current mode by DML
          2  consistent gets                        ;blocks read from memory in consistent mode for the select
          0  physical reads                         ;physical reads from disk
          0  redo size                              ;amount of redo generated
        526  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)                         ;sorts done in the PGA
          0  sorts (disk)                           ;sorts done on disk (through the temporary tablespace)
          1  rows processed

set autotrace traceonly   ;do not return the query results, show only the execution plan and statistics
set autotrace off
14:20:07 [email protected] test1>set autotrace on stat        ;show only the statistics

14:29:11 [email protected] test1>select ename,sal,deptno from scott.emp where
empno=7788;

ENAME             SAL     DEPTNO
---------- ---------- ----------
SCOTT            3000         20

Elapsed: 00:00:00.01

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
        526  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

14:29:18 [email protected] test1>set autotrace on explain        ;show only the execution plan
14:29:29 [email protected] test1>select ename,sal,deptno from scott.emp where empno=7788;

ENAME             SAL     DEPTNO
---------- ---------- ----------
SCOTT            3000         20

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 2949544139

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     1 |    13 |     1   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP    |     1 |    13 |     1   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | PK_EMP |     1 |       |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("EMPNO"=7788)

------Building the right execution plan by analyzing the tables

Example:

14:05:27 [email protected] test1>l
  1  select e.ename,e.sal ,d.dname,d.loc from emp e,dept d
  2* where e.deptno=d.deptno
14:05:29 [email protected] test1>/

14 rows selected.

Elapsed: 00:00:00.14

Execution Plan
----------------------------------------------------------
Plan hash value: 615168685

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    14 |   560 |     7  (15)| 00:00:01 |
|*  1 |  HASH JOIN         |      |    14 |   560 |     7  (15)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| DEPT |     4 |   120 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| EMP  |    14 |   140 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------
//dept.deptno has an index, yet this plan does a full scan of dept; dept has probably never been analyzed.

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("E"."DEPTNO"="D"."DEPTNO")

Note
-----
   - dynamic sampling used for this statement

Statistics
----------------------------------------------------------
        213  recursive calls
          0  db block gets
        133  consistent gets
          7  physical reads
          0  redo size
        990  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
         14  rows processed

-------After analyzing the table, check the execution plan again

14:05:31 [email protected] test1>exec dbms_stats.gather_table_stats(user,'dept',cascade=>true);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.38
14:06:36 [email protected] test1>select e.ename,e.sal ,d.dname,d.loc from emp e,dept d
14:06:48   2  where e.deptno=d.deptno;

14 rows selected.

Elapsed: 00:00:00.02

Execution Plan
----------------------------------------------------------
Plan hash value: 351108634

----------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |    14 |   420 |     4   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |         |    14 |   420 |     4   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL          | EMP     |    14 |   140 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS BY INDEX ROWID| DEPT    |     1 |    20 |     1   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | PK_DEPT |     1 |       |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("E"."DEPTNO"="D"."DEPTNO")

Statistics
----------------------------------------------------------
         17  recursive calls
          0  db block gets
         72  consistent gets        ;the correct execution plan reduced the consistent gets
          0  physical reads
          0  redo size
        990  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         14  rows processed

14:06:55 [email protected] test1>

-------statspack
Uses Oracle snapshots (snaps); comparing two snapshots produces a performance statistics report.


1) Create a tablespace to hold the statistics data
14:50:24 [email protected] test1>create tablespace perftbs
14:50:46   2  datafile '/u01/app/oracle/oradata/test1/perftbs1.dbf' size 100m
14:51:14   3  autoextend on maxsize 500m;
2) Install statspack
14:54:40 [email protected] test1>@?/rdbms/admin/spcreate.sql

To remove it:
14:54:40 [email protected] test1>@?/rdbms/admin/spdrop.sql

3) Check the statspack snapshot levels
14:58:05 [email protected] test1> select * from STATS$level_DESCRIPTION
14:58:09   2  ;

SNAP_LEVEL DESCRIPTION
---------- ------------------------------------------------------------------------------------------------------------------------
         0 This level captures general statistics, including rollback segment, row cache, SGA, system events, background events,
           session events, system statistics, wait statistics, lock statistics, and Latch information

         5 This level includes capturing high resource usage SQL Statements, along with all data captured by lower levels

         6 This level includes capturing SQL plan and SQL plan usage information for high resource usage SQL Statements, along with
           all data captured by lower levels

         7 This level captures segment level statistics, including logical and physical reads, row lock, itl and buffer busy waits,
           along with all data captured by lower levels

        10 This level includes capturing Child Latch statistics, along with all data captured by lower levels

5 rows selected.

-----The default level is 5

4) Change the snapshot level

14:43:25 [email protected] test1 >exec statspack.MODIFY_STATSPACK_PARAMETER(I_SNAP_LEVEL=>7);

PL/SQL procedure successfully completed.

14:58:12 [email protected] test1> execute statspack.snap(i_snap_level=>7);

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.31
15:00:09 [email protected] test1> select SNAP_LEVEL from
stats$snapshot;

SNAP_LEVEL
----------
7

1 row selected.

-----Changing the default statspack snap level

09:43:48 [email protected] test1>

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.58
09:46:02 [email protected] test1>exec statspack.snap;

PL/SQL procedure successfully completed.

Elapsed: 00:00:06.24
09:46:31 [email protected] test1>exec statspack.snap;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.97
09:46:40 [email protected] test1>desc statspack$snapshot
ERROR:
ORA-04043: object statspack$snapshot does not exist

09:46:54 [email protected] test1>desc stats$snapshot
Name                                      Null?    Type
----------------------------------------- -------- --------------
SNAP_ID                                   NOT NULL NUMBER
DBID                                      NOT NULL NUMBER
INSTANCE_NUMBER                           NOT NULL NUMBER
SNAP_TIME                                 NOT NULL DATE
STARTUP_TIME                              NOT NULL DATE
SESSION_ID                                NOT NULL NUMBER
SERIAL#                                            NUMBER
SNAP_LEVEL                                         NUMBER
UCOMMENT                                           VARCHAR2(160)
EXECUTIONS_TH                                      NUMBER
PARSE_CALLS_TH                                     NUMBER
DISK_READS_TH                                      NUMBER
BUFFER_GETS_TH                                     NUMBER
SHARABLE_MEM_TH                                    NUMBER
VERSION_COUNT_TH                                   NUMBER
SEG_PHY_READS_TH                          NOT NULL NUMBER
SEG_LOG_READS_TH                          NOT NULL NUMBER
SEG_BUFF_BUSY_TH                          NOT NULL NUMBER
SEG_ROWLOCK_W_TH                          NOT NULL NUMBER
SEG_ITL_WAITS_TH                          NOT NULL NUMBER
SEG_CR_BKS_RC_TH                                   NUMBER
SEG_CU_BKS_RC_TH                                   NUMBER
SEG_CR_BKS_SD_TH                                   NUMBER
SEG_CU_BKS_SD_TH                                   NUMBER
SNAPSHOT_EXEC_TIME_S                               NUMBER
ALL_INIT                                           VARCHAR2(5)
BASELINE                                           VARCHAR2(1)

09:47:05 [email protected] test1>select SNAP_ID,SNAP_LEVEL from stats$snapshot;

   SNAP_ID SNAP_LEVEL
---------- ----------
        21          5
         1          7
        11          5
        12          5
        51          7
        52          7
        31          5
        32          5
        41          5

9 rows selected.

Elapsed: 00:00:00.02
09:47:33 [email protected] test1>

-------Generating a statspack report
14:58:12 [email protected] test1> exec statspack.snap;
;take a snapshot (take two snapshots while the system is running normally and build a report from them; when a performance problem appears, take more snapshots and build another report)

14:58:12 [email protected] test1>@?/rdbms/admin/spreport.sql

------Compare the reports from the two periods to see how database performance has changed, then track down the cause
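Two companion scripts in the same directory are worth knowing about (standard statspack scripts; spauto.sql schedules the job through dbms_job, so job_queue_processes must be greater than 0):

@?/rdbms/admin/spauto.sql     -- schedule an automatic statspack.snap every hour
@?/rdbms/admin/sppurge.sql    -- purge a range of old snapshots interactively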

Using AWR reports:

An introduction to Oracle AWR

Oracle Database 10g provides a significantly improved tool: the Automatic Workload Repository (AWR). Oracle recommends using it in place of Statspack.
AWR is essentially a built-in Oracle facility that collects performance-related statistics and derives performance metrics from them to track down potential problems.
Unlike Statspack, snapshots are collected automatically every hour by a new background process called MMON and its slave processes. To save space, the collected data is purged automatically after 7 days.
Both the snapshot frequency and the retention time can be changed by the user.
It produces two types of output: a text format (similar to the Statspack report, but sourced from the AWR repository) and the default HTML format (with hyperlinks to every section and subsection), which makes for a very user-friendly report.
AWR stores the collected statistics in a set of tables owned by SYS in the new SYSAUX tablespace, named in the form WRM$_* and WRH$_*. The former hold metadata (such as the database examined and the snapshots taken); the latter hold the actual statistics. H stands for "historical" and M for "metadata". Several views prefixed DBA_HIST_ are built on top of these tables and can be used to write your own performance diagnostic tools. The view names map directly to the tables; for example, the DBA_HIST_SYSMETRIC_SUMMARY view is built on the WRH$_SYSMETRIC_SUMMARY table.

I. Running an AWR report
SQL>@?/rdbms/admin/awrrpt.sql

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type:
Type Specified: html
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days: 1
Listing the last day's Completed Snapshots
                                                       Snap
Instance     DB Name        Snap Id    Snap Started    Level
------------ ------------ --------- ------------------ -----
orcl10g      ORCL10G            142 03 Jul 2009 08:11      1
                                 143 03 Jul 2009 09:00      1
                                 144 03 Jul 2009 10:00      1
                                 145 03 Jul 2009 11:00      1
                                 146 03 Jul 2009 12:01      1
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 142
Begin Snapshot Id specified: 142
Enter value for end_snap: 146
End Snapshot Id specified: 146
Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_142_146.html. To use this name,
press <return> to continue, otherwise enter an alternative.
Enter value for report_name: D:\awrrpt_1_142_146.html

Report written to D:\awrrpt_1_142_146.html

II. AWR operations
1. Check the current AWR retention policy
SQL> col SNAP_INTERVAL format a20
SQL> col RETENTION format a20
SQL> select * from dba_hist_wr_control;

      DBID SNAP_INTERVAL        RETENTION            TOPNSQL
---------- -------------------- -------------------- ----------
 262089084 +00000 01:00:00.0    +00007 00:00:00.0    DEFAULT

This shows that a snapshot is taken every hour and kept for 7 days.
2. Adjust the AWR configuration
AWR is configured through the dbms_workload_repository package.
2.1 Change the snapshot interval and retention, for example collect every 30 minutes and keep 5 days (both values are in minutes):
SQL> exec dbms_workload_repository.modify_snapshot_settings(interval=>30, retention=>5*24*60);
2.2 Turn AWR off: setting interval to 0 disables automatic snapshots
SQL> exec dbms_workload_repository.modify_snapshot_settings(interval=>0);
2.3 Take a snapshot manually
SQL> exec DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
2.4 Look at the snapshots
SQL> select * from sys.wrh$_active_session_history;
2.5 Drop a range of snapshots manually
SQL> exec DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(low_snap_id => 973, high_snap_id => 999, dbid => 262089084);
2.6 Create a baseline to keep this data for later analysis and comparison
SQL> exec dbms_workload_repository.create_baseline(start_snap_id => 1003, end_snap_id => 1013, baseline_name => 'apply_interest_1');
2.7 Drop a baseline
SQL> exec DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE(baseline_name => 'apply_interest_1', cascade => FALSE);
2.8 Export AWR data so it can be moved to another database for later analysis
SQL> exec DBMS_SWRF_INTERNAL.AWR_EXTRACT(dmpfile => 'awr_data.dmp', dmpdir => 'DIR_BDUMP', bid => 1003, eid => 1013);
2.9 Load the AWR dump into another database
SQL> exec DBMS_SWRF_INTERNAL.AWR_LOAD(SCHNAME => 'AWR_TEST', dmpfile => 'awr_data.dmp', dmpdir => 'DIR_BDUMP');
Move the loaded AWR data into the SYS schema:
SQL> exec DBMS_SWRF_INTERNAL.MOVE_TO_AWR (SCHNAME => 'TEST');
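Besides awrrpt.sql, two related scripts under ?/rdbms/admin are often run against the same snapshot IDs (standard scripts; they prompt for the ranges just like awrrpt.sql):

SQL> @?/rdbms/admin/awrddrpt.sql    -- compare two snapshot periods side by side
SQL> @?/rdbms/admin/addmrpt.sql     -- ADDM findings and recommendations for a snapshot range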

III. Reading an AWR report
3.1 SQL ordered by Elapsed Time
The TOP SQL by total elapsed time (note that this is the sum of the statement's execution time inside the monitored window, not the time of a single execution; Elapsed Time = CPU Time + Wait Time).
Elapsed Time(s): the statement's total elapsed time, which is what this section is sorted by. It is not the time of one run, but the total over all executions inside the monitored window, in seconds. Elapsed Time = CPU Time + Wait Time.
CPU Time(s): total CPU time consumed by the statement; it is less than or equal to Elapsed Time. In seconds.
Executions: total number of executions inside the monitored window.
Elap per Exec(s): average time of a single execution, in seconds.
% Total DB Time: the statement's Elapsed Time as a percentage of total DB time.
SQL ID: the statement's ID; clicking it jumps to the detailed SQL list further down, and the browser's Back button returns to this spot.
SQL Module: shows how the SQL was connected to the database; if it says SQL*Plus or PL/SQL, somebody is most likely debugging by hand. SQL submitted through the front-end application usually leaves this column empty.
SQL Text: a short excerpt of the SQL; click the SQL ID for the full text.
3.2 SQL ordered by CPU Time:
The TOP SQL by total CPU time consumed (again, the total inside the monitored window, not a single execution).
3.3 SQL ordered by Gets:
The TOP SQL by total buffer gets (logical I/O) (the total inside the monitored window, not the gets of a single execution).
3.4 SQL ordered by Reads:
The TOP SQL by total physical disk reads (physical I/O) (the total inside the monitored window, not the reads of a single execution).
3.5 SQL ordered by Executions:
The TOP SQL sorted by number of executions inside the monitored window.
3.6 SQL ordered by Parse Calls:
The TOP SQL by number of soft parses. To understand soft parses versus hard parses you have to look at how Oracle processes a SQL statement.
3.7 SQL ordered by Sharable Memory:
The TOP SQL by library cache memory used. Sharable Mem (b): library cache memory used, in bytes.
3.8 SQL ordered by Version Count:
The TOP SQL by number of open child cursors.
3.9 SQL ordered by Cluster Wait Time:
The TOP SQL by cluster wait time.

--------Oracle wait events

Where wait events came from
The concept of wait events was introduced around ORACLE 7.0.12, with roughly 100 events. In ORACLE 8.0 the number grew to about 150, ORACLE 8i has about 220, ORACLE 9iR2 about 400, and the more recent ORACLE 10gR2 about 874.
Although the exact number differs between versions and installed components, all wait events can be listed by querying the V$EVENT_NAME view.

1) Idle wait events mean ORACLE is waiting for some work to do; they need little attention when diagnosing and tuning the database.
2) Non-idle wait events relate specifically to ORACLE activity: waits that occur while database tasks or the application are running. These are the events to study when tuning the database.

Common idle events include:

* dispatcher timer
* lock element cleanup
* Null event
* parallel query dequeue wait
* parallel query idle wait - Slaves
* pipe get
* PL/SQL lock timer
* pmon timer - pmon
* rdbms ipc message
* slave wait
* smon timer
* SQL*Net break/reset to client
* SQL*Net message from client
* SQL*Net message to client
* SQL*Net more data to client
* virtual circuit status
* client message

Some common non-idle wait events (a query that lists the top non-idle waits follows this list):

* db file scattered read
* db file sequential read
* buffer busy waits
* free buffer waits
* enqueue
* latch free
* log file parallel write
* log file sync
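Since 10g the v$event_name view carries a wait_class column, so the idle events above can be filtered out in one query; a sketch that lists the heaviest non-idle waits instance-wide:

select event, total_waits, time_waited
  from v$system_event
 where event not in (select name from v$event_name where wait_class = 'Idle')
 order by time_waited desc;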

-----Viewing wait events

Example:

Elapsed: 00:00:00.01
13:33:28 [email protected] test1>begin
13:54:46   2  for i in 1..100000 loop
13:54:46   3  execute immediate 'insert into t1 values ('||i||') ';
13:54:46   4  end loop;
13:54:46   5  end;
13:54:46   6  /

PL/SQL procedure successfully completed.

Elapsed: 00:01:46.54
13:56:34 [email protected] test1>

-----Monitoring the wait events of a single session

13:56:00 [email protected] test1>select username,sid,serial# from v$session where username is not null;

USERNAME                              SID    SERIAL#
------------------------------ ---------- ----------
SCOTT                                  34         52
SYS                                    23         35
SYS                                    39         27
SYS                                    24        785

Elapsed: 00:00:00.17
13:56:16 [email protected] test1>clear scr

13:56:31 [email protected] test1>select sid,event,total_waits,average_wait from v$session_event where sid=34;

       SID EVENT                                                            TOTAL_WAITS AVERAGE_WAIT
---------- ---------------------------------------------------------------- ----------- ------------
        34 latch: cache buffers chains                                                1         1.03
        34 enq: RO - fast object reuse                                                1          .72
        34 log file switch completion                                                 6         8.52
        34 log file sync                                                              3          .14
        34 db file sequential read                                                   12         2.32
        34 latch: library cache                                                       2        22.59
        34 SQL*Net message to client                                                 33            0
        34 SQL*Net message from client                                               32      7703.96
        34 events in waitclass Other                                                  4        10.87

9 rows selected.

Elapsed: 00:00:00.03
13:57:03 [email protected] test1>

----Monitoring events for the whole instance
13:53:14 [email protected] test1>col event for a50

----Before the insert transaction
13:53:32 [email protected] test1>r
  1  select event,total_waits,total_timeouts,time_waited,average_wait
  2  from v$system_event
  3* where event like '%buffer%'

EVENT                                              TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
-------------------------------------------------- ----------- -------------- ----------- ------------
latch: cache buffers chains                                  1              0           1         1.02
buffer busy waits                                            1              0           0          .01
log buffer space                                             2              1         182        90.75
latch: cache buffers lru chain                               5              0           1           .2

Elapsed: 00:00:00.01
13:53:32 [email protected] test1>/

------After the insert transaction
EVENT                                              TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
-------------------------------------------------- ----------- -------------- ----------- ------------
latch: cache buffers chains                                  2              0           2         1.03
buffer busy waits                                            2              0           0          .07
log buffer space                                             2              1         182        90.75
latch: cache buffers lru chain                               5              0           1           .2

Elapsed: 00:00:00.01
13:57:28 [email protected] test1>
