Creating a Logical Standby Database (Logical DG)

Today is 2014-04-29. I have been busy lately and have not had time to write anything. Today I continue organizing Data Guard material; this post covers the process of creating a logical standby (logical DG) and the points to watch out for.

What is a logical DG? A physical standby is essentially a complete copy of the primary, with redo applied block for block. A logical standby, by contrast, uses log-mining technology on the redo shipped from the primary to reconstruct equivalent SQL statements, which are then executed on the standby (SQL Apply). The standby can be open normally and can also maintain additional objects of its own that are not driven by redo from the primary.

This raises a problem: a row located by ROWID on the standby may not correspond at all to the same row on the primary, because the physical layout is different. So when data is modified on the primary, it is recommended that the modified tables have a primary key or a non-null unique index, so that when the redo reaches the standby those constraint columns can be used to identify the rows and avoid applying updates to the wrong data. Strictly speaking, you are not forced to create a primary key or non-null unique index on the primary's tables; but if you do not, a large amount of extra redo is generated, because supplemental logging then has to log all bounded-size columns of every modified row.
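As a quick way to see which tables on the primary would end up in this "all columns supplemented" situation, you can list the tables that have no primary key. A minimal sketch, assuming the application schema is SCOTT (substitute your own owner):

-- Tables owned by SCOTT that have no primary key constraint;
-- for these, SQL Apply has to rely on logging all bounded-size columns.
select t.table_name
from   dba_tables t
where  t.owner = 'SCOTT'
and    not exists (select 1
                   from   dba_constraints c
                   where  c.owner = t.owner
                   and    c.table_name = t.table_name
                   and    c.constraint_type = 'P');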

The physical organization in a logical standby database is different from that of the primary database, even though the logical standby database is created from a backup copy of the primary database. Thus, ROWIDs contained in the redo records generated by
the primary database cannot be used to identify the corresponding row in the logical standby database.

Oracle uses primary-key or unique-constraint/index supplemental logging to logically identify a modified row in the logical standby database. When database-wide primary-key and unique-constraint/index supplemental logging is enabled, each UPDATE statement also writes the column values necessary in the redo log to uniquely identify the modified row in the logical standby database.

  • If a table has a primary key defined, then the primary key is logged along with the modified columns as part of the UPDATE statement to identify the modified row.
  • If there is no primary key, then the shortest nonnull unique-constraint/index is logged along with the modified columns as part of the UPDATE statement to identify the modified row.
  • If there is no primary key and no nonnull unique constraint/index, then all columns of bounded size are logged as part of the UPDATE statement to identify the modified row. All columns are logged except the following: LONG, LOB, LONG RAW, object type, and
    collections.
  • A function-based index, even though it is declared as unique, cannot be used to uniquely identify a modified row. However, logical standby databases support replication of tables that have function-based indexes defined, as long as modified rows can be uniquely
    identified.

Oracle recommends that you add a primary key or a nonnull unique index to tables in the primary database, whenever possible, to ensure that SQL Apply can efficiently apply redo data updates to the logical standby database.
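If the application itself guarantees row uniqueness but you cannot (or do not want to) enforce a primary key on the primary, a disabled RELY constraint gives SQL Apply the identifying columns without the cost of enforcing the constraint. A hedged sketch, using a hypothetical table scott.mytab with hypothetical columns id and name:

-- On the primary: declare the identifying columns without validating
-- or enforcing them; supplemental logging will then log these columns.
alter table scott.mytab add primary key (id, name) rely disable;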

So which datatypes does SQL Apply support, and which does it not? From the manual:

C.1.1 Supported Datatypes in a Logical Standby Database

Logical standby databases support the following datatypes:

  • BINARY_DOUBLE
  • BINARY_FLOAT
  • BLOB
  • CHAR
  • CLOB and NCLOB
  • DATE
  • INTERVAL YEAR TO MONTH
  • INTERVAL DAY TO SECOND
  • LONG
  • LONG RAW
  • NCHAR
  • NUMBER
  • NVARCHAR2
  • RAW
  • TIMESTAMP
  • TIMESTAMP WITH TIMEZONE
  • TIMESTAMP WITH LOCAL TIMEZONE
  • VARCHAR and VARCHAR2
  • LOBs stored as SecureFiles (requires that the primary database be run at a compatibility of 11.2 or higher).
  • XMLType data for all storage models, assuming the following primary database compatibility requirements:
    • XMLType stored in CLOB format requires primary database to run at a compatibility of 11.1 or higher
    • XMLType stored in object-relational format or as binary XML requires that the primary database be running Oracle Database 11g Release 2 (11.2.0.3) or higher with a redo compatibility setting of 11.2.0.3 or higher

C.1.1.1 Compatibility Requirements

SQL Apply support for the following has compatibility requirements on the primary database:

  • Multibyte CLOB support requires primary database to run at a compatibility of 10.1 or higher.
  • IOT support without LOBs and Overflows requires primary database to run at a compatibility of 10.1 or higher.
  • IOT support with LOB and Overflow requires primary database to run at a compatibility of 10.2 or higher.
  • TDE support requires primary database to run at a compatibility of 11.1 or higher.
  • Segment compression requires primary database to run at a compatibility of 11.1 or higher.
  • Hybrid Columnar Compression support is dependent on the underlying storage system.

Query the following view to see which tables on the primary cannot be maintained by SQL Apply:

[[email protected] ~]$ sqlplus  / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Apr 29 21:30:01 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL>
SQL> desc dba_logstdby_unsupported
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(30)
 TABLE_NAME                                         VARCHAR2(30)
 COLUMN_NAME                                        VARCHAR2(30)
 ATTRIBUTES                                         VARCHAR2(39)
 DATA_TYPE                                          VARCHAR2(32)

SQL> select owner,table_name,column_name,attributes,data_type from dba_logstdby_unsupported;

no rows selected
Check which tables have row-uniqueness problems:
SQL> select * from dba_logstdby_not_unique;

OWNER                          TABLE_NAME                     B
------------------------------ ------------------------------ -
SCOTT                          BONUS                          N
SCOTT                          SALGRADE                       N
STDBYPERF                      STATS$MANAGED_STANDBY          N
STDBYPERF                      STATS$RECOVERY_PROGRESS        N

SQL> desc dba_logstdby_not_unique
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(30)
 TABLE_NAME                                         VARCHAR2(30)
 BAD_COLUMN                                         VARCHAR2(1)
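For this view, BAD_COLUMN = 'N' only means the table lacks a uniqueness constraint but still carries enough column information to be maintained (at the cost of extra redo), while 'Y' means a column uses an unbounded datatype, so rows may not be identified correctly if two rows differ only in that column. A quick filter for the genuinely problematic tables:

-- Only the tables where an unbounded column prevents reliable row identification
select owner, table_name
from   dba_logstdby_not_unique
where  bad_column = 'Y';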

Now let's start creating the logical DG. Since I have already built a physical standby, we will convert it into a logical standby from that base (note: you cannot convert a logical standby back into a physical one). If you have not created a physical standby, you can build the logical standby directly, but pay attention to the parameter changes.

Step 1: stop redo apply on the standby.

Check the database mode:

SQL> set linesize 200
SQL> col dest_name for a50
SQL> select database_mode,recovery_mode,protection_mode,dest_name from v$archive_dest_status where database_mode!='UNKNOWN';

DATABASE_MODE   RECOVERY_MODE           PROTECTION_MODE      DEST_NAME
--------------- ----------------------- -------------------- --------------------------------------------------
OPEN_READ-ONLY  MANAGED REAL TIME APPLY MAXIMUM PERFORMANCE  LOG_ARCHIVE_DEST_1

SQL> 

Stop redo apply:

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> 
 

Step 2: on the primary, build the metadata (LogMiner dictionary) that the logical DG needs.

SQL> exec dbms_logstdby.build;

PL/SQL procedure successfully completed.

SQL> 

Note: what exactly does this procedure do?

A LogMiner dictionary must be built into the redo data so that the LogMiner component of SQL Apply can properly interpret changes it sees in the redo. As part of building the LogMiner dictionary, supplemental logging is automatically set up to log primary
key and unique-constraint/index columns. The supplemental logging information ensures each update contains enough information to logically identify each row that is modified by the statement.

To build the LogMiner dictionary, issue the following statement:

SQL> EXECUTE DBMS_LOGSTDBY.BUILD;

The DBMS_LOGSTDBY.BUILD procedure waits for all existing transactions to complete. Long-running transactions executed on the primary database will affect the timeliness of this command.
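Because DBMS_LOGSTDBY.BUILD waits for open transactions to finish, it can be worth checking for long-running transactions on the primary before issuing it. A minimal sketch:

-- Open transactions and the sessions that own them; BUILD will wait for all of these.
select s.sid, s.username, s.program, t.start_time
from   v$transaction t
       join v$session s on s.saddr = t.ses_addr
order  by t.start_time;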

Note:

In databases created using Oracle Database 11g release 2 (11.2) or later, supplemental logging information is automatically propagated to any existing physical standby databases. However, for databases in earlier releases, or if
the database was created using an earlier release and then upgraded to 11.2, you must check whether supplemental logging is enabled at the physical standby(s) if it is also enabled at the primary database. If it is not enabled at the physical standby(s), then
before performing a switchover or failover, you must enable supplemental logging on all existing physical standby databases. To do so, issue the following SQL command on each physical standby:

SQL>  ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

If you do not do this, then any logical standby that is also in the same Data Guard configuration will be unusable if a switchover or failover is performed to one of the physical standby databases. If a switchover or failover has already occurred and supplemental
logging was not enabled, then you must recreate all logical standby databases.
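To check whether primary-key/unique-index supplemental logging is already enabled on a given database (primary or physical standby), you can query V$DATABASE, for example:

-- SUPPLEMENTAL_LOG_DATA_PK and _UI should show YES once the logging is in place
select supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
from   v$database;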

Step 3: on the standby, have MRP continue applying redo until it reaches the SCN at which the dictionary was built, then convert it to a logical standby:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area  313159680 bytes
Fixed Size                  2252824 bytes
Variable Size             171970536 bytes
Database Buffers          134217728 bytes
Redo Buffers                4718592 bytes
Database mounted.

SQL> alter database recover to logical standby Amy;

Database altered.

At this point we need to change a few things; the first is db_name:

SQL> alter system set db_name=Amy scope=spfile;

System altered.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

There is also the log_archive_dest_n parameter configuration:

SQL> show parameter log_archive_dest_1

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_1                   string      location=USE_DB_RECOVERY_FILE_
                                                 DEST valid_for=(all_logfiles,a
                                                 ll_roles)
log_archive_dest_10                  string
log_archive_dest_11                  string
log_archive_dest_12                  string
log_archive_dest_13                  string
log_archive_dest_14                  string
log_archive_dest_15                  string
log_archive_dest_16                  string
log_archive_dest_17                  string

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_18                  string
log_archive_dest_19                  string
SQL> show parameter log_archive_dest_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      service=dg1 lgwr sync valid_fo
                                                 r=(online_logfiles,primary_rol
                                                 e) db_unique_name=dg1
log_archive_dest_20                  string

Why is LOG_ARCHIVE_DEST_1 here set with valid_for=(all_logfiles,all_roles)? Because we are building a logical standby: besides maintaining the objects shipped from the primary, a logical standby is a real, open database that can modify objects of its own, and the redo it generates for those changes must be archived as well. If you are not using a fast recovery area, you should also define a log_archive_dest_3 so that the standby's own archived logs are kept separate from the foreign archived logs received from the primary (see the sketch below). If the fast recovery area is used, valid_for can be omitted, since the default is already (all_logfiles,all_roles).
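A sketch of what the two local destinations could look like when no fast recovery area is used; the paths /u01/arch_local and /u01/arch_foreign are purely hypothetical:

-- Archive the logical standby's own online redo here
alter system set log_archive_dest_1 = 'location=/u01/arch_local valid_for=(online_logfiles,all_roles)';

-- Archive the redo received from the primary (standby/foreign logs) here
alter system set log_archive_dest_3 = 'location=/u01/arch_foreign valid_for=(standby_logfiles,standby_role)';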

Another caution: if the logical standby and the primary sit on the same data storage, you need to use dbms_logstdby.skip to skip ALTER TABLESPACE DDL, otherwise SQL Apply could mistakenly modify the primary's tablespaces and datafiles.
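The skip rule itself is registered on the logical standby with SQL Apply stopped, roughly like this:

-- On the logical standby: stop SQL Apply, register the skip rule, restart apply
alter database stop logical standby apply;
execute dbms_logstdby.skip('ALTER TABLESPACE');
alter database start logical standby apply immediate;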

Step 4: finish creating the logical DG:

SQL> shutdown immediat;
SP2-0717: illegal SHUTDOWN option
SQL> shutdown immediate;
ORA-01507: database not mounted

ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area  313159680 bytes
Fixed Size                  2252824 bytes
Variable Size             171970536 bytes
Database Buffers          134217728 bytes
Redo Buffers                4718592 bytes
Database mounted.
SQL> alter database open resetlogs;

Database altered.

SQL> alter database start logical standby apply immediate;

Database altered.

SQL>
SQL> set linesize 200

SQL> select * from v$log

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- --------- ------------ ---------
         1          1          1   52428800        512          1 NO  CURRENT                1335271 29-APR-14   2.8147E+14
         2          1          0   52428800        512          1 YES UNUSED                       0                      0
         3          1          0   52428800        512          1 YES UNUSED                       0                      0

Note: what does RESETLOGS do?

Opening with RESETLOGS overwrites whatever is left in the unarchived online redo log files and restarts the log sequence from 1; the database effectively enters a new incarnation, a new life cycle. The point of starting a new incarnation is to make sure that redo from the previous incarnation is never applied past the RESETLOGS point. People often ask me whether opening a database with RESETLOGS means the recovery was an incomplete recovery. That is a mistaken idea: whether a recovery is complete or incomplete is not determined by RESETLOGS, but by whether all redo was applied during the roll-forward. If all of it was applied, it was a complete recovery.

We can list the database incarnations as follows:

[[email protected] dbs]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Tue Apr 29 22:20:46 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: AMY (DBID=1205246033)

RMAN> list incarnation
2> ;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       AMY      1205246033       PARENT  1335270    29-APR-14
2       2       AMY      1205246033       CURRENT 1335271    29-APR-14
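The same incarnation information is also visible from SQL*Plus, without RMAN, for example:

-- Each RESETLOGS creates a new incarnation; STATUS shows which one is CURRENT
select incarnation#, resetlogs_change#, resetlogs_time, status
from   v$database_incarnation
order  by incarnation#;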

Step 5: verify the logical DG:

SQL> select * from v$logstdby_progress;

APPLIED_SCN APPLIED_T RESTART_SCN RESTART_T LATEST_SCN LATEST_TI MINING_SCN MINING_TI RESETLOGS_ID
----------- --------- ----------- --------- ---------- --------- ---------- --------- ------------
    1335268               1334503              1334503    
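Another quick check is V$LOGSTDBY_STATE, which shows whether SQL Apply is running and whether real-time apply is enabled, for example:

-- Expect REALTIME_APPLY = Y when apply was started with "apply immediate"
select state, realtime_apply, session_id
from   v$logstdby_state;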

SQL>  select process,status,sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CONNECTED             0
ARCH      CONNECTED             0
ARCH      CONNECTED             0
ARCH      CONNECTED             0
ARCH      CONNECTED             0

SQL> 

Hmm? Something is wrong. Look at the primary database's alert log:

Error 1017 received logging on to the standby
------------------------------------------------------------
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.
      returning error ORA-16191
------------------------------------------------------------
PING[ARC1]: Heartbeat failed to connect to standby 'dg2'. Error is 16191.

So it is a password file problem.

Rebuild the password file on the standby? That would have worked in 10g, but in 11g simply recreating it does not help; what we need to do is copy the primary's password file to the standby and then mv (rename) it into place.
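After copying the primary's password file over and renaming it to the standby's name (orapw<SID> under $ORACLE_HOME/dbs), you can confirm on the standby that the file is being read, for example:

-- If the password file is in place and readable, SYS should show up here
select username, sysdba, sysoper
from   v$pwfile_users;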

After that, the primary's alert log looks like this (notice that the primary actively pings the standby about once a minute):

------------------------------------------------------------
PING[ARC1]: Heartbeat failed to connect to standby 'dg2'. Error is 16191.
Tue Apr 29 22:39:42 2014
Error 1017 received logging on to the standby
------------------------------------------------------------
Check that the primary and standby are using a password file
and remote_login_passwordfile is set to SHARED or EXCLUSIVE,
and that the SYS password is same in the password files.
      returning error ORA-16191
------------------------------------------------------------
PING[ARC1]: Heartbeat failed to connect to standby 'dg2'. Error is 16191.
Tue Apr 29 22:40:46 2014
******************************************************************
LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
******************************************************************
LGWR: Standby redo logfile selected for thread 1 sequence 84 for destination LOG_ARCHIVE_DEST_2
Thread 1 advanced to log sequence 84 (LGWR switch)
  Current log# 3 seq# 84 mem# 0: /u01/app/oracle/oradata/dg/redo03.log
Tue Apr 29 22:40:49 2014
Archived Log entry 138 added for thread 1 sequence 83 ID 0x68b85302 dest 1:
Tue Apr 29 22:40:49 2014
ARC3: Standby redo logfile selected for thread 1 sequence 83 for destination LOG_ARCHIVE_DEST_2

Check the standby's alert log:

LOGMINER: Read buffers: 16
LOGMINER: Memory LWM: limit 10M, LWM 24M, 80%
LOGMINER: Memory Release Limit: 1M
RFS LogMiner: Registered logfile [/u01/app/oracle/fast_recovery_area/DG2/foreign_archivelog/DG1/2014_04_29/o1_mf_1_82_9ozghfdo_.arc] to LogMiner session id [1]
RFS LogMiner: Registered logfile [/u01/app/oracle/fast_recovery_area/DG2/foreign_archivelog/DG1/2014_04_29/o1_mf_1_81_9ozghffw_.arc] to LogMiner session id [1]
Tue Apr 29 22:40:47 2014
Primary database is in MAXIMUM PERFORMANCE mode
RFS[4]: Assigned to RFS process 3779
RFS[4]: Selected log 4 for thread 1 sequence 84 dbid 1756848898 branch 843517891
Tue Apr 29 22:40:48 2014
RFS[5]: Assigned to RFS process 3777
RFS[5]: Opened log for thread 1 sequence 79 dbid 1756848898 branch 843517891
RFS LogMiner: Registered logfile [/u01/app/oracle/fast_recovery_area/DG2/foreign_archivelog/DG1/2014_04_29/o1_mf_1_79_9ozghhy9_.arc] to LogMiner session id [1]
Tue Apr 29 22:40:50 2014
LOGMINER: session#=1 (Logical_Standby$), reader MS00 pid=38 OS id=3781 sid=44 started
Tue Apr 29 22:40:50 2014
LOGMINER: session#=1 (Logical_Standby$), builder MS01 pid=39 OS id=3783 sid=45 started
Tue Apr 29 22:40:50 2014
LOGMINER: session#=1 (Logical_Standby$), preparer MS02 pid=40 OS id=3785 sid=48 started
Tue Apr 29 22:40:50 2014
RFS LogMiner: RFS id [3787] assigned as thread [1] PING handler
Tue Apr 29 22:40:51 2014
RFS[6]: Assigned to RFS process 3789
RFS[6]: Selected log 5 for thread 1 sequence 83 dbid 1756848898 branch 843517891
Tue Apr 29 22:40:55 2014
RFS LogMiner: Registered logfile [/u01/app/oracle/fast_recovery_area/DG2/foreign_archivelog/DG1/2014_04_29/o1_mf_1_83_9ozghp29_.arc] to LogMiner session id [1]
LOGMINER: Begin mining logfile during dictionary load for session 1 thread 1 sequence 79, /u01/app/oracle/fast_recovery_area/DG2/foreign_archivelog/DG1/2014_04_29/o1_mf_1_79_9ozghhy9_.arc
Tue Apr 29 22:41:09 2014
Thread 1 advanced to log sequence 2 (LGWR switch)
  Current log# 2 seq# 2 mem# 0: /u01/app/oracle/oradata/dg/DG2/onlinelog/o1_mf_2_9ozd8zc1_.log
Tue Apr 29 22:41:11 2014
Archived Log entry 1 added for thread 1 sequence 1 ID 0x47d6f0d0 dest 1:
Tue Apr 29 22:41:20 2014
Thread 1 advanced to log sequence 3 (LGWR switch)
  Current log# 3 seq# 3 mem# 0: /u01/app/oracle/oradata/dg/DG2/onlinelog/o1_mf_3_9ozd904o_.log
Tue Apr 29 22:41:22 2014
Archived Log entry 2 added for thread 1 sequence 2 ID 0x47d6f0d0 dest 1:
Thread 1 cannot allocate new log, sequence 4
Checkpoint not complete
  Current log# 3 seq# 3 mem# 0: /u01/app/oracle/oradata/dg/DG2/onlinelog/o1_mf_3_9ozd904o_.log
Thread 1 advanced to log sequence 4 (LGWR switch)
  Current log# 1 seq# 4 mem# 0: /u01/app/oracle/oradata/dg/DG2/onlinelog/o1_mf_1_9ozd8yhq_.log
Tue Apr 29 22:41:31 2014
Archived Log entry 3 added for thread 1 sequence 3 ID 0x47d6f0d0 dest 1:
Tue Apr 29 22:41:47 2014
RFS LogMiner: RFS id [3787] assigned as thread [1] PING handler

Confirm the processes:

SQL> select process,status,sequence# from v$managed_standby;

PROCESS   STATUS        SEQUENCE#
--------- ------------ ----------
ARCH      CLOSING               2
ARCH      CLOSING               3
ARCH      CONNECTED             0
ARCH      CLOSING              83
ARCH      CLOSING               1
RFS       IDLE                  0
RFS       IDLE                  0
RFS       IDLE                  0
RFS       IDLE                  0
RFS       IDLE                 84

10 rows selected.

SQL> 

That completes the process of building the logical standby.
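As a final sanity check, you can push a small change through from the primary and watch it arrive on the standby. A hedged sketch, using a hypothetical table scott.lsby_test:

-- On the primary:
create table scott.lsby_test (id number primary key, note varchar2(30));
insert into scott.lsby_test values (1, 'hello from primary');
commit;
alter system switch logfile;

-- A moment later, on the logical standby (which is open):
select * from scott.lsby_test;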
