Oracle Maximum Availability Architecture
Ricardo Portilho Proni
ricardo@nervinformatica.com.br

This work is licensed under the Creative Commons Attribution-NoDerivs 3.0 Brazil license. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/br/.
High Availability

Characteristics of High Availability
• Reliability
• Recoverability
• Timely error detection
• Operational continuity
High Availability Analysis
• Business impact analysis
• Cost of downtime
• Recovery Time Objective (RTO)
• Recovery Point Objective (RPO)
• Manageability goal
• Total Cost of Ownership (TCO)
• Return on Investment (ROI)
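The RTO and availability targets above are usually negotiated as a percentage ("three nines", "four nines"); a quick sketch, not part of the original lab material, for turning that percentage into a yearly downtime budget:

```shell
# downtime_per_year PCT: print the maximum downtime, in minutes per
# year, allowed by an availability percentage (e.g. 99.99).
downtime_per_year() {
  awk -v pct="$1" 'BEGIN { printf "%.1f\n", (100 - pct) / 100 * 365 * 24 * 60 }'
}

downtime_per_year 99.9    # three nines -> 525.6 minutes/year
downtime_per_year 99.99   # four nines  -> 52.6 minutes/year
```

A Tier 1 system with a 1-hour RTO cannot live with "three nines" if more than one outage per year is expected.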
Systems and High Availability
• Tier 1 (Billing, Sales)
• Tier 2 (Purchasing, Inventory)
• Tier 3 (BI, Development)
Costs and High Availability
• Maximum tolerated downtime.
• Maximum tolerated outage frequency.
• Easily measurable costs (lost sales, idle employees, contractual penalties)
• Hard-to-measure costs (lawsuits)
• Unmeasurable costs (negative publicity, angry customers)
High Availability Deployment
Oracle Solutions for High Availability
• Fast-Start Fault Recovery
• Oracle Restart
• Oracle Real Application Clusters and Oracle Clusterware
• Oracle RAC One Node
• Oracle Data Guard
• Oracle GoldenGate / Oracle Streams
• Oracle Flashback Technology
• Oracle Automatic Storage Management
• Fast Recovery Area
• Recovery Manager
• Data Recovery Advisor
• Oracle Secure Backup
• Oracle Security Features
• LogMiner
• Oracle Exadata Storage Server Software (Exadata Cell)
• Oracle Exadata Database Machine
• Oracle Database File System (DBFS)
• Oracle Automatic Storage Management Cluster File System (Oracle ACFS)
• Client Failover
• Automatic Block Repair
• Corruption Prevention, Detection, and Repair
Planned Downtime
• Operating system and hardware upgrades -> Oracle RAC
• Oracle Database patches -> Oracle RAC
• Oracle Grid Infrastructure upgrades and patches -> Oracle RAC
• Storage migration -> Oracle ASM
• Migrating to Exadata storage -> Oracle MAA best practices
• Upgrading Exadata storage -> Exadata Patch Manager
• Migrating a single-instance database to Oracle RAC -> Oracle Grid Infrastructure
• Migrating to Oracle ASM -> Oracle Data Guard
• Migrating a single-instance database to Oracle RAC -> Oracle Data Guard
• Patch set and database upgrades -> Oracle Data Guard using SQL Apply
• Oracle interim patches, Oracle Clusterware upgrades and patches, Oracle ASM upgrades, operating system and hardware upgrades -> Oracle Data Guard Standby-First Patch Apply
• Migration across Windows and Linux -> Oracle Data Guard
• Platform migration across same-endian platforms -> Transportable Database
• Platform migration across different-endian platforms -> Transportable Tablespace
• Patch set and database upgrades, platform migration, rolling upgrades, and when different character sets are required -> Oracle GoldenGate and Oracle Streams
• Application upgrades -> Online Application Maintenance and Upgrades
Unplanned Downtime
• Site failures -> Oracle Data Guard
• Site failures -> Oracle GoldenGate and Oracle Streams
• Site failures -> Recovery Manager
• Computer failures -> Oracle Real Application Clusters and Oracle Clusterware
• Computer failures -> Oracle RAC One Node
• Computer failures -> Fast-Start Fault Recovery
• Computer failures -> Oracle Data Guard
• Computer failures -> Oracle GoldenGate and Oracle Streams
• Storage failures -> Oracle Automatic Storage Management
• Storage failures -> Oracle Data Guard
• Storage failures -> RMAN with Fast Recovery Area and Oracle Secure Backup
• Storage failures -> Oracle GoldenGate and Oracle Streams
• Data corruption -> Oracle Exadata Storage Server Software (Exadata Cell) and Oracle ASM
• Data corruption -> Corruption Prevention, Detection, and Repair
• Data corruption -> Data Recovery Advisor and RMAN with Fast Recovery Area
• Data corruption -> Oracle Data Guard
• Data corruption -> Oracle GoldenGate and Oracle Streams
• Human errors -> Oracle Security Features
• Human errors -> Oracle Flashback Technology
• Human errors -> LogMiner
• Lost writes -> Oracle Data Guard, RMAN, DB_LOST_WRITE_PROTECT
• Lost writes -> Oracle Exadata Storage Server Software (Exadata Cell)
• Hangs or slowdowns -> Oracle Database and Oracle Enterprise Manager
Further Information
High Availability Overview:
http://www.oracle.com/pls/db112/to_toc?pathname=server.112/e17157/toc.htm
High Availability Best Practices:
http://www.oracle.com/pls/db112/to_toc?pathname=server.112/e10803/toc.htm
Scenario 1: Oracle RAC + Oracle Data Guard

Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09

Contingency Environment
Oracle Data Guard Physical Standby: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10

Observer Environment
Oracle Client: nerv15
Scenario 2: Oracle RAC + Oracle GoldenGate

Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09

Contingency Environment
Oracle Database: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10
Scenario 3: Extended Oracle RAC

Production Environment
Oracle RAC: nerv01 and nerv02 / nerv03 and nerv04 / nerv05 and nerv06 / nerv07 and nerv08
NFS: nerv09
ASM: nerv09
DNS: nerv09

Contingency Environment
Oracle RAC: nerv11 / nerv12 / nerv13 / nerv14
NFS: nerv10
ASM: nerv10
DNS: nerv10

Observer Environment
NFS: nerv15
Lab 1 – OEL 6 Installation
Hands On !
Lab 1.1: OEL 6 Installation
On machines nerv01, nerv02, and nerv11, install OEL.
- Screen 1: Install or upgrade an existing system
- Screen 2: Skip
- Screen 3: Next
- Screen 4: English (English), Next
- Screen 5: Brazilian ABNT2, Next
- Screen 6: Basic Storage Devices, Next
- Screen 7: Fresh Installation, Next
- Screen 8: nerv01.localdomain, Next
- Screen 9: America/Sao Paulo, Next
- Screen 10: Nerv2014, Nerv2014, Next
- Screen 11: Create Custom Layout, Next
Lab 1.2: OEL 6 Installation
- Screen 12: Create the partitions as below, then Next:
sda1    200 MB            /boot
sda2    100000 MB         /
sda3    20000 MB          /home
sda5    16384 MB          swap
sda6    10000 MB          /var
sda7    10000 MB          /tmp
sda8    remaining space   /u01
- Screen 13: Format
- Screen 14: Write changes to disk
- Screen 15: Next
- Screen 16: Minimal
- Screen 17: Reboot
- Remove the DVD.
Lab 2 – OEL 6 Configuration
Hands On !
On machines nerv01, nerv02, and nerv11, configure the public network interface.
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:26:5A:70:E8:6B
TYPE=Ethernet
UUID=8a9e98e5-0364-4a98-8f18-594ca6895433
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.0.101
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
DNS1=192.168.0.201
DNS2=8.8.8.8
DOMAIN=localdomain
Lab 2.1 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, configure the private network interface.
# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=C8:9C:DC:C7:32:45
TYPE=Ethernet
UUID=b30878af-a7c2-4464-bbe2-33bc5cc3fa3e
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.1.101
NETMASK=255.255.255.0

On machines nerv01, nerv02, and nerv11, update the operating system and install the prerequisites.
# service network restart
# yum -y update
# yum -y install oracle-rdbms-server-11gR2-preinstall
# yum -y install oracleasm-support
# yum -y install unzip iscsi-initiator-utils java-1.7.0-openjdk parted
# yum -y install unixODBC unixODBC.i686 unixODBC-devel unixODBC-devel.i686
Lab 2.2 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, APPEND TO THE END of the /etc/hosts file:
# Public
192.168.0.101 nerv01.localdomain nerv01
192.168.0.102 nerv02.localdomain nerv02
192.168.0.121 nerv11.localdomain nerv11
# Private
192.168.1.101 nerv01-priv.localdomain nerv01-priv
192.168.1.102 nerv02-priv.localdomain nerv02-priv
192.168.1.121 nerv11-priv.localdomain nerv11-priv
# Virtual
192.168.0.111 nerv01-vip.localdomain nerv01-vip
192.168.0.112 nerv02-vip.localdomain nerv02-vip
192.168.0.131 nerv11-vip.localdomain nerv11-vip
# Storage
192.168.0.201 nerv09.localdomain nerv09
192.168.0.202 nerv10.localdomain nerv10
# Client
192.168.0.2 nerv15.localdomain nerv15
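Since every node must carry identical entries, a quick consistency check saves debugging later; this is a sketch, not part of the lab (check_hosts is an illustrative helper; the node list is the one used in this course):

```shell
# check_hosts FILE: warn about cluster hostnames that do not appear
# exactly once in FILE. Prints nothing when the file is consistent.
check_hosts() {
  file="$1"
  for h in nerv01 nerv02 nerv11 nerv01-priv nerv02-priv nerv11-priv \
           nerv01-vip nerv02-vip nerv11-vip nerv09 nerv10 nerv15; do
    # Count lines whose last field is exactly this short hostname.
    n=$(awk -v h="$h" '$NF == h' "$file" | wc -l)
    [ "$n" -eq 1 ] || echo "WARN: $h has $n entries in $file"
  done
}

# Usage on a lab node:
# check_hosts /etc/hosts
```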
Lab 2.3 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, run the commands below.
# groupadd oper
# groupadd asmadmin
# groupadd asmdba
# groupadd asmoper
# usermod -g oinstall -G dba,oper,asmadmin,asmdba,asmoper oracle
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/11.2.0.4/grid
# mkdir -p /u01/app/oracle/product/11.2.0.4/db_1
# mkdir -p /u01/shared_config01
# mkdir -p /u01/shared_config02
# mkdir -p /u01/shared_config03
# chown -R oracle:oinstall /u01
# chmod -R 775 /u01
# passwd oracle (set the oracle user's password to: Nerv2014)
Lab 2.4 – OEL 6 Configuration
On machines nerv01, nerv02, and nerv11, change SELinux from "enforcing" to "permissive".
# vi /etc/selinux/config

On machines nerv01, nerv02, and nerv11, disable the firewall.
# chkconfig iptables off
# chkconfig ip6tables off

On machines nerv01, nerv02, and nerv11, disable NTP.
# mv /etc/ntp.conf /etc/ntp.conf.org
# reboot
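The SELinux edit can also be made non-interactively instead of with vi; a sketch using sed (set_selinux_permissive is an illustrative helper; the path defaults to the standard /etc/selinux/config):

```shell
# set_selinux_permissive [FILE]: switch SELINUX=enforcing to permissive
# in an SELinux config file (defaults to /etc/selinux/config).
set_selinux_permissive() {
  cfg="${1:-/etc/selinux/config}"
  sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$cfg"
  # The change takes effect at the next boot; to apply immediately,
  # also run: setenforce 0
}
```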
Lab 2.5 – OEL 6 Configuration
On machines nerv01 and nerv02, as the oracle user, APPEND TO THE END of the /home/oracle/.bash_profile file the lines below (on nerv02, adjust ORACLE_HOSTNAME to nerv02.localdomain and ORACLE_SID to ORCL2).
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=nerv01.localdomain
export ORACLE_UNQNAME=ORCL
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.4/db_1
export GRID_HOME=/u01/app/11.2.0.4/grid
export CRS_HOME=$GRID_HOME
export ORACLE_SID=ORCL1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
Lab 2.6 – OEL 6 Configuration
On machine nerv11, as the oracle user, APPEND TO THE END of the /home/oracle/.bash_profile file the lines below.
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=nerv11.localdomain
export ORACLE_UNQNAME=ORCL
export ORACLE_BASE=/u01/app
export ORACLE_HOME=$ORACLE_BASE/oracle/product/11.2.0.4/db_1
export GRID_HOME=$ORACLE_BASE/grid/11.2.0.4/grid
export CRS_HOME=$GRID_HOME
export ORACLE_SID=ORCL
export ORACLE_TERM=xterm
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
Lab 2.7 – OEL 6 Configuration
Lab 3 – Storage
Hands On !
Lab 3.1 – Storage (NFS)
On machines nerv09 and nerv10, create 3 directories.
# mkdir /shared_config01
# mkdir /shared_config02
# mkdir /shared_config03

On machines nerv09 and nerv10, add to the /etc/exports file:
/shared_config01 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_config02 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_config03 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

On machines nerv09 and nerv10, start the NFS server:
# service nfs start
# chkconfig nfs on
Lab 3.2 – Storage (NFS)
On machines nerv01, nerv02, and nerv11, add to the /etc/fstab file the THREE lines below.
nerv09:/shared_config01 /u01/shared_config01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0
nerv09:/shared_config02 /u01/shared_config02 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0
nerv09:/shared_config03 /u01/shared_config03 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0
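These mount options matter for clusterware files on NFS: hard,nointr prevent silent I/O errors, and actimeo=0 with noac disable attribute caching so all nodes see consistent metadata. A small sketch to verify that each shared_config entry in an fstab-style file carries the required options (check_nfs_opts is an illustrative helper; the option list mirrors the lines above):

```shell
# check_nfs_opts FILE: report shared_config fstab entries that are
# missing any of the NFS mount options required for shared Oracle files.
check_nfs_opts() {
  required="rw hard nointr actimeo=0 noac"
  grep 'shared_config' "$1" | while read -r dev mnt fstype opts rest; do
    for o in $required; do
      case ",$opts," in
        *",$o,"*) ;;                          # option present
        *) echo "WARN: $mnt is missing $o" ;;
      esac
    done
  done
}

# Usage on a lab node:
# check_nfs_opts /etc/fstab
```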
Lab 3.3 – Storage (NFS)
On machines nerv01, nerv02, and nerv11, run:
# mount /u01/shared_config01
# mount /u01/shared_config02
# mount /u01/shared_config03

On machine nerv01, run:
# mkdir /u01/shared_config01/rac01
# mkdir /u01/shared_config02/rac01
# mkdir /u01/shared_config03/rac01
# chown -R oracle:oinstall /u01/shared_config01/rac01
# chown -R oracle:oinstall /u01/shared_config02/rac01
# chown -R oracle:oinstall /u01/shared_config03/rac01

On machine nerv11, run:
# mkdir /u01/shared_config01/rac11
# mkdir /u01/shared_config02/rac11
# mkdir /u01/shared_config03/rac11
# chown -R oracle:oinstall /u01/shared_config01/rac11
# chown -R oracle:oinstall /u01/shared_config02/rac11
# chown -R oracle:oinstall /u01/shared_config03/rac11
Lab 3.4 – Storage (ASM)
On machines nerv09 and nerv10, create 4 partitions of 10 GB each, without formatting.

On machines nerv09 and nerv10, configure the iSCSI server.
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-01>
    backing-store /dev/sda5
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
</target>
<target iqn.2010-10.com.nervinformatica:storage.asm01-02>
    backing-store /dev/sda6
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
</target>
...
# service tgtd start
# chkconfig tgtd on
Lab 3.5 – Storage (ASM)
On machines nerv01, nerv02, and nerv11, enable the iSCSI Initiator service.
# service iscsid start
# chkconfig iscsid on

On machines nerv01, nerv02, and nerv11, discover and log in to the disks exported by the storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-01 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-02 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-03 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-04 -p 192.168.0.201 -l

On machines nerv01, nerv02, and nerv11, leave ONLY the new disks in the /etc/iscsi/initiatorname.iscsi file.
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-01
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-02
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-03
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm01-04
Lab 3.6 – Storage (ASM)
On machines nerv01, nerv02, and nerv11, REBOOT, and check that the disks were configured locally.
# fdisk -l

On machines nerv01 and nerv11, partition the new disks.
# fdisk /dev/sdb
n <enter> p <enter> 1 <enter> <enter> <enter> w <enter>
# fdisk /dev/sdc
n <enter> p <enter> 1 <enter> <enter> <enter> w <enter>
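The interactive fdisk dialogue above can be scripted by feeding the same answers on stdin; a sketch, shown as a comment because it is destructive (run it only against the lab disk you intend to partition):

```shell
# Keystroke sequence for the fdisk dialogue above: n (new partition),
# p (primary), 1 (partition number), two empty lines to accept the
# default first/last sectors, w (write table and exit).
FDISK_KEYS='n
p
1


w
'
# To apply (destructive -- only on the intended lab disk), e.g.:
# printf '%s' "$FDISK_KEYS" | fdisk /dev/sdb
```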
Lab 3.7 – Storage (ASM)
On machines nerv01 and nerv11, partition the new disks.
# fdisk /dev/sdd
n <enter> p <enter> 1 <enter> <enter> <enter> w <enter>
# fdisk /dev/sde
n <enter> p <enter> 1 <enter> <enter> <enter> w <enter>

On machine nerv02, detect the new partitions.
# partprobe /dev/sdb
# partprobe /dev/sdc
# partprobe /dev/sdd
# partprobe /dev/sde

On machines nerv01, nerv02, and nerv11, configure ASMLib.
# /etc/init.d/oracleasm configure
oracle <enter> oinstall <enter> y <enter> y <enter>
# /etc/init.d/oracleasm status

On machines nerv01 and nerv11, create the ASM disks.
# /etc/init.d/oracleasm createdisk DISK00 /dev/sdb1
# /etc/init.d/oracleasm createdisk DISK01 /dev/sdc1
# /etc/init.d/oracleasm createdisk DISK02 /dev/sdd1
# /etc/init.d/oracleasm createdisk DISK03 /dev/sde1

On machine nerv02, scan for the disks just created.
# /etc/init.d/oracleasm scandisks
Lab 3.8 – Storage (ASM)
On machines nerv01, nerv02, and nerv11, check that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK00
# /etc/init.d/oracleasm querydisk -v -p DISK01
# /etc/init.d/oracleasm querydisk -v -p DISK02
# /etc/init.d/oracleasm querydisk -v -p DISK03

On machines nerv01, nerv02, and nerv11, check the device ownership and permissions.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle oinstall 8, 17 Mar 3 08:40 DISK00
brw-rw----. 1 oracle oinstall 8, 33 Mar 3 08:40 DISK01
brw-rw----. 1 oracle oinstall 8, 49 Mar 3 08:40 DISK02
brw-rw----. 1 oracle oinstall 8, 65 Mar 3 08:40 DISK03
Lab 3.9 – Storage (ASM)
Lab 4 – Grid Infrastructure
Hands On !
Lab 4.1 – Grid Infrastructure
On machine nerv01, set up passwordless SSH.
[oracle@nerv01 ~]$ ssh-keygen -t rsa <enter> <enter> <enter>
[oracle@nerv01 ~]$ ssh oracle@nerv02 mkdir -p .ssh
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ ssh nerv02
[oracle@nerv02 ~]$ ssh-keygen -t rsa <enter> <enter> <enter>
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ ssh nerv01

On machines nerv01 and nerv11, as the oracle user, adjust the response file, then unzip and run the Grid Infrastructure installer.
$ unzip -q p13390677_112040_Linux-x86-64_3of7.zip
$ /home/oracle/grid/runInstaller -silent -responseFile /home/oracle/grid_nerv01.rsp
Lab 4.2 – Grid Infrastructure
On machine nerv01, as the root user, run the following scripts.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/11.2.0.4/grid/root.sh

On machine nerv02, as the root user, run the following scripts.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/11.2.0.4/grid/root.sh

On machine nerv11, as the root user, run the following scripts.
# /u01/oraInventory/orainstRoot.sh
# /u01/app/grid/11.2.0.4/grid/root.sh
Lab 4.3 – Grid Infrastructure
On machine nerv01, update the Cluster Inventory.
$ /u01/app/11.2.0.4/grid/oui/bin/runInstaller -silent -updateNodeList \
  ORACLE_HOME="/u01/app/11.2.0.4/grid" LOCAL_NODE=nerv01 \
  CLUSTER_NODES=nerv01,nerv02 CRS=true

On machine nerv01, create the Grid LISTENER.
$ /u01/app/11.2.0.4/grid/bin/netca \
  -silent -responseFile /u01/app/11.2.0.4/grid/assistants/netca/netca.rsp

On machine nerv11, update the Cluster Inventory.
$ /u01/app/grid/11.2.0.4/grid/oui/bin/runInstaller -silent -updateNodeList \
  ORACLE_HOME="/u01/app/grid/11.2.0.4/grid" LOCAL_NODE=nerv11 \
  CLUSTER_NODES=nerv11 CRS=true

On machine nerv11, create the Grid LISTENER.
$ /u01/app/grid/11.2.0.4/grid/bin/netca \
  -silent -responseFile /u01/app/grid/11.2.0.4/grid/assistants/netca/netca.rsp
Lab 5 – Oracle Database Software
Hands On !
Lab 5.1 – Oracle Database Software
On machines nerv01 and nerv11, as the oracle user, adjust the response file, then unzip and run the Oracle Database Software installer.
$ unzip -q p13390677_112040_Linux-x86-64_1of7.zip
$ unzip -q p13390677_112040_Linux-x86-64_2of7.zip
$ /home/oracle/database/runInstaller -silent -responseFile /home/oracle/db_nerv01.rsp

On machine nerv01, as the root user, run the script below.
# /u01/app/oracle/product/11.2.0.4/db_1/root.sh

On machine nerv02, as the root user, run the script below.
# /u01/app/oracle/product/11.2.0.4/db_1/root.sh

On machine nerv11, as the root user, run the script below.
# /u01/app/oracle/product/11.2.0.4/db_1/root.sh
Lab 6 – ASM
Hands On !
Lab 6.1 – ASM
On machines nerv01 and nerv11, create the ASM instance, with the DATA DISKGROUP.
$ $GRID_HOME/bin/crsctl status res -t
$ $GRID_HOME/bin/asmca -silent -configureASM -sysAsmPassword Nerv2014 \
  -asmsnmpPassword Nerv2014 -diskGroupName DATA -diskList \
  '/dev/oracleasm/disks/DISK00,/dev/oracleasm/disks/DISK01' -redundancy NORMAL \
  -au_size 1 -compatible.asm '11.2.0.0.0' -compatible.rdbms '11.2.0.0.0' \
  -diskString '/dev/oracleasm/disks/*'
$ $GRID_HOME/bin/crsctl status res -t
Lab 6.2 – ASM
On machine nerv01, create the FRA DISKGROUP, and enable it on the other node.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK
'/dev/oracleasm/disks/DISK02' NAME FRA_0002, '/dev/oracleasm/disks/DISK03' NAME FRA_0003;
SQL> EXIT;
$ srvctl start diskgroup -g FRA -n nerv02
$ srvctl enable diskgroup -g FRA -n nerv02
$ $GRID_HOME/bin/crsctl status res -t

On machine nerv11, create the FRA DISKGROUP.
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM
$ sqlplus / AS SYSASM
SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK
'/dev/oracleasm/disks/DISK02' NAME FRA_0002, '/dev/oracleasm/disks/DISK03' NAME FRA_0003;
SQL> EXIT;
$ $GRID_HOME/bin/crsctl status res -t
Lab 7 – Oracle Database
Hands On !
Lab 7.1 – Oracle Database
On machine nerv01, create the database.
$ export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.4/db_1
$ export ORACLE_SID=ORCL1
$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc \
  -gdbName ORCL -sid ORCL \
  -sysPassword Nerv2014 -systemPassword Nerv2014 \
  -storageType ASM -asmsnmpPassword Nerv2014 \
  -diskGroupName DATA -recoveryAreaDestination FRA \
  -nodelist nerv01,nerv02 \
  -characterSet WE8ISO8859P15 -listeners LISTENER \
  -memoryPercentage 40 \
  -sampleSchema true \
  -emConfiguration NONE \
  -continueOnNonFatalErrors false
Lab 7.2 – Oracle Database
On machine nerv11, create the database.
$ export ORACLE_HOME=$ORACLE_BASE/oracle/product/11.2.0.4/db_1
$ export ORACLE_SID=ORCL
$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc \
  -gdbName ORCL -sid ORCL \
  -sysPassword Nerv2014 -systemPassword Nerv2014 \
  -storageType ASM -asmsnmpPassword Nerv2014 \
  -diskGroupName DATA -recoveryAreaDestination FRA \
  -nodelist nerv11 \
  -characterSet WE8ISO8859P15 -listeners LISTENER \
  -memoryPercentage 20 \
  -sampleSchema true \
  -emConfiguration NONE \
  -continueOnNonFatalErrors false
Lab 8 – RAC + Data Guard
Hands On !
Lab 8.1 – Data Guard
On machines nerv01, nerv02, and nerv11, make the ORACLE_HOME tnsnames.ora look like the one below.
PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac01-scan.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCL)
    )
  )
DR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nerv11.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DR)
    )
  )
Lab 8.2 – Data Guard
On machines nerv01, nerv02, and nerv11, append the lines below to the end of the $GRID_HOME/network/admin/listener.ora file (adjust per machine: on nerv02, SID_NAME=ORCL2; on nerv11, SID_NAME=ORCL and GLOBAL_DBNAME=DR_DGMGRL).
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=ORCL_DGMGRL)
      (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/db_1)
      (SID_NAME=ORCL1)
    )
  )

On machines nerv01, nerv02, and nerv11, test the new LISTENER configuration.
$ export ORACLE_HOME=$GRID_HOME
$ $GRID_HOME/bin/lsnrctl status
$ $GRID_HOME/bin/lsnrctl stop
$ $GRID_HOME/bin/lsnrctl start
$ $GRID_HOME/bin/lsnrctl status
Lab 8.3 – Data Guard
On machine nerv01, enable the Data Guard prerequisites.
$ export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.4/db_1
$ export ORACLE_SID=ORCL1
$ srvctl stop database -d ORCL
$ srvctl start instance -d ORCL -i ORCL1 -o mount
$ sqlplus / AS SYSDBA
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE FLASHBACK ON;
SQL> EXIT;
$ srvctl stop database -d ORCL
$ srvctl start database -d ORCL

On machine nerv01, change the SNAPSHOT CONTROLFILE location.
$ rman target /
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+FRA/snapcf.f';
Lab 8.4 – Data Guard
On machine nerv01, create a STANDBY CONTROLFILE.
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/home/oracle/stb.ctl';

On machine nerv01, back up the database and the archived logs.
RMAN> BACKUP DATABASE FORMAT '/home/oracle/Backup_Banco_%U.rman';
RMAN> BACKUP ARCHIVELOG ALL FORMAT '/home/oracle/Backup_Archives_%U.rman';

On machine nerv01, copy the PASSWORD FILE to machine nerv11.
$ scp $ORACLE_HOME/dbs/orapwORCL1 nerv11:$ORACLE_HOME/dbs/orapwORCL

On machine nerv01, copy the STANDBY CONTROLFILE to machine nerv11.
$ scp /home/oracle/stb.ctl nerv11:/home/oracle/

On machine nerv01, copy the BACKUP to machine nerv11.
$ scp /home/oracle/Backup_*.rman nerv11:/home/oracle/
Lab 8.5 – Data Guard
On machine nerv11, change the DB_UNIQUE_NAME parameter.
SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='DR' SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP NOMOUNT;

On machine nerv11, restore the CONTROLFILE.
RMAN> RESTORE CONTROLFILE FROM '/home/oracle/stb.ctl';
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;

On machine nerv11, fix the RMAN metadata.
RMAN> CROSSCHECK BACKUP;
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT EXPIRED BACKUP;
RMAN> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
RMAN> CATALOG START WITH '/home/oracle/Backup';

On machine nerv11, restore the database.
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
Lab 8.6 – Data Guard
On machine nerv11, enable the Data Guard prerequisites.
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE FLASHBACK ON;

On machine nerv01, configure the Data Guard Broker.
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1 = '+FRA/DR1.DAT' SCOPE=BOTH;
SQL> ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2 = '+FRA/DR2.DAT' SCOPE=BOTH;

Test name resolution (ping) between the three machines.
Test the SQL*Net configuration (tnsping) between the three machines.
Test PASSWORD FILE authentication between the three machines.
[oracle@nerv01 ~]$ sqlplus SYS/Nerv2014@DR AS SYSDBA
[oracle@nerv02 ~]$ sqlplus SYS/Nerv2014@DR AS SYSDBA
[oracle@nerv11 ~]$ sqlplus SYS/Nerv2014@PROD AS SYSDBA

On machines nerv01 and nerv11, start the Data Guard Broker.
SQL> ALTER SYSTEM SET DG_BROKER_START=TRUE;
Lab 8.7 – Data Guard
On machine nerv01, create the Data Guard Broker configuration.
$ dgmgrl SYS/Nerv2014@PROD
DGMGRL> CREATE CONFIGURATION 'DRSolution' AS PRIMARY DATABASE IS ORCL CONNECT IDENTIFIER IS PROD;

On machine nerv01, add machine nerv11 to the configuration.
DGMGRL> ADD DATABASE DR AS CONNECT IDENTIFIER IS DR;

On machine nerv01, enable the configuration.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;

On all three machines, follow the Alert Log.
Lab 8.8 – Data Guard
On machine nerv11, create STANDBY LOGFILEs.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE ADD STANDBY LOGFILE;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

On machine nerv01, create STANDBY LOGFILEs.
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2;

On machine nerv01, inspect the details of a database.
DGMGRL> SHOW DATABASE VERBOSE ORCL;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'ArchiveLagTarget'=600;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'StandbyFileManagement'=AUTO;
DGMGRL> SHOW DATABASE ORCL 'ArchiveLagTarget';
DGMGRL> SHOW DATABASE ORCL 'StandbyFileManagement';

On machine nerv01, change the Protection Mode.
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT DATABASE DR SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;

On machine nerv01, verify that the Protection Mode was changed.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE VERBOSE ORCL;
DGMGRL> SHOW DATABASE VERBOSE DR;
DGMGRL> SHOW INSTANCE VERBOSE "ORCL1" ON DATABASE ORCL;
DGMGRL> SHOW INSTANCE VERBOSE "ORCL2" ON DATABASE ORCL;
DGMGRL> SHOW INSTANCE VERBOSE "ORCL" ON DATABASE DR;
Lab 8.9 – Data Guard
On machine nerv01, perform a SWITCHOVER to machine nerv11, following the Alert Logs throughout.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO DR;

On machine nerv11, switch back to machine nerv01, following the Alert Logs throughout.
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SWITCHOVER TO ORCL;
Lab 8.10 – Data Guard
Shut down machines nerv01 and nerv02.
Perform a FAILOVER to machine nerv11.
$ dgmgrl SYS/Nerv2014@DR
DGMGRL> FAILOVER TO DR;
DGMGRL> SHOW CONFIGURATION;

Power machines nerv01 and nerv02 back on, and on machine nerv11, run the REINSTATE.
DGMGRL> REINSTATE DATABASE ORCL;

On machine nerv11, perform the SWITCHOVER.
DGMGRL> SWITCHOVER TO ORCL;
Lab 8.11 – Data Guard
Lab 9 – Fast-Start Failover
Hands On !
Lights out administration
Lab 9.1: Fast-Start Failover
On machine nerv01, configure Fast-Start Failover.
DGMGRL> EDIT DATABASE ORCL SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT DATABASE DR SET PROPERTY 'LogXptMode'='SYNC';
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverLagLimit=600;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverThreshold=30;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverAutoReinstate=TRUE;
DGMGRL> EDIT CONFIGURATION SET PROPERTY FastStartFailoverPmyShutdown=TRUE;
DGMGRL> EDIT DATABASE ORCL SET PROPERTY FastStartFailoverTarget=DR;
DGMGRL> ENABLE FAST_START FAILOVER;
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW FAST_START FAILOVER;
Lab 9.2: Fast-Start Failover
On machine nerv15, start the Observer.
$ cd /home/oracle/dba01
$ dgmgrl -logfile /home/oracle/dba01/dba01.log SYS/Nerv2014@DR_RAC01
DGMGRL> START OBSERVER;

Shut down machines nerv01 and nerv02, and wait for the FAILOVER.
Power machines nerv01 and nerv02 back on.
Wait for the REINSTATE. Perform the SWITCHOVER.
Lab 9.3: Fast-Start Failover
On machine nerv01, create two Services as below.
$GRID_HOME/bin/srvctl add service -d ORCL -r ORCL1,ORCL2 -s OLTP -l PRIMARY -w 1 -z 10
$GRID_HOME/bin/srvctl add service -d ORCL -r ORCL1,ORCL2 -s OLAP -l PHYSICAL_STANDBY -w 1 -z 10

On machine nerv11, create two Services as below.
$GRID_HOME/bin/srvctl add service -d ORCL -s OLTP -l PRIMARY -w 1 -z 10
$GRID_HOME/bin/srvctl add service -d ORCL -s OLAP -l PHYSICAL_STANDBY -w 1 -z 10

On machine nerv01, start the two Services.
$GRID_HOME/bin/srvctl start service -d ORCL -s OLTP
$GRID_HOME/bin/srvctl start service -d ORCL -s OLAP

On machine nerv01, generate some Archived Redo Logs and wait for them to be replicated to the DR.

On machine nerv01, stop the OLAP Service.
$GRID_HOME/bin/srvctl stop service -d ORCL -s OLAP

On machine nerv11, start the OLAP Service.
$GRID_HOME/bin/srvctl start service -d ORCL -s OLAP
Lab 9.4: Fast-Start Failover
On machine nerv15, add these two entries to tnsnames.ora, and test the connection after a new Failover.
OLTP_RAC01 =
  (DESCRIPTION=
    (LOAD_BALANCE=OFF)
    (FAILOVER=ON)
    (ADDRESS=(PROTOCOL=TCP)(HOST=rac01-scan)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=nerv11)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=OLTP))
  )
OLAP_RAC01 =
  (DESCRIPTION=
    (LOAD_BALANCE=OFF)
    (FAILOVER=ON)
    (ADDRESS=(PROTOCOL=TCP)(HOST=nerv11)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=rac01-scan)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=OLAP))
  )
Lab 10 – ACFS
Hands On !
ACFS Advantages
• Mirroring
• Striping
• Replication
• Snapshots
• High Availability
ACFS Disadvantages
• Recent product (11gR2)
• Complex installation
• Kernel dependency
• Dependency on Grid Infrastructure components
• Still not supported on UEK R3

Bug ID 16318126
Oracle ASM Cluster File System (ACFS) is currently not supported for use with UEK R3.
http://docs.oracle.com/cd/E37670_01/E51472/E51472.pdf
On machine nerv09, create a directory.
# mkdir /shared_ogg
On machine nerv09, add to the file /etc/exports:
/shared_ogg *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
On machine nerv09, restart the NFS Server:
# service nfs restart
On machines nerv01 and nerv02, add the line below to the file /etc/fstab.
nerv09:/shared_ogg /u01/shared_ogg nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
On machine nerv01, run:
# mkdir /u01/shared_ogg
# mount /u01/shared_ogg
# mkdir /u01/shared_ogg/rac01
# chown -R oracle:oinstall /u01/shared_ogg/rac01
On machine nerv02, run:
# mkdir /u01/shared_ogg
# mount /u01/shared_ogg
Lab 10.1: NFS
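Those fstab options matter: hard and tcp keep I/O from failing silently, and actimeo=0 disables attribute caching so both nodes see a consistent view of the shared Golden Gate files. A minimal POSIX-shell helper (hypothetical, not part of the lab) can sanity-check an options string before you reboot:

```shell
# Hypothetical sanity check: confirm an fstab NFS options string carries
# the options this lab depends on (hard/tcp mount, no attribute caching).
check_nfs_opts() {
  opts=$1
  for required in rw hard tcp actimeo=0; do
    case ",$opts," in
      *,"$required",*) ;;                               # option present
      *) echo "missing option: $required"; return 1 ;;
    esac
  done
  echo "options look OK"
}

check_nfs_opts "rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0"
# prints: options look OK
```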
Lab 11 – Golden Gate Unidirectional
Hands On !
Lab 11.1: Golden Gate Unidirectional
On machine nerv11, create a new database.
$ export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.4/db_1
$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc \
-gdbName BI -sid BI \
-sysPassword Nerv2014 -systemPassword Nerv2014 \
-storageType ASM -asmsnmpPassword Nerv2014 \
-diskGroupName DATA -recoveryAreaDestination FRA \
-nodelist nerv11 \
-characterSet WE8ISO8859P15 -listeners LISTENER \
-memoryPercentage 20 -sampleSchema true -emConfiguration NONE \
-continueOnNonFatalErrors false
On machine nerv11, put the BI database in ARCHIVELOG mode. (Note: DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST.)
export ORACLE_SID=BI
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA';
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE OPEN;
Lab 11.2: Golden Gate Unidirectional
On machine nerv01, install the Golden Gate.
$ cd /home/oracle
$ unzip 121200_fbo_ggs_Linux_x64_shiphome.zip
$ cd /home/oracle/fbo_ggs_Linux_x64_shiphome/Disk1/
$ ./runInstaller -silent -responseFile /home/oracle/ogg_nerv01.rsp
On machine nerv11, install the Golden Gate.
$ cd /home/oracle
$ unzip 121200_fbo_ggs_Linux_x64_shiphome.zip
$ cd /home/oracle/fbo_ggs_Linux_x64_shiphome/Disk1/
$ ./runInstaller -silent -responseFile /home/oracle/ogg_nerv11.rsp
On machine nerv01, verify that the MANAGER is running.
$ cd /u01/shared_ogg/rac01
$ ./ggsci
GGSCI> info all
On machine nerv11, verify that the MANAGER is running.
$ cd /u01/app/oracle/product/11.2.0.4/ogg
$ ./ggsci
GGSCI> info all
Lab 11.3: Golden Gate Unidirectional
On machine nerv01, enable the Golden Gate prerequisites.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;
SQL> ALTER SYSTEM SET RECYCLEBIN=OFF SCOPE=SPFILE;
$GRID_HOME/bin/srvctl stop database -d ORCL
$GRID_HOME/bin/srvctl start database -d ORCL
On machine nerv01, create the Golden Gate user.
$ cd /u01/shared_ogg/rac01/
$ $ORACLE_HOME/bin/sqlplus / AS SYSDBA
SQL> CREATE TABLESPACE OGG;
SQL> CREATE USER OGG IDENTIFIED BY Nerv2014 DEFAULT TABLESPACE OGG TEMPORARY TABLESPACE TEMP;
SQL> GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO OGG;
SQL> GRANT EXECUTE ON UTL_FILE TO OGG;
SQL> @marker_setup.sql OGG <enter>
SQL> @ddl_setup.sql OGG <enter>
SQL> @role_setup.sql OGG <enter>
SQL> @ddl_enable.sql
Lab 11.4: Golden Gate Unidirectional
On machine nerv11, enable the Golden Gate prerequisites.
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION=TRUE;
SQL> ALTER SYSTEM SET RECYCLEBIN=OFF SCOPE=SPFILE;
$GRID_HOME/bin/srvctl stop database -d BI
$GRID_HOME/bin/srvctl start database -d BI
On machine nerv11, create the Golden Gate user.
$ cd /u01/app/oracle/product/11.2.0.4/ogg
$ export ORACLE_SID=BI
$ $ORACLE_HOME/bin/sqlplus / AS SYSDBA
SQL> CREATE TABLESPACE OGG;
SQL> CREATE USER OGG IDENTIFIED BY Nerv2014 DEFAULT TABLESPACE OGG TEMPORARY TABLESPACE TEMP;
SQL> GRANT CONNECT, RESOURCE, UNLIMITED TABLESPACE TO OGG;
SQL> GRANT EXECUTE ON UTL_FILE TO OGG;
SQL> @marker_setup.sql OGG <enter>
SQL> @ddl_setup.sql OGG <enter>
SQL> @role_setup.sql OGG <enter>
SQL> @ddl_enable.sql
Lab 11.5: Golden Gate Unidirectional
On machine nerv01, add the EXTRACT process.
GGSCI> add extract ext1, tranlog, THREADS 2, begin now
GGSCI> add exttrail /u01/app/oracle/product/11.2.0.4/ogg/dirdat/lt, extract ext1
On machine nerv01, edit the EXTRACT process parameter file.
GGSCI> edit params ext1
extract ext1
userid OGG@ORCL, password Nerv2014
rmthost nerv11, mgrport 7809
rmttrail /u01/app/oracle/product/11.2.0.4/ogg/dirdat/lt
TRANLOGOPTIONS EXCLUDEUSER OGG ASMUSER SYS@ASM, ASMPASSWORD Nerv2014
ddl include mapped objname SCOTT.*;
table SCOTT.*;
Lab 11.6: Golden Gate Unidirectional
On machine nerv11, edit the GLOBALS parameter file.
GGSCI> edit params ./GLOBALS
GGSCHEMA OGG
CHECKPOINTTABLE OGG.checkpoint
On machine nerv11, create the CHECKPOINT table.
GGSCI> dblogin userid OGG
Nerv2014 <enter>
GGSCI> add checkpointtable OGG.checkpoint
On machine nerv11, add the REPLICAT process.
GGSCI> add replicat rep1, exttrail /u01/app/oracle/product/11.2.0.4/ogg/dirdat/lt, checkpointtable OGG.checkpoint
On machine nerv11, edit the REPLICAT process parameter file.
GGSCI> edit params rep1
replicat rep1
ASSUMETARGETDEFS
userid OGG@BI, password Nerv2014
discardfile /u01/app/oracle/product/11.2.0.4/ogg/dircrd/rep1_discard.txt, append, megabytes 10
DDL
map SCOTT.*, target SCOTT.*;
Lab 11.7: Golden Gate Unidirectional
On machines nerv01, nerv02 and nerv11, add the ASM entry to the ORACLE_HOME tnsnames.ora.
ASM =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = nerv01.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = +ASM)
    )
  )
On machines nerv01 and nerv11, enable the SCOTT user.
SQL> ALTER USER SCOTT IDENTIFIED BY TIGER ACCOUNT UNLOCK;
Lab 11.8: Golden Gate Unidirectional
On machine nerv01, start the EXTRACT process.
GGSCI> info all
GGSCI> start extract ext1
GGSCI> info all
On machine nerv11, start the REPLICAT process.
GGSCI> info all
GGSCI> start replicat rep1
GGSCI> info all
On machine nerv01, follow the error log.
$ tail -f /u01/shared_ogg/rac01/ggserr.log
On machine nerv11, follow the error log.
$ tail -f /u01/app/oracle/product/11.2.0.4/ogg/ggserr.log
Test replication of the SCOTT user's data between machines nerv01 and nerv11.
Test replication of the SCOTT user's data between machines nerv02 and nerv11.
Lab 12 – Golden Gate Bidirectional
Hands On !
Lab 12.1: Golden Gate Bidirectional
On machine nerv11, verify that the MANAGER and the REPLICAT are running.
$ cd /u01/app/oracle/product/11.2.0.4/ogg
$ ./ggsci
GGSCI> info all
On machine nerv11, add the EXTRACT process.
GGSCI> add extract ext2, tranlog, THREADS 1, begin now
GGSCI> add exttrail /u01/shared_ogg/rac01/dirdat/lt, extract ext2
On machine nerv11, edit the EXTRACT process parameter file.
GGSCI> edit params ext2
extract ext2
userid OGG@BI, password Nerv2014
rmthost nerv01-vip, mgrport 7809
TRANLOGOPTIONS EXCLUDEUSER OGG ASMUSER SYS@ASM, ASMPASSWORD Nerv2014
rmttrail /u01/shared_ogg/rac01/dirdat/lt
ddl include mapped objname SCOTT.*;
table SCOTT.*;
Lab 12.2: Golden Gate Bidirectional
On machine nerv01, edit the GLOBALS parameter file.
GGSCI> edit params ./GLOBALS
GGSCHEMA OGG
CHECKPOINTTABLE OGG.checkpoint
On machine nerv01, create the CHECKPOINT table.
GGSCI> dblogin userid OGG
Nerv2014 <enter>
GGSCI> add checkpointtable OGG.checkpoint
On machine nerv01, add the REPLICAT process.
GGSCI> add replicat rep2, exttrail /u01/shared_ogg/rac01/dirdat/lt, checkpointtable OGG.checkpoint
On machine nerv01, edit the REPLICAT process parameter file.
GGSCI> edit params rep2
replicat rep2
ASSUMETARGETDEFS
userid OGG@ORCL, password Nerv2014
discardfile /u01/shared_ogg/rac01/dircrd/rep1_discard.txt, append, megabytes 10
DDL
map SCOTT.*, target SCOTT.*;
Lab 12.3: Golden Gate Bidirectional
On machine nerv11, start the EXTRACT process.
GGSCI> info all
GGSCI> start extract ext2
GGSCI> info all
On machine nerv01, start the REPLICAT process.
GGSCI> info all
GGSCI> start replicat rep2
GGSCI> info all
On machine nerv01, follow the error log.
$ tail -f /u01/shared_ogg/rac01/ggserr.log
On machine nerv11, follow the error log.
$ tail -f /u01/app/oracle/product/11.2.0.4/ogg/ggserr.log
Test replication of the SCOTT user's data between machines nerv01 and nerv11.
Test replication of the SCOTT user's data between machines nerv02 and nerv11.
Test replication of the SCOTT user's data between machines nerv11 and nerv01.
Lab 13 – Golden Gate High Availability
Hands On !
Lab 13.1: Golden Gate HA
On machine nerv01, create a VIP and a Resource for the Golden Gate.
# /u01/app/11.2.0.4/grid/bin/appvipcfg create -network=1 -ip=192.168.0.141 -vipname=rac01-ogg-vip -user=root
# /u01/app/11.2.0.4/grid/bin/crsctl start resource rac01-ogg-vip -n nerv01
# vi /u01/shared_ogg/rac01/ogg_action.sh
# chmod +x /u01/shared_ogg/rac01/ogg_action.sh
# chown oracle:oinstall /u01/shared_ogg/rac01/ogg_action.sh
# /u01/app/11.2.0.4/grid/bin/crsctl add resource ogg -type cluster_resource -attr "ACTION_SCRIPT=/u01/shared_ogg/rac01/ogg_action.sh, CHECK_INTERVAL=30, START_DEPENDENCIES='hard(rac01-ogg-vip,ora.orcl.db) pullup(rac01-ogg-vip)', STOP_DEPENDENCIES='hard(rac01-ogg-vip)'"
# /u01/app/11.2.0.4/grid/bin/crsctl setperm resource rac01-ogg-vip -u user:oracle:r-x
# /u01/app/11.2.0.4/grid/bin/crsctl setperm resource ogg -o oracle
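The ACTION_SCRIPT that Clusterware calls is created above with vi, but its body is not shown on the slide. A minimal sketch of what ogg_action.sh could look like (hypothetical; the paths, the GGSCI variable, and the RUNNING check are assumptions, adjust to your install):

```shell
#!/bin/sh
# Hypothetical CRS action script for the ogg resource (sketch only).
# Clusterware invokes it with start/stop/check/clean as the first argument.
OGG_HOME=${OGG_HOME:-/u01/shared_ogg/rac01}
GGSCI=${GGSCI:-$OGG_HOME/ggsci}          # overridable, e.g. for testing

ogg_start() { echo "start mgr" | "$GGSCI" > /dev/null; }
ogg_stop()  { echo "stop mgr"  | "$GGSCI" > /dev/null; }
ogg_check() {
  # Report healthy only while ggsci shows the Manager as RUNNING.
  echo "info mgr" | "$GGSCI" | grep -q "RUNNING"
}

case "$1" in
  start)      ogg_start ;;
  stop|clean) ogg_stop ;;
  check)      ogg_check ;;
  *)          : ;;   # no action requested
esac
```

Clusterware treats the script's exit status as the resource state, so the check branch is what drives the failover to nerv02 tested in Lab 13.2.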
Lab 13.2: Golden Gate HA
On machine nerv01, verify and start the Golden Gate Resource.
$ $GRID_HOME/bin/crsctl status res -t
$ $GRID_HOME/bin/crsctl stop res ogg
$ $GRID_HOME/bin/crsctl status res -t
$ $GRID_HOME/bin/crsctl start res ogg
Restart machine nerv01, and verify that the Golden Gate is started on machine nerv02.
Lab 14 – RAC Extended
Hands On !
Lab 14.0: RAC Extended
On machine nerv01, disable replication via Data Guard.
DGMGRL> DISABLE FAST_START FAILOVER;
DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXPERFORMANCE;
DGMGRL> REMOVE CONFIGURATION;
SQL> ALTER SYSTEM SET DG_BROKER_START=FALSE;
On machine nerv01, disable replication via Golden Gate.
# /u01/app/11.2.0.4/grid/bin/crsctl stop resource ogg
# /u01/app/11.2.0.4/grid/bin/crsctl delete resource ogg
# /u01/app/11.2.0.4/grid/bin/crsctl stop resource rac01-ogg-vip
# /u01/app/11.2.0.4/grid/bin/appvipcfg delete -vipname=rac01-ogg-vip
Lab 14.1: RAC Extended
On machine nerv11, run Labs 1 and 2 again.
On machine nerv10, erase the contents of the iSCSI disks.
# dd if=/dev/zero of=/dev/sda5 bs=512 count=10000
On machine nerv09, change the iSCSI Server to allow machine nerv11 access to the disks.
# cat /etc/tgt/targets.conf
<target iqn.2010-10.com.nervinformatica:storage.asm01-01>
    backing-store /dev/sda5
    initiator-address 192.168.0.101
    initiator-address 192.168.0.102
    initiator-address 192.168.0.121
</target>
...
# service tgtd restart
Also on machine nerv10, change the iSCSI Server to allow machines nerv01 and nerv02 access to the disks.
Lab 14.2: RAC Extended
On machines nerv01, nerv02 and nerv11, check the disks exported by the Storage.
# iscsiadm -m discovery -t sendtargets -p 192.168.0.202
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm11-01 -p 192.168.0.202 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm11-02 -p 192.168.0.202 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm11-03 -p 192.168.0.202 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm11-04 -p 192.168.0.202 -l
# iscsiadm -m discovery -t sendtargets -p 192.168.0.201
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-01 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-02 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-03 -p 192.168.0.201 -l
# iscsiadm -m node -T iqn.2010-10.com.nervinformatica:storage.asm01-04 -p 192.168.0.201 -l
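The repetitive logins follow one pattern per portal, so they can be generated in a loop. A small POSIX-shell sketch (the helper name is ours; the target prefixes and portal IPs are the lab's):

```shell
# Sketch: emit the iscsiadm login commands for the four LUNs of one
# target prefix on one portal, instead of typing each line by hand.
gen_iscsi_logins() {
  portal=$1
  prefix=$2
  for lun in 01 02 03 04; do
    echo "iscsiadm -m node -T ${prefix}-${lun} -p ${portal} -l"
  done
}

# Example: the commands for the asm01 disks served by 192.168.0.201.
gen_iscsi_logins 192.168.0.201 iqn.2010-10.com.nervinformatica:storage.asm01
```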
On machines nerv01, nerv02 and nerv11, add the new disks to the file /etc/iscsi/initiatorname.iscsi.
...
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-01
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-02
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-03
InitiatorName=iqn.2010-10.com.nervinformatica:storage.asm11-04
Lab 14.3: RAC Extended
On machines nerv01, nerv02 and nerv11, verify that the disks were configured locally.
# fdisk -l
On machine nerv01, partition the new disks.
# fdisk /dev/sdf (and sdg, sdh, sdi)
n <enter>
p <enter>
1 <enter>
<enter>
<enter>
w <enter>
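The interactive keystrokes above can also be fed to fdisk non-interactively; a hedged sketch that only builds the answer sequence (the helper name is ours — verify on a scratch disk before piping it into fdisk, since partitioning is destructive):

```shell
# Sketch: the fdisk answer sequence above (new primary partition 1,
# default start/end, write) as a single printf.
fdisk_answers() {
  printf 'n\np\n1\n\n\nw\n'
}

# Usage (destructive!): fdisk_answers | fdisk /dev/sdf
fdisk_answers
```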
Lab 14.4: RAC Extended
On machines nerv02 and nerv11, run detection of the new disks.
# partprobe /dev/sdb
# partprobe /dev/sdc
# partprobe /dev/sdd
# partprobe /dev/sde
# partprobe /dev/sdf
# partprobe /dev/sdg
# partprobe /dev/sdh
# partprobe /dev/sdi
On machine nerv11, configure ASMLib.
# /etc/init.d/oracleasm configure
oracle <enter>
oinstall <enter>
y <enter>
y <enter>
# /etc/init.d/oracleasm status
On machine nerv01, create the new ASM disks.
# /etc/init.d/oracleasm createdisk DISK04 /dev/sdf1
# /etc/init.d/oracleasm createdisk DISK05 /dev/sdg1
# /etc/init.d/oracleasm createdisk DISK06 /dev/sdh1
# /etc/init.d/oracleasm createdisk DISK07 /dev/sdi1
On machines nerv02 and nerv11, run detection of the created disks.
# /etc/init.d/oracleasm scandisks
On machines nerv01, nerv02 and nerv11, verify that the disks are correct.
# /etc/init.d/oracleasm listdisks
# /etc/init.d/oracleasm querydisk -v -p DISK04
# /etc/init.d/oracleasm querydisk -v -p DISK05
# /etc/init.d/oracleasm querydisk -v -p DISK06
# /etc/init.d/oracleasm querydisk -v -p DISK07
On machines nerv01, nerv02 and nerv11, verify that the disk devices have the correct ownership.
# ls -lh /dev/oracleasm/disks/
brw-rw----. 1 oracle oinstall 8, 17 Mar 3 08:40 DISK00
brw-rw----. 1 oracle oinstall 8, 33 Mar 3 08:40 DISK01
...
Lab 14.5: RAC Extended
Lab 14.6: RAC Extended
On machine nerv11, add the THREE lines below to the file /etc/fstab.
nerv09:/shared_config01 /u01/shared_config01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0
nerv09:/shared_config02 /u01/shared_config02 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0
nerv09:/shared_config03 /u01/shared_config03 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0,noac 0 0
On machine nerv11, run:
# mount /u01/shared_config01
# mount /u01/shared_config02
# mount /u01/shared_config03
On machines nerv01, nerv02 and nerv11, remove the directory /home/oracle/.ssh.
$ rm -rf .ssh
On machine nerv01, configure passwordless SSH.
[oracle@nerv01 ~]$ ssh-keygen -t rsa <enter>
<enter> <enter>
[oracle@nerv01 ~]$ ssh oracle@nerv02 mkdir -p .ssh
[oracle@nerv01 ~]$ ssh oracle@nerv11 mkdir -p .ssh
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv01 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'
Lab 14.7: RAC Extended
On machine nerv02, configure passwordless SSH.
[oracle@nerv02 ~]$ ssh-keygen -t rsa <enter>
<enter> <enter>
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv02 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'
Lab 14.8: RAC Extended
On machine nerv11, configure passwordless SSH.
[oracle@nerv11 ~]$ ssh-keygen -t rsa <enter>
<enter> <enter>
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv01 'cat >> .ssh/authorized_keys'
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv02 'cat >> .ssh/authorized_keys'
[oracle@nerv11 ~]$ cat .ssh/id_rsa.pub | ssh oracle@nerv11 'cat >> .ssh/authorized_keys'
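The per-node key copies in these labs can also be done with ssh-copy-id, which appends the public key to the remote authorized_keys for you. A sketch of the loop each node would run (the helper name is ours; the node list is the lab's):

```shell
# Sketch: print the ssh-copy-id command for each cluster node; piping
# id_rsa.pub through ssh as in the lab achieves the same result.
gen_key_copy_cmds() {
  for node in nerv01 nerv02 nerv11; do
    echo "ssh-copy-id oracle@${node}"
  done
}

gen_key_copy_cmds
```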
Lab 14.9: RAC Extended
On machine nerv01, run the Grid installation on machine nerv11.
$ cd $GRID_HOME/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={nerv11}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={nerv11-vip}"
On machine nerv11, as the root user, run the following scripts.
# /u01/app/oraInventory/orainstRoot.sh
# /u01/app/11.2.0.4/grid/root.sh
On machine nerv01, run the Oracle installation on machine nerv11.
$ cd $ORACLE_HOME/oui/bin
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={nerv11}"
On machine nerv11, as the root user, run the script below.
# /u01/app/oracle/product/11.2.0.4/db_1/root.sh
On machine nerv01, add the instance.
$ $GRID_HOME/bin/srvctl add instance -d ORCL -i ORCL3 -n nerv11
Lab 14.10: RAC Extended
On machine nerv01, complete the node addition.
SQL> ALTER SYSTEM SET INSTANCE_NUMBER=3 SID='ORCL3' SCOPE=SPFILE;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3;
SQL> CREATE UNDO TABLESPACE UNDOTBS3;
SQL> ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS3 SID='ORCL3' SCOPE=SPFILE;
$ $GRID_HOME/bin/srvctl start instance -d ORCL -i ORCL3
Lab 14.11: RAC Extended
On machine nerv01, prepare for the creation of the new FAILGROUPs.
$ export ORACLE_HOME=$ORACLE_BASE/product/11.2.0.4/db_1
$ export ORACLE_SID=ORCL1
RMAN> DELETE NOPROMPT ARCHIVELOG ALL;
$GRID_HOME/bin/srvctl stop database -d ORCL
$ export ORACLE_HOME=$GRID_HOME
$ export ORACLE_SID=+ASM1
SQL> ALTER SYSTEM SET ASM_POWER_LIMIT = 11;
SQL> ALTER DISKGROUP DATA REBALANCE POWER 11;
SQL> ALTER DISKGROUP FRA REBALANCE POWER 11;
Lab 14.12: RAC Extended
On machine nerv01, create the new FAILGROUPs.
SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPB DISK '/dev/oracleasm/disks/DISK04' NAME DATA_0004;
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK '/dev/oracleasm/disks/DISK05' NAME DATA_0005;
SQL> ALTER DISKGROUP DATA DROP DISK DATA_0000;
SQL> ALTER DISKGROUP DATA DROP DISK DATA_0001;
SQL> SELECT * FROM V$ASM_OPERATION;
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK '/dev/oracleasm/disks/DISK00' NAME DATA_0000;
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPA DISK '/dev/oracleasm/disks/DISK01' NAME DATA_0001;
SQL> ALTER DISKGROUP DATA DROP DISK DATA_0005;
SQL> ALTER DISKGROUP DATA ADD FAILGROUP FAILGROUPB DISK '/dev/oracleasm/disks/DISK05' NAME DATA_0005;
SQL> SELECT GROUP_NUMBER, PATH, NAME, FAILGROUP FROM V$ASM_DISK ORDER BY 1,2;