
Posts from the ‘12c’ Category

How to Perform Cross-Platform Database Transport to a Different Endian Platform with RMAN Backup Sets

GOAL

This document explains the new 12c feature: cross-platform data transport using RMAN backup sets across different endian platforms.

RMAN can transport databases, data files, and tablespaces across platforms using backup sets. Performing cross-platform data transport with backup sets enables you to use block compression to reduce the size of backups. This improves backup performance and reduces the time taken to transport backups over the network.


A cross-platform backup is an RMAN backup that can be restored on a destination platform that is different from the source platform.

SOLUTION

For the purposes of this document, the following fictitious environment is used as an example to describe the procedure:

Source database: AIX-Based Systems (64-bit) (Big Endian format)
Destination database: Linux x86 64-bit (Little Endian format)
User RMAN_TEST is granted the SYSBACKUP privilege in both the source and destination databases
The tablespace being transported is TEST1
The source database name is SOURCE
The destination database name is DEST

*******************


Prerequisites

  • To perform cross-platform data transport using backup sets, the version of the destination database must be Oracle Database 12c Release 1 (12.1) or later.
  • The COMPATIBLE parameter in the server parameter file of the source database and the destination database must be set to 12.0.0 or higher.
  • The tablespaces to be transported are in read-only mode, unless the ALLOW INCONSISTENT clause is used in the BACKUP command.
  • The tablespaces to be transported are self-contained.
  • Execute the DBMS_TTS.TRANSPORT_SET_CHECK procedure to check for dependencies. If the TRANSPORT_SET_VIOLATIONS view contains rows corresponding to the specified tablespaces, then you must resolve the dependencies before creating the cross-platform backup. 
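As a sketch of that last check (TEST1 is the example tablespace used in this note; run as a user with EXECUTE on DBMS_TTS):

```sql
-- Populate TRANSPORT_SET_VIOLATIONS for the tablespace set (TRUE = include constraints)
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('TEST1', TRUE);

-- No rows returned means the tablespace set is self-contained
SELECT * FROM TRANSPORT_SET_VIOLATIONS;
```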

Terms you need to be aware of

Foreign Data File

Data files that do not belong to the destination database are called foreign data files. These data files are being plugged in to the destination database as part of a data transfer to the destination database. In the source database, this data file is identified by its original data file number.

Foreign Tablespace

A foreign tablespace is a set of foreign data files that comprise a tablespace in the source database. These foreign data files do not belong to the destination database, but are being transported into the destination database and are identified by the original tablespace name in the source database.

Foreign Data File Copy

A foreign data file copy is a data file that was restored from a cross-platform backup. It cannot be directly plugged in to the destination database because it is inconsistent. You must apply a cross-platform incremental backup to this data file and recover it before you can plug it in to the destination database.

Data Pump Destination

A Data Pump destination is a location on the disk of the server host of the destination database on which the Data Pump export dump file and the Data Pump log files are stored.

NOTE: If a tablespace that is part of the TTS/migration contains an object owned by a schema that does not exist in the destination database, the following error may occur:

ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'DMSTEAM' does not exist
Failing sql is:GRANT DELETE ON "ADVTVS"."FT_ENQUIRY_PLAN" TO "DMSTEAM"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'DMSTEAM' does not exist

To avoid this issue, identify all the schema(s) on the source and create them on the destination server, if necessary, before executing the plugin process. The following query may be of help:

select distinct OWNER from dba_segments where tablespace_name in ('<tablespace name which is part of TTS/migration>');
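If a required schema is missing on the destination (DMSTEAM from the error above, for example), it can be created before the plug-in. The statements below are an illustrative sketch; the password placeholder and the minimal grant are assumptions, not values from this note:

```sql
-- Illustrative: create the missing grantee schema on the destination
CREATE USER DMSTEAM IDENTIFIED BY <password>;
GRANT CREATE SESSION TO DMSTEAM;
```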

Steps to Transport Read-Only Tablespaces to a Different Platform Using Backup Sets

Step 1: Connect to the source database (AIX) from which you need to transport tablespaces as TARGET.

$ rman

Recovery Manager: Release 12.1.0.2.0 – Production on Thu May 28 14:38:06 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target "rman_test@source AS SYSBACKUP";

Enter the password for the RMAN_TEST user in source when prompted.

Step 2:  Place the tablespaces to be transported in read-only mode.

RMAN> alter tablespace test1 read only;

using target database control file instead of recovery catalog
Statement processed

Step 3:  Back up the tablespace on the source database using the BACKUP command with the TO PLATFORM or FOR TRANSPORT clause.

Use the DATAPUMP clause to indicate that an export dump file for the tablespaces must be created:

RMAN> BACKUP
TO PLATFORM 'Linux x86 64-bit'
FORMAT '/tmp/xplat_backups/trans_ts.bck'
DATAPUMP FORMAT '/tmp/xplat_backups/trans_ts_dmp.bck'
TABLESPACE test1;


This command results in the following output:

RMAN>
Starting backup at 22-MAY-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=184 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYSBACKUP"."TRANSPORT_EXP_SOURCE_asbF":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYSBACKUP"."TRANSPORT_EXP_SOURCE_asbF" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYSBACKUP.TRANSPORT_EXP_SOURCE_asbF is:
   EXPDP>   /<oracle_home path>/dbs/backup_tts_SOURCE_93693.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace TEST1:
   EXPDP>   /<path>/oradata/SOURCE/test01.dbf
   EXPDP> Job "SYSBACKUP"."TRANSPORT_EXP_SOURCE_asbF" successfully completed at Fri May 22 11:54:53 2015 elapsed 0 00:02:06
Export completed

channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/<path>/oradata/SOURCE/test01.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-15
channel ORA_DISK_1: finished piece 1 at 22-MAY-15
piece handle=/<path>/oradata/trans_ts.bck tag=TAG20150522T115158 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
input Data Pump dump file=/<oracle_home path>/dbs/backup_tts_SOURCE_93693.dmp
channel ORA_DISK_1: starting piece 1 at 22-MAY-15
channel ORA_DISK_1: finished piece 1 at 22-MAY-15
piece handle=/<path>/oradata/trans_ts_dmp.bck tag=TAG20150522T115158 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 22-MAY-15

When the 'TO PLATFORM' clause is used, conversion to the endian format of the destination database is performed on the source database.

Exit this RMAN session connected to the source.

Please note: for Step 3, the export file is created in $ORACLE_HOME/dbs until the export completes and is then moved to the location specified by DATAPUMP FORMAT.

Ensure this location has enough space.

Enhancement request 25909942 - RMAN'S BACKUP TO PLATFORM SHOULD NOT CREATE FILE IN $ORACLE_HOME/DBS has been logged for this issue so that the file is written directly to the DATAPUMP FORMAT location.
Currently it is still in "Initial review" status.
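Before running the backup, it can help to verify free space where the dump is staged. A generic sketch (the fallback to /tmp is only so the snippet runs outside a database host):

```shell
# Sketch: report free space in $ORACLE_HOME/dbs (where the export dump is staged first)
dbs_dir="${ORACLE_HOME:-/tmp}/dbs"
[ -d "$dbs_dir" ] || dbs_dir=/tmp   # fall back so the sketch runs anywhere
avail_kb=$(df -Pk "$dbs_dir" | awk 'NR==2 {print $4}')
echo "Free space in $dbs_dir: ${avail_kb} KB"
```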

Step 4: Move the backup sets to the destination server

Move the backup sets created by the BACKUP command and the Data Pump export dump file to the destination host. An operating system utility is used to move
the backup sets from the source host to the destination host.
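As one possible OS-level transfer, the copy commands for the two backup pieces could look like the following sketch. The hostname and directories are assumptions chosen to match the example paths used elsewhere in this note, and the commands are printed rather than executed here:

```shell
# Sketch: build scp commands for the two backup pieces (hypothetical host/paths)
dest_host="desthost"                      # assumption: destination server name
src_dir="/tmp/xplat_backups"              # FORMAT / DATAPUMP FORMAT location from Step 3
dest_dir="/u01/app/oracle/oradata/test"   # location used by the RESTORE in Step 6
for f in trans_ts.bck trans_ts_dmp.bck; do
  echo scp "${src_dir}/${f}" "oracle@${dest_host}:${dest_dir}/"
done
```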

Step 5: Connect to destination database (Linux x86 64-bit)

Connect to the destination database, in which the tablespaces must be transported, as TARGET. The destination database is open in read write mode.

In this example, the RMAN_TEST user is granted the SYSBACKUP privilege within the destination database:

$ rman

Recovery Manager: Release 12.1.0.2.0 – Production on Thu May 28 14:38:06 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target "rman_test@dest AS SYSBACKUP";

Enter the password for the RMAN_TEST user in destination when prompted.

Step 6: Restore the tablespace from the backup sets.

Restore the datafiles from the backup sets transported from the source database using the RESTORE command. Use the DUMP FILE clause to import the export dump file containing the tablespace metadata and plug the tablespaces in to the destination database.

In this example, the backup piece and dump file were copied to /u01/app/oracle/oradata/test/.

If db_create_file_dest is set in the destination database, execute:

RMAN> RESTORE
FOREIGN TABLESPACE TEST1 TO NEW
FROM BACKUPSET '/u01/app/oracle/oradata/test/trans_ts.bck'
DUMP FILE FROM BACKUPSET '/u01/app/oracle/oradata/test/trans_ts_dmp.bck';

NOTE: The use of "TO NEW" restores the datafiles to the db_create_file_dest location.

If using multiple backup sets, specify each like:

RMAN> RESTORE
ALL FOREIGN DATAFILES TO NEW
FROM BACKUPSET '/mnt/staging_clt_target/tts/all_3tbs_uhrdm8f5_1_1.bck' BACKUPSET '/mnt/staging_clt_target/tts/all_3tbs_uirdm8f6_1_1.bck'
DUMP FILE FROM BACKUPSET '/mnt/staging_clt_target/tts/all_3tbs_uhrdm8f5_1_1.bck';

If using multiple backup pieces for each backup set, specify the backup piece names within each backup set like:

RMAN> RESTORE
ALL FOREIGN DATAFILES TO NEW
FROM BACKUPSET '/mnt/staging_clt_target/tts/RTODP-a0r8utgu_1_1','/mnt/staging_clt_target/tts/RTODP-a0r8utgu_2_1'
BACKUPSET '/mnt/staging_clt_target/tts/RTODP-a1r8utgu_1_1','/mnt/staging_clt_target/tts/RTODP-a1r8utgu_2_1'
DUMP FILE FROM BACKUPSET '/mnt/staging_clt_target/tts/RTODP_metadata_1.bck';

If db_create_file_dest is not set, you can specify the location of the datafiles using:

RMAN> RESTORE
FOREIGN TABLESPACE TEST1 FORMAT '/u01/app/oracle/oradata/%U'
FROM BACKUPSET '/u01/app/oracle/oradata/dest/trans_ts.bck'
DUMP FILE FROM BACKUPSET '/u01/app/oracle/oradata/dest/trmp.bck';

The command results in the following output:

Starting restore at 22-MAY-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace TEST1
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/oradata/dest/trans_ts.bck
channel ORA_DISK_1: restoring foreign file 2 to /u01/app/oracle/oradata/dest/data_D-dest_I-2390668590_TS-TEST1_FNO-2_jvq7is5b
channel ORA_DISK_1: foreign piece handle=/u01/app/oracle/oradata/dest/trans_ts.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/c12101/dbs/backup_tts_dest_31167.dmp
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/oradata/dest/trans_ts_dmp.bck
channel ORA_DISK_1: foreign piece handle=/u01/app/oracle/oradata/dest/trans_ts_dmp.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Performing import of metadata...
IMPDP> Master table "SYSBACKUP"."TSPITR_IMP_dest_jykD" successfully loaded/unloaded
IMPDP> import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
export done in US7ASCII character set and AL16UTF16 NCHAR character set
IMPDP> Starting "SYSBACKUP"."TSPITR_IMP_dest_jykD":
IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
IMPDP> Job "SYSBACKUP"."TSPITR_IMP_dest_jykD" successfully completed at Fri May 22 12:19:08 2015 elapsed 0 00:00:05
Import completed

Finished restore at 22-MAY-15

Exit this RMAN session connected to the destination.

Step 7: Alter the tablespace to read write mode

SQL> select status,tablespace_name from dba_tablespaces;

STATUS    TABLESPACE_NAME
--------- ------------------------------
ONLINE    SYSTEM
ONLINE    SYSAUX
ONLINE    UNDOTBS1
ONLINE    TEMP
ONLINE    USERS
READ ONLY TEST1

SQL> alter tablespace test1 read write;

Tablespace altered.

SQL> select status,tablespace_name from dba_tablespaces;

STATUS    TABLESPACE_NAME
--------- ------------------------------
ONLINE    SYSTEM
ONLINE    SYSAUX
ONLINE    UNDOTBS1
ONLINE    TEMP
ONLINE    USERS
ONLINE    TEST1

Additional information (difference between TO PLATFORM and FOR TRANSPORT)

You can use the BACKUP command to create backup sets for cross-platform data transportation. To indicate that a backup is for cross-platform transportation,
use either the FOR TRANSPORT or TO PLATFORM clauses.

When you use TO PLATFORM, the endian format conversion is performed on the source database. Therefore, you can restore this cross-platform backup only on the specified platform. If you omit TO PLATFORM, you can restore this cross-platform backup on any supported platform. The target platform specified by the TO PLATFORM clause must be a supported platform for cross-platform transport. The V$TRANSPORTABLE_PLATFORM view contains the list of supported platforms.

When you use FOR TRANSPORT, the backup set that is created can be transported to any destination database. If the destination database uses an endian format that is different from that of the source database, then the required endian format conversion is performed on the destination database (this is applicable for tablespace-level transport). You can transport an entire database only if the source and destination use the same endian format. The benefit of this method is that the processing overhead of the conversion operation is offloaded to the destination database.
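For comparison with the TO PLATFORM example in Step 3, the FOR TRANSPORT form of the same tablespace backup might look like the following sketch (the _ft file names are illustrative placeholders); the endian conversion then happens on whichever supported destination restores it:

```sql
RMAN> BACKUP
FOR TRANSPORT
FORMAT '/tmp/xplat_backups/trans_ts_ft.bck'
DATAPUMP FORMAT '/tmp/xplat_backups/trans_ts_ft_dmp.bck'
TABLESPACE test1;
```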

RMAN Cross-Platform Transport of PDB into Destination CDB

Example

For the purposes of this document, the following fictitious environment is used as an example to describe the procedure:

Database names: SRC122, DEST122
Pluggable database name: PDB1
New container database name: DEST_DB

********



Using RMAN, pluggable databases (PDBs) can be transported and plugged into a destination multitenant container database (CDB) that is on a different platform than the source CDB. In addition to backing up the PDB, RMAN also copies the metadata required to plug the PDB into the destination CDB. The source CDB and the destination CDB must use the same endian format.

The steps below show migration of a closed PDB from source platform Solaris (Big Endian) to AIX (Big Endian):

SQL> select name,open_mode,platform_name from v$database;

Source:

NAME      OPEN_MODE     PLATFORM_NAME
--------- ------------- ------------------------
SRC122    READ WRITE    Solaris[tm] OE (64-bit)

Destination:

NAME      OPEN_MODE    PLATFORM_NAME
--------- ------------ ---------------------------
DEST122   READ WRITE   AIX-Based Systems (64-bit)

SOLUTION

1. Close the source PDB:

SQL> alter pluggable database PDB1 close immediate;

2. Back up the source PDB:

RMAN> backup for transport
2> unplug into '/<path>/backup/PDB1_Metadata.xml'
3> format '/<path>/backup/PDB1_BKP_%U'
4> pluggable database PDB1;

3. SCP the backup piece and PDB metadata file to the destination.

4. On the destination, check whether the PDB can be plugged in using dbms_pdb.check_plug_compatibility:

set serveroutput on
declare
c boolean;
begin
c:=dbms_pdb.check_plug_compatibility('/<path>/backup/PDB1_Metadata.xml','PDB1');
if (c) then dbms_output.put_line('True');
else dbms_output.put_line('False');
end if;
end;
/

5. Restore the PDB on the destination:

RMAN> restore using '/<path>/backup/PDB1_Metadata.xml'
2> foreign pluggable database PDB1
3> format '/<path>/oradata/DEST_DB/%U'
4> from backupset '/<path>/backup/PDB1_BKP_02s9sj0u_1_1';

6. Open the pluggable database PDB1 on the destination:

SQL> alter pluggable database PDB1 open;

NOTE: This document covers a PDB cross-platform migration strategy using consistent backups, i.e. the PDB is closed prior to backup. To reduce downtime, migration can also be achieved using inconsistent backups: a PDB level 0 backup is taken using the FOR TRANSPORT and ALLOW INCONSISTENT clauses while the PDB is open in READ WRITE mode. Thereafter, the PDB can be closed and a level 1 backup performed using the FROM SCN clause and the UNPLUG INTO clause (new in 12.2) to take the final level 1 backup and also obtain the PDB metadata.
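A sketch of the shape of that inconsistent-backup variant (12.2 syntax; the file names and the SCN placeholder are illustrative, not commands taken from a tested run):

```sql
-- With PDB1 open READ WRITE: cross-platform level 0 allowing inconsistency
RMAN> backup for transport allow inconsistent
2> incremental level 0
3> format '/<path>/backup/PDB1_L0_%U'
4> pluggable database PDB1;

-- Close PDB1, then take the final level 1 from the level 0 checkpoint SCN,
-- capturing the PDB metadata with UNPLUG INTO (new in 12.2)
RMAN> alter pluggable database PDB1 close immediate;
RMAN> backup for transport
2> incremental from scn <level 0 checkpoint SCN>
3> unplug into '/<path>/backup/PDB1_Metadata.xml'
4> format '/<path>/backup/PDB1_L1_%U'
5> pluggable database PDB1;
```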

Remote_listener Getting Reset To Blank After Cluster Startup

APPLIES TO:

Oracle Database – Enterprise Edition – Version 12.2.0.1 to 19.3.0.0.0 [Release 12.2 to 18]
Information in this document applies to any platform.

SYMPTOMS

Users get the following error while connecting to the database:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor


The remote_listener value got reset to blank after DB node patching on all databases of the cluster, causing the databases to be unreachable from outside; the databases are not getting registered with the SCAN listeners.

crsd_oraagent_oracle.trc inside the <oracle base>/diag/crs/<HOSTNAME>/crs/trace location indicates the following messages:

2019-05-11 19:41:49.764 : USRTHRD:3059689216: {2:49138:41584} LsnrRegThd::setRemoteListener 391 tmpSet endp:<scan name>:<PORT>
2019-05-11 19:41:49.764 : USRTHRD:3059689216: {2:49138:41584} LsnrRegThd::setRemoteListener 440 skip remote listener registration m_runRegRLUpdate:0 isForceRemoteListenerSet:0 isRemoteListenerSet:1
2019-05-11 19:41:49.774 : USRTHRD:3059689216: {2:49138:41584} ORA-01405: fetched column value is NULL

CHANGES

CAUSE

This is due to unpublished Bug 29637215 - FAEOPS: SHELL DB DOESN'T HAVE REMOTE_LISTENER VALUE WITH GRID 19.1.0.3
 

SOLUTION

Bug 29637215 is fixed in a future release. Apply interim fix 29637215, if available for your platform and Oracle version.

If no patch exists for your version, please contact Oracle Support for a backport request.

As a workaround, manually update the remote_listener value every time the instances in this cluster are bounced:

alter system set remote_listener='racscan.com:1521' scope=both sid='*';


 

RMAN Cross Platform Tablespace Transport Over Network

12.1 introduced cross-platform data transport using backup sets. We could perform database/tablespace backups using the FOR TRANSPORT / TO PLATFORM clauses and restore them to a different OS platform. This typically involved three steps: create the backup on the source, copy the backup piece to the destination, and perform the restore on the destination.

Starting from 12.2, we can perform cross platform transport of datafiles directly over the network using the FROM SERVICE clause along with the RESTORE FOREIGN DATAFILE command. This command internally performs a cross platform backup of the requested datafiles on source, the backup-piece chunks are transferred to destination over network and restoration is performed on destination.

SOLUTION

The steps below demonstrate a cross-platform transport of tablespace EXAMPLE from source platform AIX (Big Endian) to destination platform Linux (Little Endian).

1. On source, check the datafiles which belong to tablespace EXAMPLE

SQL> select FILE_ID from DBA_DATA_FILES where TABLESPACE_NAME='EXAMPLE';

FILE_ID
———-
2

2. On destination, perform a cross platform restore of datafiles belonging to source tablespace EXAMPLE using FROM SERVICE clause:

Note: Start RMAN and connect to the destination database (not the source database) as TARGET.

rman target sys/<password>@<destination_tns>

RMAN> restore foreign datafile 2
2> format '/<destination_path>/example01.dbf'
3> from service <source_service_name>;

We can perform the migration using consistent or inconsistent backups. If using a consistent backup, put the tablespace EXAMPLE in READ ONLY mode before running the above restore command. If using inconsistent backups, the tablespace can remain in READ WRITE mode while performing the initial restore.

3. If the above restore was done with the tablespace in READ ONLY mode, then recovery is not required and you can skip to step 4. However, if the restore was done with the tablespace in READ WRITE mode, then recovery must be performed. This can also be done directly over the network. Multiple recovery passes can be performed while the source tablespace is in READ WRITE mode to keep it in sync with the latest changes on the source.

Start RMAN and connect to the destination database (not the source database) as TARGET and execute RMAN recover command.

rman target sys/<password>@<destination_tns>

RMAN> recover foreign datafilecopy '/<destination_path>/example01.dbf'
2> from service <source_service_name>;

Before performing the final recovery, put the source tablespace in READ ONLY mode:

SQL> alter tablespace EXAMPLE read only;

Then perform a final recovery on the destination:

rman target sys/<password>@<destination_tns>

RMAN> recover foreign datafilecopy '/<destination_path>/example01.dbf'
2> from service <source_service_name>;

4. We now have a consistent datafile on the destination, but it is not yet plugged into the destination database. For this, we need to perform a metadata export on the source and a metadata import on the destination. We can combine these two steps using the network_link option of impdp. For this, create a database link in the destination database pointing to the source database:

SQL> create public database link <dblink> connect to system identified by oracle using '<source_service_name>';

Then perform the metadata import (user system is used in this example):

impdp system/<password> network_link=<dblink> transport_tablespaces=EXAMPLE transport_datafiles='/<destination_path>/example01.dbf'

DBCA silent Mode

Drop Database

dbca -silent -deleteDatabase -sourceDB cdb1 -sysDBAUserName sys -sysDBAPassword  abcd
Connecting to database
9% complete
14% complete
19% complete
23% complete
28% complete
33% complete
38% complete
47% complete
Updating network configuration files
48% complete
52% complete
Deleting instances and datafiles
66% complete
80% complete
95% complete
100% complete
Look at the log file "/home/cfgtoollogs/dbca/cdb1.log" for further details.

$ more /home/cfgtoollogs/dbca/cdb1.log
[ 2020-04-17 11:26:57.709 CDT ] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be deleted. Do you
want to proceed?
[ 2020-04-17 11:26:57.737 CDT ] Connecting to database
DBCA_PROGRESS : 9%
DBCA_PROGRESS : 14%
DBCA_PROGRESS : 19%
DBCA_PROGRESS : 23%
DBCA_PROGRESS : 28%
DBCA_PROGRESS : 33%
DBCA_PROGRESS : 38%
DBCA_PROGRESS : 47%
[ 2020-04-17 11:28:00.106 CDT ] Updating network configuration files
DBCA_PROGRESS : 48%
DBCA_PROGRESS : 52%
[ 2020-04-17 11:28:01.007 CDT ] Deleting instances and datafiles
DBCA_PROGRESS : 66%
DBCA_PROGRESS : 80%
DBCA_PROGRESS : 95%
DBCA_PROGRESS : 100%
[ 2020-04-17 11:28:10.797 CDT ] Database deletion completed.

12c How Perform Cross-Platform Database Transport to different Endian Platform with RMAN Backup Sets

GOAL

This Document explains the new 12c feature:  Cross-Platform Data transport Using rman Backup Sets across different Endian Platform.

RMAN can transport databases, data files, and tablespaces across platforms using backup sets. Performing cross-platform data transport with backup sets enables you to use block compression to reduce the size of backups. This improves backup performance and reduces the time taken to transport backups over the network.
A cross-platform backup is an RMAN backup that can be restored on a destination platform that is different from the source platform

SOLUTION

NOTE: In the images and/or the document content below, the user information and environment data used represents fictitious data from the Oracle sample schema(s), Public Documentation delivered with an Oracle database product or other training material. Any similarity to actual environments, actual persons, living or dead, is purely coincidental and not intended in any manner.

For the purposes of this document, the following fictitious environment is used as an example to describe the procedure:

Source database: AIX-Based Systems (64-bit) (Big Endian format)
Destination database: Linux x86 64-bit (Little Endian format)
User RMAN_TEST is granted SYSBACKUP privilege in both source and destination database
Tablespace been transported is TEST1
Source database name is SOURCE
Destination database name is DEST

*******************


Pre-requisite

  • To perform cross-platform data transport using backup sets, the version of the destination database must be Oracle Database 12c Release 1 (12.1) or later.
  • The COMPATIBLE parameter in the server parameter file of the source database and the destination database must be set to 12.0.0 or higher.
  • The tablespaces to be transported are in read-only mode, unless the ALLOW INCONSISTENT clause is used in the BACKUP command.
  • The tablespaces to be transported are self-contained.
  • Execute the DBMS_TTS.TRANSPORT_SET_CHECK procedure to check for dependencies. If the TRANSPORT_SET_VIOLATIONS view contains rows corresponding to the specified tablespaces, then you must resolve the dependencies before creating the cross-platform backup.

Terms you need to be aware of

Foreign Data File

Data files that do not belong to the destination database are called foreign data files. These data files are being plugged in to the destination database as part of a data transfer to the destination database. In the source database, this data file is identified by its original data file number.

Foreign Tablespace

A foreign tablespace is a set of foreign data files that comprise a tablespace in the source database. These foreign data files do not belong to the destination database, but are being transported into the destination database and are identified by the original tablespace name in the source database.

Foreign Data File Copy

A foreign data file copy is a data file that was restored from a cross-platform backup. It cannot be directly plugged in to the destination database because it is inconsistent. You must apply a cross-platform incremental backup to this data file and recover it before you can plug it in to the destination database.

Data Pump Destination

A Data Pump destination is a location on the disk of the server host of the destination database on which the Data Pump export dump file and the Data Pump log files are stored.

 

NOTE:  If the tablespace which is part of TTS/migration contains an object owned by a schema which does not exist in the destination database, the following error may occur:

ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role ‘DMSTEAM’ does not exist
Failing sql is:GRANT DELETE ON “ADVTVS”.”FT_ENQUIRY_PLAN” TO “DMSTEAM”
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role ‘DMSTEAM’ does not exist

To avoid this issue, identify all the schema(s) on the source and create them on the destination server, if necessary, before executing the plugin process. The following query may be of help:

select distinct OWNER from dba_segments where tablespace_name in (‘<tablespace name which is part of TTS/migration>’);

Steps to Transport Read-Only Tablespaces to a Different Platform Using Backup Sets

Step 1: Connect to the source database (AIX) from which you need to transport tablespaces as TARGET.

$ rman

Recovery Manager: Release 12.1.0.2.0 – Production on Thu May 28 14:38:06 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

RMAN> Connect target “rman_test@source AS SYSBACKUP”;

Enter the password for the RMAN_TEST user in source when prompted.

Step 2:  Place the tablespaces to be transported in read-only mode.

RMAN> alter tablespace test1 read only;

using target database control file instead of recovery catalog
Statement processed

 

Step 3:  Back up the tablespace on the source database using the BACKUP command with the TO PLATFORM or FOR TRANSPORT clause.

Use the DATAPUMP clause to indicate that an export dump file for the tablespaces must be created

RMAN> BACKUP
TO PLATFORM ‘Linux x86 64-bit’
FORMAT ‘/tmp/xplat_backups/trans_ts.bck’
DATAPUMP FORMAT ‘/tmp/xplat_backups/trans_ts_dmp.bck’
TABLESPACE test1;

This command results in the following output:

RMAN>
Starting backup at 22-MAY-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=184 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces…
EXPDP> Starting “SYSBACKUP”.”TRANSPORT_EXP_SOURCE_asbF”:
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table “SYSBACKUP”.”TRANSPORT_EXP_SOURCE_asbF” successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYSBACKUP.TRANSPORT_EXP_SOURCE_asbF is:
EXPDP>   /<oracle_home path>/dbs/backup_tts_SOURCE_93693.dmp
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace TEST1:
EXPDP>   /<path>/oradata/SOURCE/test01.dbf
EXPDP> Job “SYSBACKUP”.”TRANSPORT_EXP_SOURCE_asbF” successfully completed at Fri May 22 11:54:53 2015 elapsed 0 00:02:06
Export completed

channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/<path>/oradata/SOURCE/test01.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-15
channel ORA_DISK_1: finished piece 1 at 22-MAY-15
piece handle=/<path>/oradata/trans_ts.bck tag=TAG20150522T115158 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
input Data Pump dump file=/<oracle_home path>/dbs/backup_tts_SOURCE_93693.dmp
channel ORA_DISK_1: starting piece 1 at 22-MAY-15
channel ORA_DISK_1: finished piece 1 at 22-MAY-15
piece handle=/<path>/oradata/trans_ts_dmp.bck tag=TAG20150522T115158 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 22-MAY-15

When the 'TO PLATFORM' clause is used, conversion to the endian format of the destination database is performed on the source database.

Exit this Rman session connected to the source.

Please note :- For Step 3 the export file is created in $ORACLE_HOME/dbs until it completes and then it is moved to the location specified by DATAPUMP FORMAT.

Ensure this location has enough space.

Enhancement request 25909942 – RMAN’S BACKUP TO PLATFORM SHOULD NOT CREATE FILE IN $ORACLE_HOME/DBS has been logged for this issue so that file is written directly to DATAPUMP FORMAT location.
Currently its still in Initial review status

Step 4: Move the backupsets to Destination server


Move the backup sets created by the BACKUP command and the Data Pump export dump file to the destination host. An operating system utility is used to move
the backup sets from the source host to the destination host.

Step 5: Connect to destination database (Linux x86 64-bit)

Connect to the destination database, in which the tablespaces must be transported, as TARGET. The destination database is open in read write mode.

In this example, a TEST1 user is granted the SYSBACKUP privilege within the destination database:

$ rman

Recovery Manager: Release 12.1.0.2.0 - Production on Thu May 28 14:38:06 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target "rman_test@dest AS SYSBACKUP";

Enter the password for the RMAN_TEST user in destination when prompted.

 

Step 6: Restore the tablespace from the backup sets

Restore the datafiles from the backup sets transported from the source database using the RESTORE command. Use the DUMP FILE clause to import the export dump file containing the tablespace metadata and plug the tablespaces into the destination database.

In this example the backup piece and dump file were copied to /u01/app/oracle/oradata/test/.

If db_create_file_dest is set in the destination database, execute:

RMAN> RESTORE
FOREIGN TABLESPACE TEST1 to NEW
FROM BACKUPSET '/u01/app/oracle/oradata/test/trans_ts.bck'
DUMP FILE FROM BACKUPSET '/u01/app/oracle/oradata/test/trans_ts_dmp.bck';

NOTE: The use of "to NEW" restores the datafiles to the location specified by db_create_file_dest.

If using multiple backup sets, specify each of them:

RMAN> RESTORE
ALL FOREIGN DATAFILES TO NEW
FROM BACKUPSET '/mnt/staging_clt_target/tts/all_3tbs_uhrdm8f5_1_1.bck' BACKUPSET '/mnt/staging_clt_target/tts/all_3tbs_uirdm8f6_1_1.bck'
DUMP FILE FROM BACKUPSET '/mnt/staging_clt_target/tts/all_3tbs_uhrdm8f5_1_1.bck';

If using multiple backup pieces for each backup set, specify the backup piece names within each backup set, for example:

RMAN> RESTORE
ALL FOREIGN DATAFILES TO NEW
FROM BACKUPSET '/mnt/staging_clt_target/tts/RTODP-a0r8utgu_1_1','/mnt/staging_clt_target/tts/RTODP-a0r8utgu_2_1',
BACKUPSET '/mnt/staging_clt_target/tts/RTODP-a1r8utgu_1_1','/mnt/staging_clt_target/tts/RTODP-a1r8utgu_2_1'
DUMP FILE FROM BACKUPSET '/mnt/staging_clt_target/tts/RTODP_metadata_1.bck';

If db_create_file_dest is not set, you can specify the location of the datafiles using the FORMAT clause:

RMAN> RESTORE
FOREIGN TABLESPACE TEST1 FORMAT '/u01/app/oracle/oradata/%U'
FROM BACKUPSET '/u01/app/oracle/oradata/dest/trans_ts.bck'
DUMP FILE FROM BACKUPSET '/u01/app/oracle/oradata/dest/trmp.bck';

The command results in the following output:

Starting restore at 22-MAY-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace TEST1
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/oradata/dest/trans_ts.bck
channel ORA_DISK_1: restoring foreign file 2 to /u01/app/oracle/oradata/dest/data_D-dest_I-2390668590_TS-TEST1_FNO-2_jvq7is5b
channel ORA_DISK_1: foreign piece handle=/u01/app/oracle/oradata/dest/trans_ts.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/c12101/dbs/backup_tts_dest_31167.dmp
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/oradata/dest/trans_ts_dmp.bck
channel ORA_DISK_1: foreign piece handle=/u01/app/oracle/oradata/dest/trans_ts_dmp.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Performing import of metadata…
IMPDP> Master table "SYSBACKUP"."TSPITR_IMP_dest_jykD" successfully loaded/unloaded
IMPDP> import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
export done in US7ASCII character set and AL16UTF16 NCHAR character set
IMPDP> Starting "SYSBACKUP"."TSPITR_IMP_dest_jykD":
IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
IMPDP> Job "SYSBACKUP"."TSPITR_IMP_dest_jykD" successfully completed at Fri May 22 12:19:08 2015 elapsed 0 00:00:05
Import completed

Finished restore at 22-MAY-15

Exit this RMAN session connected to the destination.

Step 7: Alter the tablespace to read write

SQL> select status,tablespace_name from dba_tablespaces;

STATUS    TABLESPACE_NAME
--------- ------------------------------
ONLINE SYSTEM
ONLINE SYSAUX
ONLINE UNDOTBS1
ONLINE TEMP
ONLINE USERS
READ ONLY TEST1

SQL> alter tablespace test1 read write;

Tablespace altered.

SQL> select status,tablespace_name from dba_tablespaces;

STATUS    TABLESPACE_NAME
--------- ------------------------------
ONLINE SYSTEM
ONLINE SYSAUX
ONLINE UNDOTBS1
ONLINE TEMP
ONLINE USERS
ONLINE TEST1

 

Additional information (Difference between TO PLATFORM and FOR Transport)

You can use the BACKUP command to create backup sets for cross-platform data transport. To indicate that a backup is for cross-platform transport, use either the FOR TRANSPORT or the TO PLATFORM clause.

When you use TO PLATFORM, the endian format conversion is performed on the source database. Therefore, you can restore this cross-platform backup only on the specified platform. If you omit TO PLATFORM, you can restore the cross-platform backup on any supported platform. The target platform specified by the TO PLATFORM clause must be a supported platform for cross-platform transport; the V$TRANSPORTABLE_PLATFORM view contains the list of supported platforms.

When you use FOR TRANSPORT, the backup set that is created can be transported to any destination database. If the destination database uses an endian format different from that of the source database, the required endian format conversion is performed on the destination database (this applies to tablespace-level transport; you can transport an entire database only if the source and destination use the same endian format). The benefit of this method is that the processing overhead of the conversion operation is offloaded to the destination database.
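To make the distinction concrete, here is a sketch of both forms of the source-side BACKUP command. The paths, tablespace name, and platform string follow the example in this note and are illustrative; verify the exact platform string against V$TRANSPORTABLE_PLATFORM on your system:

```
# Conversion on the SOURCE: restorable only on the named platform.
BACKUP TO PLATFORM 'Linux x86 64-bit'
  FORMAT '/<path>/oradata/trans_ts.bck'
  DATAPUMP FORMAT '/<path>/oradata/trans_ts_dmp.bck'
  TABLESPACE test1;

# Conversion deferred to the DESTINATION: restorable on any supported platform.
BACKUP FOR TRANSPORT
  FORMAT '/<path>/oradata/trans_ts.bck'
  DATAPUMP FORMAT '/<path>/oradata/trans_ts_dmp.bck'
  TABLESPACE test1;
```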

Add a Single-Instance (Non-RAC) Database to SRVCTL / Cluster

srvctl add database -d singledbc -o /opt/oracle/ora12201 -dbtype single -node node1server.com
srvctl add service -d singledbc -pdb singledb -s singledbsvc
srvctl start service -d singledbc -s singledbsvc

srvctl stop database -d singledbc -o immediate
srvctl start database -d singledbc

alter system set remote_listener='scanlistener:1521';
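After registering the database and service, you can confirm what was stored in the OCR and check the current state. A quick sketch using standard srvctl verbs (names follow the example above):

```
srvctl config database -d singledbc
srvctl status database -d singledbc
srvctl status service -d singledbc -s singledbsvc
```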

Deprecated view V$XS_SESSION_ROLE still exists after migration

SYMPTOMS

View V$XS_SESSION_ROLE still exists after migration from 11g to 12c and it is INVALID.

CAUSE

This view is not required in 12c and should have been dropped during the upgrade from 11g to 12c.

SOLUTION

A. To avoid this issue, please apply patch 22393016 as below:

1. Take a COLD database backup.

2. Apply patch 22393016 as below:

– install 12c software
– apply patch 22393016, which will provide the correct c1201000.sql
– upgrade database, this will use the corrected post-install scripts.

B. If the migration to 12c has already been done and the view is present in the database, you have the following options:

Option 1. Drop the view using the workaround below:

1. Take a COLD database backup.

2. Apply patch 22393016 as below:

– apply patch 22393016, which will provide the correct c1201000.sql
– shutdown database
– startup migrate
– @c1102000.sql
– shutdown database
– startup

Note: Do not run c1102000.sql on 12c database without applying patch 22393016 first. This patch provides the correct c1201000.sql.

If the patch is not applied before running @c1102000.sql, you will have invalid objects in the database. If this happens, run catalog.sql, catproc.sql, and utlrp.sql as follows to revalidate the invalid objects:

NOTE: the database must be started in upgrade mode

connect / as sysdba
startup upgrade
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
@?/rdbms/admin/utlrp.sql
shutdown immediate
startup

Option 2. You can drop the view by following below steps:

1. Take a COLD database backup.

2. Delete the object from the data dictionary:

connect / as sysdba

delete from sys.objauth$
where obj# IN (SELECT obj# FROM obj$ WHERE subname = '$DEPRECATED$');

Please do not skip step 1, as you are dropping from the database dictionary.

Option 3. You can ignore the view, as it is not used in 12c.

Option 4. You can manually change the status in the Data Dictionary

-- check in both the CDB and the PDBs

connect / as sysdba

update obj$ set status = 1 where name = 'V$XS_SESSION_ROLE' and subname = '$DEPRECATED$';
commit;

select * from dba_invalid_objects;

ORA-16698: member has a LOG_ARCHIVE_DEST_n parameter with SERVICE attribute set

Problem:

The Data Guard broker configuration throws an error.

On the primary:

DGMGRL> ADD DATABASE stby AS CONNECT IDENTIFIER IS stby MAINTAINED AS PHYSICAL;
Error: ORA-16698: member has a LOG_ARCHIVE_DEST_n parameter with SERVICE attribute set

Failed.

Solution:

Save the old value of log_archive_dest_2, then reset it on both the primary and the standby:

alter system set log_archive_dest_2='' scope=both sid='*';

## Add database again in dgmgrl

DGMGRL> ADD DATABASE stby AS CONNECT IDENTIFIER IS stby MAINTAINED AS PHYSICAL;

## put the old value back in log_archive_dest_2 on both primary and standby

#on primary

alter system set log_archive_dest_2='SERVICE=stby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby REOPEN=60' scope=both sid='*';

#on standby

alter system set log_archive_dest_2='SERVICE=prmy LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prmy REOPEN=60' scope=both sid='*';

The instance prefix is invalid. Give a prefix with maximum 8 characters

Problem:

Converting a single-instance database to RAC One Node using rconfig fails with the following error:

-bash-4.2$ rconfig ConvertToRAC_AdminManaged_orcl.xml output_orcl.xml
<?xml version="1.0" ?>
<RConfig>
<Response>
<Result code="1" >
Got Exception
</Result>
<ErrorDetails>
The instance prefix is invalid. Give a prefix with maximum 8 characters. It may be different from SID of existing single-instance database.
</ErrorDetails>
</Response>
</RConfig>

 

Solution:

Provide an instance prefix of no more than 8 characters in the rconfig XML file:

<!--Instance Prefix tag is optional starting with 11.2. If left empty, it is derived from db_unique_name.-->
<n:InstancePrefix>orcl</n:InstancePrefix>

and run rconfig again:

rconfig ConvertToRAC_AdminManaged_orcl.xml output_orcl.xml