Tuesday, January 17, 2017

ASMCMD cp Fails on CIFS

Copying a file to a CIFS location using asmcmd cp fails as follows:
ASMCMD> cp annnf0_ARCHIVE_LOG_0.382.923545643 /mnt/dbbackup
copying +flash/livedb/backupset/2016_09_26/annnf0_ARCHIVE_LOG_0.382.923545643 -> /mnt/dbbackup/annnf0_ARCHIVE_LOG_0.382.923545643
ASMCMD-08016: copy source->'+flash/livedb/backupset/2016_09_26/annnf0_ARCHIVE_LOG_0.382.923545643' and target->'/mnt/dbbackup/annnf0_ARCHIVE_LOG_0.382.923545643' failed
ORA-19510: failed to set size of 173851 blocks for file "/mnt/dbbackup/annnf0_ARCHIVE_LOG_0.382.923545643" (block size=512)
ORA-27045: unable to close the file
Linux-x86_64 Error: 5: Input/output error
Additional information: 4
ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 258
ORA-06512: at line 3 (DBD ERROR: OCIStmtExecute)
However, if the destination is examined, the file is there with the same size as the source. Its integrity cannot be guaranteed though, as the cp command has failed. The error message says "unable to close the file", so it could be that the file was copied but the file descriptor could not be closed. Copying to other file systems (ext3) worked fine:
ASMCMD> cp annnf0_ARCHIVE_LOG_0.382.923545643 /home/oracle
copying +flash/livedb/backupset/2016_09_26/annnf0_ARCHIVE_LOG_0.382.923545643 -> /home/oracle/annnf0_ARCHIVE_LOG_0.382.923545643
So the issue was assumed to be with the file system rather than asmcmd.
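Before trusting the file left on the CIFS mount, its checksum can be compared against a copy known to be good (the one copied to ext3, for example). A minimal sketch using md5sum; the function name and the paths in the usage comment are illustrative, not part of any Oracle tooling:

```shell
# Compare md5 checksums of two copies of a file.
# Prints "checksums match" or "checksums differ" and sets the exit status.
verify_copy() {
    sum1=$(md5sum "$1" | awk '{print $1}')
    sum2=$(md5sum "$2" | awk '{print $1}')
    if [ "$sum1" = "$sum2" ]; then
        echo "checksums match"
    else
        echo "checksums differ"
        return 1
    fi
}

# Example (illustrative paths):
# verify_copy /home/oracle/annnf0_ARCHIVE_LOG_0.382.923545643 \
#             /mnt/dbbackup/annnf0_ARCHIVE_LOG_0.382.923545643
```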
Trying to create a backup on the CIFS location also failed:
RMAN> backup current controlfile format '/mnt/dbbackup/controlc.tl';

Starting backup at 28-Sep-2016 09:14:12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1029 instance=livedb2 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 28-Sep-2016 09:14:15
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ORA_DISK_1 channel at 09/28/2016 09:14:16
ORA-19504: failed to create file "/mnt/dbbackup/controlc.tl"
ORA-27040: file create error, unable to create file
Linux-x86_64 Error: 22: Invalid argument


Looking in MOS, several notes were found which describe limitations of using certain Oracle utilities with CIFS. For backups it was suggested to change filesystemio_options to asynch or, failing that, to mount the CIFS location with file system type cifs and the oracle user's user id and group id:
mount -t cifs -o username=oracle,uid=oracle,gid=oinstall //<cifs location> /<mount point>
The second solution had already been implemented, and asmcmd cp and RMAN backup still failed. However, after changing filesystemio_options to asynch, RMAN backups succeeded but asmcmd cp continued to fail. It seems asmcmd is another utility that cannot be used with CIFS (disclaimer: no official Oracle document could be found to verify this though).
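The parameter change that allowed the RMAN backups to succeed could be applied as follows (a sketch; filesystemio_options is a static parameter, so the instances must be restarted for it to take effect):

```sql
-- Set on all RAC instances; takes effect after an instance restart.
ALTER SYSTEM SET filesystemio_options = ASYNCH SCOPE=SPFILE SID='*';
-- After the restart, verify the new value.
SHOW PARAMETER filesystemio_options
```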
Besides asmcmd, Data Pump is also not supported on CIFS (793805.1). CIFS could be used for RMAN backups, but it is not a certified file system, which means if there's any issue when using RMAN with CIFS there won't be any Oracle support to resolve it (444809.1).

Useful Metalink Notes
Data Pump Fails With Errors ORA-39097 ORA-39065 ORA-31643 ORA-19510 Using CIFS [793805.1]
Is use of CIFS Protocol for RMAN backups supported? [444809.1]
'cp' Fails with 'cp: cannot create symbolic link' When Copying Files to CIFS Filesystem [728223.1]
ORA-19504 ORA-27040 During Rman Backup To A Mounted Windows Directory In Linux [464267.1]
Which are the mount settings for SAMBA? [557057.1]
Product Support Document for Common Internet File System (CIFS) [1398067.1]

How to mount Windows share on Red Hat Enterprise Linux system using CIFS?

Thursday, December 1, 2016

ORA-39127: unexpected error from call to "SYS"."DBMS_JVM_EXP_PERMS"."GRANT_SYSPRIVS_EXP"

While doing an export (expdp) the following error was encountered:
Processing object type DATABASE_EXPORT/ROLE
ORA-39127: unexpected error from call to "SYS"."DBMS_JVM_EXP_PERMS"."GRANT_SYSPRIVS_EXP"
ORA-44002: invalid object name
ORA-06512: at "SYS.DBMS_ASSERT", line 383
ORA-06512: at "SYS.DBMS_METADATA", line 9408
The reason for this is that the Java components have been removed but the DBMS_JVM_EXP_PERMS package remains in the database.
SQL> select comp_name,status from dba_registry order by 2;

COMP_NAME                                STATUS
---------------------------------------- --------
Oracle Database Java Packages            REMOVED
JServer JAVA Virtual Machine             REMOVED
Oracle XDK                               REMOVED
Oracle Real Application Clusters         VALID
Oracle Database Catalog Views            VALID
Oracle Workspace Manager                 VALID
Oracle XML Database                      VALID
Oracle Database Packages and Types       VALID
To fix this, remove the DBMS_JVM_EXP_PERMS package from the database; refer to 1095533.1 for the exact steps. Removing this package doesn't make any changes to component status, and the Java components will still be shown as removed. However, the package left behind during the removal process is cleaned up.
Afterwards expdp continued without any issue.
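The cleanup described in the note essentially amounts to dropping the leftover package as SYS; a sketch is below, but refer to 1095533.1 for the exact steps as they may differ between versions.

```sql
-- Connect as SYS and drop the package left behind by the Java removal.
DROP PACKAGE DBMS_JVM_EXP_PERMS;
```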



Useful Metalink Note
DataPump Export (EXPDP) Failed On Identifier SYS.DBMS_JVM_EXP_PERMS Must Be Declared [ID 1095533.1]

Related Posts
ORA-39127: unexpected error from call to export_string :=WMSYS.LT_EXPORT_PKG.SCHEMA_INFO_EXP
ORA-39127: unexpected error from call to export_string :=SYS.DBMS_CUBE_EXP.SCHEMA_INFO_EXP while Exporting

Tuesday, November 1, 2016

Plugging a Non-CDB (pre-12c) into CDB Using Transportable DB (TDB) / Tablespaces (TTS)

In a previous post it was shown how a 12c non-CDB could be plugged in as a PDB. This post shows the steps for plugging a pre-12c DB (anything pre-12c is by default non-CDB) in as a PDB using the transportable DB (TDB) and transportable tablespaces (TTS) methods. Both methods have similar steps, the only difference being that in TDB the entire DB is transported while in TTS only a set of tablespaces is plugged in. In this case an 11.2.0.4 enterprise edition RAC DB (called ent114) will be plugged into a single instance CDB using the TDB/TTS methods.

Both methods require creating a PDB before the transporting can begin. The tablespaces are attached to this "pre-existing" PDB, in this case called PDB114.
SQL> create pluggable database pdb114 admin user admin identified by admin;
SQL> alter pluggable database pdb114 open;
The following information and steps are common to both methods. The 11.2 database has the following tablespaces; the user defined tablespaces are in bold.
SQL> select TABLESPACE_NAME,BLOCK_SIZE from dba_tablespaces order by 1;

TABLESPACE_NAME                BLOCK_SIZE
------------------------------ ----------
EXAMPLE                              8192
NEWTBS                               8192
SQLT                                 8192
SYSAUX                               8192
SYSTEM                               8192
TBS2K                                2048
TBS32K                              32768
TEMP                                 8192
TEST                                 8192
UNDOTBS1                             8192
UNDOTBS2                             8192
USERS                                8192
There are two tablespaces with non-default block sizes. Before the transport, create the corresponding non-default block size buffer caches in the CDB as well.
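A sketch of creating the non-default block size buffer caches in the CDB; the sizes shown are arbitrary examples:

```sql
-- Needed before data files with 2K and 32K block sizes can be plugged in.
ALTER SYSTEM SET db_2k_cache_size  = 64M;
ALTER SYSTEM SET db_32k_cache_size = 64M;
```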
Secondly, check whether the endianness of the source and target systems is the same. If they are different, conversion is required before the transported data files can be attached.
SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;  

PLATFORM_NAME        ENDIAN_FORMAT
-------------------- --------------
Linux x86 64-bit     Little
This is where the steps common to both TDB and TTS end. Next are the steps specific to the TDB method.

Plugging pre-12c DB using TDB
Put the user defined tablespaces into read only mode.
alter tablespace EXAMPLE read only;
alter tablespace NEWTBS read only;
alter tablespace SQLT read only;
alter tablespace TBS2K read only;
alter tablespace TBS32K read only;
alter tablespace TEST read only;
alter tablespace USERS read only;

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
UNDOTBS2                       ONLINE
USERS                          READ ONLY
EXAMPLE                        READ ONLY
TEST                           READ ONLY
SQLT                           READ ONLY
NEWTBS                         READ ONLY
TBS2K                          READ ONLY
TBS32K                         READ ONLY
While the user defined tablespaces are in read only mode, run a full export of the DB with the transportable=always and version=12 options. This option is supported on standard edition databases. The following expdp command is used in this case, and the tail end of the export log output is shown as well.
expdp system full=y dumpfile=ent114.dmp directory=EXEC_DIR transportable=always version=12 logfile=ent114.log

******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /usr/local/exdata/ent114.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  +DATA/ent11g2/datafile/example.268.785857285
Datafiles required for transportable tablespace NEWTBS:
  +DATA/ent11g2/datafile/newtbs.273.888232727
Datafiles required for transportable tablespace SQLT:
  +DATA/ent11g2/datafile/sqlt.269.826463789
Datafiles required for transportable tablespace TBS2K:
  +DATA/ent11g2/datafile/tbs2k.272.921767965
Datafiles required for transportable tablespace TBS32K:
  +DATA/ent11g2/datafile/tbs32k.271.921768047
Datafiles required for transportable tablespace TEST:
  +DATA/ent11g2/datafile/test.274.888250337
Datafiles required for transportable tablespace USERS:
  +DATA/ent11g2/datafile/users.264.785694801
Job "SYSTEM"."SYS_EXPORT_FULL_01" completed with 1 error(s) at Mon Sep 5 14:52:08 2016 elapsed 0 00:05:34
The data files listed for transport must be copied out of the ASM disk group. There are many ways to do this; the easiest could be to use asmcmd cp. One example is shown here:
ASMCMD> cp TBS2K.272.921767965 /home/grid/backup/
copying +DATA/ent11g2/datafile/TBS2K.272.921767965 -> /home/grid/backup/TBS2K.272.921767965
Once all the data files required for transport are copied out of ASM, the tablespaces made read only earlier can be made read write again.
alter tablespace EXAMPLE read write;
alter tablespace NEWTBS read write;
alter tablespace SQLT read write;
alter tablespace TBS2K read write;
alter tablespace TBS32K read write;
alter tablespace TEST read write;
alter tablespace USERS read write;
The next step is moving the data files copied out of ASM earlier to the actual location where the PDB resides. In this case the PDB resides on a remote server and the data file location for the PDB is
/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile
Therefore the data files are copied over to that location.
scp * oracle@192.168.0.99:/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/
Create a database directory in the PDB and copy the dump file created by expdp to the location referred to by the DB directory. This will be used during the import. Since no conversion is needed, the endian format being the same, the import can go ahead without any additional work. The parameter file used for the import is shown below. TRANSPORT_DATAFILES has a comma separated list of data files with their new paths.
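Creating the database directory inside the PDB could be sketched as follows (the OS path is a hypothetical example; the directory name matches the DIRECTORY entry in the parameter file):

```sql
-- Run while connected to PDB114; the OS path is illustrative.
CREATE DIRECTORY ora_dump AS '/opt/app/oracle/dump';
GRANT READ, WRITE ON DIRECTORY ora_dump TO system;
```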
cat import.par
FULL=Y
DUMPFILE=ent114.dmp
DIRECTORY=ora_dump
TRANSPORT_DATAFILES=
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/EXAMPLE.268.785857285',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/NEWTBS.273.888232727',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/SQLT.269.826463789',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TBS2K.272.921767965',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TBS32K.271.921768047',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/TEST.274.888250337',
'/opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile/USERS.264.785694801'
LOGFILE=ent114imp.log
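Typing out the quoted, comma separated TRANSPORT_DATAFILES list by hand is error prone. A small sketch that generates it from the contents of the target datafile directory; the function name and the path in the usage comment are illustrative:

```shell
# Print a TRANSPORT_DATAFILES clause listing every file in the given
# directory, one quoted path per line, comma separated.
build_tdf_list() {
    dir="$1"
    first=1
    printf 'TRANSPORT_DATAFILES=\n'
    for f in "$dir"/*; do
        [ "$first" -eq 1 ] || printf ',\n'
        printf "'%s'" "$f"
        first=0
    done
    printf '\n'
}

# Example (illustrative path):
# build_tdf_list /opt/app/oracle/oradata/CXCDB/3BC31BB9D1A967C7E0536300A8C0D124/datafile >> import.par
```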
The DB is imported using the parameter file. Objects that already exist could result in warnings; however, the import completes successfully.
impdp system@pdb114 parfile=import.par
At the end of the import the user defined tablespaces are plugged in and online.
SQL> SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES ONLINE
NEWTBS                         YES ONLINE
SQLT                           YES ONLINE
TBS2K                          YES ONLINE
TBS32K                         YES ONLINE
TEST                           YES ONLINE
USERS                          YES ONLINE
All users in the pre-12c DB are created as local users of the PDB. The default database properties are not affected; for example, in the pre-12c DB the default permanent tablespace was USERS, but after the transport the PDB still retains its original default tablespace.
SQL> SELECT PROPERTY_NAME,PROPERTY_VALUE FROM database_properties where property_name like 'DEFAULT%';

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ----------------
DEFAULT_TEMP_TABLESPACE        TEMP
DEFAULT_PERMANENT_TABLESPACE   SYSTEM
At this stage the PDB is ready for use with pre-12c DB plugged in.



Plugging pre-12c DB using TTS
This method uses transportable tablespaces to copy the user defined tablespaces to the PDB location and plug them into the PDB created earlier. The same user defined tablespaces mentioned in the earlier method are used in this case as well. Before the tablespaces can be transported, verify they are self contained. DBMS_TTS.TRANSPORT_SET_CHECK can be used for this; however, unlike in the previous post, a strict or full containment check is also performed. The Oracle admin guide says "for strict or full containment, you must additionally set the TTS_FULL_CHECK parameter to TRUE. The strict or full containment check is for cases that require capturing not only references going outside the transportable set, but also those coming into the set. Tablespace Point-in-Time Recovery (TSPITR) is one such case where dependent objects must be fully contained or fully outside the transportable set".
SQL> exec DBMS_TTS.TRANSPORT_SET_CHECK('EXAMPLE,NEWTBS,SQLT,TBS2K,TBS32K,TEST,USERS', TRUE,TRUE);
Check for any violations
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;
no rows selected
If there are no violations, proceed with the next steps. Similar to the earlier method, it's important to check that the source and target have the same endian format.
Put the user defined tablespaces into read only mode and run the export. The parameter file content is shown below.
cat tts.par
dumpfile=tts114.dmp
logfile=tts114.log
directory=EXEC_DIR
transport_tablespaces=EXAMPLE,NEWTBS,SQLT,TBS2K,TBS32K,TEST,USERS
transport_full_check=y

expdp system parfile=tts.par
At the end of the export the output log will list the data files for transport.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** parfile=tts.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
  /usr/local/exdata/tts114.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  +DATA/ent11g2/datafile/example.268.785857285
Datafiles required for transportable tablespace NEWTBS:
  +DATA/ent11g2/datafile/newtbs.273.888232727
Datafiles required for transportable tablespace SQLT:
  +DATA/ent11g2/datafile/sqlt.269.826463789
Datafiles required for transportable tablespace TBS2K:
  +DATA/ent11g2/datafile/tbs2k.272.921767965
Datafiles required for transportable tablespace TBS32K:
  +DATA/ent11g2/datafile/tbs32k.271.921768047
Datafiles required for transportable tablespace TEST:
  +DATA/ent11g2/datafile/test.274.888250337
Datafiles required for transportable tablespace USERS:
  +DATA/ent11g2/datafile/users.264.785694801
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Mon Sep 5 16:36:07 2016 elapsed 0 00:01:41
Same as the previous method, copy the data files out of ASM. Once they are copied out, put the tablespaces back into read write mode. Transfer the data files copied out of ASM to the PDB's data file location (same as the earlier method).
On the PDB create a DB directory and transfer the exported dump file to it. Unlike the TDB method, under the TTS method the users must exist in the PDB before the import. If not, the import will fail:
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29342: user ASANGA does not exist in the database
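A sketch of pre-creating a missing user in the PDB; the password and grant are placeholders, and the user set should match the schema owners in the source DB:

```sql
-- Connected to PDB114; creates the schema owner expected by the import.
CREATE USER asanga IDENTIFIED BY "ChangeMe#123";
GRANT CREATE SESSION TO asanga;
```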
Once all required users are created in the PDB, begin the import. The content of the import parameter file is given below. TRANSPORT_DATAFILES reflects the actual file paths of the data files.
cat ttsimp.par
DUMPFILE=tts114.dmp
DIRECTORY=tts_dir
TRANSPORT_DATAFILES=
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/EXAMPLE.268.785857285',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/NEWTBS.273.888232727',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/SQLT.269.826463789',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TBS2K.272.921767965',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TBS32K.271.921768047',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/TEST.274.888250337',
'/opt/app/oracle/oradata/CXCDB/3BC5DE03FC5E720DE0536300A8C07F97/datafile/USERS.264.785694801'
LOGFILE=tts114imp.log

impdp system@pdb114 parfile=ttsimp.par
After the import the tablespaces will be plugged in but in read only mode.
SQL>  SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES READ ONLY
NEWTBS                         YES READ ONLY
SQLT                           YES READ ONLY
TBS2K                          YES READ ONLY
TBS32K                         YES READ ONLY
TEST                           YES READ ONLY
USERS                          YES READ ONLY
Change them from read only to read write mode.
alter tablespace EXAMPLE read write;
alter tablespace NEWTBS read write;
alter tablespace SQLT read write;
alter tablespace TBS2K read write;
alter tablespace TBS32K read write;
alter tablespace TEST read write;
alter tablespace USERS read write;

SQL>  SELECT tablespace_name, plugged_in, status FROM   dba_tablespaces;

TABLESPACE_NAME                PLU STATUS
------------------------------ --- ---------
SYSTEM                         NO  ONLINE
SYSAUX                         NO  ONLINE
TEMP                           NO  ONLINE
EXAMPLE                        YES ONLINE
NEWTBS                         YES ONLINE
SQLT                           YES ONLINE
TBS2K                          YES ONLINE
TBS32K                         YES ONLINE
TEST                           YES ONLINE
USERS                          YES ONLINE
At this stage the PDB is ready for use with pre-12c DB plugged in.

Related Posts
Remote Cloning of a PDB
Move a PDB Between Servers
Plugging a SE2 non-CDB as an EE PDB Using File Copying and Remote Link