So, here is the story - or, to be precise, two stories.
- Production/Stage/External Test - these were handled by our partner DBAs, but the idea was pretty straightforward: copy a cold backup to the new environment and run an in-place upgrade (non-CDB). Same OS (Linux-to-Linux), character sets preserved, etc. I.e. everything is well-documented and clear. So, nothing much to write about.
- Development and Internal Test - that one was a real head-scratcher that I'd like to share with everybody. The task was as follows:
- Starting points: Windows Server 2008 with an 11.2.0.4 database and Win1252 charset, Windows Server 2012 with a 19c database and Win1252 charset (non-CDB). Each DB is about 200 GB.
- Required destination: an Amazon AWS Linux instance with a 19c database hosting both environments as PDBs, set up so that AL32UTF8 PDBs could be created later if needed.
I started from the bottom - discussed with the sysadmins a good and reliable way of moving something to AWS. The answer I got was interesting: there was a simple process to port an existing virtual machine from our internal cloud to AWS. That made my life much simpler - it meant that we could build a local VM, play with it as much as needed (with a very easy way to create multiple snapshots/restore to any point, etc.) and eventually declare it ready to be transferred. Ok, the admins got an Oracle Linux ISO and built me a new server in vSphere - now I had a destination.
The next step was to see whether my understanding of the OS internals was right and it was possible to move a PDB from Windows to Linux without any major hassle... To do that I first converted my 19c instance from a non-CDB to a PDB within a newly created CDB. That was just a couple of commands:
-- non-CDB instance: generate the XML descriptor while open read-only
shutdown immediate
startup open read only
BEGIN
  DBMS_PDB.DESCRIBE(
    pdb_descr_file => 'f:\oradata\test01.xml');
END;
/
shutdown immediate
-- CDB instance: plug the descriptor in, reusing the existing datafiles
CREATE PLUGGABLE DATABASE test01
  USING 'f:\oradata\test01.xml'
  NOCOPY
  TEMPFILE REUSE
/
ALTER SESSION SET CONTAINER = test01;
@?/rdbms/admin/noncdb_to_pdb.sql  -- "?" is the SQL*Plus shortcut for ORACLE_HOME
ALTER PLUGGABLE DATABASE OPEN;
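Before going further, a quick query like the one below can confirm the converted PDB actually opened and is not stuck in restricted mode (optional, but it saves surprises later):
select name, open_mode, restricted
  from v$pdbs
 where name = 'TEST01';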
Now it was time to move the PDB to the Linux box, which already had a CDB configured as AL32UTF8... And after not finding an answer on whether the FILE_NAME_CONVERT clause can change Windows-style file names to Linux-style, I just went into the moveTest.xml file and manually updated all the locations to which I had transferred the PDB files (btw, later I tested the clause - and I couldn't make it do what I wanted, so, hint: sometimes the manual approach is your choice ;) )
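If you would rather script that edit than do it by hand, something along these lines works on the Linux side; the Windows source path and the Linux target directory below are just placeholders for whatever your layout actually is:
# rewrite the Windows datafile paths inside the unplug manifest (adjust paths and case to match your XML)
sed -e 's|F:\\ORADATA\\TEST01\\|/u01/oradata/TEST01/|g' moveTest.xml > test01_updated.xml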
-- Windows CDB: unplug the PDB and produce the manifest
ALTER PLUGGABLE DATABASE test01 CLOSE;
ALTER PLUGGABLE DATABASE test01 UNPLUG INTO 'F:\moveTest.xml';
-- Linux CDB: plug it back in using the edited manifest
CREATE PLUGGABLE DATABASE test01
  USING '/home/oracle/test01_updated.xml' -- Linux locations
  NOCOPY
  TEMPFILE REUSE
/
ALTER SESSION SET CONTAINER = test01;
ALTER PLUGGABLE DATABASE OPEN;
The last command finished - but with the warning. It was time to review existing violations
select *
  from pdb_plug_in_violations
 where name = 'TEST01'
   and status != 'RESOLVED';
The only errors I found were patch mismatches - and those were simply fixed by running datapatch -verbose on the Linux side: it cleanly rolled back the patches that didn't belong inside the PDB and applied what was needed. Seemingly the system was ready to operate - I could query data, run some sample procedures... But the application didn't start! After some digging I found a somewhat unexpected reason - even though you can move a Windows PDB to Linux, if you are using PL/SQL objects with native compilation you have to recompile all of them (which makes sense: different platforms = different low-level code!). The following loop did that:
begin
  for c in (
    select owner, name, type
      from dba_plsql_object_settings
     where plsql_code_type != 'INTERPRETED'
  )
  loop
    begin
      if c.type in ('PACKAGE BODY', 'TYPE BODY') then
        -- bodies are recompiled via "alter package/type ... compile body"
        execute immediate 'alter '||replace(c.type, ' BODY')||' "'||c.owner||'"."'||c.name||'" compile body';
      else
        execute immediate 'alter '||c.type||' "'||c.owner||'"."'||c.name||'" compile';
      end if;
    exception
      when others then null; -- ignore objects that won't compile; review them separately
    end;
  end loop;
end;
/
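A query along these lines can then confirm that no natively compiled objects were left invalid after the loop (anything that still shows up here needs a manual look):
select o.owner, o.object_name, o.object_type
  from dba_objects o
  join dba_plsql_object_settings s
    on s.owner = o.owner
   and s.name = o.object_name
   and s.type = o.object_type
 where s.plsql_code_type != 'INTERPRETED'
   and o.status = 'INVALID';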
That was easy enough - so I decided to replicate exactly the same pattern for the main development database (which was 11.2.0.4). So, in my Test database I dropped PDB Test01 with all datafiles (I had to, because the Dev file names overlapped with Test01's and I didn't want to shuffle anything around) and used the good old tried-and-true approach:
- Original DB
  - Create PFILE from SPFILE
  - Run preupgrade on the source and follow its recommendations
  - Shutdown the instance
  - Transfer files to the new location preserving the file structure (as for a cold backup):
    - ORADATA\ora11g folder
    - FAST_RECOVERY_AREA\ora11g folder with all corresponding directories
    - PFILE
    - Control files from other locations (btw, you can consolidate them into the same folder by modifying the PFILE)
- Server with 19c
  - Create new instance:
    - orapwd file=? password=? format=12 (use a lower format if you want a simple password for now)
    - oradim
  - startup upgrade
  - Upgrade via the dbupgrade utility
  - Run post-upgrade fixes, including the DST (time zone) upgrade
  - Recompile whatever is needed and resolve obscure cases if they exist (Java classes, SDO_GEOMETRY, etc.); a rough sketch of the 19c-side commands follows this list
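For reference, here is roughly what those 19c-side commands look like; it is only a sketch, and the SID, paths and password are placeholders rather than the exact values I used:
REM --- OS prompt, Windows 19c home (ora11g, C:\upgrade and the password are placeholders) ---
orapwd file=%ORACLE_HOME%\database\PWDora11g.ora password=Temp_Pwd_1 format=12
oradim -NEW -SID ora11g -STARTMODE manual -PFILE C:\upgrade\initora11g.ora
REM --- SQL*Plus as SYSDBA: start the copied database in upgrade mode ---
startup upgrade pfile='C:\upgrade\initora11g.ora'
REM --- back at the OS prompt: run the command-line upgrade utility ---
dbupgrade -l C:\upgrade\logs
REM --- SQL*Plus as SYSDBA afterwards: DST (time zone) upgrade and recompilation ---
@?/rdbms/admin/utltz_upg_check.sql
@?/rdbms/admin/utltz_upg_apply.sql
@?/rdbms/admin/utlrp.sql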
Now I was exactly where I had started with the Test environment - a non-CDB instance plus an empty container. So, the following steps were exactly the same as above: convert the instance into a PDB, plug it in as Dev01, move it to the Linux CDB, run datapatch, and fix the natively compiled PL/SQL objects. That's all, folks - two environments became one on a different platform! After that it was just a lift-and-shift to AWS - and we've got our new environment.
Summary: if you can use somebody else's experience - use it! The Oracle documentation, especially around upgrades, is huge, but sometimes you are looking for something very specific, and it can be hard to find exactly your use case in that superset of knowledge.
P.S. VM snapshots are your saviors (that's why I love the cloud). You can't imagine how many different issues I hit in the process - and without snapshot recovery it would've taken wa-a-a-a-y longer to do anything meaningful.