Channel: FA – ATeam Chronicles

Expiration Checklist for Fusion Applications


Two things, when they expire, will significantly affect the operation of Fusion Applications: database passwords and certificates. These expiration dates therefore need to be checked and maintained proactively.

Check for expiring database account passwords

Fusion Applications has many schema users in the Fusion Applications database. By default, most of these schema users have no expiry date; however, some do. You can check the expiration date for these passwords using sqlplus, connecting to the FA database as SYS. Use the following query to check the expiry_date:
 
select username, account_status, expiry_date, sysdate from dba_users where expiry_date is not null;
 
TODO:  Keep track of when database accounts will expire.  When accounts are about to expire, update them and reset the expiry_date according to your established corporate security policy requirements.  Note: you can reuse the existing password when resetting these schema accounts.
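If you want to focus the same check on accounts that are close to expiring, a small sqlplus sketch like the one below can be run from the shell on the database host; the 30-day window and the SYS connection style are just examples.

# minimal sketch: list only the accounts whose passwords expire within the next 30 days
sqlplus -S / as sysdba <<'EOF'
set linesize 120
set pagesize 200
select username, account_status, expiry_date
  from dba_users
 where expiry_date is not null
   and expiry_date < sysdate + 30
 order by expiry_date;
EOF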
 

Check for expiring certificates

Fusion Applications will fail when certificates expire.  It is important to check all certificate stores (JKS for WebLogic and PKCS#12 for OHS) for expiring keys and certificates so that they can be renewed in a controlled and timely manner.

 

For Fusion JKS Certificate Stores

You should maintain a list of all certificate stores so that they can be located easily.  
The Fusion JKS stores are fusion_trust.jks and <hostname>_fusion_identity.jks, located in APPLICATIONS_BASE/fusionapps/wlserver_10.3/server/lib.
 
For each JKS store, use keytool to examine the contents, noting the expiration date for each key and certificate:
 
$JAVA_HOME/bin/keytool -list -v -keystore <keystore filename>

 

 
Note:  fusion_trust.jks contains the keys and certificates from each of the <hostname>_fusion_identity.jks stores.  When replacing a key and its certificates, you must update each <hostname>_fusion_identity.jks and fusion_trust.jks separately.
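To make this sweep easier to repeat, a short shell sketch like the following (assuming the default store location given above; keytool prompts for each store password) prints just the alias and validity lines:

cd $APPLICATIONS_BASE/fusionapps/wlserver_10.3/server/lib
for ks in fusion_trust.jks *_fusion_identity.jks; do
  echo "== $ks =="
  # keytool -list -v prints an "Alias name:" line and a "Valid from ... until ..." line per entry
  $JAVA_HOME/bin/keytool -list -v -keystore "$ks" | grep -E 'Alias name|Valid from'
done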
 

For the Webgate Certificate

You should note the expiration date of the webgate certificate and replace it as appropriate.  The webgate certificate is in APPLICATIONS_CONFIG/CommonDomain_webtier/config/OHS/ohs1/webgate/config/simple. To check the certificate expiration date, use keytool to examine the contents:

$JAVA_HOME/bin/keytool -printcert -v -file aaa_cert.pem

 

 

For PKCS#12 Certificate Stores

The location of the certificate stores used by FA OHS instances can be found in the OHS configuration files. The following example shows how to determine this:
cd APPLICATIONS_CONFIG/CommonDomain_webtier/config/OHS/ohs1

 

cat *.conf ./moduleconf/*.conf | grep SSLWallet
 
Each of these wallets should be opened with the orapki utility to examine the contents and verify the certificate expiration dates. The orapki utility is described in detail in the Oracle Fusion Middleware documentation.
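For example, a minimal orapki check might look like the following; the wallet path is a placeholder taken from the SSLWallet value returned by the grep above, and orapki prompts for the wallet password where one is required:

# display the wallet contents and note the expiry date shown for each certificate
$ORACLE_HOME/bin/orapki wallet display -wallet /path/to/ohs/wallet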
 
 
 

 

 

Fusion Applications


Provides information about Fusion Applications architecture with respect to High Availability, RAC, Exadata, Exalogic, Disaster Recovery, Performance, Virtualization options, OVM Templates, and Topology considerations, plus general information on Fusion Applications' use of Fusion Middleware.

The information found here is intended for on-premise customers. The best practices and operational aspects of the content on this site are provided as a service as part of the Fusion Applications Cloud offering.

Topics

Architecture >

Things you must consider before you fill out configuration forms and purchase licenses (includes considerations for H/A and D/R).

BI / Reporting >

Provides information on how Fusion Applications makes use of BI, both in terms of OTBI and the warehouse option for Fusion Applications.

Diagnose and Troubleshoot >

Highlights diagnostic best practices and provides useful tips and tricks designed to speed time to resolution.

Extend and Customize >

Fusion Apps will become part of a larger architecture that supports your enterprise. Understanding how to extend and customize Fusion Apps will help you solve for the future in less time.

Install and Provision >

Fusion Applications is a tightly integrated, loosely coupled set of services and applications. You will benefit from understanding how to "get it right the first time" with the information in this area.

Integrate and Co-exist >

Since Fusion Applications is built on Fusion Middleware, it provides a cohesive technical stack upon which to integrate. This allows customers to adopt Fusion Applications over time through standard co-existence strategies and integration techniques. This section includes articles on integration strategies and use cases and how to solve them.

Life-cycle Management >

Provides information about common activities associated with operationalizing Fusion Applications. Topics covered include cloning, refreshing data in non-production environments, patching, upgrading, and starting/stopping the environment.

Performance >

Any suite of applications is expected to run fast, but knowing how to quantify and improve the performance of enterprise applications is often called a black art, with very few experts available to do what is necessary to deliver the performance you need. We show you pragmatic ways to think about, and improve, the performance of Fusion Applications.

Security >

In-depth information about the Fusion Applications security model, including how to integrate it with existing Identity and Access Management systems.

OAM and OIM Config Changes for Split Profile (Split Profile Configuration – Part 2)


Introduction

In my previous post I discussed the split profile setup scenario with AD and OID in a Fusion Applications IDM environment, and how to create adapters in OVD to consolidate the two directory servers, AD and OID. However, configuring the adapters alone is not sufficient for the split profile to function: the rest of the IDM components in the integration need configuration updates to communicate with the directory layer. In this post I will highlight the configuration changes needed in the remaining IDM components involved in a Fusion Applications integration.

Please refer to the first picture in split profile part 1, which shows the consolidated view of the directory tree, to set the context for the configuration steps in this post. As always, take a backup of the existing IDM environment before making any changes. The backup should include the IDM middleware, the IDM database, and the enterprise directory.

Let us review which components require configuration changes:

  1. WLS: During the initial setup without split profile, the OVD Authenticator would refer to OID via OVD alone (or to OID directly via an OID Authenticator); now the OVD Authenticator must refer to both OID and AD.
  2. OAM: Similar to WLS, the user and group search bases need to be set to the consolidated base so that users from both OID and AD can be authenticated.
  3. OIM: The user search base needs to be the new consolidated base, and the rules for the target user base and target group base used during user creation need to be modified.

Main Article

Here are the detailed changes by component for this scenario

WLS

1. Log in to the oim-domain WLS console, User=<oim_admin_user>, Password=<Password>

2. Go to Security Realms –> myrealm –> providers –>

3. Remove the OIDAuthenticator and save. [You will see an OIDAuthenticator if the IDM environment was configured with OID, rather than OVD, as the ID store.]

4. Create [if step 3 applied] or edit the OVDAuthenticator and make sure the control flag is "SUFFICIENT".


5. Make sure the providers are listed in the correct order; if they are not, reorder them.

6. Click on OVDAuthenticator -> Provider Specific

7. Host = <ovd host>, Port = <ovd port>, Principal = cn=oamLDAP,cn=users,dc=us,dc=oididm,dc=com. In my environment I used 'cn=orcladmin' for a quick setup.

8. User Base DN: dc=oididm,dc=com  [again, this is based on the example configuration I used; please see Split-profile-part1]

9. All Users Filter: (&(uid=*)(objectclass=person))

10. User Name Attribute = uid

11. Group Base DN: dc=oididm,dc=com

12. Static Group Object Class: groupofuniquenames


13. Save the changes, then shut down and restart the WLS Admin Server.

14. Check that the OVDAuthenticator is working by going to the WLS Console: Security Realms -> myrealm -> Users and Groups.

You will see users from both OID and AD  
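As an optional command-line cross-check, an ldapsearch against OVD under the consolidated base should return entries from both back ends; the host, port, and the two sample uids (oamadmin from OID, ad_user1 from AD) below are placeholders for this example environment:

$ORACLE_HOME/bin/ldapsearch -h <ovd_host> -p <ovd_port> -D cn=orcladmin -q -s sub -b "dc=oididm,dc=com" "(|(uid=oamadmin)(uid=ad_user1))"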


OAM

  1. Log in to the OAM 11g console.
  2. Go to System Configuration -> Common Configuration -> Data Sources.
  3. Open the OIMIDStore instance and change the "Store Type" from "OID: Oracle Internet Directory" to "OVD: Oracle Virtual Directory" (only if OID was originally set as the ID store).
  4. Change the "Location" to <ovdhost>:<ovdport>.
  5. Change the Bind DN to a user from OVD who has the appropriate ACIs (I used orcladmin for a quick setup, but this can be oimLDAP or oamLDAP if the ACIs are granted).
  6. Provide the password of the user used above.
  7. Change the User search base to the base of the OVD, dc=oididm,dc=com [the same base we set in the OVDAuthenticator in WLS earlier].
  8. Change the Group search base to the base of the OVD, dc=oididm,dc=com.
  9. Make sure to Test Connect and then Apply.
  10. For a quick check, I added a user from AD, ad_user1, as an access system administrator, confirming that my configuration was correct and able to retrieve users.
  11. I also logged in to the OAM console as ad_user1 and as oamadmin to confirm that authentication of users from both AD and OID succeeds.

 OIM

1. Change the Search Base

1.1. Log on to OIM http://<oimhost>:<oim_port>/oim as xelsysadm

1.2. Click on “Advanced” on top right side of your screen

1.3. Click on “Manage IT Resource” link under “Configuration” section

1.4. In the query screen, in the IT Resource Type field, choose "Directory Server" from the drop-down list and search.

1.5. In the results, click the Edit button for the directory server.

1.6. In the Search Base field, update the search base [the same OVD base as in the previous steps for WLS and OAM] to "dc=oididm,dc=com".

1.7. Also update the reserve container base to an absolute value.


1.8. Click Update. Close window.

2. Update Container Rules in MDS for Split profile

2.1. Create LDAPContainerRules.xml with the new rules that you want to import. This file contains the rules for user creation and role creation and the corresponding LDAP containers where they should be created (targeted). For the current split profile scenario, I have set only the default rules, as below:

<?xml version='1.0' encoding='UTF-8'?>
<container-rules>
  <user>
    <rule>
      <expression>Default</expression>
      <container>cn=Users,dc=us,dc=oididm,dc=com</container>
      <description>UserContainer</description>
    </rule>
  </user>
  <role>
    <rule>
      <expression>Default</expression>
      <container>cn=Groups,dc=us,dc=oididm,dc=com</container>
      <description>RoleContainer</description>
    </rule>
  </role>
</container-rules>

2.2. Modify the <OIM_ORACLE_HOME>/bin/weblogic.properties file so that the LDAPContainerRules.xml file above is imported, setting wls_servername=<oim server name> (for example, wls_oim1).
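For illustration only, the file usually ends up looking something like the sketch below; wls_servername comes from this step, while application_name and metadata_from_loc are assumptions about the other entries weblogicImportMetadata.sh reads, so verify them against the weblogic.properties shipped under <OIM_ORACLE_HOME>/bin:

# wls_servername: the OIM managed server name (from step 2.2)
wls_servername=wls_oim1
# the next two entries are assumed/typical values -- confirm against your own file
application_name=OIMMetadata
metadata_from_loc=/u01/stage/containerrules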

2.3. Set OIM_ORACLE_HOME environment variable.

2.4. Run weblogicImportMetadata.sh from <OIM_ORACLE_HOME>/bin to import the configuration file into MDS

2.5. Enter the WebLogic login credentials when prompted:

Please enter your username [weblogic] : <weblogic_user>
Please enter your password [weblogic] : <password>
Please enter your server URL [t3://localhost:7001] : t3://oimadmin.mycompany.com:7001

2.6. Restart OIM Server for new rules to take effect

3. Update Username generation policy to accommodate AD

This change is needed only because of an AD limitation: AD has a username limit of 20 characters for Windows 2000 and earlier. Hence the username generation policy in OIM has to be updated to accommodate this AD limitation.

3.1. Log on to OIM http://<oimhost>:<oim_port>/oim

3.2. Click on “Advanced” on top right side of your screen

3.3. Click on “Search System properties”

3.4. On left navigation bar, Search on “Username Generation”

3.5. Click on “Default policy for username generation”

3.6. In Value field, update entry from “oracle.iam.identity.usermgmt.impl.plugins.DefaultComboPolicy” to “oracle.iam.identity.usermgmt.impl.plugins.FirstNameLastNamePolicyForAD”

3.7. Click “Save”


That completes the configuration changes needed in WLS, OAM, and OIM for the IDM environment. As a last step, for each Fusion Applications domain where the identity store was previously OID, change the OIDAuthenticator to an OVDAuthenticator.

Index of Architecture articles


Below you will find a variety of articles that help you design an architecture for Fusion Apps that is in line with Oracle's best practices. You will also find articles, related both to Fusion Apps and to architecture, that address specific issues by describing ways of solving them once an architectural model has been adopted and is in operation.

The information found here is intended for on-premise customers. The best practices and operational aspects of the content on this site are provided as a service as part of the Fusion Applications Cloud offering.

  • Architecture
  • Index of BI and Reporting articles


    Provides information on how Fusion Applications makes use of BI, both in terms of OTBI as well as the warehouse option for Fusion Applications.

    The information found here is intended for on-premise customers. The best practices and operational aspects of the content on this site are provided as a service as part of the Fusion Applications Cloud offering.

  • BI/Reporting
  • Index of Diagnosis and Troubleshooting articles


    Highlights diagnostic best practices and provides useful tips and tricks designed to speed time to resolution.

    The information found here is intended for on-premise customers. The best practices and operational aspects of the content on this site are provided as a service as part of the Fusion Applications Cloud offering.

  • Diagnose and Troubleshoot
  • Index of Extensibility and Customization articles


    Fusion Apps will become part of a larger architecture that supports your enterprise. Understanding how to extend and customize Fusion Apps will help you solve for the future in less time.

    The information found here is intended for on-premise customers. The best practices and operational aspects of the content on this site are provided as a service as part of the Fusion Applications Cloud offering.

  • Extend and Customize
  • Index of Security articles


    In-depth information about the Fusion Applications security model, including how to integrate it with existing Identity and Access Management systems.

    The information found here is intended for on-premise customers. The best practices and operational aspects of the content on this site are provided as a service as part of the Fusion Applications Cloud offering.

  • Security

  • Index of Lifecycle Management articles


    Provides information about common activities associated with operationalizing Fusion Applications. Topics covered will include cloning, refreshing data in non-production environments, patching, upgrading, and starting/stopping the environment.

    The information found here is intended for on-premise customers. The best practices and operational aspects of the content on this site are provided as a service as part of the Fusion Applications Cloud offering.

  • Lifecycle Management
  • Fusion Applications – All Articles

  • Fusion Applications
  • Architecture
  • BI/Reporting
  • Diagnose and Troubleshoot
  • E-Business Suite
  • Extend and Customize
  • Install and Provision
  • Integrate and Co-exist
  • Lifecycle Management
  • PeopleSoft
  • Performance
  • Security
  • Fusion BI RPD and Webcat files


    Introduction

    Oracle Fusion Applications has a built-in Business Intelligence framework based on Oracle Business Intelligence Enterprise Edition (OBIEE). With this framework, organizations gain the capability to transform their business process automation systems into powerful Business Intelligence systems. Many components constitute the Business Intelligence suite within Fusion Applications, but two of them play significant roles in transforming business data into powerful intelligence information. This blog provides an overview of these two components, the RPD file and the Web Catalog, and the role they play within the BI framework of Oracle Fusion Applications.

    (Please note that this overview is confined to the Oracle Fusion Applications framework and does not apply to other versions of OBIEE.)

    Main Article

    Whether you are new to Oracle Fusion Business Intelligence (Oracle BI) or have been using it for some time now, you are likely already familiar with the Oracle BI Analytics page.

    [Screenshot: the Oracle BI Analytics page]

     

    This page contains multiple components such as folders, menu items, reports, filters, prompts, and more.

    Looking at these objects on the screen, we might be left with questions such as:

    • What is the logical foundation for the pre-built BI reports?
    • When I create new answers I see different subject areas, where do these reside?
    • Where is the underlying Model which creates the SQL queries that return the data for my Answers?
    • How do I customize this Model?
    • Where are my objects stored internally?
    • What tool do I need if I wish to rearrange the way my reports and dashboards are organized?

    The following two key components mentioned earlier provide answers to these questions and in the coming paragraphs, we will learn more about their significance.

    1. RPD – BI Repository
    2. Web Catalog – Oracle BI Presentation Catalog

     

    RPD

    The RPD is a repository that stores all the information, including the metadata, associated with Oracle Fusion BI. It contains the connectivity information between the BI Server and various data sources, and the details of the Physical, Business Model & Mapping, and Presentation structures that are available for users to build their analyses (answers, as they are called in OBIEE). In Fusion Applications, the RPD forms the foundation on which Business Intelligence is modeled. It contains pre-built physical data sources that access the Fusion Applications transactional data in the form of ADF (Application Development Framework) view objects. In addition, it contains connections to the Fusion Applications data warehouse (if the warehouse has been enabled). All of the pre-built Oracle Transactional Business Intelligence (OTBI) reports that are rendered as part of the Fusion install are designed against the model in the RPD file.

    Layers of RPD

    The RPD contains three layers: Physical, Business Model & Mapping, and Presentation.

    [Screenshot: the OBIEE Administration Tool]

     

    Physical Layer: As the name suggests, this layer defines the physical data that will be used for building the business model, which serves as the foundation for Oracle BI. It provides complete information on how the data is sourced from the back-end databases: the objects that are available, the relationships between these objects, and the features and rules that are specific to each database type. For example, relational databases have physical tables and joins, while multi-dimensional databases have cubes and hierarchies.

    These objects and their relationships are in turn used by the BI Server for generating the physical SQL queries that run against the data sources to retrieve the specified data/information. The primary task within this layer is to import the physical objects, after the data source is defined.

    Data can be sourced from different places such as relational databases, Oracle ADF, data warehouses, non-Oracle databases, Essbase data sources, and flat files.

    Business Model and Mapping Layer: This defines the logical model, also known as the business model, of the data as it would be seen and used by the business users. Unlike the physical layer, which shows the data in its organized structure in the back end, the Business Model and Mapping layer models and defines the data in business terms, as facts and dimensions. Data structures are grouped by the common business area they address and collectively constitute a business model. The attributes defined within this layer have a direct mapping to their physical layer counterparts. Just as in the physical layer, joins can be defined between the fact and the dimension tables.

    Mappings to the physical schema are defined and used by the BI Server while evaluating logical SQL requests. These mappings may contain transformations and/or calculations. Metrics within a fact table are also defined in this layer. Lastly, this layer also defines how the data is viewed on the Analytics page by the user when running analyses. For example, data can be viewed as individual columns, hierarchies, or drill-down columns.

    Presentation Layer: This layer defines the customized views (subject areas) of the business model. It provides secure, role-based subject areas for users to build their own queries.  In addition this layer provides the user the facility to organize the views (subject areas), define dictionary entries, assign security to different groups of users and assign custom naming conventions.

    So how do these layers tie together and provide the platform that generates queries and returns the data set?

     

     

    [Figure: a logical query flowing through the three layers of the RPD]

     

    The figure above displays how a logical query initiated by a user accesses the various layers of the RPD to produce the necessary information.

    Users initiate logical queries from the views (subject areas) in the presentation layer. During run time, the business model is read by the BI Server for this logical request to fetch the corresponding mappings to the physical schema. Based on the mappings to the physical schema, the best set of tables, cubes or ADF view objects are determined by the BI server to execute the queries and return the information.

     Location of RPD

    The RPD is stored in the following location within Fusion Applications (11g)

     ORACLE_INSTANCE/bifoundation/OracleBIServerComponent/coreapplication_obis1/repository
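    A quick sanity check from the shell (the instance component name simply follows the default coreapplication_obis1 used in the path above) is to list that directory and confirm the deployed .rpd file and its timestamp:

    ls -l $ORACLE_INSTANCE/bifoundation/OracleBIServerComponent/coreapplication_obis1/repository/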

    Managing RPD

    The RPD is managed by a Windows-based tool called the OBIEE Admin Tool. Users customize the RPD using this tool: they can define custom physical data sources, create new or modify existing business models, and design their own subject areas in the Presentation Layer.

    For instructions on installing and configuring the BI Admin Tool, refer to the following blog.

    Installing and Configuring BI Admin Tool for Fusion Applications

    Operation Modes

    The RPD can be accessed in Online or Offline mode using the Admin tool.

    Online Mode: This mode is used to view and modify a repository that is deployed on an Oracle BI Server. The following tasks can only be performed in online mode.

    • Manage scheduled jobs
    • Manage user sessions
    • Manage the query cache
    • Manage clustered servers

    Offline Mode: In this mode, the user can view and modify a repository that resides locally. If an attempt is made, in offline mode, to open a repository that is deployed on an Oracle BI Server, the repository is opened in READ-ONLY mode.

     

    Now that we've seen how the RPD provides the model and foundation for building BI analyses, let us look at the second component, the Web Catalog, and understand how it helps in organizing the pre-built and user-generated objects in BI.

    Webcat (Oracle BI Presentation catalog)

    The Web Catalog, or Oracle BI Presentation Catalog, stores the BI objects in a file-based directory structure. During an install of Fusion Applications, the web catalog files provide the organization of the pre-built Business Intelligence objects that are rendered as part of the install. They also extend this functionality to any custom reports, analyses, or BI objects users may build in the future. In general, the web catalog contains all of the following objects:

    Folders, Shortcuts, Analyses, Reports, Filters, Prompts, Dashboards, KPIs.

    Directory Structure

    A web catalog consists of  the following folders:

    Shared Folder – Contains the shared objects of all catalog users. The pre-built dashboards, analyses, and objects that are shared among all the users are also stored here.

    System Folder – Internal folder (Not to be modified) that contains the privileges configured by the administrator as well as some that are distributed with the original product.

    Users Folder – Contains the individual analyses of the users along with their allied objects like filters and prompts.

    Catalog Files

    An object (for example, an analysis) is stored in a folder and has two components: the object itself and an attribute file with a ".atr" extension. The object component is an XML file that provides the details of the object, such as the query details for an analysis. The attribute file provides the description and the access control for the object. A third, temporary lock file is generated when an object is being edited by a user and is deleted when the user finishes editing. However, in rare situations such as a system crash, this temporary lock file is not deleted and must be removed manually.

    Location of the catalog

    The default location of the catalog within Oracle BI is

    ORACLE_INSTANCE/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalog

    Managing Catalog

    Oracle BI Presentation Catalogs can be managed using a tool called Catalog Manager. With Catalog Manager you can:

    • Manage all the folders, shortcuts, and objects (filters, analyses, dashboards, and so on).
    • View and Edit the objects in XML.
    • Preview objects such as analyses
    • Do mass changes to catalog objects like search and replace text.
    • Localize captions.

    Catalog Manager is available for both the Linux and Windows platforms and is installed as part of a regular installation. To start Catalog Manager:

    Windows

    Option 1: From the Start menu, select Oracle Business Intelligence, then Catalog Manager.

    Option 2: From the command line, run runcat.cmd from the location ORACLE_INSTANCE\bifoundation\OracleBIPresentationServicesComponent\coreapplication_obipsn\catalogmanager.

    Linux

    Run runcat.sh from the location ORACLE_INSTANCE/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1/catalogmanager.


    Operation Mode

    Web catalog can be accessed by Catalog Manager in online or offline mode.

    Online mode – Connects to a catalog that is running on the BI server. In this mode, permissions are verified when accessing objects, and the user can only see those objects for which they have permission. This mode is used to make incremental changes and additions to the catalog, change permissions, update single objects, and migrate objects between environments.

    Offline mode – Connects to a local catalog that does not require any BI server to be running. All the objects are visible to the user. This mode is used for mass changes to the Catalog and moving multiple objects to reorganize the catalog structure.

    Summary

    This post is an overview of two of the most significant components of Oracle Fusion BI, the RPD and the Oracle BI Presentation Catalog (Web Catalog). Though they are independent objects, they complement each other and provide the foundation for users to source, design, extract, present, and manage the BI information that helps the business with its analysis and decision making.

    The answers to the questions mentioned in the beginning of the blog can be found in these two components.

    In Oracle Fusion Applications, the RPD provides the logical foundation for the pre-built BI reports that are delivered as part of the Fusion Apps install. In addition, it provides the base for users to design and generate analyses and/or reports. The different subject areas on which users construct their analyses are in the Presentation Layer section of the RPD. All three sections of the RPD (Physical, Business, and Presentation) together constitute the model that generates the SQL queries that return the data for the answers created by users. Users can customize this model using the OBIEE Admin Tool.

    The Web Catalog (Oracle BI Presentation Catalog) provides the storage and directory structure for users to organize and manage their BI content, which includes, but is not limited to, analyses, reports, dashboards, filters, prompts, and KPIs (key performance indicators). The catalog folder within the BI instance home contains these objects. Users can rearrange their dashboards and organize their folders and objects using the Catalog Manager tool.

    For additional information refer to Oracle Fusion Documentation.

    Validating the Fusion Applications Security Components During Installations and Upgrades


    Introduction

     

    When installing or upgrading Fusion Applications, it is necessary to validate the security components to ensure that they are functioning correctly. This article provides a list of tasks that can be performed to accomplish this. The order of the tasks below follows the dependencies that the components have on each other, so that if a fault is found the problematic component can be identified more easily. Prior to beginning validation, the components should be started in the following order (a shell sketch of this sequence follows the list):

     

    1. Database Listener
    2. Database
    3. Oracle Internet Directory Server (OID)
    4. Oracle Virtual Directory Server (OVD)
    5. Node Manager
    6. WebLogic Server (WLS)
    7. WLS Managed Servers (Oracle Directory Services Manager, Oracle Access Manager, Oracle Identity Manager, Oracle Service Oriented Architecture)
    8. Oracle HTTP Server (OHS)
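    As a rough shell sketch of that order for a single-host example (all paths and instance names below are placeholders, and the WLS AdminServer and managed servers are normally started through Node Manager or the console rather than a script):

    # 1-2. database listener and database
    lsnrctl start
    echo "startup" | sqlplus -S / as sysdba
    # 3-4. OID and OVD system components (OPMN-managed instances)
    /u01/app/oracle/admin/oid1/bin/opmnctl startall
    /u01/app/oracle/admin/ovd1/bin/opmnctl startall
    # 5. Node Manager
    nohup $MW_HOME/wlserver_10.3/server/bin/startNodeManager.sh > /tmp/nm.out 2>&1 &
    # 6-7. start the AdminServer and managed servers (ODSM, OAM, OIM, SOA) via Node Manager / WLS console
    # 8. OHS, from its own OPMN instance home
    /u01/app/oracle/admin/web1/bin/opmnctl startall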

     

    Database

     

    1. Check Database Listener

    Check that the listener process is up:

    [oracle@tester bin]$ ps -ef | grep LISTENER
    oracle    5211     1  0 09:17 ?        00:00:00 /u01/app/oracle/idmdb/dbhome_1/bin/tnslsnr LISTENER -inherit
    oracle    5238  5118  0 09:19 pts/1    00:00:00 grep LISTENER

    Confirm that the listener is listening on the expected TCP port for the database:

    [oracle@tester bin]$ netstat -an | grep 1521
    tcp        0      0 :::1521                     :::*                        LISTEN
    unix  2      [ ACC ]     STREAM     LISTENING     19247  /var/tmp/.oracle/sEXTPROC1521

    2. Check Database Processes

    Check that the database processes are up:

    [oracle@tester bin]$ ps -ef | grep idmdb
    oracle    5211     1  0 09:17 ?        00:00:00 /u01/app/oracle/idmdb/dbhome_1/bin/tnslsnr LISTENER -inherit
    oracle    5389     1  0 09:23 ?        00:00:00 ora_pmon_idmdb
    oracle    5391     1  0 09:23 ?        00:00:00 ora_psp0_idmdb
    oracle    5394     1  0 09:23 ?        00:00:00 ora_vktm_idmdb
    oracle    5398     1  0 09:23 ?        00:00:00 ora_gen0_idmdb
    oracle    5400     1  0 09:23 ?        00:00:00 ora_diag_idmdb
    oracle    5402     1  0 09:23 ?        00:00:00 ora_dbrm_idmdb
    oracle    5404     1  0 09:23 ?        00:00:00 ora_dia0_idmdb
    oracle    5406     1  9 09:23 ?        00:00:10 ora_mman_idmdb
    oracle    5408     1  0 09:23 ?        00:00:00 ora_dbw0_idmdb
    oracle    5410     1  0 09:23 ?        00:00:00 ora_lgwr_idmdb
    oracle    5412     1  0 09:23 ?        00:00:00 ora_ckpt_idmdb
    oracle    5414     1  0 09:23 ?        00:00:00 ora_smon_idmdb
    oracle    5416     1  0 09:23 ?        00:00:00 ora_reco_idmdb
    oracle    5418     1  0 09:23 ?        00:00:00 ora_mmon_idmdb
    oracle    5420     1  0 09:23 ?        00:00:00 ora_mmnl_idmdb
    oracle    5422     1  0 09:23 ?        00:00:00 ora_d000_idmdb
    oracle    5424     1  0 09:23 ?        00:00:00 ora_s000_idmdb
    oracle    5538     1  0 09:24 ?        00:00:00 ora_qmnc_idmdb
    oracle    5553     1  0 09:24 ?        00:00:00 ora_cjq0_idmdb
    oracle    5598     1  0 09:24 ?        00:00:00 ora_q000_idmdb
    oracle    5602     1  0 09:24 ?        00:00:00 ora_q001_idmdb
    oracle    5625     1  0 09:25 ?        00:00:00 ora_j000_idmdb
    oracle    5627     1  0 09:25 ?        00:00:00 ora_j001_idmdb
    oracle    5629     1  0 09:25 ?        00:00:00 ora_j002_idmdb
    oracle    5635  5118  0 09:25 pts/1    00:00:00 grep idmdb

    3. Perform tnsping on Database from Database and OID Servers

    On the database server:

    [oracle@tester bin]$ ./tnsping idmdb
    TNS Ping Utility for Linux: Version 11.2.0.3.0 – Production on 21-OCT-2013 09:28:21
    Copyright (c) 1997, 2011, Oracle.  All rights reserved.
    Used parameter files:
    Used TNSNAMES adapter to resolve the alias
    Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = tester.mycompany.com)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = idmdb)))
    OK (30 msec)

    On the OID server:

    [oracle@tester bin]$ export ORACLE_HOME=/u01/app/oracle/product/fmw/idm
    [oracle@tester config]$ $ORACLE_HOME/bin/tnsping //tester.mycompany.com:1521/idmdb
    TNS Ping Utility for Linux: Version 11.1.0.7.0 – Production on 21-OCT-2013 09:38:18
    Copyright (c) 1997, 2008, Oracle.  All rights reserved.
    Used parameter files:
    Used HOSTNAME adapter to resolve the alias
    Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=idmdb))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.217.142)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.217.142)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.217.142)(PORT=1521)))
    OK (10 msec)

    If OAM, OIM and SOA are on different servers, it is recommended that a similar check be made for them as well.
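    A hedged way to script that check: loop over the other hosts and run the same tnsping from each (the hostnames and the IDM ORACLE_HOME path below are placeholders for this example):

    for h in oamhost1 oimhost1 soahost1; do
      echo "== $h =="
      ssh oracle@$h '/u01/app/oracle/product/fmw/idm/bin/tnsping //tester.mycompany.com:1521/idmdb'
    done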

     

    Oracle Internet Directory (OID)

     

    1. Check that LDAP/LDAPS Listeners and Processes are Up

    For OID, use opmnctl and netstat to check the ports:

    [oracle@tester bin]$ ./opmnctl status -l
    Processes in Instance: oid1
    ———————————+——————–+———+———-+————+———-+———–+——
    ias-component                    | process-type       |     pid | status   |        uid |  memused |    uptime | ports
    ———————————+——————–+———+———-+————+———-+———–+——
    oid1                             | oidldapd           |    6135 | Alive    |  345332946 |   846788 |   0:00:26 | N/A
    oid1                             | oidldapd           |    6131 | Alive    |  345332945 |   846916 |   0:00:26 | N/A
    oid1                             | oidldapd           |    6127 | Alive    |  345332944 |   909764 |   0:00:26 | N/A
    oid1                             | oidldapd           |    6115 | Alive    |  345332943 |   845864 |   0:00:27 | N/A
    oid1                             | oidldapd           |    6105 | Alive    |  345332942 |   325448 |   0:00:30 | N/A
    oid1                             | oidmon             |    6074 | Alive    |  345332941 |   380332 |   0:00:34 | LDAPS:3131,LDAP:3060
    EMAGENT                          | EMAGENT            |    6075 | Alive    |  345332940 |    63848 |   0:00:33 | N/A

    [oracle@tester bin]$ netstat -an | grep 3060
    tcp        0      0 :::3060                     :::*                        LISTEN
    [oracle@tester bin]$ netstat -an | grep 3131
    tcp        0      0 :::3131                     :::*                        LISTEN

    2. Perform ldapbind over LDAP/LDAPS Ports

    [oracle@tester bin]$ export ORACLE_HOME=/u01/app/oracle/product/fmw/idm
    [oracle@tester config]$ cd $ORACLE_HOME/bin/
    [oracle@tester bin]$ ./ldapbind -D cn=orcladmin -q -h tester.mycompany.com -p 3060
    Please enter bind password:
    bind successful
    [oracle@tester bin]$ ./ldapbind -D cn=orcladmin -q -h tester.mycompany.com -p 3131 -U 1
    Please enter bind password:
    bind successful

    3. Perform ldapsearch over LDAP/LDAPS Ports

    [oracle@tester bin]$ export ORACLE_HOME=/u01/app/oracle/product/fmw/idm
    [oracle@tester config]$ cd $ORACLE_HOME/bin/
    [oracle@tester bin]$ ./ldapsearch -D cn=orcladmin -q -h tester.mycompany.com -p 3060 -s sub -b "cn=users,dc=mycompany,dc=com" "cn=oaamadmin"
    Please enter bind password:
    cn=oaamadmin,cn=Users,dc=mycompany,dc=com
    obpasswordexpirydate=2033-01-19T15:23:41Z
    objectclass=top
    objectclass=person
    objectclass=organizationalPerson
    objectclass=inetorgperson
    objectclass=orcluser
    objectclass=orcluserV2
    objectclass=orclIDXPerson
    objectclass=oblixPersonPwdPolicy
    objectclass=oblixOrgPerson
    objectclass=OIMPersonPwdPolicy
    userpassword={SSHA}7mkhojy5h/QnOBg6jwN2jGwcMk88DIk1d+p4ow==
    orclpassword={x- orcldbpwd}1.0:8778E460077C8CAF
    authpassword;oid={SASL/MD5}tEPZqagkbB8KzpO3JPZ2Uw==
    authpassword;oid={SASL/MD5-DN}Cor4GYRZnQnQDmihNzBYrg==
    authpassword;oid={SASL/MD5-U}DSUq+epZuKKFAPTX5aIhQg==
    authpassword;orclcommonpwd={MD5}tW4LTqSWIoO+52JSXC1JDw==
    authpassword;orclcommonpwd={X- ORCLIFSMD5}Qr85fKpR7fSS8bEKLHt+UQ==
    authpassword;orclcommonpwd={X- ORCLWEBDAV}xLe9oAZMJGGaRkYzgWkgPw==
    authpassword;orclcommonpwd={X- ORCLLMV}C23413A8A1E7665FC2265B23734E0DAC
    authpassword;orclcommonpwd={X- ORCLNTV}CF3A5525EE9414229E66279623ED5C58
    orclsamaccountname=oaamadmin
    mail=oaamadmin@company.com
    orclisenabled=ENABLED
    uid=oaamadmin
    givenname=oaamadmin
    sn=oaamadmin
    cn=oaamadmin

    [oracle@tester bin]$ ./ldapsearch -D cn=orcladmin -q -h tester.mycompany.com -p 3131 -U 1 -s sub -b "cn=users,dc=mycompany,dc=com" "cn=oaamadmin"
    Please enter bind password:
    cn=oaamadmin,cn=Users,dc=mycompany,dc=com
    obpasswordexpirydate=2033-01-19T15:23:41Z
    objectclass=top
    objectclass=person
    objectclass=organizationalPerson
    objectclass=inetorgperson
    objectclass=orcluser
    objectclass=orcluserV2
    objectclass=orclIDXPerson
    objectclass=oblixPersonPwdPolicy
    objectclass=oblixOrgPerson
    objectclass=OIMPersonPwdPolicy
    userpassword={SSHA}7mkhojy5h/QnOBg6jwN2jGwcMk88DIk1d+p4ow==
    orclpassword={x- orcldbpwd}1.0:8778E460077C8CAF
    authpassword;oid={SASL/MD5}tEPZqagkbB8KzpO3JPZ2Uw==
    authpassword;oid={SASL/MD5-DN}Cor4GYRZnQnQDmihNzBYrg==
    authpassword;oid={SASL/MD5-U}DSUq+epZuKKFAPTX5aIhQg==
    authpassword;orclcommonpwd={MD5}tW4LTqSWIoO+52JSXC1JDw==
    authpassword;orclcommonpwd={X- ORCLIFSMD5}Qr85fKpR7fSS8bEKLHt+UQ==
    authpassword;orclcommonpwd={X- ORCLWEBDAV}xLe9oAZMJGGaRkYzgWkgPw==
    authpassword;orclcommonpwd={X- ORCLLMV}C23413A8A1E7665FC2265B23734E0DAC
    authpassword;orclcommonpwd={X- ORCLNTV}CF3A5525EE9414229E66279623ED5C58
    orclsamaccountname=oaamadmin
    mail=oaamadmin@company.com
    orclisenabled=ENABLED
    uid=oaamadmin
    givenname=oaamadmin
    sn=oaamadmin
    cn=oaamadmin

     

    Oracle Virtual Directory (OVD)

     

    1. Check that LDAP/LDAPS/Admin Listeners and Processes are Up

    For OVD, use opmnctl and netstat to check the ports. Note that OVD also has an Admin port for ODSM connections to OVD:

    [oracle@tester bin]$ cd /u01/app/oracle/admin/ovd1/bin/
    [oracle@tester bin]$ ./opmnctl startall
    opmnctl startall: starting opmn and all managed processes…
    [oracle@tester bin]$ ./opmnctl status -l
    Processes in Instance: ovd1
    ———————————+——————–+———+———-+————+———-+———–+——
    ias-component                    | process-type       |     pid | status   |        uid |  memused |    uptime | ports
    ———————————+——————–+———+———-+————+———-+———–+——
    ovd1                             | OVD                |   14828 | Alive    |  391195326 |   662832 |   0:00:30 | https:8899,ldap:6501,ldaps:7501
    EMAGENT                          | EMAGENT            |   14829 | Alive    |  391195325 |    63848 |   0:00:30 | N/A

    [oracle@tester bin]$ netstat -an | grep 6501
    tcp        0      0 ::ffff:192.168.217.142:6501 :::*                        LISTEN
    [oracle@tester bin]$ netstat -an | grep 7501
    tcp        0      0 ::ffff:192.168.217.142:7501 :::*                        LISTEN
    [oracle@tester bin]$ netstat -an | grep 8899
    tcp        0      0 ::ffff:192.168.217.142:8899 :::*                        LISTEN

    2. Perform ldapbind over LDAP/LDAPS Ports

    [oracle@tester bin]$ export ORACLE_HOME=/u01/app/oracle/product/fmw/idm
    [oracle@tester config]$ cd $ORACLE_HOME/bin/
    [oracle@tester bin]$ ./ldapbind -D cn=orcladmin -q -h tester.mycompany.com -p 6501
    Please enter bind password:
    bind successful
    [oracle@tester bin]$ ./ldapbind -D cn=orcladmin -q -h tester.mycompany.com -p 7501 -U 1
    Please enter bind password:
    bind successful

    3. Perform ldapsearch over LDAP/LDAPS Ports

    [oracle@tester bin]$ ./ldapsearch -D cn=orcladmin -q -h tester.mycompany.com -p 6501 -s sub -b "cn=users,dc=mycompany,dc=com" "cn=oaamadmin"
    Please enter bind password:
    cn=oaamadmin,cn=Users,dc=mycompany,dc=com
    authpassword;orclcommonpwd={MD5}tW4LTqSWIoO+52JSXC1JDw==
    authpassword;orclcommonpwd={X- ORCLIFSMD5}Qr85fKpR7fSS8bEKLHt+UQ==
    authpassword;orclcommonpwd={X- ORCLWEBDAV}xLe9oAZMJGGaRkYzgWkgPw==
    authpassword;orclcommonpwd={X- ORCLLMV}C23413A8A1E7665FC2265B23734E0DAC
    authpassword;orclcommonpwd={X- ORCLNTV}CF3A5525EE9414229E66279623ED5C58
    orclisenabled=ENABLED
    orclsamaccountname=oaamadmin
    sn=oaamadmin
    mail=oaamadmin@company.com
    userpassword={SSHA}7mkhojy5h/QnOBg6jwN2jGwcMk88DIk1d+p4ow==
    givenname=oaamadmin
    uid=oaamadmin
    authpassword;oid={SASL/MD5}tEPZqagkbB8KzpO3JPZ2Uw==
    authpassword;oid={SASL/MD5-DN}Cor4GYRZnQnQDmihNzBYrg==
    authpassword;oid={SASL/MD5-U}DSUq+epZuKKFAPTX5aIhQg==
    orclpassword={x- orcldbpwd}1.0:8778E460077C8CAF
    obpasswordexpirydate=2033-01-19T15:23:41Z
    cn=oaamadmin
    objectclass=top
    objectclass=person
    objectclass=organizationalPerson
    objectclass=inetorgperson
    objectclass=orcluser
    objectclass=orcluserV2
    objectclass=orclIDXPerson
    objectclass=oblixPersonPwdPolicy
    objectclass=oblixOrgPerson
    objectclass=OIMPersonPwdPolicy

    [oracle@tester bin]$ ./ldapsearch -D cn=orcladmin -q -h tester.mycompany.com -p 7501 -U 1 -s sub -b "cn=users,dc=mycompany,dc=com" "cn=oaamadmin"
    Please enter bind password:
    cn=oaamadmin,cn=Users,dc=mycompany,dc=com
    authpassword;orclcommonpwd={MD5}tW4LTqSWIoO+52JSXC1JDw==
    authpassword;orclcommonpwd={X- ORCLIFSMD5}Qr85fKpR7fSS8bEKLHt+UQ==
    authpassword;orclcommonpwd={X- ORCLWEBDAV}xLe9oAZMJGGaRkYzgWkgPw==
    authpassword;orclcommonpwd={X- ORCLLMV}C23413A8A1E7665FC2265B23734E0DAC
    authpassword;orclcommonpwd={X- ORCLNTV}CF3A5525EE9414229E66279623ED5C58
    orclisenabled=ENABLED
    orclsamaccountname=oaamadmin
    sn=oaamadmin
    mail=oaamadmin@company.com
    userpassword={SSHA}7mkhojy5h/QnOBg6jwN2jGwcMk88DIk1d+p4ow==
    givenname=oaamadmin
    uid=oaamadmin
    authpassword;oid={SASL/MD5}tEPZqagkbB8KzpO3JPZ2Uw==
    authpassword;oid={SASL/MD5-DN}Cor4GYRZnQnQDmihNzBYrg==
    authpassword;oid={SASL/MD5-U}DSUq+epZuKKFAPTX5aIhQg==
    orclpassword={x- orcldbpwd}1.0:8778E460077C8CAF
    obpasswordexpirydate=2033-01-19T15:23:41Z
    cn=oaamadmin
    objectclass=top
    objectclass=person
    objectclass=organizationalPerson
    objectclass=inetorgperson
    objectclass=orcluser
    objectclass=orcluserV2
    objectclass=orclIDXPerson
    objectclass=oblixPersonPwdPolicy
    objectclass=oblixOrgPerson
    objectclass=OIMPersonPwdPolicy

     

    Node Manager

     

    1. Check that Listener and Process are Up

    [oracle@tester oracle]$ ps -ef | grep nodemanager
    oracle   16666     1  4 10:30 pts/4    00:00:05 /u01/app/oracle/product/fmw/jdk6/jre/bin/java -classpath /u01/app/oracle/product/fmw/jdk6/jre/lib/rt.jar:/u01/app/oracle/product/fmw/jdk6/jre/lib/i18n.jar:/u01/app/oracle/product/fmw/patch_wls1036/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u01/app/oracle/product/fmw/patch_ocp371/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u01/app/oracle/product/fmw/jdk6/lib/tools.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic_sp.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.jar:/u01/app/oracle/product/fmw/modules/features/weblogic.server.modules_10.3.6.0.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/webservices.jar:/u01/app/oracle/product/fmw/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/u01/app/oracle/product/fmw/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:/u01/app/oracle/product/fmw/utils/config/10.3/config-launch.jar:/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbynet.jar:/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbyclient.jar:/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbytools.jar -DListenAddress=ADMINVHN.mycompany.com -DNodeManagerHome=/u01/app/oracle/product/fmw/wlserver_10.3/common/nodemanager -DQuitEnabled=true -DListenPort=5556 weblogic.NodeManager -v
    oracle   16895 16543  0 10:32 pts/4    00:00:00 grep nodemanager

    [oracle@tester oracle]$ netstat -an | grep 5556
    tcp        0      0 ::ffff:192.168.217.142:5556 :::*                        LISTEN

    2. Perform nmConnect via WLST

    [oracle@tester bin]$ export MW_HOME=/u01/app/oracle/product/fmw
    [oracle@tester bin]$ cd $MW_HOME/oracle_common/common/bin
    [oracle@tester bin]$ ./wlst.sh
    CLASSPATH=/u01/app/oracle/product/fmw/patch_wls1036/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u01/app/oracle/product/fmw/patch_ocp371/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/u01/app/oracle/product/fmw/jdk6/lib/tools.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic_sp.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.jar:/u01/app/oracle/product/fmw/modules/features/weblogic.server.modules_10.3.6.0.jar:/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/webservices.jar:/u01/app/oracle/product/fmw/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/u01/app/oracle/product/fmw/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar::/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrf-wlstman.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/lib/adfscripting.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/lib/adf-share-mbeans-wlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/lib/mdswlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/auditwlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/igfwlsthelp.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/jps-wlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/jrf-wlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/oamap_help.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/oamAuthnProvider.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/ossoiap_help.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/ossoiap.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/ovdwlsthelp.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/sslconfigwlst.jar:/u01/app/oracle/product/fmw/oracle_common/common/wlst/resources/wsm-wlst.jar:/u01/app/oracle/product/fmw/utils/config/10.3/config-launch.jar::/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbynet.jar:/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbyclient.jar:/u01/app/oracle/product/fmw/wlserver_10.3/common/derby/lib/derbytools.jar::
    Initializing WebLogic Scripting Tool (WLST) …
    Welcome to WebLogic Server Administration Scripting Shell
    Type help() for help on available commands
    wls:/offline> nmConnect('nmAdmin','Welcome1','tester.mycompany.com','5556','IDMDomain','/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain')
    Connecting to Node Manager …
    Successfully Connected to Node Manager.
    wls:/nm/IDMDomain> nmStart('AdminServer')
    Starting server AdminServer …
    Successfully started server AdminServer …
    wls:/nm/IDMDomain> nmKill('AdminServer')
    Killing server AdminServer …
    Successfully killed server AdminServer …
    wls:/nm/IDMDomain> exit()
    Exiting WebLogic Scripting Tool.
    [oracle@tester bin]$

     

    WebLogic Server (WLS)

     

    1. Check AdminServer Listener and Process

    [oracle@tester ~]$ ps -ef | grep AdminServer
    oracle   18303 18249 23 10:47 ?        00:02:25 /u01/app/oracle/product/fmw/jdk6/bin/java -jrockit -Xms768m -Xmx1536m -Dweblogic.Name=AdminServer -Djava.security.policy=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.system.BootIdentityFile=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/AdminServer/security/boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Xverify:none -da -Dplatform.home=/u01/app/oracle/product/fmw/wlserver_10.3 -Dwls.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dweblogic.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dcommon.components.home=/u01/app/oracle/product/fmw/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Djrockit.optfile=/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/servers/AdminServer -Doracle.domain.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig -Digf.arisidbeans.carmlloc=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/carml -Digf.arisidstack.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/AdminServer/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u01/app/oracle/product/fmw/iam/oam/agent/modules/oracle.oam.wlsagent_11.1.1,/u01/app/oracle/product/fmw/iam/server/loginmodule/wls,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.ossoiap_11.1.1,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol|oracle.fabric.common.classloaderurl.handler|oracle.fabric.common.uddiurl.handler|oracle.bpm.io.fs.protocol -Dweblogic.jdbc.remoteEnabled=false -DOAM_POLICY_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-policy.xml -DOAM_CONFIG_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-config.xml -DOAM_ORACLE_HOME=/u01/app/oracle/product/fmw/iam/oam -Doracle.security.am.SERVER_INSTNCE_NAME=AdminServer -Does.jars.home=/u01/app/oracle/product/fmw/iam/oam/server/lib/oes-d8 -Does.integration.path=/u01/app/oracle/product/fmw/iam/oam/server/lib/oeslib/oes-integration.jar -Does.enabled=true -Djavax.xml.soap.SOAPConnectionFactory=weblogic.wsee.saaj.SOAPConnectionFactoryImpl -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Djavax.xml.soap.SOAPFactory=oracle.j2ee.ws.saaj.soap.SOAPFactoryImpl -DXL.HomeDir=/u01/app/oracle/product/fmw/iam/server -Djava.security.auth.login.config=/u01/app/oracle/product/fmw/iam/server/config/authwl.conf -Dorg.owasp.esapi.resources=/u01/app/oracle/product/fmw/iam/server/apps/oim.ear/APP-INF/classes -da:org.apache.xmlbeans… -Didm.oracle.home=/u01/app/oracle/product/fmw/idm -Xms512m -Xmx1024m -Xss512K -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.management.discover=false -Dsoa.archives.dir=/u01/app/oracle/product/fmw/soa/soa -Dsoa.oracle.home=/u01/app/oracle/product/fmw/soa -Dsoa.instance.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9778 
-Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/DemoTrust.jks -Dums.oracle.home=/u01/app/oracle/product/fmw/soa -Dem.oracle.home=/u01/app/oracle/product/fmw/oracle_common -Djava.awt.headless=true -Dweblogic.management.discover=true -Dwlw.iterativeDev= -Dwlw.testConsole= -Dwlw.logErrorsToConsole= -Dweblogic.ext.dirs=/u01/app/oracle/product/fmw/patch_wls1036/profiles/default/syse
    oracle   19595 19540  0 10:58 pts/5    00:00:00 grep AdminServer

    [oracle@tester ~]$ netstat -an | grep 7001
    tcp        0      0 ::ffff:192.168.217.142:7001 :::*                        LISTEN

    2. Log in to WLS Console via AdminServer Port


    3. Check that All Servers are Up

    Navigate to Summary of Servers and ensure that all managed servers have started:


    4. Check that Users/Groups are Visible

    Navigate to Security realms > myrealm and ensure that users and groups from both the Default and OVD Authenticators are visible:

    5. Log in to FMW Control via AdminServer Port

     


    6. Check that All Components are Up

     


     

    Oracle Access Manager Console (OAM Console)

     

    1. Log in to OAM Console via AdminServer Port

     


     


    Navigate to some sample configuration screens to ensure that they are properly displayed:


     

    Oracle Directory Services Manager (ODSM)

     

    1. Check ODSM Listener and Process

    [oracle@tester ~]$ ps -ef | grep wls_ods1
    oracle   19093 19039  8 10:54 ?        00:05:44 /u01/app/oracle/product/fmw/jdk6/bin/java -jrockit -Xms768m -Xmx1536m -Dweblogic.Name=wls_ods1 -Djava.security.policy=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.system.BootIdentityFile=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_ods1/data/nodemanager/boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.ignoreHostnameVerification=false -Dweblogic.ReverseDNSAllowed=false -Xverify:none -da -Dplatform.home=/u01/app/oracle/product/fmw/wlserver_10.3 -Dwls.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dweblogic.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dcommon.components.home=/u01/app/oracle/product/fmw/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Djrockit.optfile=/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/servers/wls_ods1 -Doracle.domain.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig -Digf.arisidbeans.carmlloc=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/carml -Digf.arisidstack.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_ods1/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u01/app/oracle/product/fmw/iam/oam/agent/modules/oracle.oam.wlsagent_11.1.1,/u01/app/oracle/product/fmw/iam/server/loginmodule/wls,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.ossoiap_11.1.1,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol|oracle.fabric.common.classloaderurl.handler|oracle.fabric.common.uddiurl.handler|oracle.bpm.io.fs.protocol -Dweblogic.jdbc.remoteEnabled=false -DOAM_POLICY_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-policy.xml -DOAM_CONFIG_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-config.xml -DOAM_ORACLE_HOME=/u01/app/oracle/product/fmw/iam/oam -Doracle.security.am.SERVER_INSTNCE_NAME=wls_ods1 -Does.jars.home=/u01/app/oracle/product/fmw/iam/oam/server/lib/oes-d8 -Does.integration.path=/u01/app/oracle/product/fmw/iam/oam/server/lib/oeslib/oes-integration.jar -Does.enabled=true -Djavax.xml.soap.SOAPConnectionFactory=weblogic.wsee.saaj.SOAPConnectionFactoryImpl -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Djavax.xml.soap.SOAPFactory=oracle.j2ee.ws.saaj.soap.SOAPFactoryImpl -DXL.HomeDir=/u01/app/oracle/product/fmw/iam/server -Djava.security.auth.login.config=/u01/app/oracle/product/fmw/iam/server/config/authwl.conf -Dorg.owasp.esapi.resources=/u01/app/oracle/product/fmw/iam/server/apps/oim.ear/APP-INF/classes -da:org.apache.xmlbeans… -Didm.oracle.home=/u01/app/oracle/product/fmw/idm -Xms512m -Xmx1024m -Xss512K -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.management.discover=false -Dsoa.archives.dir=/u01/app/oracle/product/fmw/soa/soa -Dsoa.oracle.home=/u01/app/oracle/product/fmw/soa -Dsoa.instance.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain 
-Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9778 -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/DemoTrust.jks -Dums.oracle.home=/u01/app/oracle/product/fmw/soa -Dem.oracle.home=/u01/app/oracle/product/fmw/oracle_common -Djava.awt.headless=true -Dweblogic.management.discover=false -Dweblogic.management.server=http://ADMINVHN.mycompany.com:700
    oracle   25148 19540  0 11:58 pts/5    00:00:00 grep wls_ods1

    [oracle@tester ~]$ netstat -an | grep 7006
    tcp        0      0 ::ffff:192.168.217.142:7006 :::*                        LISTEN

    2. Connect to OID via ODSM Port

     

    validation_blog010

     

    validation_blog011

    3. Browse OID Directory Tree

    Ensure that users and groups are populated and visible:

    validation_blog012
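If you also want to confirm this outside of ODSM, a small JNDI search against OID can verify that user entries are present. This is only a hedged sketch: the host, port, bind credentials, and the users container DN below are placeholders that must be replaced with the values for your environment.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class OidUserCheck {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // Placeholder OID host, port, and bind credentials; substitute your own.
        env.put(Context.PROVIDER_URL, "ldap://oid.mycompany.com:3060");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=orcladmin");
        env.put(Context.SECURITY_CREDENTIALS, "password");

        InitialDirContext ctx = new InitialDirContext(env);
        SearchControls controls = new SearchControls();
        controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

        // Placeholder users container; adjust the DN for your identity store layout.
        NamingEnumeration<SearchResult> users = ctx.search(
                "cn=Users,dc=mycompany,dc=com", "(objectclass=inetorgperson)", controls);
        while (users.hasMore()) {
            System.out.println(users.next().getNameInNamespace());
        }
        ctx.close();
    }
}

If the search returns user entries, the identity store is populated; the ODSM screens above give the same confirmation visually.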

    4. Connect to OVD via ODSM Port

     

    validation_blog013

    5. Browse OVD Directory Tree

     

    validation_blog014

     

    validation_blog015

     

    Oracle Access Manager (OAM)

     

    1. Check OAM Server Listener and Process

    [oracle@tester ~]$ ps -ef | grep wls_oam1
    oracle   18766 18712  7 10:51 ?        00:06:32 /u01/app/oracle/product/fmw/jdk6/bin/java -jrockit -Xms768m -Xmx1536m -Dweblogic.Name=wls_oam1 -Djava.security.policy=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.system.BootIdentityFile=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_oam1/data/nodemanager/boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.ignoreHostnameVerification=false -Dweblogic.ReverseDNSAllowed=false -Xverify:none -da -Dplatform.home=/u01/app/oracle/product/fmw/wlserver_10.3 -Dwls.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dweblogic.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dcommon.components.home=/u01/app/oracle/product/fmw/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Djrockit.optfile=/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/servers/wls_oam1 -Doracle.domain.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig -Digf.arisidbeans.carmlloc=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/carml -Digf.arisidstack.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_oam1/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u01/app/oracle/product/fmw/iam/oam/agent/modules/oracle.oam.wlsagent_11.1.1,/u01/app/oracle/product/fmw/iam/server/loginmodule/wls,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.ossoiap_11.1.1,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol|oracle.fabric.common.classloaderurl.handler|oracle.fabric.common.uddiurl.handler|oracle.bpm.io.fs.protocol -Dweblogic.jdbc.remoteEnabled=false -DOAM_POLICY_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-policy.xml -DOAM_CONFIG_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-config.xml -DOAM_ORACLE_HOME=/u01/app/oracle/product/fmw/iam/oam -Doracle.security.am.SERVER_INSTNCE_NAME=wls_oam1 -Does.jars.home=/u01/app/oracle/product/fmw/iam/oam/server/lib/oes-d8 -Does.integration.path=/u01/app/oracle/product/fmw/iam/oam/server/lib/oeslib/oes-integration.jar -Does.enabled=true -Djavax.xml.soap.SOAPConnectionFactory=weblogic.wsee.saaj.SOAPConnectionFactoryImpl -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Djavax.xml.soap.SOAPFactory=oracle.j2ee.ws.saaj.soap.SOAPFactoryImpl -DXL.HomeDir=/u01/app/oracle/product/fmw/iam/server -Djava.security.auth.login.config=/u01/app/oracle/product/fmw/iam/server/config/authwl.conf -Dorg.owasp.esapi.resources=/u01/app/oracle/product/fmw/iam/server/apps/oim.ear/APP-INF/classes -da:org.apache.xmlbeans… -Didm.oracle.home=/u01/app/oracle/product/fmw/idm -Xms512m -Xmx1024m -Xss512K -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.management.discover=false -Dsoa.archives.dir=/u01/app/oracle/product/fmw/soa/soa -Dsoa.oracle.home=/u01/app/oracle/product/fmw/soa -Dsoa.instance.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain 
-Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9778 -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/DemoTrust.jks -Dums.oracle.home=/u01/app/oracle/product/fmw/soa -Dem.oracle.home=/u01/app/oracle/product/fmw/oracle_common -Djava.awt.headless=true -Dweblogic.management.discover=false -Dweblogic.management.server=http://ADMINVHN.mycompany.com:700
    oracle   26309 19540  0 12:13 pts/5    00:00:00 grep wls_oam1

    [oracle@tester ~]$ netstat -an | grep 14100
    tcp        0      0 ::ffff:192.168.217.14:14100 :::*                        LISTEN

    2. Check that /oam/server via OAM Server Port Responds

    Note that the error below is expected behavior. This test is meant to ensure only that the server responds to the HTTP request.

    [oracle@tester ~]$ wget http://tester.mycompany.com:14100/oam/server
--2013-10-21 12:15:35--  http://tester.mycompany.com:14100/oam/server
Resolving tester.mycompany.com... 192.168.217.142
Connecting to tester.mycompany.com|192.168.217.142|:14100... connected.
HTTP request sent, awaiting response... 404 Not Found
2013-10-21 12:15:36 ERROR 404: Not Found.

     

    Oracle Identity Manager (OIM)

     

    1. Check OIM Listener and Process

    [oracle@tester ~]$ ps -ef | grep wls_oim1
    oracle   19390 19336 10 10:56 ?        00:09:04 /u01/app/oracle/product/fmw/jdk6/bin/java -jrockit -Xms768m -Xmx1536m -Dweblogic.Name=wls_oim1 -Djava.security.policy=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.system.BootIdentityFile=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_oim1/data/nodemanager/boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.ignoreHostnameVerification=false -Dweblogic.ReverseDNSAllowed=false -Djps.subject.cache.key=5 -Djps.subject.cache.ttl=600000 -Xverify:none -da -Dplatform.home=/u01/app/oracle/product/fmw/wlserver_10.3 -Dwls.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dweblogic.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dcommon.components.home=/u01/app/oracle/product/fmw/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Djrockit.optfile=/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/servers/wls_oim1 -Doracle.domain.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig -Digf.arisidbeans.carmlloc=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/carml -Digf.arisidstack.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_oim1/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u01/app/oracle/product/fmw/iam/oam/agent/modules/oracle.oam.wlsagent_11.1.1,/u01/app/oracle/product/fmw/iam/server/loginmodule/wls,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.ossoiap_11.1.1,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol|oracle.fabric.common.classloaderurl.handler|oracle.fabric.common.uddiurl.handler|oracle.bpm.io.fs.protocol -Dweblogic.jdbc.remoteEnabled=false -DOAM_POLICY_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-policy.xml -DOAM_CONFIG_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-config.xml -DOAM_ORACLE_HOME=/u01/app/oracle/product/fmw/iam/oam -Doracle.security.am.SERVER_INSTNCE_NAME=wls_oim1 -Does.jars.home=/u01/app/oracle/product/fmw/iam/oam/server/lib/oes-d8 -Does.integration.path=/u01/app/oracle/product/fmw/iam/oam/server/lib/oeslib/oes-integration.jar -Does.enabled=true -Djavax.xml.soap.SOAPConnectionFactory=weblogic.wsee.saaj.SOAPConnectionFactoryImpl -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Djavax.xml.soap.SOAPFactory=oracle.j2ee.ws.saaj.soap.SOAPFactoryImpl -DXL.HomeDir=/u01/app/oracle/product/fmw/iam/server -Djava.security.auth.login.config=/u01/app/oracle/product/fmw/iam/server/config/authwl.conf -Dorg.owasp.esapi.resources=/u01/app/oracle/product/fmw/iam/server/apps/oim.ear/APP-INF/classes -da:org.apache.xmlbeans… -Didm.oracle.home=/u01/app/oracle/product/fmw/idm -Xms512m -Xmx1024m -Xss512K -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.management.discover=false -Dsoa.archives.dir=/u01/app/oracle/product/fmw/soa/soa -Dsoa.oracle.home=/u01/app/oracle/product/fmw/soa 
-Dsoa.instance.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9778 -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/DemoTrust.jks -Dums.oracle.home=/u01/app/oracle/product/fmw/soa -Dem.oracle.home=/u01/app/oracle/product/fmw/oracle_common -Djava.awt.headless=true -Dweblogic.management.discover=false -Dweb
    oracle   26875 19540  0 12:20 pts/5    00:00:00 grep wls_oim1

    [oracle@tester ~]$ netstat -an | grep 14000
    tcp        0      0 ::ffff:192.168.217.14:14000 :::*                        LISTEN

    2. Log in to OIM Admin Console

     

    validation_blog016

     

    validation_blog017

    3. Look Up User

Navigate to the Administration console and search for a sample user:

    validation_blog018

    4. Test Role Grant/Revocation

    Assign a role to the sample user:

    validation_blog019

     

    validation_blog020

     

    validation_blog021

    Confirm via ODSM that the user has been added to the associated group in OID:

    validation_blog022

    Revoke the role from the sample user:

    validation_blog023

     

    validation_blog024

    Confirm the revocation via ODSM:

validation_blog025

5. Test a Reconciliation Process

    Search for a Fusion Applications reconciliation scheduled job and run it:

    validation_blog026

     

    validation_blog027

     

    validation_blog028

     

    validation_blog029

     

    validation_blog030

    Confirm that the reconciliation was successful:

    validation_blog031

     

    Oracle Service Oriented Architecture Suite (SOA)

     

    1. Check SOA Listener and Process

    [oracle@tester ~]$ ps -ef | grep wls_soa1
    oracle   20160 20106 11 11:03 ?        00:10:46 /u01/app/oracle/product/fmw/jdk6/bin/java -jrockit -Xms768m -Xmx1536m -Dweblogic.Name=wls_soa1 -Djava.security.policy=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/weblogic.policy -Dweblogic.system.BootIdentityFile=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_soa1/data/nodemanager/boot.properties -Dweblogic.nodemanager.ServiceEnabled=true -Dweblogic.security.SSL.ignoreHostnameVerification=false -Dweblogic.ReverseDNSAllowed=false -Djps.subject.cache.key=5 -Djps.subject.cache.ttl=600000 -Xverify:none -da -Dplatform.home=/u01/app/oracle/product/fmw/wlserver_10.3 -Dwls.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dweblogic.home=/u01/app/oracle/product/fmw/wlserver_10.3/server -Dcommon.components.home=/u01/app/oracle/product/fmw/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Djrockit.optfile=/u01/app/oracle/product/fmw/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/servers/wls_soa1 -Doracle.domain.config.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig -Digf.arisidbeans.carmlloc=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/carml -Digf.arisidstack.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/servers/wls_soa1/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u01/app/oracle/product/fmw/iam/oam/agent/modules/oracle.oam.wlsagent_11.1.1,/u01/app/oracle/product/fmw/iam/server/loginmodule/wls,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.ossoiap_11.1.1,/u01/app/oracle/product/fmw/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol|oracle.fabric.common.classloaderurl.handler|oracle.fabric.common.uddiurl.handler|oracle.bpm.io.fs.protocol -Dweblogic.jdbc.remoteEnabled=false -DOAM_POLICY_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-policy.xml -DOAM_CONFIG_FILE=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain/config/fmwconfig/oam-config.xml -DOAM_ORACLE_HOME=/u01/app/oracle/product/fmw/iam/oam -Doracle.security.am.SERVER_INSTNCE_NAME=wls_soa1 -Does.jars.home=/u01/app/oracle/product/fmw/iam/oam/server/lib/oes-d8 -Does.integration.path=/u01/app/oracle/product/fmw/iam/oam/server/lib/oeslib/oes-integration.jar -Does.enabled=true -Djavax.xml.soap.SOAPConnectionFactory=weblogic.wsee.saaj.SOAPConnectionFactoryImpl -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Djavax.xml.soap.SOAPFactory=oracle.j2ee.ws.saaj.soap.SOAPFactoryImpl -DXL.HomeDir=/u01/app/oracle/product/fmw/iam/server -Djava.security.auth.login.config=/u01/app/oracle/product/fmw/iam/server/config/authwl.conf -Dorg.owasp.esapi.resources=/u01/app/oracle/product/fmw/iam/server/apps/oim.ear/APP-INF/classes -da:org.apache.xmlbeans… -Didm.oracle.home=/u01/app/oracle/product/fmw/idm -Xms512m -Xmx1024m -Xss512K -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.management.discover=false -Dsoa.archives.dir=/u01/app/oracle/product/fmw/soa/soa -Dsoa.oracle.home=/u01/app/oracle/product/fmw/soa 
-Dsoa.instance.home=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain -Dtangosol.coherence.clusteraddress=227.7.7.9 -Dtangosol.coherence.clusterport=9778 -Dtangosol.coherence.log=jdk -Djavax.xml.soap.MessageFactory=oracle.j2ee.ws.saaj.soap.MessageFactoryImpl -Dweblogic.transaction.blocking.commit=true -Dweblogic.transaction.blocking.rollback=true -Djavax.net.ssl.trustStore=/u01/app/oracle/product/fmw/wlserver_10.3/server/lib/DemoTrust.jks -Dums.oracle.home=/u01/app/oracle/product/fmw/soa -Dem.oracle.home=/u01/app/oracle/product/fmw/oracle_common -Djava.awt.headless=true -Dweblogic.management.discover=false -Dweb
    oracle   28045 19540  0 12:35 pts/5    00:00:00 grep wls_soa1

    [oracle@tester ~]$ netstat -an | grep 8001
    tcp        0      0 ::ffff:192.168.217.142:8001 :::*                        LISTEN
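Before moving on to the browser-based checks, you can optionally confirm over HTTP that the managed server answers on its listen port, similar to the wget check used for OAM above. This is a hedged sketch in Java; the /soa-infra path and the port are assumptions based on a default installation and may differ in your environment.

import java.net.HttpURLConnection;
import java.net.URL;

public class SoaPortProbe {
    public static void main(String[] args) throws Exception {
        // Assumed default SOA managed server port and diagnostic application path;
        // adjust host, port, and path for your environment.
        URL url = new URL("http://tester.mycompany.com:8001/soa-infra");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);

        // Any HTTP status (200, a 302 redirect to login, even 404) proves the
        // listener is up; a connect exception means the server is not responding.
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}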

    2. Log in to SOA Diagnostic Page

    Note that the login credentials to be used here are the WebLogic admin credentials:

    validation_blog032

     

    validation_blog033

     

    Oracle HTTP Server (OHS)

     

    1. Check OHS Listener and Process

    [oracle@tester bin]$ ./opmnctl status -l
    Processes in Instance: web1
---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
ias-component                    | process-type       |     pid | status   |        uid |  memused |    uptime | ports
---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
ohs1                             | OHS                |   22727 | Alive    |  342892205 |   736376 |   1:56:59 | https:9999,https:4443,http:7777

    [oracle@tester bin]$ ps -ef | grep ohs
    oracle   22727 22705  0 11:38 ?        00:00:01 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
    oracle   22735 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/odl_rotatelogs -l /u01/app/oracle/admin/web1/diagnostics/logs/OHS/ohs1/ohs1-%Y%m%d%H%M%S.log 10M 70M
    oracle   22736 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/odl_rotatelogs /u01/app/oracle/admin/web1/diagnostics/logs/OHS/ohs1/access_log 43200
    oracle   22737 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/odl_rotatelogs -l -h:/u01/app/oracle/admin/web1/config/OHS/ohs1/component_events.xml_ohs1 /u01/app/oracle/admin/web1/auditlogs/OHS/ohs1/audit-pid22727-%Y%m%d%H%M%S.log 1M 4M
    oracle   22738 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
    oracle   22741 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
    oracle   22743 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
    oracle   22855 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
    oracle   22911 22727  0 11:38 ?        00:00:00 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
    oracle   22912 22727  0 11:38 ?        00:00:01 /u01/app/oracle/product/fmw/web/ohs/bin/httpd.worker -DSSL
    oracle   32727  5753  0 13:36 pts/2    00:00:00 grep ohs

    [oracle@tester bin]$ netstat -an | grep 7777
    tcp        0      0 :::7777                     :::*                        LISTEN
    [oracle@tester bin]$ netstat -an | grep 4443
    tcp        0      0 :::4443                     :::*                        LISTEN

    2. Log in to OAM Console via SSO

     

    validation_blog034

     

    validation_blog035

     

    validation_blog036

    Be sure to test that the Sign Out link returns the user to the login page:

    validation_blog037

    How to Recover Initial Messages (Payload) from SOA Audit for Mediator and BPEL components


    Introduction

In Fusion Applications, the status of a SOA composite instance is either running, completed, faulted, or stale. Composite instances become stale immediately (irrespective of their current status) when the respective composite is redeployed with the same version. The messages (payload) are stored in SOA audit tables until they are purged. Users can go through Enterprise Manager to view the audit trail and the respective messages of each composite instance, which is good for debugging. However, there are situations where you want to re-submit the initiation of SOA composite instances in bulk, for the following reasons:

• The composite was redeployed with the same version number, which resulted in all respective instances (completed successfully, faulted, or in-flight) becoming stale (“Stale” status)
• Instances failed because downstream applications failed, and the respective composite had no ability to capture the initial message in persistent storage for a later retry

In these cases, it may be necessary to capture the initial messages (payloads) of many instances in bulk in order to resubmit them. This can be done programmatically through the SOA Facade API. The Facade API is part of Oracle SOA Suite’s Infrastructure Management Java API, which exposes operations and attributes of composites, components, services, references and so on. As long as the instances have not been purged, a developer can leverage the SOA Facade API to retrieve the initial messages of either Mediator or BPEL components programmatically. The captured messages can either be resubmitted immediately or stored in persistent storage, such as a file, JMS, or a database, for later submission. There are several samples available, but this post takes the approach of creating a SOA composite that provides the ability to retrieve the initial message of a Mediator or BPEL component. The sample provides the framework, and you can tailor it to your requirements.

    Main Article

    SOA Facade API

Please refer to the SOA Facade API documentation for complete details. The SOA audit trails and messages work internally as follows:

    • The “Audit Level” should be either Production or Development to capture the initial payload
    • The “Audit Trail Threshold” determines the location of the initial payload.  If the threshold is exceeded, the View XML link is shown in the audit trail instead of the payload. The default value is 50,000 bytes. These large payloads are stored in a separate database table: audit_details.

    Please refer to the following document for more details on these properties.

Since the SOA composite we are developing will be deployed on the same SOA server, you do not need user credentials to create the Locator object. This is all you need:

    Locator locator = LocatorFactory.createLocator();

Please see the SOA Facade API document for more information on the Locator class.

Once the Locator object is created, you can look up composites and apply various filters to narrow down the search to the respective components. This is all explained in detail, with examples, in the SOA Facade document. Here, we focus on how to retrieve the initial messages of the Mediator and BPEL components in order to resubmit them.
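As a minimal sketch of that lookup pattern (assuming the Facade classes live in the standard oracle.soa.management.facade and oracle.soa.management.util packages available on the SOA server classpath), a component instance lookup might look like this; the component name and instance ID are placeholders:

import java.util.List;

import oracle.soa.management.facade.Component;
import oracle.soa.management.facade.ComponentInstance;
import oracle.soa.management.facade.Locator;
import oracle.soa.management.facade.LocatorFactory;
import oracle.soa.management.util.ComponentInstanceFilter;

public class FacadeLookupSketch {
    public static void main(String[] args) throws Exception {
        // Running on the same SOA server, so no credentials are required.
        Locator locator = LocatorFactory.createLocator();

        // "DummyBPELProcess" and "70001" are placeholders; substitute the
        // component name and component instance ID you want to inspect.
        Component component = (Component) locator.lookupComponent("DummyBPELProcess");

        ComponentInstanceFilter filter = new ComponentInstanceFilter();
        filter.setId("70001");

        List<ComponentInstance> instances = component.getInstances(filter);
        for (ComponentInstance instance : instances) {
            System.out.println(instance.getCompositeInstanceId() + " : " + instance.getStatus());
            // The audit trail XML is the starting point for recovering the payload.
            String auditTrailXml = (String) instance.getAuditTrail();
            System.out.println(auditTrailXml);
        }
    }
}

The complete embedded BPEL code later in this post uses the same Locator and filter calls; only the way the names and IDs are supplied differs.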

    How to retrieve initial payload from BPEL?

In BPEL, the initial payload is either embedded in the audit trail or linked from it. This is controlled by the audit trail threshold value: if the payload size exceeds the threshold, the audit trail contains a link instead. This is the main method to get the audit trail:

auditTrailXml = (String) compInst.getAuditTrail();

/* "compInst" is a ComponentInstance obtained from the component lookup: */
Component lookupComponent = (Component) locator.lookupComponent(componentName);
ComponentInstanceFilter compInstFilter = new ComponentInstanceFilter();
compInstFilter.setId(componentId);
List<ComponentInstance> compInstances = lookupComponent.getInstances(compInstFilter);

     

If the payload size exceeds the audit threshold value, the actual payload is stored separately in the “audit_details” table and only a link appears in the audit trail. The following Facade API call retrieves it:

    auditDetailXml = (String)locator.executeComponentInstanceMethod(componentType +”:”+ componentId, auditMethod, new String[]{auditId});

The “auditId” for BPEL is always “0”.

     

    How to retrieve initial payload from Mediator

The initial payload of a Mediator is never embedded in the audit trail. It is always linked, and the retrieval syntax is the same as for BPEL when the payload size exceeds the audit threshold value. However, the “auditId” is embedded in the Mediator audit trail and must be parsed out before the initial payload can be retrieved. This is the code snippet to get the “auditId” from the Mediator audit trail:

if (componentType.equals("mediator")) {
    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    DocumentBuilder db = dbf.newDocumentBuilder();
    Document document = db.parse(new InputSource(new StringReader(auditTrailXml)));
    NodeList nodeList = document.getElementsByTagName("event");
    String attribute = nodeList.item(0).getAttributes().getNamedItem("auditId").getNodeValue();
    addAuditTrailEntry("The Audit is: " + attribute);
    auditId = attribute;
    auditMethod = "getAuditMessage";
}

/* Once you have the "auditId" from the above code, the syntax to get the initial payload is the same as in BPEL. */
auditDetailXml = (String) locator.executeComponentInstanceMethod(componentType + ":" + componentId, auditMethod, new String[]{auditId});

     

    Complete Java embedded code in BPEL

    try { 
    String componentInstanceID = new Long(getInstanceId()).toString();    
    addAuditTrailEntry("This Run time Component Instance ID is "+componentInstanceID);  
    
    XMLElement compositeNameVar = (XMLElement) getVariableData("inputVariable", "payload", "/client:process/client:compositeName");
    String compositeName = compositeNameVar.getTextContent();  
    
    XMLElement compositeIdVar = (XMLElement) getVariableData("inputVariable", "payload", "/client:process/client:compositeId");
    String compositeId = compositeIdVar.getTextContent();  
    
    XMLElement componentTypeVar = (XMLElement) getVariableData("inputVariable", "payload", "/client:process/client:componentType");
    String componentType = componentTypeVar.getTextContent();  
    
    XMLElement componentNameVar = (XMLElement) getVariableData("inputVariable", "payload", "/client:process/client:componentName");
    String componentName = componentNameVar.getTextContent();  
    
    XMLElement componentIdVar = (XMLElement) getVariableData("inputVariable", "payload", "/client:process/client:componentId");
    String componentId = componentIdVar.getTextContent();  
    
    String auditDetailXml = "null";
    String auditTrailXml = "null";
    String auditMethod = "getAuditDetails";
    String auditId = "0";
    
    addAuditTrailEntry("The lookup Composite Instance Name is "+compositeName);  
    addAuditTrailEntry("The lookup Composite Instance ID is "+compositeId);  
    addAuditTrailEntry("The lookup Component Instance Name is "+componentName);
    addAuditTrailEntry("The lookup Component Instance Type is " + componentType);
    addAuditTrailEntry("The lookup Component Instance ID is "+componentId);  
    
    Locator locator = LocatorFactory.createLocator();  
    Composite composite = (Composite)locator.lookupComposite(compositeName);  
    Component lookupComponent = (Component)locator.lookupComponent(componentName);  
    
    ComponentInstanceFilter compInstFilter = new ComponentInstanceFilter();  
    
    compInstFilter.setId(componentId);
    
    List<ComponentInstance> compInstances = lookupComponent.getInstances(compInstFilter);  
    if (compInstances != null) {  
        addAuditTrailEntry("====Audit Trail of Instance===");  
        for (ComponentInstance compInst : compInstances) {  
            String compositeInstanceId = compInst.getCompositeInstanceId(); 
            String componentStatus = compInst.getStatus(); 
            addAuditTrailEntry("Composite Instance ID is "+compositeInstanceId);  
            addAuditTrailEntry("Component Status is "+componentStatus);  
    
            addAuditTrailEntry("Get Audit Trail");
            auditTrailXml = (String)compInst.getAuditTrail();
    
            if (componentType.equals("mediator")) {
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                DocumentBuilder db = dbf.newDocumentBuilder();
                Document document = db.parse(new InputSource(new StringReader(auditTrailXml)));
                NodeList nodeList = document.getElementsByTagName("event");
                String attribute = nodeList.item(0).getAttributes().getNamedItem("auditId").getNodeValue();
                addAuditTrailEntry("The Audit is: " + attribute);
    
                auditId = attribute;
                auditMethod="getAuditMessage";
                }
    
            addAuditTrailEntry("Received Audit Trail");
    
            addAuditTrailEntry("Get Audit Details of: "+ componentType +":"+ componentId + "for auditId: " + auditId);
    
            try {
                auditDetailXml = (String)locator.executeComponentInstanceMethod(componentType +":"+ componentId, auditMethod, new String[]{auditId});
            } catch (Exception e) { 
            addAuditTrailEntry("Exception in getting audit details:" + e);
            }
    
            addAuditTrailEntry("Received Audit Details");
    
            setVariableData("auditTrailString", "payload", "/client:AuditTrailString/client:auditTrail", auditTrailXml);
            setVariableData("auditDetailString", "payload", "/client:AuditDetailString/client:auditDetail", auditDetailXml);
    
            addAuditTrailEntry("BPEL Variables set");
        }  
    } 
    
    } catch (Exception e) { 
        addAuditTrailEntry("Exception in getting Audit Trails and Details"); 
    }
    
The input message schema for the above composite (which the sample payload must follow) is:

    <element name="process">
        <complexType>
            <sequence>
                <element name="compositeName" type="string"/>
                <element name="compositeId" type="string"/>
                <element name="componentType" type="string"/>
                <element name="componentName" type="string"/>
                <element name="componentId" type="string"/>
            </sequence>
        </complexType>
    </element>

    Sample Code

Please get the complete JDeveloper project as follows:

    1. DummySOAApplication to retrieve initial payload of Mediator and BPEL components

    2. The SOA Audit Trail Composite “SOAAuditTrails” that contains the logic to get initial payload of “Dummy Composite”

3. Sample Payload “SOA_audit_payload”

     

     

    Request Flow in Fusion Applications – Part 1


    Introduction:

When accessing Fusion Applications (FA), a user typically requests their FA home page by typing the URL into a browser window or clicking a favorites link. The user then logs in to the application, lands on the home page, and performs various activities: navigating to different parts of the application, performing transactions, viewing existing transactions, running or scheduling jobs, and viewing or creating reports, among other things. For the end user, most of this takes nothing more than clicking links and typing values into fields. Behind the scenes, however, various components do their part and interact with each other to make all of that happen. In this series of posts we will explain the interactions between these components and how a request flows between them.

In this particular post, we will explain the authentication request flow within Fusion Applications, which demonstrates how the various components talk to each other and shows, at a high level, the configuration that enables them to do so.

    Main Article:

Let us start with the components involved in Fusion Applications. Fusion Applications is built on the Fusion Middleware foundation, which consists of WebLogic Server, the Identity Management suite, Oracle HTTP Server, and other middleware components such as SOA Suite and Business Intelligence. In this first part, we will not discuss all of these components. Instead, we will focus on the components involved in authentication. The following is a graphical depiction of these components:

     

    ComponentDiagram

    Brief description of the components depicted above:

    Web Browser: The primary purpose of a web browser is to bring information resources to the user, allowing them to view the information, and then access other information. This process begins when the user inputs a Uniform Resource Locator (URL), for example http://en.wikipedia.org/, into the browser.

    The format of a URL is scheme://domain:port/path?query_string

• The scheme, often referred to as the protocol, defines how the resource will be obtained. For Fusion Applications, all requests use the HTTPS protocol for communication with the browser.

• The domain name gives the destination location for the URL.

• The port number is optional; if omitted, the default for the scheme is used. The default for HTTPS is 443.

• The path specifies the resource requested.

• The query string contains data to be passed to software running on the server.
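To make those parts concrete, here is a short sketch using the standard java.net.URL class that splits a hypothetical FA-style URL into the pieces described above (the host name, path, and query below are made up for illustration):

import java.net.URL;

public class UrlParts {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL, used only to illustrate the parts described above.
        URL url = new URL("https://fusion.mycompany.com:443/homePage/faces/Welcome?lang=en");
        System.out.println("scheme = " + url.getProtocol()); // https
        System.out.println("domain = " + url.getHost());     // fusion.mycompany.com
        System.out.println("port   = " + url.getPort());     // 443 (-1 if omitted, meaning the scheme default)
        System.out.println("path   = " + url.getPath());     // /homePage/faces/Welcome
        System.out.println("query  = " + url.getQuery());    // lang=en
    }
}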

Load-Balancer (LBR): The LBR receives incoming HTTP(S) requests, terminates SSL (decrypts HTTPS), and redirects the request to one of the Web Hosts running Oracle HTTP Server. The Web Host to which the request is redirected depends on the load-balancing algorithm used (for example, round-robin) and whether persistence is enabled. If persistence is enabled, each subsequent request by the same client gets redirected to the same Web Host.

Oracle HTTP Server: The HTTP Server receives incoming HTTP requests and redirects them to one of the WebLogic managed servers that serves the requested resource, using the mod_wl_ohs plug-in. Each subsequent request by the same client is redirected to the same WLS managed server. There are two HTTP servers in FA (optionally they can be clustered): one acts as the front end for Fusion Applications (App OHS) and the other acts as the front end for Oracle Identity Management (Auth OHS).

Oracle WebGate: WebGate provides perimeter security. It intercepts any incoming request to OHS to check whether the resource is protected and, if it is, whether the user is already authenticated and has access to it. Think of WebGate as a security guard outside an invitees-only party. The job of the security guard is to validate that you are who you claim to be (by checking your ID) and then to let you in. Before the guard lets you in, he puts a token on you (a band or a stamp) which is valid for a limited time. If you come out of the party and try to get back in, he lets you in without further checks as long as the token is still valid. What you are allowed access to inside the party is none of his concern. Once the party is over, the token expires. If you come back the next day and show your existing token, which has now expired, he will not let you in unless you authenticate yourself again, at which point the guard will issue a new token. Similarly, WebGate allows you entry by validating your credentials using Oracle Access Manager and, on successful authentication, it sends a cookie back to the browser which is valid for a certain amount of time (configured in Oracle Access Manager). Once the authentication flow is complete, it lets your request go to OHS, which deals with providing you with the requested resource. On every subsequent request, WebGate only checks for a valid cookie. Only when the cookie expires does it prompt you for reauthentication.

Oracle Access Manager (OAM): OAM holds an inventory of protected and public resources. If a user is not authenticated, it challenges for credentials and validates them against the user store. It provides an authentication token to WebGate once the user successfully authenticates. It also maintains a server-side cookie, OAM_ID, which holds details about the user, the timeout, and an identifier for the Coherence session.

    Oracle Internet Directory (OID): OID is simply an LDAP directory which stores credentials.

Oracle Virtual Directory (OVD): OVD is a virtual layer which presents more than one directory as a single directory. Either Oracle Virtual Directory or Oracle Internet Directory is used by Oracle Access Manager to validate the credentials passed by the user.

Oracle WebLogic Server: WebLogic is a Java EE server which runs Java applications. It completes the authentication flow by populating the principals (user and groups) using the OAM Identity Asserter and the downstream authentication providers (OVD or OID Authenticator), and it provides the user with the requested resource.
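As a purely illustrative sketch of that last point (not part of the FA code base), Java code running on the WebLogic server, for example in a servlet, can print the principals that the asserter and authenticator placed in the current subject. This assumes the standard weblogic.security.Security API available on the server classpath:

import java.security.Principal;
import javax.security.auth.Subject;
import weblogic.security.Security;

public class SubjectInspector {
    // Illustrative only: prints the principals (user and groups) that the
    // OAM Identity Asserter and the downstream authenticator populated for
    // the current thread's subject on the WebLogic server.
    public static void dumpCurrentSubject() {
        Subject subject = Security.getCurrentSubject();
        if (subject == null) {
            System.out.println("No authenticated subject on this thread");
            return;
        }
        for (Principal p : subject.getPrincipals()) {
            System.out.println(p.getClass().getSimpleName() + " : " + p.getName());
        }
    }
}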

     

    A graphical depiction of the Authentication Request Flow:

     

    AuthenticationRequestFlow

    Description of the Authentication Request Flow:

1. A user makes a request for the FA home page by typing in the URL, e.g., https://fusion.infusion.com/homePage/. fusion.infusion.com resolves to an IP address listened to by the LBR.

    HomePageURL

2. The LBR decrypts the request and redirects it using the HTTP protocol to port 10614 on one of the application web servers (CommonDomain’s HTTP port for external traffic). WebGate then intercepts the request as it arrives at OHS.

    3. Since this is the user’s first request, there is no existing session. WebGate contacts OAM to check if the request is for a protected resource. “homePage” itself is not a protected resource so WebGate lets it pass through without authentication, but “homePage” redirects to a URL similar to homePage/adfAuthentication?_adf.authenticate=true&_afrLoop=719352011817477 which is a protected resource.

    You can view the policies associated with these resources by logging in to the OAM Console and performing the following navigation:

Policy Configuration Tab -> Application Domains -> fs -> Double-click on Resources. In the Search section of the “fs Resources” tab on the right pane, choose Resource Type: HTTP, type /homePage* in the Resource URL field, and click Search. OAM will display the authentication and authorization policies associated with the resources that match the query string.

    OAMPoliciesHomePage

4. OAM queries the database, invokes the policy associated with this resource (Protected Resource Policy), and directs the user to a login page: https://login.infusion.com/oam/server/obrareq.cgi?…….

    login.infusion.com resolves to an IP address listened to by Load Balancer.

    Note: A policy is associated with an Authentication scheme which governs the challenge method (Form in most cases), Authentication Module etc. A description of these topics is outside the scope of this article.

5. The LBR decrypts the request and redirects it using the HTTP protocol to port 7777 on one of the authentication web servers. As is its wont, WebGate intercepts the request as it arrives at OHS.

6. WebGate checks with OAM whether the request is for a protected resource. The resource URL /oam/…/* has a protection level of Excluded, which means WebGate performs no authentication and allows the request to pass directly to OHS.

7. OHS looks up the virtual host configuration and finds the /oam location, which instructs OHS to pass the request to the OAM managed server using the WebLogic handler. This configuration is stored in the $OHS_INSTANCE_HOME/config/OHS/ohs<n>/moduleconf/sso_vh.conf file. Here is a relevant excerpt from the file; pay special attention to the highlighted items:

    sso_vh

Once OHS receives the response back from WLS, it sends the response back to the LBR.

    8. The LBR encrypts the response (remember the scheme is https) and sends the login page back to the end user

9. The user types in their credentials and submits the page. A POST request is sent back to the server with the entered credentials:

    POST /oam/server/auth_cred_submit HTTP/1.1

    This request also goes via the LBR, through the OHS to the OAM managed server (wls_oam1). In the interest of keeping the diagram simple, we are not depicting this flow in the picture above.

10. Once OAM receives the credentials, it validates them against the user store configured for the authentication module being used. Depending on how you have configured your system, this could be OVD or OID. For the purpose of this post, we have configured OVD as the user store. OVD in turn uses its adapter to validate the credentials against the underlying directory, OID. If the credential validation is successful, OAM sends an access token back to WebGate.

11. WebGate checks the access token, sets the OAMAuthnCookie, sets the OAM_REMOTE_USER header, and passes the request on to OHS.

12. OHS looks up the virtual host configuration and finds the /homePage location, which instructs OHS to pass the request to the HomePageServer_<n> managed server using the WebLogic handler. This configuration is stored in the $OHS_INSTANCE_HOME/config/OHS/ohs<n>/moduleconf/FusionVirtualHost_fs.conf file. Here is a relevant excerpt from the file; pay special attention to the highlighted items:

    FusionVirtualHost_fs

13. WebLogic’s security framework invokes the OAMIdentityAsserter, which asserts the user identity. The downstream authentication provider (OVDAuthenticator in our case) populates the subject with principals (user and groups).

    Note: The Home page is constructed with content from various WLS domains (depending on how you have customized your home page). Each of the WLS domains which serves the content, performs this step. In the interest of keeping the explanation simple, this has not been depicted in the diagram.

14. The HomePageServer constructs the home page and sends the response back to OHS, which in turn sends it back to the LBR. The LBR encrypts the response and sends it back to the browser, which renders the home page!

    FAHomePage

     

References: http://en.wikipedia.org

    Introduction to Fusion Applications Roles Concepts


    Introduction

     

Fusion Applications Security is designed based on Role-Based Access Control (RBAC), an approach to restricting system access to authorized users. In general, RBAC is defined based on the primary rules described on this wiki page.

    RBAC normalizes access to functions and data through user roles rather than only users. User access is based on the definition of the roles provisioned to the user. RBAC secures access in a “Who can do what on which functions or sets of data under what conditions” approach. The “who” is the user.

    The roles are defined at functional and technical levels. The functional level is the business definition that is used by business users and the technical level is the implementation of roles using Oracle Technology. This post will quickly review the definition of “Functional Roles” and provide introductory internals on the technical implementation of these “Roles” within Fusion Applications.

    Architecture of Functional Roles

    At a high level, RBAC is based on the following concepts:

    • Role assignment: A subject can exercise permission only if the subject has selected or been assigned a role.
    • Role authorization: A subject’s active role must be authorized for the subject. With rule 1 above, this rule ensures that users can take on only roles for which they are authorized.
    • Permission authorization: A subject can exercise a permission only if the permission is authorized for the subject’s active role. With rules 1 and 2, this rule ensures that users can exercise only permissions for which they are authorized.

    In Fusion Applications, the RBAC implementation is based on abstract, job, duty, and data roles that work together to control access to functions and data. The definitions of these functional roles are as follows:

    Abstract Role:

This role categorizes the roles for the reference implementation. It inherits duty roles but does not contain security policies. For example: Employee, Manager, etc.

    Job Role:

This role defines a specific job an employee is responsible for. An employee may have many job roles. It may require a data role to control actions on the respective objects. For example: Benefits Manager, Accounts Receivable Specialist, etc.

    Data Role:

This role defines access to the data within a specific duty. Who can do what on which set of data? The possible actions are read, update, delete, and manage. Only duty roles hold explicit entitlement to the data. These entitlements control privileges such as which specific screens, buttons, data columns, and other artifacts a user can see in the user interface.

    Duty Role:

This role defines a set of tasks and is the most granular form of a role. Job and abstract roles inherit duty roles. Data security policies are attached to duty roles to control actions on the respective objects.

    This is a diagram from the “Oracle Fusion Applications Security Guide” that shows the relationships between these roles:

    rolediag

The duty role in the above diagram is the most granular form of a role and is where security policies are attached. Duty roles are implemented as application roles in APM and are scoped to individual Oracle Fusion Applications.

    Technical Implementation of Functional Roles

The functional roles described above are technically implemented as Enterprise and Application roles. The Abstract, Job and Data roles are called Enterprise roles, and the Duty role is called an Application role.

Fusion Applications implements security using the Oracle Identity Management (IDM) stack. The IDM stack consists of an identity store and a policy store. The Enterprise and Application roles are implemented in the Identity and Policy stores, respectively.

    Enterprise Roles

Across all Fusion Applications, Abstract, Job and Data roles are mapped to Enterprise roles. These roles are stored in the Identity Store and are managed through OIM and its identity administration tools, which include the following capabilities with respect to Enterprise role management:

    • Create Fusion Applications Implementation Users
    • Provision Roles to Implementation Users
    • Manage Abstract, Job and Data roles including the job hierarchy

    The predefined Abstract, Job and Data roles are seeded in:

    cn=FusionGroups,cn=groups,dc=us,dc=<MyCompany>,dc=com

Example of Data Role: The role name ends with “_DATA”. This is a naming convention and not a technical requirement.

    cn=CN_INCENTIVE_COMPENSATION_MANAGER_FUSION_CORP_USA_DATA,cn=FusionGroups,cn=groups,dc=us,dc=oracle,dc=com

    Example of Job Role: The role name ends with “_JOB”. This is a naming convention and not a technical requirement.

    cn=BEN_BENEFITS_MANAGER_JOB,cn=FusionGroups,cn=groups,dc=us,dc=oracle,dc=com

    Example of Abstract Role: The role name ends with “_ABSTRACT”. This is a naming convention and not a technical requirement.

    cn=PER_EMPLOYEE_ABSTRACT,cn=FusionGroups,cn=groups,dc=us,dc=oracle,dc=com

These roles can also be viewed from the ODSM (Oracle Directory Services Manager) console. The following steps illustrate this.

Log in to ODSM, connect to OID, and navigate from the “Data Browser” tab as shown below.

    odsm1

    Click on a role such as “BEN_BENEFITS_MANAGER_JOB”.

    Ben_benfits_mgr
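If you prefer to confirm the same entry from code rather than the ODSM console, a small JNDI lookup can read the role entry directly by its DN. This is only a sketch: the OID host, port, and bind credentials below are placeholders, and the DN is the example job role shown above.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

public class FusionRoleLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // Placeholder OID host, port, and bind credentials; substitute your own.
        env.put(Context.PROVIDER_URL, "ldap://oid.mycompany.com:3060");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=orcladmin");
        env.put(Context.SECURITY_CREDENTIALS, "password");

        InitialDirContext ctx = new InitialDirContext(env);

        // Read the enterprise (job) role entry by the DN shown in the example above.
        Attributes attrs = ctx.getAttributes(
                "cn=BEN_BENEFITS_MANAGER_JOB,cn=FusionGroups,cn=groups,dc=us,dc=oracle,dc=com");
        NamingEnumeration<? extends Attribute> all = attrs.getAll();
        while (all.hasMore()) {
            System.out.println(all.next()); // prints each attribute of the role entry
        }
        ctx.close();
    }
}

The same approach works for any of the _DATA, _JOB, or _ABSTRACT role DNs listed above.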

    Applications Roles

A “Duty Role” is mapped to Application Roles and is stored in the Policy Store. An application role is supplied by a single application or pillar of applications. The application policies are managed through the Authorization Policy Manager (APM), a graphical interface that simplifies the creation, configuration, and administration of application policies. APM refers to enterprise roles as external roles. For more information on APM, please refer to the following link.

Oracle Fusion Applications is certified to integrate with Applications Access Controls Governor (AACG) in the Oracle Governance, Risk and Compliance Controls (GRCC) suite to ensure effective segregation of duties (SOD). Oracle Fusion Applications checks duty roles for SOD policy violations against the content and risks defined in AACG and against the best available security guidelines. User and role provisioning respects the segregation of duties policies.

    The following screen in APM UI shows Application Role mappings of External Role “Benefits Manager”:

    Ben_mgr_app_role

Through APM you can manage existing policies and their respective privileges. You can also create new policies and privileges. The Application Roles are stored in the Policy Store as:

    cn=Roles,cn=<FAApp>,cn=FusionDomain,cn=JPSContext,cn=FAPolicies

    Example to view “BENEFITS_REPORTING_DATA_DUTY” role from ODSM console:

    Login to ODSM console, connect to OID and navigate from “Data Browser”.

    odsm_hcm

    Expand HCM and navigate to “Roles” to see all the respective duties of HCM Applications.

    ben_rpt_data_duty

     

    Fusion Applications Roles Provisioning Mapping

Fusion Applications uses the FUSION.PER_USER_ROLES table to store information about which roles are provisioned to which users.

    Example

    User name: ORDER_MGR_OPERATIONS

select r.role_distinguished_name, p.role_GUID, u.username from per_user_roles p, per_roles_dn_vl r, per_users u where p.role_id = r.role_id and p.user_id = u.user_id and u.user_id = '118';

    The output of the above query.

    per_user_roles_output

How do all these roles and security policies/privileges work together?

    Fusion Applications seeds all the relevant roles, though they can be modified and customized based on the business requirements. It is important to first understand the functional and data security policies.

    Functional Security Polices

Function security consists of privileges granted to a user by means of the user’s membership in a role. These privileges control access to a page, or to a specific widget or operation within a page, including services, screens, and flows, and are typically used to control the main menu. A function security policy consists of privileges assigned to duty roles, and of those duty roles assigned to a job or abstract role. Function security policies are defined in the Authorization Policy Manager (APM).

    Function security is implemented using Java Authentication and Authorization Services (JAAS) permissions representing an Application Development Framework (ADF) artifact. These permissions are stored in the Oracle Platform Security Services (OPSS) policy store and are checked by ADF at runtime. When a user accesses the functions of a task flow, the OPSS policy store determines the access entitlement that applies to that user through the roles provisioned to the user.

    Data Security Policies

    Data security policies articulate the security requirement “Who can do What on Which set of data,” where ‘Which set of data’ is an entire object or an object instance or object instance set and ‘What’ is the object entitlement. By default, users are denied access to all data. Data security makes data available to users by the following means.

    • Policies that define grants available through provisioned roles
    • Policies defined in the application code

A privilege is a single, real-world action on a single business object. The possible actions are read, update, delete, and manage. If these privileges are not specified on a duty or data role, then all actions are allowed on the respective objects within a page, including services, screens, and flows (as specified by the function security policy).

    Enterprise roles provide access to data through data security policies defined for the inherited application roles. When you provision a job role to a user, the job role implicitly limits data access based on the data security policies of the inherited duty roles. When you provision a data role to a user, the data role explicitly limits the data access of the inherited job role to a dimension of data.

    When setting up the enterprise with structures such as business units, data roles are automatically generated that inherit job roles based on data role templates. Data roles also can be generated based on HCM security profiles. Data role templates and HCM security profiles enable defining the instance sets specified in data security policies.

    Managing User and Roles relationships

These are the major categories of managing and assigning roles to a user:

    • Create a new user and apply existing roles as per business requirements.
    • The business mandates a new role and respective policies/privileges. There are 3 major types:
1. Create a new role, assign existing duties to it, and generate the data role.
2. Create a new duty and assign the respective data security. This new duty can be assigned to a new or existing user.
3. Create new policies and privileges, and assign them to a new or existing duty role.

The following is a quick example of how Enterprise and Application Roles work together to control access to functions and data.

A new employee is hired as a “Benefits Specialist”. The job role “Benefits Specialist” exists, but this user should not have access to view the salary data of all the enterprise users. The following screen shows all the duties a “Benefits Specialist” can perform:

    ben_specialist

    One of the duty roles is “Benefits Salary Viewer Duty”.  The following screen shows who else has this role:

ben_salary_view_data

The following screen shows that no policies are assigned out of the box to the “Benefits Salary Viewer Duty” role.

    ben_spec_salary

In order to add a policy to the “Benefits Salary Viewer” duty role so that it can view salary data, let’s review the policies of the “Salary Management” duty role. The following screen shows the functional policies assigned to this role:

    Snap20

To manage the above functional policy, select it and click Edit. The following screen shows how you can search for entitlements such as “Salary” and select them as per business requirements.

    Snap21

The following screen shows the data security policies assigned to this role.

     

    Snap26

    Select “Entry Salary Details” policy and click edit to open it.

    Snap27

    Click on “Rule” tab to view the existing rules for this policy.

    Snap28

Click on the “Action” tab to view the selected actions (new actions can be selected from the available actions).

Snap29

The following screen is an example of a data security policy that does not grant access to view or manage salary data. In this case, the benefits specialist has been assigned another role policy, “Party Information Inquiry Duty”. The data security for this policy is to view the “Person” object only.

    party_info_inq_duty

The following screen is another example of the data security policies that the “Person Address View Duty” role (another role assigned to the benefits specialist) has on various resources/objects.

per_addr_duty

If this new benefits specialist should not have access to salary data, then a new role such as “Benefits Specialist without Salary View” can be created from the existing one, with the salary view duty removed.

    Conclusion

This post provides introductory information on functional roles and how they are technically implemented. It describes where and how the roles are stored, and where and how they can be managed. Most importantly, it shows how these roles work together to answer “Who can do what on which functions or sets of data under what conditions”. It also provides examples of how to manage functional and data security policies.

     


    Introduction to Fusion Applications Roles Concepts

    $
    0
    0

    Introduction

     

    Fusion Applications Security is designed based on Role-Based Access Control (RBAC). It is an approach to restricting access to authorized users. In general, RBAC is defined based on the primary rules as per this wiki page.

    RBAC normalizes access to functions and data through user roles rather than only users. User access is based on the definition of the roles provisioned to the user. RBAC secures access in a “Who can do what on which functions or sets of data under what conditions” approach. The “who” is the user.

    The roles are defined at functional and technical levels. The functional level is the business definition that is used by business users and the technical level is the implementation of roles using Oracle Technology. This post will quickly review the definition of “Functional Roles” and provide introductory internals on the technical implementation of these “Roles” within Fusion Applications.

    Architecture of Functional Roles

    At a high level, RBAC is based on the following concepts:

    • Role assignment: A subject can exercise permission only if the subject has selected or been assigned a role.
    • Role authorization: A subject’s active role must be authorized for the subject. With rule 1 above, this rule ensures that users can take on only roles for which they are authorized.
    • Permission authorization: A subject can exercise a permission only if the permission is authorized for the subject’s active role. With rules 1 and 2, this rule ensures that users can exercise only permissions for which they are authorized.

    In Fusion Applications, the RBAC implementation is based on abstract, job, duty, and data roles that work together to control access to functions and data. The definitions of these functional roles are as follows:

    Abstract Role:

    This role categorizes the roles for reference implementation. It inherits duty roles but does not contain security policies. For example: Employee, Manager, etc.

    Job Role:

    This role defines a specific job an employee is responsible for. An employee may have many job roles. It may require the data role to control the actions of the respective objects. For example: Benefits Manager, Accounts Receivable Specialist, etc.

    Data Role:

    This role defines access to the data within a specific duty. Who can do what on which set of data? The possible actions are read, update, delete, and manage. Only duty roles hold explicit entitlement to the data. These entitlements control privileges in the user interface, such as which screens, buttons, data columns, and other artifacts a user can see.

    Duty Role:

    This role defines a set of tasks. It is the most granular form of a role. The job and abstract roles inherit duty roles. The data security policies are specified to duty roles to control actions on all respective objects.

    This is a diagram from the “Oracle Fusion Applications Security Guide” that shows the relationships between these roles:

    rolediag

    The duty role in the above diagram is the most granular form of a role and is where security policies are attached. Duty roles are implemented as application roles in APM and scoped to individual Oracle Fusion Applications.

    Technical Implementation of Functional Roles

    The above functional roles are technically implemented as Enterprise and Application roles. The Abstract, Job and Data roles are called Enterprise roles, and the Duty role is called an Application role.

    Fusion Applications implements security using the Oracle Identity Management (IDM) stack. The IDM stack consists of an identity store and a policy store. The Enterprise and Application roles are implemented in the Identity and Policy stores respectively.

    Enterprise Roles

    Across all Fusion Applications, Abstract, Job and Data roles are mapped to Enterprise roles. These roles are stored in the Identity Store. They are managed through OIM and its identity administration tools, which include the following capabilities with respect to Enterprise role management:

    • Create Fusion Applications Implementation Users
    • Provision Roles to Implementation Users
    • Manage Abstract, Job and Data roles including the job hierarchy

    The predefined Abstract, Job and Data roles are seeded in:

    cn=FusionGroups,cn=groups,dc=us,dc=<MyCompany>,dc=com

    Example of Data Role: The role name ends with “_DATA”. This is a naming convention and a technical requirement.

    cn=CN_INCENTIVE_COMPENSATION_MANAGER_FUSION_CORP_USA_DATA,cn=FusionGroups,cn=groups,dc=us,dc=oracle,dc=com

    Example of Job Role: The role name ends with “_JOB”. This is a naming convention and a technical requirement.

    cn=BEN_BENEFITS_MANAGER_JOB,cn=FusionGroups,cn=groups,dc=us,dc=oracle,dc=com

    Example of Abstract Role: The role name ends with “_ABSTRACT”. This is a naming convention and a technical requirement.

    cn=PER_EMPLOYEE_ABSTRACT,cn=FusionGroups,cn=groups,dc=us,dc=oracle,dc=com
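
    These entries can also be listed from the command line with a standard ldapsearch against OID. The following is only a sketch: the host, port, bind DN, password and dc components are placeholders for your environment.

    # List all job enterprise roles under the FusionGroups container (placeholders shown).
    ldapsearch -h oid.mycompany.com -p 3060 \
      -D "cn=orcladmin" -w <password> \
      -b "cn=FusionGroups,cn=groups,dc=us,dc=mycompany,dc=com" \
      -s sub "(cn=*_JOB)" cn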

    These roles can also be viewed from ODSM (Oracle Directory Services Manager) console. The following steps illustrate this.

    Log in to ODSM, open the OID connection, and navigate from the “Data Browser” tab as shown below.

    odsm1

    Click on a role such as “BEN_BENEFITS_MANAGER_JOB”.

    Ben_benfits_mgr

    Applications Roles

    A “Duty Role” is mapped to an Application Role and is stored in the Policy Store. An application role is supplied by a single application or pillar of applications. The application policies are managed through “Authorization Policy Manager” (APM). APM is a graphical interface that simplifies the creation, configuration, and administration of application policies. APM refers to enterprise roles as external roles. For more information on APM, please refer to the following link.

    Oracle Fusion Applications is certified to integrate with Applications Access Controls Governor (AACG) in the Oracle Governance, Risk and Compliance Controls (GRCC) suite to ensure effective segregation of duties (SOD). Oracle Fusion Applications checks duty roles for SOD policy violations against the content and risks defined in AACG and against best available security guidelines. User and role provisioning respects the segregation of duties policies.

    The following screen in APM UI shows Application Role mappings of External Role “Benefits Manager”:

    Ben_mgr_app_role

    Through APM you can manage existing policies and their respective privileges. You can also create new policies and privileges. The Application Roles are stored in the Policy Store as:

    cn=Roles,cn=<FAApp>,cn=FusionDomain,cn=JPSContext,cn=FAPolicies

    Example to view “BENEFITS_REPORTING_DATA_DUTY” role from ODSM console:

    Log in to the ODSM console, connect to OID, and navigate from the “Data Browser” tab.

    odsm_hcm

    Expand HCM and navigate to “Roles” to see all the respective duties of HCM Applications.

    ben_rpt_data_duty

     

    Fusion Applications Roles Provisioning Mapping

    Fusion Applications uses the FUSION.PER_USER_ROLES table to store information about which roles are provisioned to which users.

    Example

    User name: ORDER_MGR_OPERATIONS

    select r.role_distinguished_name, p.role_GUID, u.username
    from per_user_roles p, per_roles_dn_vl r, per_users u
    where p.role_id = r.role_id
    and p.user_id = u.user_id
    and u.user_id = '118';

    The output of the above query.

    per_user_roles_output
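
    If only the username is known, the same join can be driven by username instead of user_id. The following is a sketch wrapped in a sqlplus call: the connection (run on the database host as SYSDBA, or substitute your own credentials) is a placeholder, and the FUSION schema prefix assumes the query is not run as the FUSION user itself.

    # List the role DNs provisioned to a named user (placeholder connection).
    echo "select r.role_distinguished_name
    from fusion.per_user_roles p, fusion.per_roles_dn_vl r, fusion.per_users u
    where p.role_id = r.role_id
    and p.user_id = u.user_id
    and u.username = 'ORDER_MGR_OPERATIONS';" | sqlplus -s / as sysdba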

    How do all these roles and security policies/privileges work together?

    Fusion Applications seeds all the relevant roles, though they can be modified and customized based on the business requirements. It is important to first understand the functional and data security policies.

    Functional Security Policies

    Function security consists of privileges granted to a user through the user’s membership in a role. These privileges control access to a page, or to a specific widget or operation within a page, including services, screens, and flows, and are typically used to control the main menu. A function security policy consists of privileges assigned to duty roles and those duty roles assigned to a job or abstract role. Function security policies are defined in the Authorization Policy Manager (APM).

    Function security is implemented using Java Authentication and Authorization Services (JAAS) permissions representing an Application Development Framework (ADF) artifact. These permissions are stored in the Oracle Platform Security Services (OPSS) policy store and are checked by ADF at runtime. When a user accesses the functions of a task flow, the OPSS policy store determines the access entitlement that applies to that user through the roles provisioned to the user.

    Data Security Policies

    Data security policies articulate the security requirement “Who can do What on Which set of data,” where ‘Which set of data’ is an entire object or an object instance or object instance set and ‘What’ is the object entitlement. By default, users are denied access to all data. Data security makes data available to users by the following means.

    • Policies that define grants available through provisioned roles
    • Policies defined in the application code

    A privilege is a single, real-world action on a single business object. The possible actions are read, update, delete, and manage. If these privileges are not specified on a duty or data role, then all actions on the respective objects within the functions granted by the functional policy (pages, services, screens, and flows) are allowed.

    Enterprise roles provide access to data through data security policies defined for the inherited application roles. When you provision a job role to a user, the job role implicitly limits data access based on the data security policies of the inherited duty roles. When you provision a data role to a user, the data role explicitly limits the data access of the inherited job role to a dimension of data.

    When setting up the enterprise with structures such as business units, data roles are automatically generated that inherit job roles based on data role templates. Data roles also can be generated based on HCM security profiles. Data role templates and HCM security profiles enable defining the instance sets specified in data security policies.

    Managing User and Roles relationships

    These are the main categories of managing and assigning roles to a user:

    • Create a new user and apply existing roles as per business requirements.
    • The business mandates a new role and the respective policies/privileges. There are 3 major types:
    1. Create a new role, assign existing duties to it, and generate the data role.
    2. Create a new Duty and assign data security respectively. This new duty can be assigned to a new or existing user.
    3. Create new Policies and Privileges, and assign these to a new or existing Duty role.

    The following is a quick example of how Enterprise and Application Roles work together to control access to functions and data.

    A new employee is hired as a “Benefits Specialist”. The job role “Benefits Specialist” exists, but this user should not have access to view the salary data of all the enterprise users. The following screen shows all the duties a “Benefits Specialist” can perform:

    ben_specialist

    One of the duty roles is “Benefits Salary Viewer Duty”.  The following screen shows who else has this role:

    ben_salary_view_data

    The following screen shows that no policies are assigned out of the box to the “Benefits Salary Viewer Duty” role.

    ben_spec_salary

    In order to add a policy to the “Benefits Salary Viewer” duty role to view salary data, let’s review the policies of the “Salary Management” duty role. The following screen shows the functional policies assigned to this role:

    Snap20

    To manage the above functional policy, select it and click Edit. The following screen shows how you can search for entitlements such as “Salary” and select them as required by your business requirements.

    Snap21

    The following screen shows the data security policies assigned to this role.

     

    Snap26

    Select “Entry Salary Details” policy and click edit to open it.

    Snap27

    Click on “Rule” tab to view the existing rules for this policy.

    Snap28

    Click on the “Action” tab to view the selected actions (new actions can be selected from the available actions).

    Snap29

    The following screen is an example of a data security policy that does not grant access to view or manage salary data. In this case, the benefits specialist has been assigned another role, “Party Information Inquiry Duty”. The data security policy for this role is to view the “Person” object only.

    party_info_inq_duty

    The following screen is another example of the data security policies that the “Person Address View Duty” role has on various resources/objects (another role assigned to the benefits specialist).

    per_addr_duty

    If this new benefits specialist user should not have access to salary data, then a new role such as “Benefit Specialist without Salary View” can be created from the existing one with the salary view role removed.

    Conclusion

    This post provides introductory information on functional roles and how they are technically implemented. It covers where and how these roles are stored, and where and how they can be managed. Most importantly, it shows how these roles work together to control “Who can do what on which functions or sets of data under what conditions”. It also provides examples of how to manage functional and data policies.

     

    Cloning of Fusion Applications using Oracle Enterprise Manager Cloud Control 12c


    Introduction

    Cloning is a recurring requirement in almost every Oracle Fusion Applications environment. Oracle Enterprise Manager 12c (OEM) is an effective tool to create and automate the repeatable parts of the cloning process. In addition to the cloning process itself, OEM is very useful for automating related actions, for example scaling down the environment or other steps that may be required to clone a particular environment.

    Cloning Process Overview

    The general cloning process is summarized in the following diagram. In this document the focus is on the repeatable parts of the process, highlighted in red in the diagram. The prerequisite steps and validation steps are not in scope. Detailed information about the complete cloning process can be found in the Cloning Guide. All target servers as well as the databases have to be registered in OEM, and the manual cloning process should be executed and tested thoroughly.

    ClonePicture1

    Defining the Cloning Job

    The creation of the cloning jobs in Oracle Enterprise Manager is a straightforward process if the manual cloning run has been documented properly. It is basically a transfer of the executed steps into OEM. In the Job Library – accessible via the Enterprise drop-down list – create a new “Multi-Task” job. Select “Different targets for different tasks” from the target drop-down list, as the cloning process runs multiple steps across multiple hosts.

    image2

    Restore Databases

    In this example a simple RMAN operation is used to restore the database to its initial status. This is easily interchangeable with an RMAN duplicate database operation that could be used to copy the production database into the UAT environment. Alternatively, restoring a current production backup into the development environment would be a possibility, with the additional benefit of testing the backup procedure on a regular basis.

    image3

    The ‘RMAN Script’ task is able to execute the same script across multiple databases. If different scripts are required for the individual databases, one task is required per database. Simply add additional databases as targets via the “Add” button and follow the wizard.

    image4

    In this example the Oracle Flashback Database feature is used to restore the database. This is especially helpful in environments where the clone process is run frequently, e.g. to test upgrades or patching. Simply copy and paste the RMAN script into the Parameters tab.

    image5
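
    A minimal flashback script of this kind might look like the following sketch. It assumes a guaranteed restore point was created before the environment was handed over; the restore point name BEFORE_CLONE and the local "rman target /" connection are placeholders for your environment.

    # Sketch only: flash the database back to a guaranteed restore point
    # created before the clone environment was put into use.
    echo "shutdown immediate;
    startup mount;
    flashback database to restore point 'BEFORE_CLONE';
    alter database open resetlogs;" | rman target /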

    The usernames and passwords for all operations are entered securely in this screen. These credentials are automatically stored in the Oracle Enterprise Manager credential store for further usage and can be reused for other tasks/targets. Make sure to adopt a naming scheme for the named credentials, especially in complex environments.

    image6

    This concludes the RMAN Script task. Clicking Continue returns OEM to the Job view. The next task manages the file system restore.

    image7

    Restore Gold Images to file system

    In addition to the database being restored, the file system needs to be cleared and reset to a state where the master TAR files can be extracted. This is done using the ‘OS Command’ task. Select the target servers based on the particular system requirements. This can potentially be broken down into multiple tasks to allow parallelization.

    image8

    The OS Command task allows the usage of complex OS scripts. Oracle Enterprise Manager offers a variety of target properties that can be used in the script to allow additional flexibility and functionality. In this example the script simply removes the entire Fusion Applications and Identity Management file system and then extracts the Gold Images from the tar archive that was created as part of the manual clone run. Alternatively, other technologies like ZFS snapshots could be used at this stage to restore the file system to its initial status.

    image9
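
    Such a script could be as simple as the following sketch. All paths, mount points and archive names are placeholders; in a real environment they would match the file system layout and the tar archives captured during the manual clone run.

    # Sketch only: wipe the existing FA and IDM installations and restore the
    # Gold Image archives captured during the manual clone run.
    APP_BASE=/u01/app
    STAGE=/u01/stage
    rm -rf ${APP_BASE}/fa ${APP_BASE}/idm
    mkdir -p ${APP_BASE}
    tar -xzf ${STAGE}/fa_goldimage.tar.gz -C ${APP_BASE}
    tar -xzf ${STAGE}/idm_goldimage.tar.gz -C ${APP_BASE}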

    Username and password are stored in the OEM credential store. Make sure that proper OS permissions are allocated to the selected users in order to execute all the steps in the script. These credentials can be reused for the later tasks.

    image10

    Clone Execution Phases

    The next phase represents the actual runs of the faclone.sh tool. Usually faclone.sh is executed 4 times – IDMMT, IDMWT, FAMT and FAWT. These phases are described in detail in the Cloning Guide. For this step the “OS Command” task is used as well. Simply add the hosts where the scripts need to be executed as targets, e.g. the primordial host for FAMT.

    image11

    At this stage only the “Single Operation” command type is being used. Based on the return value of the faclone.sh script this step will succeed or fail. In this example the command used is e.g.:

    "/u01/app/clone/bin/faclone.sh -p /u01/app/clone/clone.rsp clone idmmt"

    Further details about faclone.sh can be found in the Cloning Guide.

    image12
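
    Put together, the four phase tasks correspond to commands of the following form. This is only a sketch: the idmmt keyword is taken from the example above, the remaining lowercase phase keywords are assumed by analogy, and each command has to run on the host designated for that phase (for example, the primordial host for FAMT).

    # Assumed phase keywords, shown for illustration; each runs on its designated host.
    /u01/app/clone/bin/faclone.sh -p /u01/app/clone/clone.rsp clone idmmt   # IDM middle tier
    /u01/app/clone/bin/faclone.sh -p /u01/app/clone/clone.rsp clone idmwt   # IDM web tier
    /u01/app/clone/bin/faclone.sh -p /u01/app/clone/clone.rsp clone famt    # FA middle tier
    /u01/app/clone/bin/faclone.sh -p /u01/app/clone/clone.rsp clone fawt    # FA web tier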

    The named credentials are being reused simply by selecting them in the Credentials screen. Usually the user that has been used for the provisioning of FA can be selected.

    image13

    These steps are repeated to create individual tasks for the IDMMT, IDMWT, FAMT and FAWT executions. After all the required steps are created, the dependencies need to be configured. Generally the cloning process is a serialized process; however, certain activities like restoring the database and resetting the file system could potentially be parallelized. Clicking the “Save to Library” button finishes the creation of the job, and it is ready to be scheduled.

    image14

    Scheduling the Cloning Job

    The Oracle Enterprise Manager Cloud Control Job System offers multiple options to schedule the job: at a specific time, regularly based on a schedule, or simply on demand. In the Job Library use the search function to locate the previously created job, select it and click the “Submit” button to open the scheduler window.

    image15

    In this example the job is executed immediately. However, this screen allows you to create a schedule that meets the individual requirements of your organization.

    image16

    The Access screen is mainly used to configure notifications and allows granting permissions on the job to view or control its executions. By default all administrators inherit view permission on the job. This permission does not allow control of the job, e.g. starting, stopping or scheduling it.

    image17

    The job can be monitored while it is executing or after it has finished. Notifications will be sent out based on the options selected on the previous screen. An additional advantage of using OEM is the automatic archival of previous executions.

    image18

    Further Information

    The process described here only uses a minimal set of the powerful Job System of Oracle Enterprise Manager 12c. The process can be tailored to achieve automated cloning, including error correction, in nearly every environment. Further information about the Job System can be found here.

    Fusion Applications WebCenter Content Integration – Automating File Import/Export


    Introduction

    Oracle WebCenter Content, a component of Fusion Middleware, is a strategic solution for the management, security, and distribution of unstructured content such as documents, spreadsheets, presentations, and video. Oracle Fusion Applications leverages Oracle WebCenter Content to store all marketing collateral as well as all attachments. Import flow also uses it to stage the CSV files that the user uploads.

    WebCenter Content replaces SSH File Transfer Protocol (SFTP) as a content repository in Fusion Applications starting with Release 7. For example, it is implemented in HCM for File Based Loader (FBL) and Extracts integration. There are several ways of importing and exporting content to and from Fusion Applications such as:

    • Upload using “File Import and Export” UI from home page navigation: Navigator > Tools
    • Upload using WebCenter Content Document Transfer Utility
    • Upload programmatically via Java Code or Web Service API

    This post provides an introduction, with working sample code, on how to programmatically export content from Fusion Applications to automate the outbound integration process to other applications in the cloud or on-premise. A Service Oriented Architecture (SOA) composite is implemented to demonstrate the concept.

    Main Article

    Fusion Applications Security in WebCenter Content

    The content in WebCenter Content is secured through users, roles, privileges and accounts. The user could be any valid user with a role such as “Integration Specialist”. The role may have privileges such as read, write and delete. The accounts are predefined by each application. For example, HCM uses /hcm/dataloader/import and /hcm/dataloader/export respectively.

    Let’s review the inbound and outbound batch integration flows.

    Inbound Flow

    This is a typical Inbound FBL process flow:

     

    fblflow

    The uploaded file is registered by invoking the Loader Integration Service – http://{Host}/hcmCommonBatchLoader/LoaderIntegrationService.

    You specify the following in the payload:

    • Content id of the file to be loaded
    • Business objects that you are loading
    • Batch name
    • Load type (FBL)
    • Imported file to be loaded automatically

    Fusion Applications UI also allows the end user to register and initiate the data load process.

     

    Outbound Flow

    This is a typical Outbound batch Integration flow using tools such as Business Intelligence (BI) Publishers and Answers and HCM Extracts:

    extractflow

    The extracted file could be delivered to the WebCenter Content server.

    Programmatic Approach to export files from Webcenter Content

    In Fusion Applications, the WebCenter Content Managed server is installed in the Common domain Weblogic Server. The WebCenter Content server provides two types of web services:

    Generic JAX-WS based web service

    This is a generic web service for general access to the Content Server. The context root for this service is “/idcws”. For details of the format, see the published WSDL at https://<hostname>:<port>/idcws/GenericSoapPort?WSDL. This service is protected through Oracle Web Services Security Manager (OWSM). As a result of allowing WS-Security policies to be applied to this service, streaming Message Transmission Optimization Mechanism (MTOM) is not available for use with this service. Very large files (greater than the memory of the client or the server) cannot be uploaded or downloaded.

    Native SOAP based web service

    This is the general WebCenter Content service. Essentially, it is a normal socket request to Content Server, wrapped in a SOAP request. Requests are sent to the Content Server using streaming Message Transmission Optimization Mechanism (MTOM) in order to support large files. The context root for this service is “/idcnativews”. The main web service is IdcWebRequestPort and it requires JSESSIONID, which can be retrieved from IdcWebLoginPort service.

    The Remote Intradoc Client (RIDC) uses the native web services. Oracle recommends that you do not develop a custom client against these services.

    For more information, please refer to “Developing with WebCenter Content Web Services for Integration“.

    Generic Web Service Implementation

    This post provides a sample of implementing generic web service /idcws/GenericSoapPort. In order to implement this web service, it is critical to review the following definitions to generate the request message and parse the response message:

    IdcService:

    IdcService is a predefined service node’s attribute that is to be executed, for example, CHECKIN_UNIVERSAL, GET_SEARCH_RESULTS, GET_FILE, CHECKOUT_BY_NAME, etc.

    User

    User is a subnode within a <service> and contains all user information.

    Document

    Document is a collection of all the content-item information and is the parent node of all the data.

    ResultSet

    ResultSet is a typical row/column based schema. The name attribute specifies the name of the ResultSet. It contains a set of row subnodes.

    Row

    Row is a typical row within a ResultSet, which can have multiple <row> subnodes. It contains sets of Field objects

    Field

    Field is a subnode of either <document> or <row>. It represents document or user metadata such as content Id, Name, Version, etc.

    File

    File is a file object that is either being uploaded or downloaded

    For more information, please refer to Configuring Web Services with WSDL, SOAP, and the WSDL Generator.

    Web Service Security

    The genericSoapPort web service is protected by Oracle Web Services Manager (OWSM). In Oracle Fusion Applications cloud, the OWSM policy is: “oracle/wss11_saml_or_username_token_with_message_protection_service_policy”.

    In your SOAP envelope, you will need the appropriate “wsse” headers. This is a sample:

    <soapenv:Header>
    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" soapenv:mustUnderstand="1">
    <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion" MajorVersion="1" MinorVersion="1" AssertionID="SAML-iiYLE6rlHjI2j9AUZXrXmg22" IssueInstant="2014-10-20T13:52:25Z" Issuer="www.oracle.com">
    <saml:Conditions NotBefore="2014-10-20T13:52:25Z" NotOnOrAfter="2015-11-22T13:57:25Z"/>
    <saml:AuthenticationStatement AuthenticationInstant="2014-10-20T14:52:25Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
    <saml:Subject>
    <saml:NameIdentifier Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">FAAdmin</saml:NameIdentifier>
    <saml:SubjectConfirmation>
    <saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:sender-vouches</saml:ConfirmationMethod>
    </saml:SubjectConfirmation>
    </saml:Subject>
    </saml:AuthenticationStatement>
    </saml:Assertion>
    </wsse:Security>
    </soapenv:Header>

    Sample SOA Composite

    The SOA code provides a sample on how to search for a document in WebCenter Content, extract a file name from the search result, and get the file and save it in your local directory. The file could be processed immediately based on your requirements. Since this is a generic web service with a generic request message, you can use the same interface to invoke various IdcServices, such as GET_FILE, GET_SEARCH_RESULTS, etc.

    In the SOA composite sample, two external services are created: GenericSoapPort and FileAdapter. If the service is GET_FILE, then it will save a copy of the retrieved file in your local machine.

    Export File

    The GET_FILE service returns a specific rendition of a content item, the latest revision, or the latest released revision. A copy of the file is retrieved without performing a check out. It requires either dID (content item revision ID) for the revision, or dDocName (content item name) along with a RevisionSelectionMethod parameter. The RevisionSelectionMethod could be either “Latest” (latest revision of the content) or “LatestReleased” (latest released revision of the content). For example, to retrieve file:

    <ucm:GenericRequest webKey="cs">
    <ucm:Service IdcService="GET_FILE">
    <ucm:Document>
    <ucm:Field name="dID">401</ucm:Field>
    </ucm:Document>
    </ucm:Service>
    </ucm:GenericRequest>

    Search File

    The dID of the content could be retrieved using the service GET_SEARCH_RESULTS. It uses a QueryText attribute in the <Field> node. The QueryText attribute defines the query and must be XML encoded. You can append values for title, content Id, and so on, in the QueryText, to refine the search. The syntax for QueryText could be challenging, but once you understand the special character formats, it is straightforward. For example, to search content by its original name:

    <ucm:Service IdcService="GET_SEARCH_RESULTS">
    <ucm:Document>
    <ucm:Field name="QueryText">dOriginalName &lt;starts&gt; `Test`</ucm:Field>
    </ucm:Document>
    </ucm:Service>

    In plain text, it is dOriginalName <starts> `Test`. The angle-bracket operator format (such as <starts> or <substring>) is mandatory. You can further refine the query by adding more parameters.

    This is a sample SOA composite with 2 external references, genericSoapPort and FileAdapter.

    ucmComposite

    This is a sample BPEL process flow that demonstrates how to retrieve the file and save a copy to a local directory using File Adapter. If the idcService is GET_SEARCH_RESULTS, then do not save the file. In a real scenario, you will search, check out and start processing the file.

     

    ucmBPEL1

    The original file name is preserved when copying it to a local directory by passing the header property to the FileAdapter. For example, create a variable fileName and use assign as follows:

    1. get file name from the response message in your <assign> activity as follows:

    <from expression="bpws:getVariableData('InvokeGenericSoapPort_GenericSoapOperation_OutputVariable','GenericResponse','/ns2:GenericResponse/ns2:Service/ns2:Document/ns2:ResultSet/ns2:Row/ns2:Field[@name=&quot;dOriginalName&quot;]')"/>
    <to variable="fileName"/>

    Please make note of the XPath expression as this will assist you to retrieve other metadata.

    2. Pass this fileName variable to the <invoke> of the FileAdapter as follows:

    <bpelx:inputProperty name="jca.file.FileName" variable="fileName"/>
    

    Please add the following property manually to the ../CommonDomain/ucm/cs/config/config.cfg file to enable this QueryText syntax: AllowNativeQueryFormat=true
    Then restart the managed server.
    Without this property, the typical error is: “Unable to retrieve search results. Parsing error at character xx in query….” (returned in the StatusMessage field).

    Testing SOA Composite:

    After the composite is deployed in your SOA server, you can test it either from Enterprise Manager (EM) or using SoapUI. These are the sample request messages for GET_SEARCH_RESULTS and GET_FILE.

    The following screens show the SOA composites for “GET_SEARCH_RESULTS” and “GET_FILE”:

    searchfile

    getfile

    Get_File Response snippet with critical objects:

    <ns2:GenericResponse xmlns:ns2="http://www.oracle.com/UCM">
    <ns2:Service IdcService="GET_FILE">
    <ns2:Document>
    <ns2:Field name="dID">401</ns2:Field>
    <ns2:Field name="IdcService">GET_FILE</ns2:Field>
    ....
    <ns2:ResultSet name="FILE_DOC_INFO">
    <ns2:Row>
    <ns2:Field name="dID">401</ns2:Field>
    <ns2:Field name="dDocName">UCMFA000401</ns2:Field>
    <ns2:Field name="dDocType">Document</ns2:Field>
    <ns2:Field name="dDocTitle">JRD Test</ns2:Field>
    <ns2:Field name="dDocAuthor">FAAdmin</ns2:Field>
    <ns2:Field name="dRevClassID">401</ns2:Field>
    <ns2:Field name="dOriginalName">Readme.html</ns2:Field>
    </ns2:Row>
    </ns2:ResultSet>
    </ns2:ResultSet>
    <ns2:File name="" href="/u01/app/fa/config/domains/fusionhost.mycompany.com/CommonDomain/ucm/cs/vault/document/bwzh/mdaw/401.html">
    <ns2:Contents>
    <xop:Include href="cid:7405676a-11f8-442d-b13c-f8f6c2b682e4" xmlns:xop="http://www.w3.org/2004/08/xop/include"/>
    </ns2:Contents>
    </ns2:File>
    </ns2:Document>
    </ns2:Service>
    </ns2:GenericResponse>

    Import File

    The above sample can also be used to import files into the WebCenter Content repository for inbound integration or other use cases. The service name is CHECKIN_UNIVERSAL.

    Summary

    This post demonstrates how to automate the export and import of content in the WebCenter Content server implemented by Fusion Applications. It further demonstrates how integration tools like SOA can be implemented to automate, extend and orchestrate integration between Fusion Applications in the cloud or on-premise, with Oracle or non-Oracle applications, either in cloud or on-premise sites.

    The SOA sample code is here.

    Fusion Applications User, Role Identity Flow and Initial Bulk Load


    Introduction

    As customers work towards implementing Fusion Applications (FA) in their enterprise and prepare for go-live, the enterprise user and role identity data from various HR applications needs to be migrated to FA, so that the users can become part of the FA system and be able to use the application. There are a number of steps involved in this process, and performing partial steps in lower environments does not prepare you sufficiently for go-live to a production environment. In this article, we will cover the important steps involved and some helpful tips for a successful user and role load.

    High-level Steps of Initial Bulk Load

    To migrate the user and role identities to FA, user and role information can be collected from various enterprise applications and brought into FA using tools offered with the FA release the customer is using. For Release 7 and Release 8 of FA, Human Capital Management (HCM) File-Based Loader (FBL) is the recommended option for bulk load. Release 9 includes an evolution of FBL that is not covered in detail in this article; FBL can still be used with Release 9.

    FBL enables you to bulk-load data from any data source into HCM, for the initial load as well as for ongoing maintenance of the data by means of periodic batch loading. Please refer to the documents listed in the reference section below for the full list of FBL capabilities and limitations and detailed steps of execution. The following diagram depicts the high-level steps of using FBL:

    FBL Flow Diagram

    As noted in the diagram above, data gathered from various sources is transformed for import using FBL. The tool helps with data validation and with loading the data into FA. After an FBL run, you will see the users and roles reflected in the FA product families. However, the users and roles loaded using FBL cannot be used yet. There is further downstream provisioning needed before we can log in using the user ID of any of the users just loaded. Before we look at how to provision downstream, it helps to understand the user and role identity containers in FA. Let’s look at that next.

    User and Role Identities in FA

    User and Role identity information is stored primarily in 3 containers in a typical FA deployment:

    1. FA-HCM (even if you intend to use other product families, the HCM module is always used in FA to manage user and role identity)

    2. Oracle Identity Manager (OIM)

    3. Oracle Internet Directory (OID).

    During run-time, the identity information in all of these 3 containers is applied to the users and used in enforcing security and entitlement policies of FA. For more details, please refer to the ‘Implementing Workforce Deployment’ document: https://docs.oracle.com/cd/E48434_01/fusionapps.1118/e49577/toc.htm

    The following diagram gives an overview of the flow of user and role identity information between them:

    VNBlogpic3

    We discussed FBL and the process of loading users and roles into FA HCM. Now, as per the flow in the diagram above, we need the users and roles provisioned to OIM and, in turn, to the OID server, so that when a user tries to log in to the FA system, Oracle Access Manager (OAM) can authenticate the user by looking up the user in OID. Likewise, much of the FA security policy enforcement is performed based on the user and role information available in OID. Hence, after step 1 (FBL), step 2, ‘Send Pending LDAP Requests’, must be manually invoked to provision the users downstream. Scheduled jobs take care of step 3 to provision the users into OID. Once steps 1, 2 and 3 have been run, step 4, ‘Retrieve Latest LDAP Changes’, can be invoked manually to complete the full cycle. Step 4 brings updates made in the IDM systems (OIM, OID and APM) into FA HCM. The step marked ‘1.1 Person Synchronization’ helps to ensure user information is complete and also helps cover dated actions. This step 1.1 must always be run at the beginning of the cycle, followed by step 1.2, ‘Retrieve Latest LDAP Changes’, for the initial load.

    If a customer were to perform step 1 (FBL) and not continue through the rest of the steps, under the assumption that they are not significant, they would not come away with a full understanding of the importance of the remaining steps of the process, nor would the users and roles be fully provisioned in the system. So, it is important to complete all 4 steps in at least one lower environment before attempting to do this in a production environment for the first time, so that you have a full appreciation of the complete process and can plan the go-live activity.

    Completing All the Important Steps

    Assuming step 1 (FBL) is completed, let’s look at completing the ‘Send Pending LDAP Requests’ job to provision the users downstream from FA HCM. This is an important and resource-intensive step that needs to be carefully executed with proper planning.

    Firstly, the FA system must be properly tuned. There is good guidance available in the FA tuning document. Please review the FA tuning guide from the link below and make sure you apply the recommended parameters for your FA environment’s needs:

    http://docs.oracle.com/cd/E36909_01/fusionapps.1111/e16686/toc.htm#BEGIN

    If we were to point to one primary component that must be tuned, it would be the OID processes. This component is security-central for the overall FA application and must be tuned properly for good performance of the FA system at any time. It is very easy to tune this component as detailed in the above document. Next, tuning the OIM and SOA components certainly helps in the next step. The tuning document above details the necessary JVM heap size and data source tuning parameters, among others.

    Once tuning is completed and you are ready to invoke the step 2 ‘Send Pending LDAP Requests’ job, be prepared with a few SQL queries to track the progress of the load, as shown below. At the end of the step 1 FBL load, you will see multiple LDAP requests generated in a status of CREATE and REQUEST. You may query the status as they get processed by Send Pending LDAP Requests. A sample SQL query such as the one below gives a snapshot of the requests:

    select plr.ldap_request_id, plr.request_status, plu.request_status user_request_status, plu.request_type,
           ppn.last_name, ppn.first_name, plu.username requested_username, pu.username, plr.request_id, plr.active_flag,
           plr.error_code, plr.error_description,
           plr.last_update_date, plr.request_date, plr.requesting_reference_id
    from fusion.per_ldap_requests plr, fusion.per_ldap_users plu, fusion.per_person_names_f ppn, fusion.per_users pu
    where trunc(plr.last_update_date) >= trunc(sysdate - :dayOffset)
    and plr.requesting_reference_id = ppn.person_id(+)
    and plr.requesting_reference_id = pu.person_id(+)
    and plr.ldap_request_id = plu.ldap_request_id
    and ppn.name_type(+) = 'GLOBAL'
    and sysdate between ppn.effective_start_date(+) and ppn.effective_end_date(+)
    --and nvl(plr.error_code, 'X') <> 'HCM-SPML-0001'
    --and plr.request_status in ('FAULTED','REJECTED')
    --and plr.request_status = 'COMPLETE'
    --and plr.request_id is not null
    --and plu.request_type = 'TERMINATE'
    order by plr.last_update_date desc;
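
    A shorter query is handy for tracking overall progress while the job runs. The following sketch simply counts requests by status for the last day; the connection shown (SYSDBA on the database host) is a placeholder, so substitute your own connect string as needed.

    # Count LDAP requests by status for the last day (placeholder connection).
    echo "select request_status, count(*)
    from fusion.per_ldap_requests
    where trunc(last_update_date) >= trunc(sysdate - 1)
    group by request_status
    order by 2 desc;" | sqlplus -s / as sysdba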

    Once step 2 is finished, you may export the output to a spreadsheet to process successes and failures and resolve any issues for a re-run. There can be reasonable errors, such as a user ID that already exists or invalid data, that may prevent the process from completing. You may correct those errors, and a re-run can complete them. For example, error IAM-3076036 – “User with the attribute mail, value <email_id> already exists” – can be corrected by updating the user record with a correct email ID. Error code IAM-3071004 means the user is not correctly set up with a manager record. For more details on the error codes, please refer to Doc ID 1509644.1 in support.oracle.com.

    Also, since provisioning to OIM and OID (steps 2 and 3) is resource intensive, it is a good idea to split the load and load users in smaller batches. When the users are brought in with FBL, keep each batch at a smaller, manageable size that performs well for your environment. Say you are bringing in a total of 100,000 users; then it may be good to load them in batches of 25,000. Load 25,000 users in the first batch using step 1 (FBL) and complete steps 2, 3 and 4. Then bring in the next 25,000 with FBL, and so on.

    There are pros and cons to running in smaller batches, as with any batch processing, but choosing the right size is important. Smaller batches help reduce the stress on the system and also help with error handling. Depending on the performance you get, you may adjust the size of the load.

    It is a good idea to plan for dedicated time for the system to complete the step 2 ‘Send Pending LDAP Requests’ process, since a general slowness may be observed in user login performance during this load. Hence, you may plan for one batch overnight or a few batches over a weekend. This helps avoid conflicts if the system is shared by other work streams. Allowing sufficient dedicated load time helps this process complete relatively quickly.

    There are several good documents in My Oracle Support, listed in the reference section, to help you prepare for and monitor the progress of this phase, as well as troubleshooting articles that you will find valuable.

    Besides the initial load, it is important to plan and schedule the ongoing incremental processing once the system is in use. Based on the number of users and roles created or updated in the source systems, the set of jobs we looked at above needs to be scheduled. Step 1.1 must always be executed at the beginning of the scheduled cycle. Step 3, OIM LDAP synchronization, can also be manually invoked using OIM scheduled jobs; ‘LDAP Role Create and Update Full Reconciliation’ and ‘LDAP Role Create and Update Reconciliation’ are some important ones. For further details on how these programs work and when to schedule them, see ‘Synchronization of User and Role Information with Oracle Identity Management: How It Is Processed’ in the Oracle® Fusion Applications Coexistence for HCM Implementation Guide.

    Summary

    A few take away points from this article for initial bulk-load:

    1. Load the initial bulk of users and roles in batches of a size appropriate to your environment.

    2. Proper tuning of the OID, OIM and SOA components is key to the success of this process.

    3. Run the full process discussed here in at least one environment before the production load, and plan to allow dedicated system resources for the job.

    4. Prepare and gather the necessary monitoring and validation scripts beforehand so that progress can be monitored easily during the load.

    5. Plan for and run the scheduled jobs discussed above on an ongoing basis to keep the information flow current.

    Reference

    My Oracle Support Docs:

    File-Based Loader for Release 7 & 8 (Doc ID 1595283.1)

    File Based Loader Diagnostics Release 7 & 8 (Doc ID 1594500.1)

    User and Role Provisioning – Troubleshooting Guide (Doc ID 1459830.1)

    Fusion HCM: Common BPEL and OIM error messages in User and Roles Provisioning (Doc ID 1509644.1)

    Fusion HCM Cloud Bulk Integration Automation


    Introduction

    Fusion HCM Cloud provides a comprehensive set of tools, templates, and pre-packaged integrations to cover various scenarios using modern and efficient technologies. One of the patterns is bulk integration to load and extract data to and from the cloud. The inbound tool is the File-Based Data Loader (FBL), which is evolving into HCM Data Loader (HDL). HDL supports data migration for full HR and incremental loads to support coexistence with Oracle applications such as E-Business Suite (EBS) and PeopleSoft (PSFT). It also provides the ability to bulk load into configured flexfields. HCM Extracts is an outbound integration tool that lets you choose data, then gathers and archives it. This archived raw data is converted into the desired format and delivered to supported channels and recipients.

    HCM Cloud implements Oracle WebCenter Content, a component of Fusion Middleware, to store and secure the data files for both inbound and outbound bulk integration patterns. This post focuses on how to automate data file transfer with WebCenter Content to initiate the loader. The same APIs are used to download data files delivered through the extract process from WebCenter Content.

    WebCenter Content replaces SSH File Transfer Protocol (SFTP) server in the cloud as a content repository in Fusion HCM starting with Release 7+. There are several ways of importing and exporting content to and from Fusion Applications such as:

    • Upload using “File Import and Export” UI from home page navigation: Navigator > Tools
    • Upload using WebCenter Content Document Transfer Utility
    • Upload programmatically via Java Code or Web Service API

    This post provides an introduction, with working sample code, on how to programmatically export content from Fusion Applications to automate the outbound integration process to other applications in the cloud or on-premise. A Service Oriented Architecture (SOA) composite is implemented to demonstrate the concept.

    Main Article

    Fusion Applications Security in WebCenter Content

    The content in WebCenter Content is secured through users, roles, privileges and accounts. The user could be any valid user with a role such as “Integration Specialist.” The role may have privileges such as read, write and delete. The accounts are predefined by each application. For example, HCM uses /hcm/dataloader/import and /hcm/dataloader/export respectively.

    Let’s review the inbound and outbound batch integration flows.

    Inbound Flow

    This is a typical Inbound FBL process flow:

     

    HDL_loader_process

    The data file is uploaded to WebCenter Content Server either using Fusion HCM UI or programmatically in /hcm/dataloader/import account. This uploaded file is registered by invoking the Loader Integration Service – http://{Host}/hcmCommonBatchLoader/LoaderIntegrationService.

    You must specify the following in the payload:

    • Content id of the file to be loaded
    • Business objects that you are loading
    • Batch name
    • Load type (FBL)
    • Imported file to be loaded automatically

    Fusion Applications UI also allows the end user to register and initiate the data load process.

     

    Encryption of Data File using Pretty Good Privacy (PGP)

    All data files transit over a network via SSL. In addition, HCM Cloud supports encryption of data files at rest using PGP.
    Fusion supports the following types of encryption:

    • PGP Signed
    • PGP Unsigned
    • PGPX509 Signed
    • PGPX509 Unsigned

    To use this PGP Encryption capability, a customer must exchange encryption keys with Fusion for the following:

    • Fusion can decrypt inbound files
    • Fusion can encrypt outbound files
    • Customer can encrypt files sent to Fusion
    • Customer can decrypt files received from Fusion
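
    For illustration, the customer-side half of this key exchange can be exercised with GnuPG. This is only a sketch: the key name and file names are placeholders, and the exact key exchange procedure for your pod is agreed with Oracle Cloud Operations.

    # Import the Fusion PGP public key received from Oracle Cloud Operations.
    gpg --import fusion_public_key.asc
    # Encrypt an inbound data file so that Fusion can decrypt it ("Fusion" is a placeholder key name).
    gpg --recipient "Fusion" --output Worker.zip.pgp --encrypt Worker.zip
    # Decrypt an outbound extract received from Fusion using your own private key.
    gpg --output MyExtract.zip --decrypt MyExtract.zip.pgp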

    Steps to Implement PGP

    1. Provide your PGP Public Key.
    2. Oracle’s Cloud Operations team provides you with the Fusion PGP Public Key.

    Steps to Implement PGP X.509

    1. Self-signed Fusion key pair (default option):
      • You provide the public X.509 certificate
    2. Fusion key pair provided by you:
      • Public X.509 certificate uploaded via Oracle Support Service Request (SR)
      • Fusion key pair for Fusion’s X.509 certificate in a Keystore, with the Keystore password

    Steps for Certificate Authority (CA) signed Fusion certificate

    1. Obtain a Certificate Authority (CA) signed Fusion certificate
    2. Public X.509 certificate uploaded via SR
    3. Oracle’s Cloud Operations exports the Fusion public X.509 CSR certificate and uploads it to the SR
    4. Using the Fusion public X.509 CSR certificate, the customer provides the signed CA certificate and uploads it to the SR
    5. Oracle’s Cloud Operations provides the Fusion PGP Public Certificate to you via an SR

     

    Modification to Loader Integration Service Payload to support PGP

    The loaderIntegrationService has a new method called “submitEncryptedBatch” which has an additional parameter named “encryptType”. The valid values to pass in the “encryptType” parameter are taken from the ORA_HRC_FILE_ENCRYPT_TYPE lookup:

    • NONE
    • PGPSIGNED
    • PGPUNSIGNED
    • PGPX509SIGNED
    • PGPX509UNSIGNED

    Sample Payload

    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <soap:Body>
    <ns1:submitEncryptedBatch
    xmlns:ns1="http://xmlns.oracle.com/apps/hcm/common/batchLoader/core/loaderIntegrationService/types/">
    <ns1:ZipFileName>LOCATIONTEST622.ZIP</ns1:ZipFileName>
    <ns1:BusinessObjectList>Location</ns1:BusinessObjectList>
    <ns1:BatchName>LOCATIONTEST622.ZIP</ns1:BatchName>
    <ns1:LoadType>FBL</ns1:LoadType>
    <ns1:AutoLoad>Y</ns1:AutoLoad>
    <ns1:encryptType>PGPX509SIGNED</ns1:encryptType>
    </ns1:submitEncryptedBatch>
    </soap:Body>
    </soap:Envelope>

     

    Outbound Flow

    This is a typical Outbound batch Integration flow using HCM Extracts:

    extractflow

    The extracted file could be delivered to the WebCenter Content server. HCM Extract has an ability to generate an encrypted output file. In Extract delivery options ensure the following options are correctly configured:

    1. Set the HCM Delivery Type to “HCM Connect”.
    2. Select an Encryption Mode from the 4 supported encryption types, or select None.
    3. Specify the Integration Name – this value is used to build the title of the entry in WebCenter Content.

     

    Extracted File Naming Convention in WebCenter Content

    The file will have the following properties:

    • Author: FUSION_APPSHCM_ESS_APPID
    • Security Group: FAFusionImportExport
    • Account: hcm/dataloader/export
    • Title: HEXTV1CON_{IntegrationName}_{EncryptionType}_{DateTimeStamp}

     

    Programmatic Approach to export/import files from/to WebCenter Content

    In Fusion Applications, the WebCenter Content Managed server is installed in the Common domain Weblogic Server. The WebCenter Content server provides two types of web services:

    Generic JAX-WS based web service

    This is a generic web service for general access to the Content Server. The context root for this service is “/idcws”. For details of the format, see the published WSDL at https://<hostname>:<port>/idcws/GenericSoapPort?WSDL. This service is protected through Oracle Web Services Security Manager (OWSM). As a result of allowing WS-Security policies to be applied to this service, streaming Message Transmission Optimization Mechanism (MTOM) is not available for use with this service. Very large files (greater than the memory of the client or the server) cannot be uploaded or downloaded.

    Native SOAP based web service

    This is the general WebCenter Content service. Essentially, it is a normal socket request to Content Server, wrapped in a SOAP request. Requests are sent to the Content Server using streaming Message Transmission Optimization Mechanism (MTOM) in order to support large files. The context root for this service is “/idcnativews”. The main web service is IdcWebRequestPort and it requires JSESSIONID, which can be retrieved from IdcWebLoginPort service.

    The Remote Intradoc Client (RIDC) uses the native web services. Oracle recommends that you do not develop a custom client against these services.

    For more information, please refer to “Developing with WebCenter Content Web Services for Integration.”

    Generic Web Service Implementation

    This post provides a sample of implementing generic web service /idcws/GenericSoapPort. In order to implement this web service, it is critical to review the following definitions to generate the request message and parse the response message:

    IdcService:

    IdcService is a predefined service node’s attribute that is to be executed, for example, CHECKIN_UNIVERSAL, GET_SEARCH_RESULTS, GET_FILE, CHECKOUT_BY_NAME, etc.

    User

    User is a subnode within a <service> and contains all user information.

    Document

    Document is a collection of all the content-item information and is the parent node of all the data.

    ResultSet

    ResultSet is a typical row/column based schema. The name attribute specifies the name of the ResultSet. It contains a set of row subnodes.

    Row

    Row is a typical row within a ResultSet, which can have multiple <row> subnodes. It contains sets of Field objects

    Field

    Field is a subnode of either <document> or <row>. It represents document or user metadata such as content Id, Name, Version, etc.

    File

    File is a file object that is either being uploaded or downloaded

    For more information, please refer to Configuring Web Services with WSDL, SOAP, and the WSDL Generator.

    Web Service Security

    The genericSoapPort web service is protected by Oracle Web Services Manager (OWSM). In Oracle Fusion Applications cloud, the OWSM policy is: “oracle/wss11_saml_or_username_token_with_message_protection_service_policy”.

    In your SOAP envelope, you will need the appropriate “wsse” headers. This is a sample:

    <soapenv:Header>
    <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" soapenv:mustUnderstand="1">
    <saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion" MajorVersion="1" MinorVersion="1" AssertionID="SAML-iiYLE6rlHjI2j9AUZXrXmg22" IssueInstant="2014-10-20T13:52:25Z" Issuer="www.oracle.com">
    <saml:Conditions NotBefore="2014-10-20T13:52:25Z" NotOnOrAfter="2015-11-22T13:57:25Z"/>
    <saml:AuthenticationStatement AuthenticationInstant="2014-10-20T14:52:25Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
    <saml:Subject>
    <saml:NameIdentifier Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified">FAAdmin</saml:NameIdentifier>
    <saml:SubjectConfirmation>
    <saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:sender-vouches</saml:ConfirmationMethod>
    </saml:SubjectConfirmation>
    </saml:Subject>
    </saml:AuthenticationStatement>
    </saml:Assertion>
    </wsse:Security>
    </soapenv:Header>

    Sample SOA Composite

    The SOA code provides a sample on how to search for a document in WebCenter Content, extract a file name from the search result, and get the file and save it in your local directory. The file could be processed immediately based on your requirements. Since this is a generic web service with a generic request message, you can use the same interface to invoke various IdcServices, such as GET_FILE, GET_SEARCH_RESULTS, etc.

    In the SOA composite sample, two external services are created: GenericSoapPort and FileAdapter. If the service is GET_FILE, then it will save a copy of the retrieved file in your local machine.

    Export File

    The GET_FILE service returns a specific rendition of a content item, the latest revision, or the latest released revision. A copy of the file is retrieved without performing a check out. It requires either dID (content item revision ID) for the revision, or dDocName (content item name) along with a RevisionSelectionMethod parameter. The RevisionSelectionMethod could be either “Latest” (latest revision of the content) or “LatestReleased” (latest released revision of the content). For example, to retrieve file:

    <ucm:GenericRequest webKey="cs">
    <ucm:Service IdcService="GET_FILE">
    <ucm:Document>
    <ucm:Field name="dID">401</ucm:Field>
    </ucm:Document>
    </ucm:Service>
    </ucm:GenericRequest>

    Search File

    The dID of the content could be retrieved using the service GET_SEARCH_RESULTS. It uses a QueryText attribute in the <Field> node. The QueryText attribute defines the query and must be XML encoded. You can append values for title, content Id, and so on, in the QueryText, to refine the search. The syntax for QueryText could be challenging, but once you understand the special character formats, it is straightforward. For example, to search content by its original name:

    <ucm:Service IdcService="GET_SEARCH_RESULTS">
    <ucm:Document>
    <ucm:Field name="QueryText">dOriginalName &lt;starts&gt; `Test`</ucm:Field>
    </ucm:Document>
    </ucm:Service>

    In plain text, it is dOriginalName <starts> `Test`. The angle-bracket operator format (such as <starts> or <substring>) is mandatory. You can further refine the query by adding more parameters.

    This is a sample SOA composite with two external references, genericSoapPort and FileAdapter.

    [Image: ucmComposite — the sample SOA composite]

    This is a sample BPEL process flow that demonstrates how to retrieve the file and save a copy to a local directory using the File Adapter. If the IdcService is GET_SEARCH_RESULTS, the file is not saved. In a real scenario, you would search for the file, check it out, and start processing it.

     

    [Image: ucmBPEL1 — the sample BPEL process flow]

    The original file name is preserved when copying the file to a local directory by passing a header property to the FileAdapter. For example, create a variable fileName and use an assign as follows:

    1. Get the file name from the response message in your <assign> activity as follows:

    <copy>
      <from expression="bpws:getVariableData('InvokeGenericSoapPort_GenericSoapOperation_OutputVariable','GenericResponse','/ns2:GenericResponse/ns2:Service/ns2:Document/ns2:ResultSet/ns2:Row/ns2:Field[@name=&quot;dOriginalName&quot;]')"/>
      <to variable="fileName"/>
    </copy>

    Make note of the XPath expression, as it will help you retrieve other metadata fields in the same way.

    2. Pass this fileName variable to the <invoke> of the FileAdapter as follows:

    <bpelx:inputProperty name="jca.file.FileName" variable="fileName"/>
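
    For context, this property sits inside the FileAdapter <invoke> activity. A rough sketch, assuming a default File Adapter “Write” reference (the partner link, port type, operation, and variable names here are illustrative and will differ in your composite):

    <invoke name="InvokeWriteFile" partnerLink="FileAdapter"
            portType="ns3:Write_ptt" operation="Write"
            inputVariable="InvokeWriteFile_Write_InputVariable">
      <!-- pass the original file name so the adapter preserves it on disk -->
      <bpelx:inputProperty name="jca.file.FileName" variable="fileName"/>
    </invoke>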
    

    Note: For the QueryText syntax above to work, add the property AllowNativeQueryFormat=true manually to the ../CommonDomain/ucm/cs/config/config.cfg file, and then restart the managed server. Without it, the typical error returned in the StatusMessage field is: “Unable to retrieve search results. Parsing error at character xx in query….”

    Testing the SOA Composite

    After the composite is deployed to your SOA server, you can test it either from Enterprise Manager (EM) or using SoapUI, with sample request messages for GET_SEARCH_RESULTS and GET_FILE.

    The following screens show the SOA composites for “GET_SEARCH_RESULTS” and “GET_FILE”:

    [Image: searchfile — testing the GET_SEARCH_RESULTS request]

    [Image: getfile — testing the GET_FILE request]

    GET_FILE response snippet with the critical elements:

    <ns2:GenericResponse xmlns:ns2="http://www.oracle.com/UCM">
      <ns2:Service IdcService="GET_FILE">
        <ns2:Document>
          <ns2:Field name="dID">401</ns2:Field>
          <ns2:Field name="IdcService">GET_FILE</ns2:Field>
          ....
          <ns2:ResultSet name="FILE_DOC_INFO">
            <ns2:Row>
              <ns2:Field name="dID">401</ns2:Field>
              <ns2:Field name="dDocName">UCMFA000401</ns2:Field>
              <ns2:Field name="dDocType">Document</ns2:Field>
              <ns2:Field name="dDocTitle">JRD Test</ns2:Field>
              <ns2:Field name="dDocAuthor">FAAdmin</ns2:Field>
              <ns2:Field name="dRevClassID">401</ns2:Field>
              <ns2:Field name="dOriginalName">Readme.html</ns2:Field>
            </ns2:Row>
          </ns2:ResultSet>
          <ns2:File name="" href="/u01/app/fa/config/domains/fusionhost.mycompany.com/CommonDomain/ucm/cs/vault/document/bwzh/mdaw/401.html">
            <ns2:Contents>
              <xop:Include href="cid:7405676a-11f8-442d-b13c-f8f6c2b682e4" xmlns:xop="http://www.w3.org/2004/08/xop/include"/>
            </ns2:Contents>
          </ns2:File>
        </ns2:Document>
      </ns2:Service>
    </ns2:GenericResponse>

    Import (Upload) File for HDL

    The above sample can also be used to import files into the WebCenter Content repository for inbound integration or other use cases. The service name is CHECKIN_UNIVERSAL; a sketch of such a request is shown below.
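
    A minimal sketch of a CHECKIN_UNIVERSAL request follows. The metadata values, security group, account, and attachment reference are illustrative only; the fields required for an HDL import depend on your environment and import conventions.

    <ucm:GenericRequest webKey="cs">
      <ucm:Service IdcService="CHECKIN_UNIVERSAL">
        <ucm:Document>
          <ucm:Field name="dDocTitle">Worker Import</ucm:Field>
          <ucm:Field name="dDocType">Document</ucm:Field>
          <!-- illustrative security group and account; use the values mandated for your import use case -->
          <ucm:Field name="dSecurityGroup">FAFusionImportExport</ucm:Field>
          <ucm:Field name="dDocAccount">hcm$/dataloader$/import$</ucm:Field>
          <!-- the file content is sent as an MTOM/XOP attachment, mirroring the File element in the GET_FILE response -->
          <ucm:File name="primaryFile" href="Worker.zip">
            <ucm:Contents>
              <xop:Include href="cid:worker-zip-attachment" xmlns:xop="http://www.w3.org/2004/08/xop/include"/>
            </ucm:Contents>
          </ucm:File>
        </ucm:Document>
      </ucm:Service>
    </ucm:GenericRequest>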

    Summary

    This post demonstrates how to secure and automate the export and import of data files in the WebCenter Content server implemented by Fusion HCM Cloud. It further demonstrates how integration tools such as SOA can be used to automate, extend, and orchestrate integration between HCM in the cloud and Oracle or non-Oracle applications, whether in the cloud or on-premises.

    The SOA sample code is here.
