
Invoke Fusion Cloud Secured RESTful Web Services


Introduction

The objective of this blog is to demonstrate how to invoke secured RESTful web services in Fusion Cloud using Oracle Service Oriented Architecture (SOA) as an integration hub for real-time integration with other clouds and on-premise applications. SOA can run on-premise or in the cloud (PaaS), and SOA composites deployed on-premise can be migrated to SOA in the cloud.

What is REST?

REST stands for Representational State Transfer. It ignores the details of implementation and applies a set of interaction constraints. Web service APIs that adhere to the REST architectural constraints are called RESTful. HTTP-based RESTful APIs are defined with the following aspects:

  • Exactly one entry point – For example: http://example.com/resources/
  • Support of media type data – JavaScript Object Notation (JSON) and XML are common
  • Standard HTTP Verbs (GET, PUT, POST, PATCH or DELETE)
  • Hypertext links to reference state
  • Hypertext links to reference related resources

Resources & Collections

The Resources can be grouped into collections. Each collection is homogeneous and contains only one type of resource. For example:

URI | Description | Example
/api/ | API entry point | /fusionApi/resources
/api/:coll/ | Top-level collection :coll | /fusionApi/resources/department
/api/:coll/:id | Resource ID inside collection | /fusionApi/resources/department/10
/api/:coll/:id/:subcoll | Sub-collection | /fusionApi/resources/department/10/employees
/api/:coll/:id/:subcoll/:subid | Sub-resource ID | /fusionApi/resources/department/10/employees/1001

 

Invoking Secured RestFul Service using Service Oriented Architecture (SOA)

SOA 12c supports the REST Adapter, which can be configured as a service binding component in a SOA composite application. For more information, please refer to this link. In order to invoke a secured RESTful service, the following Fusion security requirements must be met:

Fusion Applications Security

All external RESTful service URLs in Oracle Fusion Cloud are secured using Oracle Web Services Manager (OWSM). The server policy is "oracle/http_jwt_token_client_policy", which allows the following client authentication types:

  • HTTP Basic Authentication over Secure Socket Layer (SSL)
  • Oracle Access Manager(OAM) Token-service
  • Simple and Protected GSS-API Negotiate Mechanism (SPNEGO)
  • SAML token

JSON Web Token (JWT) is a lightweight implementation for web services authentication. A client with a valid JWT token is allowed to call the REST service until the token expires. The existing OWSM policy "oracle/wss11_saml_or_username_token_with_message_protection_service_policy" includes the JWT-over-SSL assertion. For more information, please refer to this.

The client must satisfy one of the above policies in the security headers of the invocation call for authentication. In SOA, a client policy may be attached from Enterprise Manager (EM) to decouple it from design time.

Fusion Security Roles

The user must have the appropriate Fusion roles, including the respective data security roles, to view or change resources in Fusion Cloud. Each product pillar has its respective roles. For example, in HCM, a user must have a role that inherits the following roles:

  • HCM REST Services Duty – Example: “Human Capital Management Integration Specialist”
  • Data security Roles that inherit “Person Management Duty” – Example: “Human Resource Specialist – View All”

 

Design SOA Code using JDeveloper

In your SOA composite editor, right-click the Exposed Services swimlane and select Insert > REST. This action adds REST support as a service binding component to interact with the appropriate service component.

This is a sample SOA composite with a REST Adapter using the Mediator component (you can also use BPEL):

rest_composite

The following screens show how to configure the REST Adapter as an external reference:

REST Adapter Binding

rest_adapter_config_1

REST Operation Binding

rest_adapter_config_2

The REST Adapter converts the JSON response to XML using the Native Format Builder (NXSD). For more information on configuring NXSD from JSON to XML, please refer to this link.

generic_json_to_xml_nxd

Attaching Oracle Web Service Manager (OWSM) Policy

Once the SOA composite is deployed to your SOA server, the HTTP Basic Authentication OWSM policy is attached as follows:

Navigate to your composite from EM and click on the Policies tab:

 

rest_wsm_policy_from_EM_2

 

Identity Propagation

Once the OWSM policy is attached to your REST reference, the HTTP token can be passed using the Credential Store. Create the credential store key as follows:

1. Right-click on SOA Domain and select Security/Credentials.

rest_credential_1

2. Please see the following screen to create a key under oracle.wsm.security map:

 

rest_credential_2

Note: If the oracle.wsm.security map is missing, create this map before creating a key.

 

By default, the OWSM policy uses the basic.credentials key. To use the newly created key from above, override the default key using the following instructions:

1. Navigate to REST reference binding as follows:

rest_wsm_overridepolicyconfig

rest_wsm_overridepolicyconfig_2

Replace basic.credentials with your new key value.

 

Secure Socket Layer (SSL) Configuration

In Oracle Fusion Applications, the OWSM policy mandates the HTTPS protocol. For an introduction to SSL and detailed configuration, please refer to this link.

The cloud server certificate must be imported in two locations as follows:

1. keytool -import -alias slc08ykt -file /media/sf_C_DRIVE/JDeveloper/mywork/MyRestProject/facert.cer -keystore /oracle/xehome/app/soa12c/wlserver/server/lib/DemoTrust.jks -storepass DemoTrustKeyStorePassPhrase

This is the output:

Owner: CN=*.us.mycompany.com, DC=us, DC=mycompany, DC=com
Issuer: CN=*.us.mycompany.com, DC=us, DC=mycompany, DC=com
Serial number: 7
Valid from: Mon Apr 25 09:08:55 PDT 2011 until: Thu Apr 22 09:08:55 PDT 2021
Certificate fingerprints:
MD5: 30:0E:B4:91:F3:A4:A7:EE:67:6F:73:D3:E1:1B:A6:82
SHA1: 67:93:15:14:3E:64:74:27:32:32:26:43:FF:B8:B9:E6:05:A8:DE:49
SHA256: 01:0E:2A:8A:D3:A9:3B:A4:AE:58:4F:AD:2C:E7:BD:45:B7:97:6F:A0:C4:FA:96:A5:29:DD:77:85:3A:05:B1:B8
Signature algorithm name: MD5withRSA
Version: 1
Trust this certificate? [no]: yes
Certificate was added to keystore

2. keytool -import -alias <name> -file /media/sf_C_DRIVE/JDeveloper/mywork/MyRestProject/facert.cer -trustcacerts -keystore /oracle/xehome/app/jdk1.7.0_55/jre/lib/security/cacerts

This is the output:

Enter keystore password:
Owner: CN=*.us.mycompany.com, DC=us, DC=mycompany, DC=com
Issuer: CN=*.us.mycompany.com, DC=us, DC=oracle, DC=com
Serial number: 7
Valid from: Mon Apr 25 09:08:55 PDT 2011 until: Thu Apr 22 09:08:55 PDT 2021
Certificate fingerprints:
MD5: 30:0E:B4:91:F3:A4:A7:EE:67:6F:73:D3:E1:1B:A6:82
SHA1: 67:93:15:14:3E:64:74:27:32:32:26:43:FF:B8:B9:E6:05:A8:DE:49
SHA256: 01:0E:2A:8A:D3:A9:3B:A4:AE:58:4F:AD:2C:E7:BD:45:B7:97:6F:A0:C4:FA:96:A5:29:DD:77:85:3A:05:B1:B8
Signature algorithm name: MD5withRSA
Version: 1
Trust this certificate? [no]: yes
Certificate was added to keystore

You must restart Admin and SOA Servers.

 

Testing

Deploy the above composite to your SOA server. The SOA composite can be invoked from EM or using tools like SoapUI. Please see the following link on testing the REST Adapter using HTTP Analyzer.
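Independent of EM, you can also sanity-check the secured Fusion REST endpoint directly with a short Node.js script using the HTTP Basic Authentication over SSL option described earlier. This is a minimal sketch only: the hostname, resource path, credentials, and certificate file below are placeholders, and the certificate is assumed to be PEM-encoded.

var https = require('https');
var fs = require('fs');

// Placeholder values - substitute your pod hostname, resource path,
// credentials and the exported cloud certificate (PEM format)
var options = {
  host: 'your-fusion-pod.mycompany.com',
  port: 443,
  path: '/hcmCoreApi/resources/latest/emps',
  ca: fs.readFileSync('facert.cer'),
  headers: {
    'Authorization': 'Basic ' + new Buffer('username:password').toString('base64')
  }
};

https.get(options, function(res) {
  // 200 indicates the OWSM policy and Fusion role requirements were satisfied
  console.log('HTTP status: ' + res.statusCode);
  var body = '';
  res.on('data', function(chunk) { body += chunk; });
  res.on('end', function() { console.log(body); });
}).on('error', function(e) {
  console.log('Error: ' + e.message);
});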

Conclusion

This blog demonstrates how to invoke secured REST services from the Fusion Applications cloud using SOA. It provides detailed configuration steps for importing cloud certificates and attaching OWSM policies. This sample supports multiple patterns such as cloud-to-cloud, cloud-to-on-premise, cloud-to-BPO, etc.

 

 

 


IDM FA Integration flows


Introduction

One of the key aspects of Fusion Applications operations is users and roles management. Fusion Applications uses Oracle Identity Management for its identity store and policy store by default. This article explains how user and role flows work from different points of view, using the 'key' IDM products for each flow in detail. With a clear understanding of how Fusion Applications works with Identity Management for user provisioning and roles management, you can improve your FA IDM environments by integrating them with the rest of your enterprise assets and processes. For example: if you need to integrate your current enterprise IDM with this solution, what are the flows you need to be aware of?

Main Article

FA relies on roles and privileges implemented in IDM to authenticate and authorize users and operations respectively. FA uses jobs in the ESS system to reconcile users and roles with OIM. OIM, in turn, gets the corresponding data from the user and policy store using LDAP Sync (the provisioning and reconciliation process). This flow is described below.

Fig1: FA IDM integration flow.

A brief explanation of each topic in the main flow above:

FA OID flow: OID holds policy information from FA. Basically, duty roles and privileges are created from FA into OID (the policy or security store).

Fig2: FusionApps and OID.

FA OIM flow: FA/OIM provisions users and roles to OIM/FA through SPML.

For example: enterprise business logic may qualify the requester and initiate a role provisioning request by invoking the Service Provisioning Markup Language (SPML) client module, as may occur during onboarding of internal users with Human Capital Management (HCM), in which case the SPML client submits an asynchronous SPML call to OIM.

Or OIM handles the role request by presenting roles for selection based on associated policies.

Or the products communicate with each other, providing challenge question responses, the password reset procedure, and more.

Fig3: Illustration of the FA-OIM flow described above.

OID OIM flow: OIM connects to OVD through the LDAP ITResource feature, which allows the connection. It is also responsible for the LDAP Sync reconciliations from OID to OIM, as well as the event handlers that OIM triggers if there is any update from there.

Fig4: Visual explanation of the OID OIM flow.

FA OIM flow (ESS jobs): here, ESS jobs from FA create users in OID or update FA from OID. 4.1) "Retrieve Latest LDAP Changes" reads from OID and updates FA if anything is missing (users, role assignments, etc.); 4.2) "Send Pending LDAP Changes" sends over to OIM any requests that have not yet been processed. (If you are using the FA UIs, like Manage Users, to create a user, it should happen almost immediately; but if you have bulk-loaded employees and assignments, you need to run "Send Pending LDAP Requests" to get the requests processed.)

Fig5: OAM-FA integrated.

Conclusion

Implementing an FA+IDM solution for an organization is a proposition that should be done with all other flows in consideration, such as the 'New Hire' and 'Authentication and Authorization' flows. Proper planning and an understanding of the various dimensions provided by this solution and its concepts allow an organization to discern why, or even whether, they need Oracle IDM and FA wired to their enterprise IDM solution. It also highlights what the enterprise is willing to protect in user details, and how best to offer Oracle protection in an integrated and effective manner.

Other useful links:

Oracle® Fusion Applications Security Guide, 11g Release 1 (11.1.1.5.0): http://docs.oracle.com/cd/E15586_01/fusionapps.1111/e16689/F323392AN1A795.htm

Mass Reset Password-part1 OID


Introduction

One of the great features customers need to be aware of, and one that can be used as a post-process in many different situations such as P2T, T2P, and cloning, is the ability to reset multiple passwords simultaneously. Imagine the customer is scaling out their environment because they need an additional UAT environment. This customer has a new requirement: replace all end-user passwords in an entire FA-IDM REL8 solution. This kind of scenario is growing naturally because it allows many process variations in on-premise environments. Unfortunately, by default, OIM and OID don't have a web UI feature that allows this mass reset. This blog covers part 1: how to do it using OID commands. In December we will publish part 2: how to do it using the OIM API.

Main Article

In this case, the best approach is to run P2T and then change any information that comes from production but is unwanted in the test environment; for our scenario, that is the passwords. Once the data is copied to another place, the passwords must be changed. This article provides step-by-step instructions to accomplish this task and make sure your end users' passwords will not be available in other environments.

Step1) Backup: $ORACLE_HOME/ldap/bin/ldifwrite connect=oiddb basedn="cn=users,dc=mycompany,dc=com" thread=3 verbose=true ldiffile=/tmp/backup-[NAME]-PWD-[DATE].dat

Chronicle-ATeamOct2014-MassResetPwd-phs_1

Step2) ldapsearch -p 3060 -D cn=orcladmin -w Welcome1 -b "cn=Users,dc=mycompany,dc=com" -L '(&(objectclass=*)(!(cn=FUSION_APPS_*)))' dn | sed 's/dc=com/dc=com\nchangetype:\ modify\nreplace:\ userpassword\nuserpassword:\ NewPwdValue/g' > User_pwd_reset_list.ldif

Chronicle-ATeamOct2014-MassResetPwd-phs_2

If you open the file created you should see something like this with many users:

Chronicle-ATeamOct2014-MassResetPwd-phs_3
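Concretely, given the sed substitution in Step 2, each block in the generated LDIF should have the following shape (cn=test_test is an illustrative user):

dn: cn=test_test,cn=Users,dc=mycompany,dc=com
changetype: modify
replace: userpassword
userpassword: NewPwdValue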

Step3) Manually remove all of the following users from the ldif file (created above):
Excluded list:
cn=AppIDUsers,cn=Users,dc=mycompany,dc=com
cn=orcladmin,cn=Users,dc=mycompany,dc=com
cn=PUBLIC,cn=Users,dc=mycompany,dc=com
cn=PolicyROUser,cn=Users,dc=mycompany,dc=com
cn=PolicyRWUser,cn=Users,dc=mycompany,dc=com
cn=oamAdminUser,cn=Users,dc=mycompany,dc=com
cn=oamSoftwareUser,cn=Users,dc=mycompany,dc=com
cn=xelsysadm,cn=Users,dc=mycompany,dc=com
cn=weblogic_idm,cn=Users,dc=mycompany,dc=com
cn=IDROUser,cn=Users,dc=mycompany,dc=com
cn=IDRWUser,cn=Users,dc=mycompany,dc=com
cn=FAAdmin,cn=Users,dc=mycompany,dc=com
cn=oim_admin,cn=Users,dc=mycompany,dc=com
uid=webchatadmin,cn=Users,dc=mycompany,dc=com
cn=em_monitoring,cn=Users,dc=mycompany,dc=com
cn=OCLOUD9_osn_APPID,cn=AppIDUsers,cn=Users,dc=mycompany,dc=com
cn=oimAdminUser,cn=systemids,dc=mycompany,dc=com
cn=OblixAnonymous,dc=mycompany,dc=com
cn=OSN_LDAP_BIND_USER,cn=users,dc=mycompany,dc=com
cn=saas_readonly,cn=Users,dc=mycompany,dc=com
cn=fa_guest,cn=Users,dc=mycompany,dc=com

Step4) Double-check the file to make sure it is clean, e.g. by counting the entries that will be modified:
grep "dn:" [FILE].ldif | wc -l

Step5) Run: ldapmodify -p 3060 -D cn=orcladmin -w **** -c -v -f /u01/XXXPOD_User_PWD_RESET_FINAL.ldif

Chronicle-ATeamOct2014-MassResetPwd-phs_4

Step6) Test one of the users using ldapbind, e.g.: ldapbind -p 3060 -D "cn=test_test,cn=Users,dc=mycompany,dc=com" -q

Chronicle-ATeamOct2014-MassResetPwd-phs_5

Note: You don't need to run any ESS job or OIM reconciliation to have these users updated there. Because this action modifies the OID changelog, the next OIM incremental reconciliation will run and collect all these changes automatically. So, as shown in these screenshots, you should be able to log in to any SSO application, such as OIM, after the next incremental reconciliation.

Chronicle-ATeamOct2014-MassResetPwd-phs_6

Conclusion

Implementing an FA+IDM mass password reset for an organization is a proposition that should be done carefully, and an entire environment backup must be taken before it starts. Proper planning and an understanding of the various dimensions provided by this solution and its concepts allow an organization to discern how they handle copied passwords. It also highlights what the enterprise is willing to do to protect end-user data in copied environments, and how best to offer Oracle protection in an integrated and effective manner.

Mass Reset Password - part 2 - using OIM APIs


Introduction

Back in November, I wrote a blog about Mass Reset Password using OID. As mentioned there, and as expected for this month, Oracle is now providing the same password change feature, but using the Java OIM API.

Main Article

In this case, for development and test environments, customers usually want something they can control with Java exceptions to avoid any interruption, or a solution where they can have multiple options amid different use cases. Using Java, in particular, allows more possibilities from the development perspective. Let's return to the main example mentioned before, the P2T scenario. Here, critical data comes from a production environment, is moved to a test environment, and some of that critical data must be changed. This article provides step-by-step instructions to accomplish this task and make sure your production end users' passwords will not be available in the target environment.

Prerequisites:

• Make sure you have the OIM Design Console folder, xlclient, on the server where you are running the Java code.

• Once you have finished the FA-IDM P2T process, the next step is to remove or replace the key information that should be available only in production.
• Make sure all users have the required object classes to make this change through the API:
objectclass=orclIDXPerson
objectclass=oblixPersonPwdPolicy
objectclass=oblixOrgPerson
objectclass=OIMPersonPwdPolicy
objectclass=inetorgperson
objectclass=top
objectclass=organizationalPerson
objectclass=person

Steps

Step1) Open config.properties and update your search criteria.

config.properties

NOTE: Don't remove or comment out the all_except_logins entry unless you are sure you want to change product admin users. Be aware that changing these may disrupt communications between the products.
Step2) Run reset_password.sh provided here.

Results expected for step2:

Running shell script-part1

Running shell script-part2

Conclusion

Implementing an FA+IDM mass password reset for an organization is a proposition that should be done carefully, and an entire environment backup must be taken before it starts. Proper planning and an understanding of the various dimensions provided by this solution and its concepts allow an organization to discern how they handle copied passwords. It also highlights what the enterprise is willing to do to protect end-user data in copied environments, and how best to offer Oracle protection in an integrated and effective manner.

Fusion HCM Cloud – Bulk Integration Automation Using Managed File Transfer (MFT) and Node.js


Introduction

Fusion HCM Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the bulk integration to load and extract data to/from the cloud.

The inbound tool is the File-Based Loader (FBL), evolving into HCM Data Loader (HDL). HDL is a powerful tool for bulk-loading data from any source to Oracle Fusion Human Capital Management (Oracle Fusion HCM). HDL supports one-time data migration and incremental loads to support co-existence with Oracle applications such as E-Business Suite (EBS) and PeopleSoft (PSFT).

HCM Extracts is an outbound integration tool that lets you choose HCM data, gathers it from the HCM database, and archives it as XML. This archived raw XML data can be converted into a desired format and delivered to supported channels and recipients.

HCM cloud implements Oracle WebCenter Content, a component of Fusion Middleware, to store and secure data files for both inbound and outbound bulk integration patterns.

Oracle Managed File Transfer (Oracle MFT) enables secure file exchange and management with internal systems and external partners. It protects against inadvertent access to unsecured files at every step in the end-to-end transfer of files. It is easy to use, especially for non-technical staff, so you can leverage more resources to manage the transfer of files. The built-in, extensive reporting capabilities allow you to get quick status of a file transfer and resubmit it as required.

Node.js is a programming platform that allows you to execute server-side code that is similar to JavaScript in the browser. It enables real-time, two-way connections in web applications with push capability, allowing a non-blocking, event-driven I/O paradigm. Node.js is built on an event-driven, asynchronous model: incoming requests are non-blocking, and each request is passed off to an asynchronous callback handler, which frees up the main thread to respond to more requests.

This post focuses on how to automate HCM Cloud batch integration using MFT (Managed File Transfer) and Node.js. MFT can receive files, decrypt/encrypt files and invoke Service Oriented Architecture (SOA) composites for various HCM integration patterns.

 

Main Article

Managed File Transfer (MFT)

Oracle Managed File Transfer (MFT) is a high performance, standards-based, end-to-end managed file gateway. It features design, deployment, and monitoring of file transfers using a lightweight web-based design-time console that includes file encryption, scheduling, and embedded FTP and sFTP servers.

Oracle MFT provides built-in compression, decompression, encryption and decryption actions for transfer pre-processing and post-processing. You can create new pre-processing and post-processing actions, which are called callouts.

The callouts can be associated with either the source or the target. The sequence of processing action execution during a transfer is as follows:

  1. Source pre-processing actions
  2. Target pre-processing actions
  3. Payload delivery
  4. Target post-processing actions
Source Pre-Processing

Source pre-processing is triggered right after a file has been received and a matching transfer has been identified. This is the best place to do file validation, compression/decompression, encryption/decryption, and/or extend MFT.

Target Pre-Processing

Target pre-processing is triggered just before the file is delivered to the Target by the Transfer. This is the best place to send files to external locations and protocols not supported in MFT.

Target Post-Processing

Post-processing occurs after the file is delivered. This is the best place for notifications, analytic/reporting or maybe remote endpoint file rename.

For more information, please refer to the Oracle MFT documentation.

 

HCM Inbound Flow

This is a typical Inbound FBL/HDL process flow:

inbound_mft

The FBL/HDL process for HCM is a two-phase web services process as follows:

  • Upload the data file to WCC/UCM using the WCC GenericSoapPort web service (an illustrative check-in envelope follows this list)
  • Invoke "LoaderIntegrationService" or "HCMDataLoader" to initiate the loading process.
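For illustration only, a GenericSoapPort check-in request has roughly the following shape; treat the service name, field names, and values as assumptions to be verified against the GenericSoapPort WSDL for your pod (the security group and account values come from the Fusion Applications Security section below):

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ucm="http://www.oracle.com/UCM">
  <soapenv:Body>
    <ucm:GenericRequest webKey="cs">
      <ucm:Service IdcService="CHECKIN_UNIVERSAL">
        <ucm:Document>
          <ucm:Field name="dDocTitle">MyHDLData.zip</ucm:Field>
          <ucm:Field name="dSecurityGroup">FAFusionImportExport</ucm:Field>
          <ucm:Field name="dDocAccount">hcm/dataloader/import</ucm:Field>
          <ucm:File name="primaryFile" href="MyHDLData.zip">
            <ucm:Contents>...base64-encoded file content...</ucm:Contents>
          </ucm:File>
        </ucm:Document>
      </ucm:Service>
    </ucm:GenericRequest>
  </soapenv:Body>
</soapenv:Envelope>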

The following diagram illustrates the MFT steps with respect to “Integration” for FBL/HDL:

inbound_mft_2

HCM Outbound Flow

This is a typical outbound batch Integration flow using HCM Extracts:

extractflow

 

The “Extract” process for HCM has the following steps:

  • An Extract report is generated in HCM, either by a user or through Enterprise Scheduler Service (ESS); the report is stored in WCC under the hcm/dataloader/export account.
  • The MFT scheduler can pull files from WCC.
  • The data file(s) are either uploaded to the customer's sFTP server as pass-through, or handed to integration tools such as Service Oriented Architecture (SOA) for orchestrating and processing data to target applications in the cloud or on-premise.

The following diagram illustrates the MFT orchestration steps in “Integration” for Extract:

 

outbound_mft

 

The extracted file could be delivered to the WebCenter Content server. HCM Extract has the ability to generate an encrypted output file. In the Extract delivery options, ensure the following are correctly configured:

  • Set the HCM Delivery Type to "HCM Connect"
  • Select an Encryption Mode of the four supported encryption types or select None
  • Specify the Integration Name – this value is used to build the title of the entry in WebCenter Content

 

Extracted File Naming Convention in WebCenter Content

The file will have the following properties:
Author: FUSION_APPSHCM_ESS_APPID
Security Group: FAFusionImportExport
Account: hcm/dataloader/export
Title: HEXTV1CON_{IntegrationName}_{EncryptionType}_{DateTimeStamp}

 

Fusion Applications Security

The content in WebCenter Content is secured through users, roles, privileges and accounts. The user could be any valid user with a role such as “Integration Specialist.” The role may have privileges such as read, write and delete. The accounts are predefined by each application. For example, HCM uses /hcm/dataloader/import and /hcm/dataloader/export respectively.
The FBL/HDL web services are secured through Oracle Web Service Manager (OWSM) using the following policy: oracle/wss11_saml_or_username_token_with_message_protection_service_policy.

The client must satisfy the message protection policy to ensure that the payload is encrypted or sent over the SSL transport layer.

A client policy that can be used to meet this requirement is: “oracle/wss11_username_token_with_message_protection_client_policy”

To use this policy, the message must be encrypted using a public key provided by the server. When the message reaches the server it can be decrypted by the server’s private key. A KeyStore is used to import the certificate and it is referenced in the subsequent client code.

The public key can be obtained from the certificate provided in the service WSDL file.
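For example, a certificate saved from the service endpoint can be imported into a client keystore with keytool; the alias, file, and keystore names below are illustrative:

keytool -importcert -alias fusionservice -file serviceCert.cer -keystore client_keystore.jks -storepass <keystore-password>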

Encryption of Data File using Pretty Good Privacy (PGP)

All data files transit over a network via SSL. In addition, HCM Cloud supports encryption of data files at rest using PGP.
Fusion HCM supports the following types of encryption:

  • PGP Signed
  • PGP Unsigned
  • PGPX509 Signed
  • PGPX509 Unsigned

To use this PGP Encryption capability, a customer must exchange encryption keys with Fusion for the following:

  • Fusion can decrypt inbound files
  • Fusion can encrypt outbound files
  • Customer can encrypt files sent to Fusion
  • Customer can decrypt files received from Fusion
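As an illustration of the customer side of this key exchange, with GnuPG the steps can look like the following (the key name and file names are placeholders):

# Import the Fusion public key received during the key exchange
gpg --import fusion_public_key.asc

# Encrypt an inbound data file so that only Fusion can decrypt it
gpg --encrypt --recipient "Fusion HCM" BenefitsData.zip

# Decrypt an outbound file received from Fusion using your own private key
gpg --decrypt --output Extract.xml Extract.xml.gpg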

MFT Callout using Node.js

 

Prerequisites

To automate HCM batch integration patterns, the following components must be installed and configured:

 

Node.js Utility

A simple Node.js utility, "mft2hcm", has been developed for uploading and downloading files to/from Oracle WebCenter Content server in an MFT callout and for initiating the HCM SaaS loader service. It utilizes the node "mft-upload" package and provides SOAP substitution templates for WebCenter (UCM) and the Oracle HCM loader service.

Please refer to the “mft2hcm” node package for installation and configuration.

RunScript

The RunScript is configured as "Run Script Pre 01", a callout that can be injected into MFT pre- or post-processing. This callout always sends the following default parameters to the script:

  • Filename
  • Directory
  • ECID
  • Filesize
  • Targetname (not for source callouts)
  • Sourcename
  • Createtime

Please refer to “PreRunScript” for more information on installation and configuration.

MFT Design

MFT Console enables the following tasks depending on your user roles:

Designer: Use this page to create, modify, delete, rename, and deploy sources, targets, and transfers.

Monitoring: Use this page to monitor transfer statistics, progress, and errors. You can also use this page to disable, enable, and undeploy transfer deployments and to pause, resume, and resubmit instances.

Administration: Use this page to manage the Oracle Managed File Transfer configuration, including embedded server configuration.

Please refer to the MFT Users Guide for more information.

 

HCM FBL/HDL MFT Transfer

This is a typical MFT transfer design and configuration for FBL/HDL:

MFT_FBL_Transfer

The transfer could be designed for additional steps such as compress file and/or encrypt/decrypt files using PGP, depending on the use cases.

 

HCM FBL/HDL (HCM-MFT) Target

The MFT server receives files from any source protocol such as SFTP, SOAP, local file system, or a back-end integration process. The file can be decrypted, uncompressed, or validated before a source or target pre-processing callout uploads it to UCM and then notifies HCM to initiate the batch load. Finally, the original file is backed up to the local file system, a remote SFTP server, or a cloud-based storage service. An optional notification can also be delivered to the caller using a target post-processing callout upon successful completion.

This is a typical target configuration in the MFT-HCM transfer:

Click on target Pre-Processing Action and select “Run Script Pre 01”:

MFT_RunScriptPre01

 

Enter the "scriptLocation" where the node package "mft2hcm" is installed. For example: <Node.js-Home>/hcm/node_modules/mft2hcm/mft2hcm.js

MFTPreScriptUpload

 

Do not check "UseFileFromScript". This property replaces the inbound (source) file of MFT with the file from target execution; in FBL/HDL, the response from target execution does not contain a file.

 

HCM Extract (HCM-MFT) Transfer

An external event or scheduler triggers the MFT server to search for a file in WCC using a search query. Once a document ID is identified, the file is retrieved using a "Source Pre-Processing" callout, which injects it into the MFT transfer. The file can then be decrypted, validated, or decompressed before being sent to an MFT target of any protocol such as SFTP, file system, SOAP web service, or a back-end integration process. Finally, the original file is backed up to the local file system, a remote SFTP server, or a cloud-based storage service. An optional notification can also be delivered to the caller using a target post-processing callout upon successful completion. The MFT server can live either on-premise or in a cloud iPaaS hosted environment.

This is a typical configuration of HCM-MFT Extract Transfer:

MFT_Extract_Transfer

 

In the Source definition, add “Run Script Pre 01” processing action and enter the location of the script:

MFTPreScriptDownload

 

"UseFileFromScript" must be checked because the source scheduler is triggered with the mft2hcm payload (UCM-PAYLOAD-SEARCH) to initiate the search and get operations on WCC. Once the file is retrieved from WCC, this flag tells the MFT engine to substitute the transfer file with the one downloaded from WCC.

 

Conclusion

This post demonstrates how to automate HCM inbound and outbound patterns using MFT and Node.js. The Node.js package could be replaced with WebCenter Content native APIs and SOA for orchestration. This process can also be replicated for other Fusion Applications pillars such as Oracle Enterprise Resource Planning (ERP).

HCM Atom Feed Subscriber using Node.js


Introduction

HCM Atom feeds provide notifications of Oracle Fusion Human Capital Management (HCM) events and are tightly integrated with REST services. When an event occurs in Oracle Fusion HCM, the corresponding Atom feed is delivered automatically to the Atom server. The feed contains details of the REST resource on which the event occurred. Subscribers who consume these Atom feeds use the REST resources to retrieve additional information about the resource.

For more information on Atom, please refer to this.

This post focuses on consuming and processing HCM Atom feeds using Node.js. The assumption is that the reader has some basic knowledge on Node.js. Please refer to this link to download and install Node.js in your environment.

Node.js is a programming platform that allows you to execute server-side code that is similar to JavaScript in the browser. It enables real-time, two-way connections in web applications with push capability, allowing a non-blocking, event-driven I/O paradigm. It runs on a single-threaded event loop and leverages asynchronous calls for various operations such as I/O. This is an evolution from the stateless request-response paradigm of the traditional web. For example, when a request is sent to invoke a service such as REST or a database query, Node.js continues serving new requests; when a response comes back, it jumps back to the respective requestor. Node.js is lightweight and provides a high level of concurrency. However, it is not suitable for CPU-intensive operations as it is single threaded.

Node.js is built on an event-driven, asynchronous model. Incoming requests are non-blocking: each request is passed off to an asynchronous callback handler, which frees up the main thread to respond to more requests. The snippet below gives a minimal illustration.
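As a minimal illustration of this model, the asynchronous file read below is handed off to a callback while the main thread keeps going (the file name is a placeholder):

var fs = require('fs');

// Non-blocking read: the callback runs only when the I/O completes
fs.readFile('somefile.txt', function(err, data) {
  if (err) throw err;
  console.log('I/O finished: ' + data.length + ' bytes read');
});

// This line prints first - the main thread was not blocked by the read above
console.log('Main thread continues to serve other requests');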

For more information on Node.js, please refer to this.

 

Main Article

Atom feeds enable you to keep track of any changes made to feed-enabled resources in Oracle HCM Cloud. For any updates that may be of interest for downstream applications, such as new hire, terminations, employee transfers and promotions, Oracle HCM Cloud publishes Atom feeds. Your application will be able to read these feeds and take appropriate action.

Atom Publishing Protocol (AtomPub) allows software applications to subscribe to changes that occur on REST resources through published feeds. Updates are published when changes occur to feed-enabled resources in Oracle HCM Cloud. These are the following primary Atom feeds:

Employee Feeds

New hire
Termination
Employee update

Assignment creation, update, and end date

Work Structures Feeds (Creation, update, and end date)

Organizations
Jobs
Positions
Grades
Locations

The above feeds can be consumed programmatically. In this post, Node.js is implemented as one of the solutions for consuming "Employee New Hire" feeds, but the design and development are similar for all the supported objects in HCM.

 

Refer to my blog on how to invoke secured REST services using Node.js.

Security

The RESTful services in Oracle HCM Cloud are protected with Oracle Web Services Manager (OWSM). The server policy allows the following client authentication types:

  • HTTP Basic Authentication over Secure Socket Layer (SSL)
  • Oracle Access Manager(OAM) Token-service
  • Simple and Protected GSS-API Negotiate Mechanism (SPNEGO)
  • SAML token

The client must provide one of the above policies in the security headers of the invocation call for authentication. The sample in this post uses the HTTP Basic Authentication over SSL policy.

 

Fusion Security Roles

REST and Atom Feed Roles

To use Atom feed, a user must have any HCM Cloud role that inherits the following roles:

  • “HCM REST Services and Atom Feeds Duty” – for example, Human Capital Management Integration Specialist
  • “Person Management Duty” – for example, Human Resource Specialist

REST/Atom Privileges

 

Privilege Name | Resource and Method
PER_REST_SERVICE_ACCESS_EMPLOYEES_PRIV | emps (GET, POST, PATCH)
PER_REST_SERVICE_ACCESS_WORKSTRUCTURES_PRIV | grades (GET), jobs (GET), jobFamilies (GET), positions (GET), locations (GET), organizations (GET)
PER_ATOM_WORKSPACE_ACCESS_EMPLOYEES_PRIV | employee/newhire (GET), employee/termination (GET), employee/empupdate (GET), employee/empassignment (GET)
PER_ATOM_WORKSPACE_ACCESS_WORKSTRUCTURES_PRIV | workstructures/grades (GET), workstructures/jobs (GET), workstructures/jobFamilies (GET), workstructures/positions (GET), workstructures/locations (GET), workstructures/organizations (GET)

 

 

Atom Payload Response Structure

The Atom feed response is in XML format. Please see the following diagram to understand the feed structure:

 

AtomFeedSample_1

 

A feed can have multiple entries. The entries are ordered by the "updated" timestamp of the <entry>, with the first one being the latest. Two elements in each entry provide the critical information on how to process these entries downstream, as sketched below.
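In place of the diagram, the abbreviated skeleton below shows where those two elements sit in each entry (values are borrowed from examples elsewhere in this post; the real feed carries more elements):

<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <id>Atomservlet:newhire:EMP300000005960615</id>
    <updated>2015-09-16T09:16:00.000Z</updated>
    <!-- link to the full REST resource for this employee -->
    <link rel="related" href="https://host/hcmCoreApi/resources/emps/..."/>
    <!-- selected resource attributes, in JSON -->
    <content type="application/json">{ "Context" : [ { "EmployeeNumber" : "212", ... } ] }</content>
  </entry>
</feed>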

Content

The <content> element contains critical attributes such as EmployeeNumber, Phone, Suffix, CitizenshipLegislation, EffectiveStartDate, Religion, PassportNumber, NationalIdentifierType, EventDescription, LicenseNumber, EmployeeName, WorkEmail, and NationalIdentifierNumber. It is in JSON format, as you can see from the above diagram.

Resource Link

If the data provided in <content> is not sufficient, the RESTful service resource link is provided to get more details. Please refer to the employee resource link for each entry in the above diagram. Node.js can invoke this RESTful resource link.

 

Avoid Duplicate Atom Feed Entries

To avoid consuming feeds with duplicate entries, one of the following parameters must be provided to consume feeds since last polled:

1. updated-min: returns entries in the collection where Atom:updated > updated-min.

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-min=2015-09-16T09:16:00.000Z returns entries published after "2015-09-16T09:16:00.000Z".

2. updated-max: returns entries in the collection where Atom:updated <= updated-max.

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-max=2015-09-16T09:16:00.000Z returns entries published at or before "2015-09-16T09:16:00.000Z".

3. updated-min and updated-max combined: returns entries in the collection where (Atom:updated > updated-min && Atom:updated <= updated-max).

Example: https://hclg-test.hcm.us2.oraclecloud.com/hcmCoreApi/Atomservlet/employee/newhire?updated-min=2015-09-11T10:03:35.000Z&updated-max=2015-09-16T09:16:00.000Z returns entries published between "2015-09-11T10:03:35.000Z" and "2015-09-16T09:16:00.000Z".

Node.js Implementation

Refer to my blog on how to invoke secured REST services using Node.js. Consider the following when consuming feeds:

Initial Consumption

When you subscribe for the first time, you can invoke the resource without query parameters to get all the published feeds, or use the updated-min or updated-max arguments to filter the entries in the feed to begin with.

For example the invocation path could be /hcmCoreApi/Atomservlet/employee/newhire or /hcmCoreApi/Atomservlet/employee/newhire?updated-min=<some-timestamp>

After the first consumption, the "updated" element of the first entry must be persisted for use in the next call to avoid duplication. In this prototype, the "/entry/updated" timestamp value is persisted in a file.

For example:

//persist timestamp for the next call
if (i == 0) {
  fs.writeFile('updateDate', updateDate[0].text, function(fserr) {
    if (fserr) throw fserr;
  });
}

 

Next Call

In the next call, read the updated timestamp value from the persisted file above to generate the path as follows:

//Check if updateDate file exists and is not empty
try {
  var lastFeedUpdateDate = fs.readFileSync('updateDate');
  console.log('Last Updated Date is: ' + lastFeedUpdateDate);
} catch (e) {
  // handle error
}

if (lastFeedUpdateDate.length > 0) {
  pathUri = '/hcmCoreApi/Atomservlet/employee/newhire?updated-min=' + lastFeedUpdateDate;
} else {
  pathUri = '/hcmCoreApi/Atomservlet/employee/newhire';
}

 

Parsing Atom Feed Response

The Atom feed response is in XML format as shown previously in the diagram. In this prototype, the “node-elementtree” package is implemented to parse the XML. You can use any library as long as the following data are extracted for each entry in the feed for downstream processing.

var et = require('elementtree');

//Request call
var request = http.get(options, function(res) {
  var body = "";
  res.on('data', function(data) {
    body += data;
  });
  res.on('end', function() {

    //Parse Feed Response - the structure is defined in section: Atom Payload Response Structure
    feed = et.parse(body);

    //Identify if feed has any entries
    var numberOfEntries = feed.findall('./entry/').length;

    //if there are entries, extract data for downstream processing
    if (numberOfEntries > 0) {
      console.log('Get Content for each Entry');

      //Get Data based on XPath Expression
      var content = feed.findall('./entry/content/');
      var entryId = feed.findall('./entry/id');
      var updateDate = feed.findall('./entry/updated');

      for (var i = 0; i < content.length; i++) {

        //get Resource link for the respective entry
        console.log(feed.findall('./entry/link/[@rel="related"]')[i].get('href'));

        //get Content data of the respective entry, which is in JSON format
        console.log(content[i].text);

        //persist timestamp for the next call
        if (i == 0) {
          fs.writeFile('updateDate', updateDate[0].text, function(fserr) {
            if (fserr) throw fserr;
          });
        }
      }
    }
  });
});

One and Only One Entry

Each entry in an Atom feed has a unique ID. For example: <id>Atomservlet:newhire:EMP300000005960615</id>

In target applications, this ID can be used as one of the keys or lookups to prevent reprocessing. The logic can be implemented in your downstream applications or in the integration space to avoid duplication.
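For instance, a subscriber could keep a marker per processed entry and skip any entry whose ID has been seen before. This is a minimal sketch; the marker directory and the sanitization rule are illustrative:

var fs = require('fs');

// Minimal duplicate check: a marker file named after the Atom entry ID
// is written after successful processing; its presence means "skip".
function isAlreadyProcessed(atomEntryId) {
  var marker = 'processed/' + atomEntryId.replace(/:/g, '_'); // sanitize ':' for file names
  try {
    fs.statSync(marker);
    return true;
  } catch (e) {
    return false;
  }
}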

 

Downstream Processing Pattern

A Node.js scheduler can be implemented to consume feeds periodically. Once the message is parsed, there are several patterns to support various use cases. In addition, you could have multiple subscribers such as employee new hire, employee termination, locations, jobs, positions, etc. For guaranteed transactions, each feed entry can be published to Messaging Cloud or Oracle Database to stage all the feeds. This pattern provides global transactions and recovery when downstream applications are not available or throw errors. The following diagram shows the high-level architecture:

nodejs_soa_atom_pattern

 

Conclusion

This post demonstrates how to consume HCM Atom feeds and process them for downstream applications. It provides details on how to consume new feeds (avoiding duplication) since last polled. Finally, it provides an enterprise integration pattern from consuming feeds to downstream application processing.

 

Sample Prototype Code

var et = require('elementtree');
var http = require('https');
var fs = require('fs');

var uname = 'username';
var pword = 'password';

var lastFeedUpdateDate = '';
var pathUri = '';

//Check if updateDate file exists and is not empty
try {
  lastFeedUpdateDate = fs.readFileSync('updateDate');
  console.log('Last Updated Date is: ' + lastFeedUpdateDate);
} catch (e) {
  // add error logic
}

//get last feed updated date to get entries since that date
if (lastFeedUpdateDate.length > 0) {
  pathUri = '/hcmCoreApi/atomservlet/employee/newhire?updated-min=' + lastFeedUpdateDate;
} else {
  pathUri = '/hcmCoreApi/atomservlet/employee/newhire';
}

// Generate Request Options
var options = {
  ca: fs.readFileSync('HCM Cert'), //get HCM Cloud certificate - either through openssl or export from web browser
  host: 'HCMHostname',
  port: 443,
  path: pathUri,
  rejectUnauthorized: false,
  headers: {
    'Authorization': 'Basic ' + new Buffer(uname + ':' + pword).toString('base64')
  }
};

//Invoke REST resource for Employee New Hires
var request = http.get(options, function(res) {
  var body = "";
  res.on('data', function(data) {
    body += data;
  });
  res.on('end', function() {

    //Parse Atom Payload response
    feed = et.parse(body);

    //Get Entries count
    var numberOfEntries = feed.findall('./entry/').length;

    console.log('...................Feed Extracted.....................');
    console.log('Number of Entries: ' + numberOfEntries);

    //Process each entry
    if (numberOfEntries > 0) {

      console.log('Get Content for each Entry');

      var content = feed.findall('./entry/content/');
      var entryId = feed.findall('./entry/id');
      var updateDate = feed.findall('./entry/updated');

      for (var i = 0; i < content.length; i++) {

        //Resource link for the respective entry
        console.log(feed.findall('./entry/link/[@rel="related"]')[i].get('href'));

        //Content data of the respective entry, in JSON format
        console.log(content[i].text);

        //persist timestamp for the next call
        if (i == 0) {
          fs.writeFile('updateDate', updateDate[0].text, function(fserr) {
            if (fserr) throw fserr;
          });
        }

        //persist each entry content, keyed by its unique entry ID
        fs.writeFile(entryId[i].text, content[i].text, function(fserr) {
          if (fserr) throw fserr;
        });
      }
    }
  });
  res.on('error', function(e) {
    console.log("Got error: " + e.message);
  });
});

 

 

HCM Atom Feed Subscriber using SOA Cloud Service


Introduction

HCM Atom feeds provide notifications of Oracle Fusion Human Capital Management (HCM) events and are tightly integrated with REST services. When an event occurs in Oracle Fusion HCM, the corresponding Atom feed is delivered automatically to the Atom server. The feed contains details of the REST resource on which the event occurred. Subscribers who consume these Atom feeds use the REST resources to retrieve additional information about the resource.

For more information on Atom, please refer to this.

This post focuses on consuming and processing HCM Atom feeds using Oracle Service Oriented Architecture (SOA) Cloud Service. Oracle SOA Cloud Service provides a PaaS computing platform solution for running Oracle SOA Suite, Oracle Service Bus, and Oracle API Manager in the cloud. For more information on SOA Cloud Service, please refer to this.

Oracle SOA is the industry's most complete and unified application integration and SOA solution. It transforms complex application integration into agile and re-usable service-based connectivity to speed time to market, respond faster to business requirements, and lower costs. SOA facilitates the development of enterprise applications as modular business web services that can be easily integrated and reused, creating a truly flexible, adaptable IT infrastructure.

For more information on getting started with Oracle SOA, please refer to this. For developing SOA applications using SOA Suite, please refer to this.

 

Main Article

Atom feeds enable you to keep track of any changes made to feed-enabled resources in Oracle HCM Cloud. For any updates that may be of interest for downstream applications, such as new hire, terminations, employee transfers and promotions, Oracle HCM Cloud publishes Atom feeds. Your application will be able to read these feeds and take appropriate action.

Atom Publishing Protocol (AtomPub) allows software applications to subscribe to changes that occur on REST resources through published feeds. Updates are published when changes occur to feed-enabled resources in Oracle HCM Cloud. These are the following primary Atom feeds:

Employee Feeds

New hire
Termination
Employee update

Assignment creation, update, and end date

Work Structures Feeds (Creation, update, and end date)

Organizations
Jobs
Positions
Grades
Locations

The above feeds can be consumed programmatically. In this post, SOA Cloud Service is implemented as one of the solutions for consuming "Employee New Hire" feeds, but the design and development are similar for all the supported objects in HCM.

 

HCM Atom Introduction

For Atom "security, roles and privileges", please refer to my blog HCM Atom Feed Subscriber using Node.js.

 

Atom Feed Response Template

 

AtomFeedSample_1

SOA Cloud Service Implementation

Refer to my blog on how to invoke secured REST services using SOA. The following diagram shows the pattern used to subscribe to HCM Atom feeds and process them for downstream applications that may have either web service or file-based interfaces. Optionally, all entries from the feeds can be staged in a database or messaging cloud before processing, for situations such as the downstream application being unavailable or throwing system errors. This provides the ability to consume the feeds but hold the processing until downstream applications are available. Enterprise Scheduler Service (ESS), a component of SOA Suite, is leveraged to invoke the subscriber composite periodically.

 

soacs_atom_pattern

The following diagram shows the implementation of the above pattern for Employee New Hire:

soacs_atom_composite

 

Feed Invocation from SOA

Although the HCM Cloud feed is an XML representation, the media type of the payload response is "application/atom+xml". This media type is not supported by the REST Adapter at this time, so use the Java embedded activity below in your BPEL component. Once the built-in REST Adapter supports the Atom media type, the Java embedded activity can be replaced, further simplifying the solution:

try {

    String url = "https://mycompany.oraclecloud.com";
    String lastEntryTS = (String)getVariableData("LastEntryTS");
    String uri = "/hcmCoreApi/atomservlet/employee/newhire";

    //Generate URI based on last entry timestamp from previous invocation
    if (!(lastEntryTS.isEmpty())) {
        uri = uri + "?updated-min=" + lastEntryTS;
    }

    java.net.URL obj = new URL(null, url + uri, new sun.net.www.protocol.https.Handler());

    javax.net.ssl.HttpsURLConnection conn = (HttpsURLConnection) obj.openConnection();
    conn.setRequestProperty("Content-Type", "application/vnd.oracle.adf.resource+json");
    conn.setDoOutput(true);
    conn.setRequestMethod("GET");

    String userpass = "username" + ":" + "password";
    String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userpass.getBytes("UTF-8"));
    conn.setRequestProperty("Authorization", basicAuth);

    String response = "";
    int responseCode = conn.getResponseCode();
    System.out.println("Response Code is: " + responseCode);

    if (responseCode == HttpsURLConnection.HTTP_OK) {

        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));

        String line;
        String contents = "";

        while ((line = reader.readLine()) != null) {
            contents += line;
        }

        //Pass the feed XML back to the BPEL output variable
        setVariableData("outputVariable", "payload", "/client:processResponse/client:result", contents);

        reader.close();
    }

} catch (Exception e) {
    e.printStackTrace();
}

 

Consider the following when consuming feeds:

Initial Consumption

When you subscribe for the first time, you can invoke the resource without query parameters to get all the published feeds, or use the updated-min or updated-max arguments to filter the entries in the feed to begin with.

For example the invocation path could be /hcmCoreApi/Atomservlet/employee/newhire or /hcmCoreApi/Atomservlet/employee/newhire?updated-min=<some-timestamp>

After the first consumption, the "updated" element of the first entry must be persisted for use in the next call to avoid duplication. In this prototype, the "/entry/updated" timestamp value is persisted in a Database Cloud Service (DBaaS) instance.

This is the sample database table:

create table atomsub (
id number,
feed_ts varchar2(100) );

For initial consumption, keep the table empty, or add a row with a feed_ts value from which to start consuming feeds. For example, a feed_ts value of "2015-09-16T09:16:00.000Z" gets all the feeds published after this timestamp, as shown below.
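For example (the id and timestamp values are illustrative):

insert into atomsub (id, feed_ts) values (1, '2015-09-16T09:16:00.000Z');
commit;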

In SOA composite, you will update the above table to persist the “/entry/updated” timestamp in the feed_ts column of the “atomsub” table.

 

Next Call

In the next call, read the updated timestamp value from the database and generate the URI path as follows:

String uri = "/hcmCoreApi/atomservlet/employee/newhire";
String lastEntryTS = (String)getVariableData("LastEntryTS");
if (!(lastEntryTS.isEmpty())) {
uri = uri + "?updated-min=" + lastEntryTS;
}

The above step is done in a Java embedded activity, but it could also be done in SOA using <assign> expressions.

Parsing Atom Feed Response

The Atom feed response is in XML format, as shown previously in the diagram. In this prototype, the feed response is stored in the output variable as a string. The following expression in an <assign> activity will convert it to XML:

oraext:parseXML($outputVariable.payload/client:result)


Parsing Each Atom Entry for Downstream Processing

Each entry has two major elements as mentioned in Atom response payload structure.

Resource Link

This contains the REST employee resource link to get the Employee object. This is a typical REST invocation from SOA using the REST Adapter. For more information on invoking REST services from SOA, please refer to my blog.

 

Content Type

This contains selected resource data in JSON format. For example:

{ "Context" : [ { "EmployeeNumber" : "212", "PersonId" : "300000006013981", "EffectiveStartDate" : "2015-10-08", "EffectiveDate" : "2015-10-08", "WorkEmail" : "phil.davey@mycompany.com", "EmployeeName" : "Davey, Phillip" } ] }

In order to use the above data, it must be converted to XML. The BPEL component provides a Translate activity to transform JSON to XML. Please refer to the SOA Development document, section B1.8 - doTranslateFromNative.

 

The <Translate> activity syntax to convert above JSON string from <content> is as follows:

<assign name="TranslateJSON">
<bpelx:annotation>
<bpelx:pattern>translate</bpelx:pattern>
</bpelx:annotation>
<copy>
 <from>ora:doTranslateFromNative(string($FeedVariable.payload/ns1:entry/ns1:content), 'Schemas/JsonToXml.xsd', 'Root-Element', 'DOM')</from>
 <to>$JsonToXml_OutputVar_1</to>
 </copy>
</assign>

This is the output:

jsonToXmlOutput

The following provides detailed steps on how to use the Native Format Builder in JDeveloper: in the Native Format Builder, select the JSON format and use the above <content> as a sample to generate a schema. Please see the following diagrams:

JSON_nxsd_1JSON_nxsd_2JSON_nxsd_3

JSON_nxsd_5

 

One and Only One Entry

Each entry in an Atom feed has a unique ID. For example: <id>Atomservlet:newhire:EMP300000005960615</id>

In target applications, this ID can be used as one of the keys or lookups to prevent reprocessing. The logic can be implemented in your downstream applications or in the integration space to avoid duplication.

 

Scheduler and Downstream Processing

Oracle Enterprise Scheduler Service (ESS) is configured to invoke the above composite periodically. At present, SOA Cloud Service is not provisioned with ESS, but refer to this to extend your domain. Once the feed response message is parsed, you can process it to downstream applications based on your requirements or use cases. For guaranteed transactions, each feed entry can be published to Messaging Cloud or Oracle Database to stage all the feeds. This provides global transactions and recovery when downstream applications are not available or throw errors.

The following diagram shows how to create a job definition for a SOA composite. For more information on ESS, please refer to this.

ess_3

SOA Cloud Service Instance Flows

First invocation without updated-min argument to get all the feeds

 

soacs_atom_instance_json

Atom Feed Response from above instance

AtomFeedResponse_1

 

Next invocation with updated-min argument based on last entry timestamp

soacs_atom_instance_noentries

 

Conclusion

This post demonstrates how to consume HCM Atom feeds and process them for downstream applications. It provides details on how to consume new feeds (avoiding duplication) since last polled. Finally, it provides an enterprise integration pattern from consuming feeds to downstream application processing.

 

Sample Prototype Code

The sample prototype code is available here.

 

soacs_atom_composite_1

 

 

Oracle HCM Cloud – Bulk Integration Automation Using SOA Cloud Service


Introduction

Oracle Human Capital Management (HCM) Cloud provides a comprehensive set of tools, templates, and pre-packaged integration to cover various scenarios using modern and efficient technologies. One of the patterns is the batch integration to load and extract data to and from the HCM cloud. HCM provides the following bulk integration interfaces and tools:

HCM Data Loader (HDL)

HDL is a powerful tool for bulk-loading data from any source to Oracle Fusion HCM. It supports important business objects belonging to key Oracle Fusion HCM products, including Oracle Fusion Global Human Resources, Compensation, Absence Management, Performance Management, Profile Management, Global Payroll, Talent and Workforce Management. For detailed information on HDL, please refer to this.

HCM Extracts

HCM Extracts is an outbound integration tool that lets you select HCM data elements, extract them from the HCM database, and archive them as XML. This archived raw XML data can be converted into a desired format and delivered to supported channels and recipients.

Oracle Fusion HCM provides the above tools with comprehensive user interfaces for initiating data uploads, monitoring upload progress, and reviewing errors, with real-time information provided for both the import and load stages of upload processing. Fusion HCM provides the tools, but additional orchestration is required, such as generating the FBL or HDL file, uploading it to WebCenter Content, and initiating the FBL or HDL web services. This post describes how to design and automate these steps leveraging Oracle Service Oriented Architecture (SOA) Cloud Service deployed on Oracle's cloud Platform as a Service (PaaS) infrastructure. For more information on SOA Cloud Service, please refer to this.

Oracle SOA is the industry's most complete and unified application integration and SOA solution. It transforms complex application integration into agile and re-usable service-based components to speed time to market, respond faster to business requirements, and lower costs. SOA facilitates the development of enterprise applications as modular business web services that can be easily integrated and reused, creating a truly flexible, adaptable IT infrastructure. For more information on getting started with Oracle SOA, please refer to this. For developing SOA applications using SOA Suite, please refer to this.

These bulk integration interfaces and patterns are not applicable to Oracle Taleo.

Main Article

 

HCM Inbound Flow (HDL)

Oracle WebCenter Content (WCC) acts as the staging repository for files to be loaded and processed by HDL. WCC is part of the Fusion HCM infrastructure.

The loading process for FBL and HDL consists of the following steps:

  • Upload the data file to WCC/UCM using the WCC GenericSoapPort web service
  • Invoke the “LoaderIntegrationService” or the “HCMDataLoader” service to initiate the loading process, as sketched below
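A minimal command-line sketch of these two steps (host names, credentials, and the request payload files are placeholders; the SOAP envelopes for the CHECKIN_UNIVERSAL check-in and the loader invocation are omitted here and documented in the respective WSDLs):

# Step 1: upload the HDL zip to WCC through the GenericSoapPort web service
curl -u "integration.user:Welcome1" -H "Content-Type: text/xml" -d @checkin_request.xml "https://fusionhost.example.com/idcws/GenericSoapPort"

# Step 2: initiate the load by calling the HCM Data Loader web service
curl -u "integration.user:Welcome1" -H "Content-Type: text/xml" -d @import_and_load_request.xml "https://fusionhost.example.com/hcmCommonDataLoader/HCMDataLoader"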

However, the above steps assume the existence of an HDL file and do not provide a mechanism to generate an HDL file for the respective objects. In this post, we use a sample use case in which we receive a data file from the customer, transform the data to generate an HDL file, and then initiate the loading process.

The following diagram illustrates the typical orchestration of the end-to-end HDL process using SOA cloud service:

 

[Figure: hcm_inbound_v1]

HCM Outbound Flow (Extract)

The “Extract” process for HCM has the following steps:

  • An Extract report is generated in HCM, either by a user or through the Enterprise Scheduler Service (ESS)
  • The report is stored in WCC under the hcm/dataloader/export account

 

However, the report must then be delivered to its destination depending on the use case. The following diagram illustrates the typical end-to-end orchestration after the Extract report is generated:

[Figure: hcm_outbound_v1]

 

For an introduction to HCM bulk integration, including security, roles, and privileges, please refer to my blog Fusion HCM Cloud – Bulk Integration Automation using Managed File Transfer (MFT) and Node.js. For an introduction to WebCenter Content integration services using SOA, please refer to my blog Fusion HCM Cloud Bulk Automation.

 

Sample Use Case

Assume that a customer periodically receives benefits data from their partner in a CSV (comma separated value) file. This data must be converted into HDL format for the “ElementEntry” object, and the loading process must then be initiated in the Fusion HCM cloud.

This is a sample source data:

E138_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,23,Reason,Corrected all entry value,Date,2013-01-10
E139_ASG,2015/01/01,2015/12/31,4,UK LDG,CRP_UK_MNTH,E,H,Amount,33,Reason,Corrected one entry value,Date,2013-01-11

This is the HDL format of the ElementEntry object that needs to be generated from the sample file above:

METADATA|ElementEntry|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|EntryType|CreatorType
MERGE|ElementEntry|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|E|H
MERGE|ElementEntry|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|E|H
METADATA|ElementEntryValue|EffectiveStartDate|EffectiveEndDate|AssignmentNumber|MultipleEntryCount|LegislativeDataGroupName|ElementName|InputValueName|ScreenEntryValue
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Amount|23
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected all entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-10
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Amount|33
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Reason|Corrected one entry value
MERGE|ElementEntryValue|2015/01/01|2015/12/31|E139_ASG|4|UK LDG|CRP_UK_MNTH|Date|2013-01-11

SOA Cloud Service Design and Implementation

A canonical schema pattern has been implemented to design the end-to-end inbound bulk integration process, from the source data file to generating the HDL file and initiating the loading process in the HCM cloud. The XML schema of the HDL object “ElementEntry” is created. The source data is mapped to this HDL schema, and SOA activities generate the HDL file.

Having a canonical pattern automates the generation of the HDL file, and it becomes a reusable asset for various interfaces. The developer or business user only needs to focus on mapping the source data to this canonical schema. All other activities, such as generating the HDL file, compressing and encrypting it, uploading it to WebCenter Content, and invoking the web services, need to be developed only once; they then become reusable assets as well.

Please refer to Wikipedia for the definition of the Canonical Schema Pattern.

The design includes the following steps:

1. Convert the source data file from delimited format to XML

2. Generate the canonical schema of the ElementEntry HDL object

3. Transform the source XML data to the HDL canonical schema

4. Generate and compress the HDL file (a command-line sketch follows this list)

5. Upload the file to WebCenter Content and invoke the HDL web service
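A minimal sketch of step 4 from the command line (file names are illustrative; HDL expects the .dat file packaged in a zip archive, and encryption, where required, is an additional step):

# Package the generated HDL data file into the zip archive expected by HDL
zip ElementEntry.zip ElementEntry.dat

# Optionally encrypt the archive before uploading it (hypothetical recipient key)
gpg --encrypt --recipient "hcm-integration@example.com" ElementEntry.zip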

 

Please refer to SOA Cloud Service Develop and Deploy for an introduction to creating SOA applications.

SOA Composite Design

This is a composite based on the above implementation principles:

[Figure: hdl_composite]

Convert Source Data to XML

“GetEntryData” in the above composite is a File Adapter service. It is configured to use the Native Format Builder to convert CSV data to XML format. For more information on the File Adapter, refer to this. For more information on the Native Format Builder, refer to this.

The following provides detailed steps on how to use the Native Format Builder in JDeveloper:

In the Native Format Builder, select the delimited format type and use the source data as a sample to generate an XML schema. Please see the following diagrams:

[Figure: FileAdapterConfig]

[Figure: nxsd1]

[Figures: nxsd2_v1 through nxsd7_v1]

Generate XML Schema of ElementEntry HDL Object

A similar approach is used to generate ElementEntry schema. It has two main objects: ElementEntry and ElementEntryValue.

ElementEntry Schema generated using Native Format Builder

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://TargetNamespace.com/GetEntryHdlData" targetNamespace="http://TargetNamespace.com/GetEntryHdlData" elementFormDefault="qualified" attributeFormDefault="unqualified" nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
<xsd:element name="Root-Element">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Entry" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="METADATA" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementEntry" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveStartDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveEndDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="AssignmentNumber" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="MultipleEntryCount" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="LegislativeDataGroupName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EntryType" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="CreatorType" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:annotation>
<xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
<xsd:appinfo>USEHEADER=false</xsd:appinfo>
</xsd:annotation>
</xsd:schema>

ElementEntryValue Schema generated using Native Format Builder

<?xml version = '1.0' encoding = 'UTF-8'?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd" xmlns:tns="http://TargetNamespace.com/GetEntryValueHdlData" targetNamespace="http://TargetNamespace.com/GetEntryValueHdlData" elementFormDefault="qualified" attributeFormDefault="unqualified" nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
<xsd:element name="Root-Element">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="EntryValue" minOccurs="1" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="METADATA" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementEntryValue" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveStartDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="EffectiveEndDate" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="AssignmentNumber" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="MultipleEntryCount" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="LegislativeDataGroupName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ElementName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="InputValueName" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="|" nxsd:quotedBy="&quot;"/>
<xsd:element name="ScreenEntryValue" type="xsd:string" nxsd:style="terminated" nxsd:terminatedBy="${eol}" nxsd:quotedBy="&quot;"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:annotation>
<xsd:appinfo>NXSDSAMPLE=/ElementEntryAllSrc.dat</xsd:appinfo>
<xsd:appinfo>USEHEADER=false</xsd:appinfo>
</xsd:annotation>
</xsd:schema>

Note that the Native Format Builder was run against a comma-delimited copy of the sample file: change the “|” separator to “,” in the sample file before generating the schema, and then change the nxsd:terminatedBy value back to “|” for each element in the generated schema.
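The second part of that edit can also be scripted (a sketch; the schema file name is hypothetical):

# Switch the generated element terminator from "," back to "|" across the schema
sed -i 's/nxsd:terminatedBy=","/nxsd:terminatedBy="|"/g' ElementEntryHdl.xsd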

Transform Source XML Data to HDL Canonical Schema

Since we are using a canonical schema, all we need to do is map the source data appropriately, and the Native Format Builder will convert each object into the HDL output file. The transformation could be complex depending on the source data format and the organization of data values. In our sample use case, each row has one ElementEntry object and 3 ElementEntryValue sub-objects.

The following provides the organization of the data elements in a single row of the source:

[Figure: Entry_Desc_v1]

The main ElementEntry attributes are mapped from each respective row, but the ElementEntryValue attributes are located at the end of each row. In this sample, that results in 3 entries per row. This can be achieved easily by splitting and transforming each row with different mappings, as follows:

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – maps the column pair “1” from the above diagram

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – maps the column pair “2” from the above diagram

<xsl:for-each select="/ns0:Root-Element/ns0:Entry"> – maps the column pair “3” from the above diagram

 

Metadata Attribute

The most common use case is the “merge” action, for creating and updating objects. In this sample it is hard-coded to “MERGE”, but the action could be made dynamic if the source data row carries this information. The “delete” action removes the entire record and must not be combined with a “merge” instruction for the same record, as HDL cannot guarantee the order in which the instructions will be processed. For illustration, a hypothetical delete instruction would mirror the merge lines above, e.g. DELETE|ElementEntry|2015/01/01|2015/12/31|E138_ASG|4|UK LDG|CRP_UK_MNTH|E|H. It is highly recommended to correct data rather than delete and recreate it using the “delete” action, because deleted data cannot be recovered.

 

This is the sample XSL transformation developed in JDeveloper to split each row into 3 rows for the ElementEntryValue object:

<xsl:template match="/">
<tns:Root-Element>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C9"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C10"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C11"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C12"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
<xsl:for-each select="/ns0:Root-Element/ns0:Entry">
<tns:Entry>
<tns:METADATA>
<xsl:value-of select="'MERGE'"/>
</tns:METADATA>
<tns:ElementEntry>
<xsl:value-of select="'ElementEntryValue'"/>
</tns:ElementEntry>
<tns:EffectiveStartDate>
<xsl:value-of select="ns0:C2"/>
</tns:EffectiveStartDate>
<tns:EffectiveEndDate>
<xsl:value-of select="ns0:C3"/>
</tns:EffectiveEndDate>
<tns:AssignmentNumber>
<xsl:value-of select="ns0:C1"/>
</tns:AssignmentNumber>
<tns:MultipleEntryCount>
<xsl:value-of select="ns0:C4"/>
</tns:MultipleEntryCount>
<tns:LegislativeDataGroupName>
<xsl:value-of select="ns0:C5"/>
</tns:LegislativeDataGroupName>
<tns:ElementName>
<xsl:value-of select="ns0:C6"/>
</tns:ElementName>
<tns:EntryType>
<xsl:value-of select="ns0:C13"/>
</tns:EntryType>
<tns:CreatorType>
<xsl:value-of select="ns0:C14"/>
</tns:CreatorType>
</tns:Entry>
</xsl:for-each>
</tns:Root-Element>
</xsl:template>

BPEL Design – “ElementEntryPro…”

This is the BPEL component where all the major orchestration activities are defined. In this sample, all the activities after the transformation are reusable and can be moved to a separate composite. A separate composite may be developed solely for transformation and data enrichment, which then invokes the reusable composite to complete the loading process.

 

[Figure: hdl_bpel_v2]

 

 

SOA Cloud Service Instance Flows

The following diagram shows an instance flow:

ElementEntry Composite Instance

[Figure: instance1]

BPEL Instance Flow

[Figure: audit_1]

Receive Input Activity – receives the delimited data, converted to XML format through the Native Format Builder, using the File Adapter

[Figure: audit_2]

Transformation to Canonical ElementEntry data

[Figure: Canonical_entry]

Transformation to Canonical ElementEntryValue data

[Figure: Canonical_entryvalue]

Conclusion

This post demonstrates how to automate HCM inbound and outbound patterns using SOA Cloud Service. It shows how to convert a customer's data to HDL format and then initiate the loading process. This process can also be replicated for other Fusion Applications pillars, such as Oracle Enterprise Resource Planning (ERP).


Configuring the system for a successful Fusion Application installation – Part 1 – System limits


Introduction: I wanted to share my experience in the installation of Fusion Applications. For those that are not as familiar with it, Fusion Applications installation goes through several phases after the provisioning plan has been created. These are Pr...

How to Setup JDeveloper workspace for ADF Fusion Applications to run Business Component Tester?


Issue

When creating ADF applications using JDeveloper with the Fusion Applications Extension, the Application Module Tester (Business Component Browser) fails with the following exception:
“(oracle.jbo.JboException) JBO-29000: Unexpected exception caught: java.lang.NoClassDefFoundError, msg=Could not initialize class oracle.apps.fnd.applcore.oaext.model.OAApplicationModuleImpl”

Description:

The following picture shows the default libraries added to your Model project when creating an ADF Business Component from Tables:

 
After reviewing the Fusion Applications Developer Guide (here), you must add the Applications Core and Applications Core (Attachments Model) libraries to the data model project (default is Model). However, when adding the above libraries to your project, JDeveloper throws warnings about being unable to resolve dependent libraries (secondary imports) such as Topology Manager and Functional Setup Model. Please see the following screens:

Solution:

You must select the “Topology Manager” and “Functional Setup Model” libraries along with the Applications Core and Applications Core (Attachments Model) libraries. You must also add the “Java EE 1.5” and “Java EE 1.5 API” libraries.
You will still get the following warning:
Business Components: Unable to load Business Components Project.  File not found.
    Object: oracle.apps.fnd.applxdf.dm.model.Model
    Owner:  oracle.apps.model.Model 
You can ignore it for now and run the ADF Business Component Browser to test your ADF model.
Note: This appears to be a bug, as this class is not present in the <Jdev location>/jdeveloper/atgpf/lib/oracle.apps.fnd.applxdf.jar file.
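You can verify this yourself by listing the jar contents (a sketch; adjust the JDeveloper install path to your own):

# Search the jar for the class named in the error message
unzip -l /u01/jdeveloper/atgpf/lib/oracle.apps.fnd.applxdf.jar | grep -i "dm/model"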
The following screen shows the libraries in your Model Project:
 
Note: Some of the libraries such as “Commons *” are automatically added with Application Core as secondary imports.
 
 

Changes in SOA Human Task Flow (Run-Time) for Fusion Applications


Recently I was engaged with one of the customers to deploy custom SOA composite to one of the domains in Fusion Applications environment. The SOA composite was basic and had a simple Human Task Flow component. At runtime, the human task was created suc...

Tips and Tricks When Upgrading IDM for Fusion Applications – RUP1 through RUP3


In this article, I want to share some lessons learned from doing the IDM portion of the FA upgrade. Overall, the process of upgrading the security components is straightforward if you follow the instructions carefully, but there's no denying that the IDM part of the upgrade is a little more hands-on and potentially confusing than the FA part.

A Note About Versions

When you start reading the documentation below, you may notice, like I did, that the naming convention for the upgrades is changing -- up to now, the upgrades have been referred to as RUP1, RUP2, etc., but the documentation no longer reflects that:

Version         Old Name          New Name
11.1.2.0.0      FA 11gR1 RUP1     FA 11gR1 Update 1
11.1.3.0.0      FA 11gR1 RUP2     FA 11gR1 Update 2
11.1.4.0.0      FA 11gR1 RUP3     FA 11gR1 Update 3

You will see references to both naming conventions, so you should keep both in mind when searching in My Oracle Support or in your favorite search engine for information.
Important Note: These are incremental, not cumulative, patches. You will need to apply RUP1 before you can apply RUP2, and the same goes for RUP3 (RUP1 and RUP2 need to be applied first).

The Documentation

Starting with RUP2/Update 2, the documentation you need for the upgrade consists of three separate docs:

For RUP1/Update 1:
Release Notes: MOS Document 1382781.1
Patching Guide: http://docs.oracle.com/cd/E25054_01/fusionapps.1111/e16602/toc.htm

For RUP2/Update 2:
Release Notes: MOS Document 1439014.1
Patching Guide: http://docs.oracle.com/cd/E25178_01/fusionapps.1111/e16602/toc.htm
IDM Upgrade Procedure: MOS Document 1435333.1

For RUP3/Update 3:
Release Notes: MOS Document 1455116.1
Patching Guide: http://docs.oracle.com/cd/E28271_01/fusionapps.1111/e16602/toc.htm
IDM Upgrade Procedure: MOS Document 1441704.1

You should always start with the release notes before looking at the patching guide. The IDM upgrade procedure is a complement to the patching guide -- while following the patching guide for the upgrade you're performing, it will refer you to the IDM upgrade doc, which will then refer you back to the patching guide to continue.
Important Note: These docs have all been updated over time -- make sure you check online before starting your upgrade to make sure that you have the most recent version!

Upgrading IDM from RUP1/Update 1 to RUP2/Update 2

Most of this consists of patching, but there are a couple of major changes on the IDM side, namely the upgrade of the IDM Suite from 11.1.1.5.0 to 11.1.1.6.0, which in turn requires the upgrade of WLS in the IDM domain to 10.3.6.

Things You Should Do Before You Start the Upgrade

As of the time I'm publishing this, the RUP2 binaries on eDelivery come with the wrong copy of the IDM Suite installer (version 11.1.1.5.0 instead of the required 11.1.1.6.0). This means you will need to separately download the correct version from eDelivery. Once you have everything downloaded and ready to go, it's worth going through the binaries to identify where the IDM-specific patches are so you don't have to search for them later. In my case, everything was unpacked to "/u01/backup/rup2" on the machine where I first tried this:

Patch                Location
13797139        /u01/backup/rup2/installers/oracle_common/patch/13797139
13642895        /u01/backup/rup2/installers/oracle_common/patch/13642895
13686287        /u01/backup/rup2/installers/oracle_common/patch/13686287
13579026        /u01/backup/rup2/installers/oracle_common/patch/13579026
13782459        /u01/backup/rup2/installers/pltsec/patch/13782459
13620505        /u01/backup/rup2/installers/pltsec/patch/13620505
13399365        /u01/backup/rup2/installers/iam_patches/13399365
13115859        /u01/backup/rup2/installers/iam_patches/13115859
13684834        /u01/backup/rup2/installers/iam_patches/13684834
13477091        /u01/backup/rup2/installers/webgate/ext

Note that the last patch (13477091) is an OAM 10g patch for WebGate; it is not applied with OPatch but run as a standalone executable.
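For reference, each of the OPatch-based patches above is applied the same way (a sketch; set ORACLE_HOME to the home the patch targets before running it):

# Apply a patch from its unpacked location
cd /u01/backup/rup2/installers/iam_patches/13399365
$ORACLE_HOME/OPatch/opatch apply

# Roll a patch back by id if you need to back it out (e.g., to resolve a conflict)
$ORACLE_HOME/OPatch/opatch rollback -id 13399365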
In a similar vein, the doc references some common paths, but the example paths in the doc do not reflect the paths specified in the Enterprise Deployment Guide. I went through the paths on one of my lab servers, which does follow the EDG:

IDM_ORACLE_HOME                             /u01/app/oracle/product/fmw/idm
OID_ORACLE_HOME                             /u01/app/oracle/product/fmw/idm
IDM_ORACLE_COMMON_HOME        /u01/app/oracle/product/fmw/oracle_common
IAM_ORACLE_HOME                            /u01/app/oracle/product/fmw/iam
OIM_ORACLE_HOME                            /u01/app/oracle/product/fmw/iam
IAM_ORACLE_COMMON_HOME       /u01/app/oracle/product/fmw/oracle_common
SOA_ORACLE_HOME                           /u01/app/oracle/product/fmw/soa
OHS_ORACLE_HOME                           /u01/app/oracle/product/fmw/web
WEB_ORACLE_HOME                          /u01/app/oracle/product/fmw/web
OHS_WEBGATE_ORACLE_HOME      /u01/app/oracle/product/fmw/oam/webgate
OHS_ORACLE_COMMON_HOME      /u01/app/oracle/product/fmw/oracle_common

Again, this is an exercise worth doing before you begin to make the upgrade steps go a little more quickly.

Following the Upgrade Doc

The upgrade doc divides the tasks into 13 steps, and I will highlight some specific comments below for the steps where I encountered an issue or where I felt that some clarification was warranted. For example, there are four patches (13797139, 13642895, 13686287 and 13579026) that you are instructed to apply in Step 6 and then again in Steps 7 and 8, which made no sense at first. On a closer read, however, it became clear that the doc was covering a very generic deployment where the IDM Suite, the IAM Suite and the Web Tier are on separate WebLogic domains -- and that's not the case for FA installations that follow the EDG. So you really only need to apply the four patches in Step 7, because the WLS-based security components are all in one domain.

Step 4: Create Backups

Do not neglect this step! Upgrading FA is a large and complex effort.

Step 5: Download Required Patches

This step was a bit of a puzzle for me, because it belongs in the "Before Upgrading" section before Step 1. Besides, with the exception of the IDM Suite 11.1.1.6.0 installer, everything else is included in the RUP2 binaries that you download from eDelivery.

Step 6: Upgrade the IDM Node

There should be no drama in this section, but remember from above that you can wait until Step 7 to apply those four patches. If you don't, you will need to roll back and reapply them in the next step.

 Step 7: Upgrade the IAM Node

Remember that if you followed the EDG, the IDM and IAM nodes are one and the same and you have already upgraded WLS (and don't need to do it again).

Apply Oracle Identity Manager patch 13399365

There are two things that happened to me when I tried this the first time. The first was an error from OPatch saying that this patch conflicts with patch 12961473. It turns out this is a known issue, and the fix is to simply roll back patch 13399365 and then reapply it. The second was deploying the new weblogic.profile file. There were some unfamiliar parameters that needed values, so to save you some time, here is a copy of what it should look like for FA:

# For passwords: if you don't want to put a password in this file, just comment it out here and you will be prompted for it at runtime.
# Necessary env variables [Mandatory]
ant_home=/u01/app/oracle/product/fmw/modules/org.apache.ant_1.7.1
java_home=/u01/app/oracle/product/fmw/jrockit-jdk1.6.0_24
mw_home=/u01/app/oracle/product/fmw
oim_oracle_home=/u01/app/oracle/product/fmw/iam
# DB configuration variables [Mandatory]
operationsDB.user=FA_OIM
# The database password is optional; if you want to give it on the terminal itself, leave it commented. Otherwise uncomment it.
OIM.DBPassword=Welcome1
operationsDB.driver=oracle.jdbc.OracleDriver
operationsDB.host=idmdbhost.mycompany.com
operationsDB.serviceName=oimedg.mycompany.com
operationsDB.port=1521
appserver.type=wls
isMTEnabled=false
# Needed only if you have multi-tenancy enabled in your environment
mdsDB.user=<MDS DB Schema owner>
# Password is optional; if you want to give it on the terminal itself, leave it commented. Otherwise uncomment it.
#mdsDB.password=<MDS DB Schema password>
mdsDB.host=<MDS DB Host>
mdsDB.port=<MDS DB port>
mdsDB.serviceName=<MDS DB ServiceName>
# For domain level configurations [Mandatory]
# Put your admin server related credentials here
weblogic_user=weblogic
# Password is optional; if you want to give it on the terminal itself, leave it commented. Otherwise uncomment it.
weblogic_password=Welcome1
weblogic_host=servername
weblogic_port=7001
weblogic.server.dir=/u01/app/oracle/product/fmw/wlserver_10.3
# OIM-specific domain level parameters [Mandatory]
oimserver_host=servername
oimserver_port=14000
oim_managed_server=wls_oim1
oim_domain_dir=/u01/app/oracle/admin/IDMDomain/aserver/IDMDomain
isSODEnabled=false
# SOA-specific details [Mandatory]
soa_home=/u01/app/oracle/product/fmw/soa
soa_managed_server=wls_soa1
soaserver_host=servername
soaserver_port=8001
# Put the name of the target for taskdetails here; in a non-cluster it is the SOA server name, and in a cluster it is something like cluster_soa
taskdetails_target_name=cluster_soa
isOHSEnabled=false
# The following params are needed only if you have OHS enabled in your env
ohs_home=/u01/app/oracle/product/fmw/web
# Set this to true if your env is FA; set it to false or ignore it if your env is non-FA
isFAEnabled=true

The patch_weblogic.sh script is in the same directory as weblogic.profile. You may see an error in the last step it runs; this seems to be because the script tries to deploy the SOA artifacts before the server is fully up. This is another known bug, and the easiest fix is to check for undeployed apps in the WebLogic console and deploy them manually.

Upgrading IDM from RUP2/Update 2 to RUP3/Update 3

Most of this consists of patching, but there are a couple of changes on the IDM side, namely the upgrade of the WebGates from 10g to 11g. The other thing (and it's not quite spelled out in the doc) is that the FA database is upgraded to 11.2.0.3; while this is mandatory for the FA DB, it is optional for the IDM DB. For that reason alone, if you are going to upgrade the IDM DB as well, I recommend that you wait until the overall upgrade is complete and smoke tested before doing it.

Things You Should Do Before You Start the Upgrade

Once again, you should map the homes as above to have those paths handy, and locate the patches in the RUP3 binaries before, rather than during, the upgrade:

Patch 12575078, which contains Oracle Access Manager WebGates (11.1.1.5.0)
 /u01/rup3/installers/webgate/Disk1
Patch 13453929 for Oracle Access Manager WebGates
 /u01/rup3/installers/webgate/patch/13453929
Patch 13735036 for Oracle Identity Manager
 /u01/rup3/installers/idm/patch/13735036
Patch 13768278 for IDM Tools. Note: Be sure you download the 11.1.1.5.0 version of this patch, which is named p13768278_111150_Generic.zip.
 /u01/rup3/installers/idm/patch/13768278
Patch 13787495 for Oracle Access Manager Config Tool
 /u01/rup3/installers/oracle_common/patch/13787495
Patch 13879999 for Oracle Internet Directory
 /u01/rup3/installers/pltsec/patch/13879999
Patch 13893692 for Oracle SOA Manager
 /u01/rup3/installers/oracle_common/patch/13893692 (for ORACLE_COMMON_HOME)
 /u01/rup3/installers/soa/patch/13893692 (for SOA_HOME)
Patch 13901417 for Oracle Access Manager
 /u01/rup3/installers/idm/patch/13901417

Important Note: Patch 13893692 has two parts that are applied separately, and the two parts live in different directories.
Finally, when you install the 11g WebGate(s), you will need a couple of specific gcc libraries present. In my case, the server had the 32-bit versions but not the required 64-bit versions. You can build these from scratch by downloading the source from GNU, but if you're like me and in a hurry sometimes, you can just get them via yum:

 yum update libgcc
 mkdir /u01/app/oracle/product/fmw/gcc_lib
 cd /u01/app/oracle/product/fmw/gcc_lib
 ln -s /lib64/libgcc_s-4.1.2-20080825.so.1 ./libgcc_s.so.1
 ln -s /usr/lib64/libstdc++.so.6.0.8 ./libstdc++.so.6

You will need this in Step 9.

Following the Upgrade Doc

The same applies here as above -- there are 14 steps in this doc, with a special section that applies to Solaris.

Step 8: Upgrade the IAM Node

When you apply patch 13893692, you may get a message stating that it is already present. Just to be safe, I rolled it back and reapplied it.

Step 9: Upgrade the OHS Node

This is where you need those gcc libraries -- in the installer, specify the path that contains the symbolic links above (/u01/app/oracle/product/fmw/gcc_lib for me) as the location of the gcc libraries. The file names have to match those above.
You may also need to add the following lines to the new httpd.conf to ensure that logins work as they did previously:

 #*******Default Login page alias***
 Alias /oamsso "/u01/app/oracle/product/fmw/oam/webgate/access/oamsso"

Coda

So that's it. If you're careful and methodical, you shouldn't encounter any issues for the IDM part of the upgrade. The two upgrade docs, while a bit confusing in some respects, are complete and should work for you. If anybody has their own experiences to share, please do.

Splitting Fusion Applications Topology from Single to Multiple Host Servers

Introduction The purpose of this technical paper is to document how to split Fusion Applications domains to multiple machines when originally provisioned in a single machine. This is different than scaling out or scaling up a Fusion Applications environment. This is also not to be confused with moving Fusion Applications from one environment to another. […]

Configuring Essbase Cluster for Fusion Applications


If you have provisioned an FA environment using Release 3 (11.1.3) or Release 4 (11.1.4) and followed Chapter 14 of the Enterprise Deployment Guide to cluster BI and Essbase, you will get an error similar to the one below when running Create Cubes ESS ...

Starting and Stopping Fusion Applications the Right Way


OverviewStarting and stopping Fusion Applications is a complex task that involves invoking commands for multiple components (including WebLogic Domains, OPMN-Managed Instances, Database instances, 3rd Party software, etc) in multiple hosts. This proces...


IDM Maintenance Tasks for Fusion Applications

Introduction Fusion Applications represents a very large and complex suite of applications, along with a large and complex set of security components to keep the business resources that FA manages under control. The Enterprise Deployment Guide is the starting point for a deployment, but it does not address operations and maintenance. In this article, the […]

Setting memory parameters for servers in Fusion Applications


Note: The following article applies to Fusion Apps Release 4 (11.1.4 or RUP3) or lower. The procedure has changed in Release 5 (11.1.5 or RUP4) and I'll update the post soon with details.Setting memory parameters for Admin and Managed servers of variou...

Setting up HTTPS on OHS for Fusion Apps


Hello and Welcome to Everyone!I've been selected to write the inaugural post for the team's blog, and the topic I've chosen is one that I've had to help a couple of clients with in the past couple of weeks: How to replace the default SSL certificates f...

Split profiles with AD and OID for Fusion Apps IDM

In this post I will walk you through how to set up split profiles with AD and OID as backend directory servers, while Oracle Virtual Directory (OVD) links them together to present a single consolidated view.
 
This is a very generic implementation scenario, but it is very important when setting up IDM for Fusion Applications, where clients would like to use their existing enterprise repository for the user base. A very common example is provisioning users out of an existing AD without replicating the user base to another repository; that is when the split-profile AD and OID configuration comes into play, with OVD presenting the consolidated view.
 
 
Here are some FAQs:
  1. Why do we need OID for Fusion Applications when an existing enterprise repository can be used?
    a. All the Fusion Applications-specific and Oracle-specific attributes are created in OID.
  2. Can multiple directories still be used as identity stores?
    a. Yes. Multiple directories can still be used as identity stores, with Oracle-specific attributes present in OID and enterprise-specific and Fusion Applications-specific attributes present in, say, AD. I will discuss this scenario in upcoming blogs.
  3. Are user login IDs unique across directories?
    a. Yes, this is a prerequisite. This and other prerequisites and limitations are discussed in detail in the IDM Enterprise Deployment Guide for Fusion Applications, under configuration of directories other than OID.
  4. When is a good time to configure split directory mode, before or after FA provisioning?
    a. I will stress this and recommend going with this configuration after FA provisioning is completed.
    b. It can also be done prior to FA provisioning; in that case, the recommendation is to complete the IDM environment with OVD and OID (ID store, policy store), validate the IDM environment, and then proceed with the split AD configuration.
    c. Configuring AD and OID before IDM validation is prone to a good number of user errors.
     
For easy understanding and simple configuration, I will stick to the scenario of a split-profile configuration where the existing enterprise repository is not extended. In this scenario, this is how the view looks from the OVD level (adapter plug-in view / unified view):

As you see in the image above, even though the actual base of both the OID and AD repositories is the same (dc=us,dc=oracle,dc=com), the OVD adapters are configured to map each uniquely and to consolidate them into a unified view of dc=adidm,dc=oididm,dc=com.

Now let's get into action on how to create the above configuration. At a high level, this can be split into 5 tasks:

    1. Set up the shadow directory in OID
    2. Create a shadow joiner
    3. Create user adapters for AD and OID
    4. Create changelog adapters for AD and OID
    5. Create a Join View adapter and Global Change Log plug-in

1. Set up OID as the shadow directory

Since AD is not being extended, OID is used as a shadow directory, and Oracle Virtual Directory merges the entities from the two directories. For this purpose, we need to create a container in OID to store the Fusion Apps-specific attributes.
     
    a. Create the 'shadowentries' container in OID (below is a sample ShadowADContainer.ldif):

    dn: cn=shadowentries
    cn: shadowentries
    objectclass: top
    objectclass: orclContainer

 
b. Load the container with the following command:
$ORACLE_HOME/bin/ldapadd -h <oid-host> -p <oid-port> -D cn=orcladmin -w <password> -c -v -f ShadowADContainer.ldif

c. Create ACIs on the newly created container to grant access to RealmAdministrators and OIMAdministrators (the group that performs all ID administration in OIM):

dn: cn=shadowentries
changetype: modify
add: orclaci
orclaci: access to entry by group="cn=RealmAdministrators,cn=groups,cn=OracleContext,dc=us,dc=oracle,dc=com" (browse,add,delete)
orclaci: access to attr=(*) by group="cn=RealmAdministrators,cn=groups,cn=OracleContext,dc=us,dc=oracle,dc=com" (read,write,search,compare)
orclaci: access to entry by group="cn=OIMAdministrators,cn=groups,dc=us,dc=oracle,dc=com" (browse,add,delete)
orclaci: access to attr=(*) by group="cn=OIMAdministrators,cn=groups,dc=us,dc=oracle,dc=com" (search,read,compare,write)
-
changetype: modify
add: orclentrylevelaci
orclentrylevelaci: access to entry by * (browse,noadd,nodelete)
orclentrylevelaci: access to attr=(*) by * (read,search,nowrite,nocompare)
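The ACI changes are then loaded with ldapmodify, analogous to the ldapadd above (a sketch, assuming the LDIF above is saved as ShadowADAci.ldif):

$ORACLE_HOME/bin/ldapmodify -h <oid-host> -p <oid-port> -D cn=orcladmin -w <password> -c -v -f ShadowADAci.ldif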

 
d. An image of how the shadow container looks after creation.

Note: All the steps hereafter are to be performed by connecting to OVD via ODSM. You can use the screenshots as pointers for configuration.


2. Create the Shadow Joiner Adapter

Shadow Joiner User Adapter settings 

3. Create User Adapters for AD and OID

You will need to create a User Adapter for both AD and OID. Use these screenshots as pointers:
3.1 User Adapter for AD

AD User Adapter Parameters

3.2 User Adapter for OID

OID User Adapter Parameters

4. Create Change Log Adapters for AD and OID

4.1 Change Log Adapter for AD

4.2 Change Log Adapter for OID

5. Create a Join View Adapter and Global Change Log Plug-in

5.1 Join View Adapter Settings

5.2 Global Change Log Plug-in

Finally, this is how the summary of all the OVD adapters appears in the Home tab for OVD in ODSM.
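A quick way to verify the consolidated view at this point is to search OVD directly under the unified base (a sketch; the host, port, and sample uid are placeholders):

# Search the OVD unified view for a user that physically lives in AD or OID
ldapsearch -h ovdhost.mycompany.com -p 6501 -D cn=orcladmin -w <password> -b "dc=adidm,dc=oididm,dc=com" "(uid=jdoe)"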
Next Steps
Now that the split profile is configured, what settings need to change in OAM and OIM? I will discuss that in the next blog.

Super User/Role setup for Common Implementation of Fusion Apps

 
Hello Everyone!
This post is about preparing the installed IDM/FA environment for the common implementation of Fusion Applications (FA).

 

Please note that this procedure needs to be done for a bare metal installation. For an OVM template-based installation, it has been observed that these steps have already been done as part of the installation. My suggestion for OVM would be to validate these steps and complete any of them as needed.

 

After the installation of FA, an organization starts the common implementation, which involves a series of tasks shown below.
 

 

This blog is limited to explaining the first 2 tasks in the above diagram. Kindly refer to the Getting Started with Fusion Applications: Common Implementation document for further details.

A little background on why we are doing this:
In Fusion Applications, users along with security are managed by HCM task flows, which require enterprise structures to be set up. For setting up these enterprise structures, we need to create specific users in HCM. Initially, since there will not be any enterprise structures, we need a super user who can create the appropriate implementation users.
The implementation user created by the super user will in turn be responsible for providing:
  1. Users and their security management.
  2. Implementation project management.
  3. Enterprise structure creation and management.

As part of our post IDM/FA install steps, we have to complete the following two tasks using the OIM system administrator:

  1. Preparing Oracle Fusion Applications Super User for User Management and Configuration
  2. Preparing IT Security Manager Role for User and Role Management

Requirements

Before we begin, make sure of the following:

  1. The FA install is successfully completed, and any RUP installs are done and successfully completed.
  2. URLs for Oracle FA and OIM are available.
  3. OIM system administrator user and Super User (FAAdmin or weblogic_fa or user defined) credentials are available.

 

Preparing Oracle Fusion Applications Super User for User Management and Configuration
During the provisioning and installation of Oracle Fusion Applications, a super user is created by default (FAAdmin or weblogic_fa, etc., as provided during the installation). However, the email ID for this super user may not be set up correctly during provisioning and installation. The first task is to make sure the super user has a valid email ID, as it is mandatory for user management and configuration. This can be done in a couple of ways.

a) Command Line Interface (Linux)

  1. Open a new terminal.

  2. Using vi or gedit, create an ldif file with the following contents (superuseremail.ldif). I had stored this file along with other property files in the directory /u01/fastage/prop_files:

dn: cn=weblogic_fa, cn=users, dc=mycompany, dc=com
changetype: modify
replace: mail
mail: valid e-mail_address

Note that the super user in this case is "weblogic_fa".
  3. In the Oracle Identity Management (IDM) domain, set the Oracle Home to point to IDM:
$> export MW_HOME=/u01/app/oracle/product/fmw
$> export ORACLE_HOME=$MW_HOME/idm
  4. Run the ldapmodify command to modify the super user's (in this case weblogic_fa) email ID:
$> $ORACLE_HOME/bin/ldapmodify -h idstore.mycompany.com -p 389 -D cn=orcladmin -w Welcome1 -f $HOME/prop_files/superuseremail.ldif
Note that we bind as the directory administrator "cn=orcladmin" to make the email change to the super user "weblogic_fa".
  5. Make sure that the command runs without any errors.

 

  6. Run the reconciliation detailed below (after the GUI Interface for ODSM section).

 

b) GUI Interface (ODSM)

  1. Log in to ODSM using the OIM administrator (xelsysadm).
  2. Click “connect to directory”, select OID – OID-SSL, and log in as the administrator (cn=orcladmin).
  3. Select the Data Browser tab and expand DN “dc=com”, which provides the details of the users.
  4. Navigate to and select the super user “weblogic_fa”. On the right-side pane, you can edit/change the email address.
  5. Press APPLY to apply the changes.

Reconciliation

Run the reconciliation to sync LDAP with OIM.

  1. Launch the OIM URL and sign in with the OIM system administrator user name and password.
  2. Click the Advanced link in the upper right of the interface.
    a. Click Search Scheduled Jobs in the System Management tasks.
    b. Enter "LDAP User Create and Update Full Reconciliation" in the Search Scheduled Jobs field.
    c. Select the job in the search results.
  3. Click Run Now to reconcile user updates based on the change log from LDAP. Scroll down to make sure that the job has run successfully.
 
 
The super user created during installation and provisioning can implement the Oracle Fusion application and administer security. However, it does not have the roles needed to create and manage Oracle Fusion Applications users. Hence, for the IT Security Manager role, we add the following OIM roles:
  • Identity User Administrators, which carries the user management entitlement
  • Role Administrators, which carries the role management entitlement

Note: If you plan to implement your pilot project entirely while signed in as the super user and do not plan to create additional users, then you can skip this step. In reality, there will be multiple Fusion Applications users created for various transactions, and you will most likely need to perform this step.

 

  1. Sign in to OIM: launch the OIM URL and use the OIM system administrator user name and password to sign in.
  2. Click on Administration in the upper right of the interface.
    a. Search for the IT Security Manager role, and select the role name in the search results.
    b. From the Hierarchy tab, click on Inherits From.
    c. Click on Add.
    d. Select the role category OIM Roles and click the find arrow.
    e. Select IDENTITY USER ADMINISTRATORS and ROLE ADMINISTRATORS (ctrl + click) and move them to the Add Role list.
    f. Click Save. This grants both roles (Identity User Administrators and Role Administrators) to the IT Security Manager role.
  3. ALTERNATE for task #2 above: you may instead add the SYSTEM ADMINISTRATORS role (which inherits from both Identity User Administrators and Role Administrators) to the IT SECURITY MANAGER role.

 

 
  4. Return to the Welcome to Identity Manager Delegated Administration page:
    • In the search pane, enter Xellerate Users in the search field.
    • Select Organization in the left drop-down box and hit the search arrow.
    • Select the organization name in the search results. The left pane should now display the corresponding details of Xellerate Users.
    a. Click the Administrative Roles link in the row of links above the Xellerate Users page.
    b. In the pop-up window, click ASSIGN.
    c. In the Filter By Role Name field of the Details window, enter *IT_SECURITY_MANAGER*.
    d. Click Find.
    e. Enable Read, Write, Delete, and Assign.
    f. Click Assign and Confirm.
  5. Close the window and sign out.
This concludes this post on preparing the installed IDM/FA environment for the common implementation of Fusion Applications.