Sunday, December 17, 2017

Transform your on-premise Oracle Investments to Cloud - A Perspective !!

This article was inspired by some of the questions customers ask me every day.

1) We have made a lot of on-premise Oracle investments - especially database. How can Oracle help us with our cloud transformation initiatives?
2) How does Oracle DB Cloud Service compare to AWS RDS and Oracle software on Azure? Why should we choose Oracle Cloud over the competition?
3) What is Oracle's strategy and vision for enterprise customers who have made significant investments over the years on-prem?
4) Other than price-point TCO benefits, what other benefits does Oracle Cloud offer?

In my role as an enterprise cloud architect, I engage with my customers by bringing in a point of view that helps nurture long-term strategy discussions, enrich ideas, and propose solutions & options to further their cloud/digital transformation endeavors.

In this article, we will analyze a typical customer scenario with various cloud options, inherent PaaS advantages, cost comparisons and non-quantifiable benefits.

Before we delve deep into the details and cost comparisons, I want to state a safe harbor disclaimer: all views (including data points, pricing and options) expressed in this article are my own, based on my experience, and do not necessarily reflect the views of Oracle. As an Oracle enthusiast and evangelist, I intend this article purely to present a point of view and analyze options, value and benefits.

Okay... Let's take a quick peek at the Oracle database cloud offerings. Built on the basic premise of offering "complete choice", customers have the option to subscribe to the smallest standard DB instance on a VM for development, a 2-node RAC cluster DB instance on bare metal for high-performance production workloads, or the subscription-based extreme-performance Exadata in the cloud.

Unique to Oracle Cloud, for customers with existing on-premise database licenses, it's an understatement to say the BYOL PaaS pricing model is "attractive". For a quick comparison, at published Pay-as-you-go pricing:

License included DBCS Enterprise Edition (1 OCPU / Hour) is $0.8064
BYOL to Oracle DBCS Enterprise Edition (1 OCPU /Hour) is $0.2903

That is 64% savings right off the bat.

1 OCPU is the equivalent of one physical core of an Intel Xeon processor with hyper-threading enabled - equivalent to 2 AWS vCPUs or 1 Azure core.
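As a quick sanity check, the savings figure can be reproduced from the hourly rates quoted above (a minimal sketch using the published PAYG prices):

```python
# Published PAYG rates quoted above, $ per OCPU-hour
license_included = 0.8064  # License-included DBCS Enterprise Edition
byol = 0.2903              # BYOL to Oracle DBCS Enterprise Edition

# Fractional savings of BYOL over license-included pricing
savings = 1 - byol / license_included
print(f"BYOL savings: {savings:.0%}")  # -> BYOL savings: 64%
```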

Let's now look at how this compares to Oracle database on AWS, Azure and GCP. This list is not exhaustive but a selection of a few key considerations for enterprise mission-critical workloads.

AWS and Azure are authorized cloud environments. Google Cloud Platform is not an authorized cloud environment for Oracle Database (predominantly because of how GCP virtualizes their servers).

However, should customers choose AWS or Azure to host Oracle Database? That depends on a few factors:

The first and foremost consideration when customers move workloads to the cloud: IaaS or PaaS? A database on IaaS offers only "IaaS" benefits, like saving datacenter costs. PaaS options like Oracle Database Cloud Service offer a higher level of service benefits in the cloud, including automated provisioning, elastic scaling, patching, rollback etc.

a) High Availability (HA): For customers with HA needs, this could be a deal breaker, as neither Azure nor AWS supports RAC (Real Application Clusters). At best, AWS RDS offers replication and Multi-AZ deployments, but not with zero downtime.

b) PaaS / Fully Managed: If you are looking for a fully managed, elastic, seamlessly scalable service with full-stack patching capabilities, AWS/Azure may not be the right fit.

c) License Cost: Although AWS and Azure are authorized cloud environments for running Oracle database, when counting Oracle Processor license requirements, the Oracle Processor Core Factor Table is not applicable. This basically makes it 2x more expensive for customers to run Oracle database on AWS/Azure than on-premise.
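To see where the 2x comes from, compare license counts for the same 16 physical cores with and without the core factor (a sketch; assumes the common 0.5 core factor for x86 processors):

```python
cores = 16
core_factor = 0.5  # applies on-premise for most x86 processors

on_prem_licenses = cores * core_factor  # Core Factor Table applies
cloud_licenses = cores * 1.0            # factor not applicable on AWS/Azure

print(cloud_licenses / on_prem_licenses)  # -> 2.0
```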

d) Provisioned IOPS: Costs can quickly add up if customers choose "provisioned IOPS" SSD storage. By default, Oracle Cloud offers high-performance NVMe-based SSD storage for all workloads.

e) Data Security & Encryption: TDE (Transparent Data Encryption) is included and enabled by default in the Oracle Cloud for all Oracle editions and options (including database Standard Edition). For example, with AWS, customers must buy the "Advanced Security" option.

f) Database Options: Oracle Cloud bundles database options into 4 broad offerings: Standard, Enterprise, Enterprise High Performance & Enterprise Extreme Performance. For BYOL customers, even the basic Enterprise Edition comes with database options such as Diagnostics Pack, Tuning Pack, Real Application Testing and Data Masking & Subsetting Pack included. This means customers with a Database EE license can leverage these features in the cloud even if they are not currently licensed on-premise - a huge advantage.

g) Backup & Restore: Oracle offers in-place restore for your database backups. This means you can choose any of the available backups (automated / point-in-time / most recent) and perform a restore on the same database instance. In contrast, AWS allows restore from backups but creates a "new" database instance - potentially requiring application connectivity, VPC and security group re-configuration.

Now, let's take a typical customer scenario as we walk through various options:

Current Install Base (8 Processor Licenses):

  • Oracle Database Enterprise Edition

Licensed Database Options:
  • Partitioning
  • Real Application Clusters (RAC)
  • Active Data Guard
  • Advanced Compression
  • Database Vault
  • Diagnostics Pack
  • Tuning Pack
  • OLAP
  • Advanced Security

A quick note on Oracle on-prem license metrics - 1 Processor license typically carries a 0.5 core factor multiplier unless customers have deployed on high-horsepower systems such as Intel Itanium or IBM POWER.

In this scenario, this means the customer can deploy Oracle software on 16 cores - which is typically equivalent to 32 vCPUs in a virtualized environment (assumption: 1 physical core -> 2 threads).
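The arithmetic above, as a minimal sketch (assuming the 0.5 core factor and 2 hardware threads per core):

```python
processor_licenses = 8
core_factor = 0.5     # Oracle Processor Core Factor for most x86 chips
threads_per_core = 2  # hyper-threading assumption from the scenario

# licenses = cores * core_factor, so cores = licenses / core_factor
physical_cores = processor_licenses / core_factor
vcpus = physical_cores * threads_per_core
print(int(physical_cores), int(vcpus))  # -> 16 32
```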

At list price, the initial cost of the above configuration would be $1.27 M (including software license acquisition & support). Pragmatically, at a 60% discount, this could be around $500 K.

                  Year 1     Year 2     Year n
DB EE License     $1.27 M    $0         $0
Support           $358 K     $358 K     $358 K
Total             $1.63 M    $358 K     $358 K
@ 60% Discount    $508 K     $143 K     $143 K

Now, let's pivot this on-premise database to PaaS (Database as a Service)...
The customer has 2 options:

  • Subscribe to "license-included" DBCS (PaaS). This would preserve their on-prem licenses which could be re-purposed for other projects still on-prem
  • BYOL (Bring Your Own License) option - Convert on-premise database investments to cloud with heavily discounted PaaS subscription costs (Credits applied since customer owns on-prem Oracle database licenses)
For the same configuration, the closest license-included option is DBCS Extreme Performance (with support for RAC & Active Data Guard). The customer is also entitled to other database options like In-Memory, Advanced Analytics, etc., as they are bundled under the Extreme Performance edition.

However, with BYOL, customers can bring their DB Enterprise Edition license along with the licensed options and run them on Oracle Cloud as PaaS. In this case, the customer also gains access to features like Real Application Testing and the Data Masking & Subsetting Pack.

This is another unique Oracle Cloud feature. For example, AWS does not offer a "license-included" RDS option for Oracle Database Enterprise Edition.

Irrespective of the option chosen, the subscription cost includes the underlying infrastructure (compute, storage & networking), infrastructure support, software (database) license, software support and automations.

                                            Year 1    Year 2    Year n
License Included DBCS Extreme Performance   $360 K    $360 K    $360 K
BYOL DBCS EE                                $41 K     $41 K     $41 K

Clearly, the BYOL option is the winner, with ~89% savings over license-included PaaS.
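The $41 K BYOL figure itself can be cross-checked against the hourly BYOL rate quoted earlier (a rough sketch; assumes all 16 OCPUs run 24x7 for a full year):

```python
byol_rate = 0.2903       # $/OCPU-hour, BYOL DBCS EE (quoted earlier)
ocpus = 16               # cores from the 8-processor-license scenario
hours_per_year = 24 * 365

byol_annual = byol_rate * ocpus * hours_per_year
savings = 1 - byol_annual / 360_000  # vs $360 K license-included

print(f"${byol_annual:,.0f}/year, {savings:.0%} savings")
# -> $40,688/year, 89% savings
```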

That's not all. The above is based on published PAYG pricing. Further discounting is available with monthly commits.

Of course, no one size fits all !! Customers have a wide range of options to choose their deployment on VMs, Bare Metal or Exadata. Engage your Oracle team for value-add services including portfolio analysis, TCO analysis and a tailored roadmap.

Please leave your feedback and thoughts.

Monday, November 20, 2017

The “Enterprise Cloud”: 5 reasons why Oracle’s Next-Gen Cloud Infrastructure is perfect for your Enterprise

Spend, Security & Sustainability are most likely the top 3 concerns of any CIO/CDO in the cloud era. "Cloud transformation" initiatives are at their peak. As enterprises pivot to the cloud, it's imperative not to create a "cloud spaghetti" - the same issue that haunts traditional on-prem systems. It is not the first one-off experimental project or the lift & shift of an application to cloud infrastructure that adds value in the longer run. Painting the enterprise's broader vision, ensuring the cloud vendor's compliance with "standards", seamless integration options (PaaS), and a roadmap for cloud maturity/evolution (SaaS) toward higher levels of service efficiency - these should be the key concerns of enterprise architects.

Purpose-built for diverse enterprise workloads, the next-gen Oracle Cloud Infrastructure promises consistently high peak performance, standards compliance and choice at simple, intuitive pricing.

Here are 5 ways how Oracle Cloud Infrastructure uniquely offers these capabilities;

1)      Modern X7 and GPU Instances

Oracle Cloud Infrastructure offers compute for a variety of workloads - from cloud-native application development to graphics-intensive applications. Modern X7 Skylake processors with up to 52 OCPUs, available in Standard, High I/O and Dense I/O shapes with local high-speed NVMe storage, along with Tesla P100 GPUs based on NVIDIA's Pascal generation, power Oracle Cloud Infrastructure.

2)      Choice of Compute & Deployment

Oracle is uniquely positioned to offer 3 deployment models - public cloud, private cloud & Cloud @ Customer - to serve customers of all shapes, sizes, needs and maturity levels. Customers can provision dedicated bare-metal servers in the cloud, where no provider software resides, or virtual machine instances, based on their needs. Also unique to Oracle Cloud Infrastructure is that it is optimized to run Oracle Databases and Oracle Applications, helping customers with their transition to the cloud.

3)      High Throughput 25Gbps Flat Network Infrastructure

With a flat network design, reaching any compute or storage node within Oracle Cloud Infrastructure takes no more than 2 hops - extreme performance. Latency between any two nodes within an Availability Domain is < 100 microseconds, and < 1 millisecond between Availability Domains. Unique to Oracle Cloud Infrastructure is the fact that there is no "tax" for HA - customers pay no "data transfer" charges for HA between Availability Domains.

4)      High Performance NVMe local & Flash-based Block Storage

Oracle Cloud offers best-in-class storage using industry-leading NVMe SSDs. In terms of performance, this means customers can get up to 25,000 IOPS per storage volume. Unique to Oracle Cloud Infrastructure is the model where customers don't get charged for provisioned IOPS, which makes IOPS-intensive use cases much cheaper to run. With out-of-the-box data-at-rest encryption, integrated backups and redundancy, customers pay a little over 4 cents per GB per month - that's ~$500 per TB for a year!
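The ~$500/TB/year figure follows directly from the per-GB rate (a sketch; the exact per-GB price, assumed here as $0.0425, stands in for the "little over 4 cents" mentioned above):

```python
price_per_gb_month = 0.0425  # assumed rate: "a little over 4 cents" per GB/month
gb_per_tb = 1000
months = 12

annual_cost_per_tb = price_per_gb_month * gb_per_tb * months
print(f"${annual_cost_per_tb:.0f} per TB per year")  # -> $510 per TB per year
```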

5)      Network Isolation

With security at the core of the design, Oracle Cloud Infrastructure virtualizes at the network layer – where it truly belongs. This helps fully encapsulate every customer’s traffic in a completely private SDN. With highly customizable VCNs (Virtual Cloud Networks), fully configurable IP addresses, subnets, routing, firewall and connectivity services, organizations can seamlessly extend their IT infrastructure by mirroring their internal networks or build new network topologies with fine-grained control.

Wednesday, February 8, 2017

Zero-2-Eventing in minutes: Dockerize Apache Kafka on Oracle Container Cloud

With extreme scaling, fault tolerance, replication, parallelism, real-time streaming and load balancing, Apache Kafka is arguably the most widely used distributed messaging platform today.

A few weeks ago, I was working with one of my customers on their enterprise cloud strategy. They are one of the largest retail brands in the US. As part of their rationalization exercise and "Pivot to the Cloud" strategy, their Kafka event hub had to be containerized and deployed on cloud infrastructure.

The idea of this blog post is to walk you through running a full-stack Docker-based Apache Kafka + Zookeeper cluster on Oracle Container Cloud in a matter of minutes, without having to deal with complex infrastructure/network setup, Docker toolset installs, upgrades and maintenance.

If you are new to Oracle Container Cloud, please refer to my earlier blog here.

If you would like to get a feel of the Oracle Container Cloud service, head out to and request a fully-featured instance.

Once you are logged-in as a cloud administrator, click on Container Cloud Service from the list of services available on the cloud dashboard.

In the Oracle Container Cloud Service console, click "Create Service" to create a new Container Cloud Service instance.

Define the service details on the "Create Service" page. Click Next and Confirm.

Give it a few minutes and you will find a Container Manager Node and Worker Nodes provisioned for use. Click on the service to explore the service details.

Click on the hamburger menu on the container service and choose "Container Console" to open the service administrator console. Login using the administrator user (provided during the service creation).

For users of Apache Kafka on Docker, you would be aware of the tens of publicly available images.

If you are looking for a simple single-container Kafka service where Zookeeper and the Kafka broker co-exist in a single container, I have found spotify/kafka easy to set up and use.
For the more complex multi-tier setup, where Zookeeper and the Kafka brokers run on dedicated container nodes, wurstmeister/kafka is the most popular option.

Since I want to demonstrate how to provision a production-grade Kafka stack on Oracle Container Cloud, we will go with the wurstmeister/kafka Docker image.

Go to the Services section and click the "New Service" button. You can see that OCCS offers multiple options to define a service container:

  1. Builder: For the not-so-tech-savvy users where you can simply enter service details and OCCS takes care of building the docker commands for you
  2. Docker Run: If you are a Docker pro, you can simply head out to this tab and enter your Docker Run commands directly. This is also a great option, if you already have existing Docker setup which allows you to simply copy paste your Docker Run command
  3. YAML: For the YAML lovers, you can also define your service using YAML constructs

The cool thing is that you can use any one, or a combination, of these options to define and create your container service. Any change you make in one of them is reflected immediately in the others automagically.

Let's use the "Builder" tab to define our first Kafka service.

Service Name: Provide a name for the Kafka service. Notice that the service ID is automatically generated which will be used to uniquely identify our service.

Service Description: Describe the service. E.g., My Kafka Event Hub.
Notice that this automatically creates an environment variable "occs:description".

Scheduler: Determines how & where containers will be provisioned across hosts.

Availability: Define availability of the service based on pool, host or tags.

Image: Enter "wurstmeister/kafka" (without quotes).
Since this is a public image available on docker hub, OCCS can pull this automatically. Remember you can also pull from private docker registries that you might have. If so, head over to "Registries" section on the main console to add your docker registry.

Command: Any commands you want to run on container startup go here.

In the "Available Options" panel, choose "Ports". This will add a new "Ports" section to your Builder panel. Click "Add". This will define on what port our Kafka service would run.
Leave the IP field empty (this would default to the container IP based on the host it would run - determined dynamically). Enter host port as 9092, container port as 9092 and choose TCP for protocol.
Your first Kafka service should look like below.

Now, head over to the "Docker Run" and "YAML" tabs and notice the service definition created automagically in the background while we were defining the service. Click "Save" to exit.

Let's now create another definition for our Zookeeper service. Kafka uses Zookeeper for cluster and member management.
Follow the same steps as earlier to create the new Zookeeper service which would run on port 2181.

Your Zookeeper service should look like below. Save & Exit.

Now that we have our Kafka and Zookeeper services ready, time to link them up together for our full-fledged Kafka stack on cloud.

Go to "Stacks" section and click "New Stack".
Note: This is the Docker Compose feature. If you already had your Docker Compose YAML files, you can simply copy-paste here to stack up your services.

Provide the new stack a name: MyKafkaStack (This would create a stack id automatically to uniquely identify the stack).

Notice all the services displayed on the right under the "Available Services" section.

Similar to the "Service" definition, "Stacks" offer 2 modes to define and create stacks. Either drag & drop services on UI (or) click on "Advanced Editor" to wire services using YAML constructs. Even better, use a combination of both.
Let's use a combination.

Let's drag & drop MyKafka and MyZookeeper services on to the "Stacks" screen.

Click on "Advanced Editor" to open the YAML composer. Immediately notice that the YAML script is generated based on the service we composed on the UI (drag & drop).

The Kafka service requires a few environment variables to be set to expose itself for external connectivity. Add the following environment variables to the YAML editor under the "MyKafka" service.

- "KAFKA_ADVERTISED_HOST_NAME={{hostip_for_interface .HostIPs \"public_ip\"}}"
- "KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181"

Note: We want the stack to run on "Any Host" irrespective of IP address changes. The expression above will fetch the IP address of the host dynamically at run-time. You can leverage the "Tips & Tricks" option in the editor to see some tips & examples.

Define the links under the "MyKafka" service to link it to our Zookeeper container, using a YAML construct like the one below (note: the alias "zookeeper" must match the host name used in KAFKA_ZOOKEEPER_CONNECT above):

links:
  - "MyZookeeper:zookeeper"

Ensure that your stack editor looks like below and click Done, then Save to exit the stack editor.

Note: A link is shown on the UI indicating the Kafka service is "linked" to the Zookeeper.

OCCS offers a convenient single-click deployment of stacks. Click "Deploy" next to the "MyKafkaStack" to deploy both Kafka and Zookeeper services.

On successful deployment, you should see 2 healthy services running. Note that OCCS allows you to define health checks on containers.

You can also define "webhooks" for your Continuous Integration (CI) / Continuous Delivery (CD) capabilities.

Let's quickly test our new Kafka stack. I am using my local Kafka command-line client to test the Kafka service. First, create a new topic, then start the producer and consumer scripts in your terminals.

We just deployed an Apache Kafka + Zookeeper Docker stack on Oracle Cloud. You can now start scaling your Kafka cluster, add more containers/hosts, dynamically scale up/down and use this as your cloud-based event hub.

In my next blog, we will see how to rapidly deploy a LAMP stack application on Oracle Container Cloud. Stay Tuned!

Tuesday, February 7, 2017

Simplify Cloud Native, Microservices DevOps with Oracle Container Cloud

"Containers" are becoming the new normal and an indispensable part of cloud native / microservices development. If you are new to the concept of containers, open a new tab and google. Container benefits are out of scope of this article.
With respect to cloud-native development, containers provide DevOps 2 huge benefits:

  • Robust foundation for microservices "style" architecture & scalability
  • Environment parity (Dev-Test-Prod) and seamless hybrid deployment

All things considered, containers are great for "Dev". Are they good for "Ops"?

With even more services to manage, monitor & maintain, containers certainly pose challenges unless you have a robust, easy-to-provision management and monitoring platform.

Oracle Container Cloud Service aims to solve exactly that problem. With comprehensive tooling to compose, deploy, orchestrate and manage container-based apps, Oracle container cloud enables rapid creation, deployment & management of enterprise-grade container infrastructure.

Let's take a peek under the hood;

Spin-up or Tear-down containers at-will:

Whether you are looking to quickly setup an infrastructure for testing your container apps or setting up a production-grade container infrastructure to run your apps, you can do it all with just a few clicks.

Oracle Container Cloud automatically provisions a manager node which acts as the "container management" server, and you can configure the shape and size of the hosts on which your containers run - called "worker nodes".

Group or Assign hosts to different pools for resource segregation using the "Resource Pools" feature.

In addition, "Tags" feature allows you to tag your resource pools, hosts, services and deployments. Tags provide fine-grained control over hosts/resource pools on which a service/stack can be deployed on.

Discover & Manage DNS information of all your running docker containers from the "Service Discovery" page.

BYOD (Bring Your Own Docker containers) or Start with example stacks:

Oracle container cloud links to the public docker hub registry out-of-the-box where you can pull from thousands of docker images. Whether you have a public docker repository or a private docker hub, you can add them to the container cloud docker registry.

If you are new to Docker containers, you can jumpstart with some of the in-built example services and stacks - Nginx, Apache HTTP server, Mongo, MySQL, MariaDB, Busybox, HAProxy, Wordpress etc..

Focus on building your apps and service stacks:

As I mentioned earlier, the operational complexity with containers and microservices lies in the complex orchestration scripts, dependency management, scaling and deployment. Oracle Container Cloud stands out in this respect - providing single-click deployment of the entire stack, built-in service discovery, quick import of existing Docker Run / Docker Compose YAML and one-click scaling - all from a single pane of glass.

When it's time to fly:

After successful deployment of Docker containers, it's highly critical to gain insight into your container apps and services.
Oracle Container Cloud offers simple yet powerful monitoring & management dashboards to monitor container/host performance, container health and event audit logs. To top it off, OCCS also maintains the running state of the app with self-healing application deployments.

Any modern cloud offering is never complete without REST APIs. OCCS offers a complete suite of REST APIs to configure, deploy, administer, monitor, orchestrate & scale your container apps/services.

In my next article here, I will walk you through on how to deploy a full-stack Apache Kafka service with Zookeeper in minutes.

Eager to get started? Get a free trial of the Oracle Container Cloud here and let me know your feedback.

Thursday, December 22, 2016

SOA 12c: Process Large Files Using Oracle MFT & File Adapter Chunked Read Option

SOA 12c adds a new ChunkedRead operation to the JCA File Adapter. Prior to this, users had to use a SynchRead operation and then edit the JCA file to achieve a "chunked read". In this blog, I will attempt to explain how to process a large file in chunks using the SOA File Adapter and some best practices around it. One of the major advantages of chunking large files is that it reduces the amount of data that is loaded in memory and makes efficient use of the translator resources.

"File Processing" means, reading, parsing and translating the file contents. If you just want to move/transfer a file consider using MFT for best performance, efficiency & scalability.

In this example, MFT gets a large customer records file from a remote SFTP location and sends it to SOA layer for further processing. MFT configuration is pretty straight-forward and is out of scope in this entry. For more info on Oracle MFT read here.

SOA 12c offers tight integration with Oracle MFT through the simple to use MFT adapter. If the MFT adapter is configured as a service, MFT can directly pass the file either inline or as a reference to the SOA process. If configured as a reference, it enables a SOA process to leverage MFT to transfer a file.

MFT also provides a bunch of useful file metadata info (target file name, directory, file size etc..) as part of the MFT header SOAP request.
Create a File Adapter:

Drag & drop a File Adapter to the external references swimlane of our SOA composite. Follow the instructions in the wizard to complete the configuration as shown below. Ensure that you choose the "Chunked Read" operation and define a chunk size - this is the number of records that will be read in each iteration. For example, if you have 500 records and a chunk size of 100, the adapter will read the file in 5 chunks.

You will have to create an NXSD schema which can be generated with the sample flat file. The file adapter uses the NXSD to read the flat file and also convert it into XML format.

Implementing the BPEL Process:

Now, create a BPEL process using the BPEL 2.0 specification [This is the default option].
As a best practice, ensure the BPEL process is asynchronous - this will ensure that the "long running" BPEL process doesn't hog threads.

In this case, since we are receiving a file from MFT, we will choose "No Service" template to create a BPEL process with no interface. We will define this interface later with the MFT adapter.

Create MFT Adapter:

Drag and drop an MFT adapter to the "Exposed Services" swimlane of your SOA composite application, provide a name and choose "Service". Now, wire the MFT Adapter service and File Adapter reference to the BPEL process we created. Your SOA composite should look like below;

Processing large file in chunks:

In order to process the file in chunks, the BPEL process invoke that triggers the File Adapter must be placed within a while loop. During each iteration, the file adapter uses the property header values to determine where to start reading.

At a minimum, the following are the JCA adapter properties that must be set;

jca.file.FileName : Send/Receive file name. This property overrides the adapter configuration. Very handy property to set / get dynamic file names
jca.file.Directory : Send/Receive directory location. This property overrides the adapter configuration
jca.file.LineNumber : Set/Get line number from which the file adapter must start processing the native file
jca.file.ColumnNumber : Set/Get column number from which the file adapter must start processing the native file
jca.file.IsEOF : File adapter returns this property to indicate whether end-of-file has been reached or not

Apart from the above, there are 3 other properties that help with error management & exception handling.

jca.file.IsMessageRejected : Returned by the file adapter if a message is rejected (non-conformance to the schema/not well formed)
jca.file.RejectionReason : Returned by the file adapter in conjunction with the above property. Reason for the message rejection
jca.file.NoDataFound : Returned by the file adapter if no data is found to be read

In the BPEL process "Invoke" activity, only jca.file.FileName and jca.file.Directory properies are available to choose from the properties tab. We will have to configure the other properties manually.

First, let's create a bunch of BPEL variables to hold these properties. For simplicity, just create all variables with a simple XSD string type.

Let's now configure the file adapter properties.

For input, we must first send filename, directory, line number and column number to the file adapter, so the first chunked read can happen. From the return properties (output), we will receive the new line number, column number, end-of-file properties which can be fed back to the adapter within a while loop.

Click on the "source" tab in the BPEL process and configure the following properties. Syntax shown below is for BPEL 2.0 spec, since we built the BPEL process based on BPEL 2.0.

Note: In BPEL 1.1 specification, the syntax was bpelx:inputProperties & bpelx:outputProperties.
Drag & drop an assign activity before the while loop to initialize the variables for the first time the file is read (first chunk) - since we know the first chunk of data will start at line 1 and column 1.

lineNumber -> 1
columnNumber -> 1
isEOF -> 'false'

For the while loop condition, the file adapter must be invoked until end-of-file is reached - i.e., loop while isEOF equals 'false'.

Within the while loop, drag & drop another assign activity to re-assign file adapter properties.

returnIsEOF -> isEOF
returnLineNumber -> lineNumber
returnColumnNumber -> columnNumber

This ensures that in the next iteration, the file adapter starts fetching records where the previous one ended. For example, if you have a file with 500 records and a chunk size of 100, returnLineNumber will have a value of 101 after the first iteration - so the file adapter starts reading the file from line number 101 instead of starting over.
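Outside of BPEL, the same chunked-read pattern can be sketched in plain Python (illustrative only - in SOA, the JCA adapter does this bookkeeping via the jca.file.* properties):

```python
def read_chunks(path, chunk_size):
    """Yield the file's records in chunks, tracking a line pointer the
    way the file adapter tracks jca.file.LineNumber / jca.file.IsEOF."""
    with open(path) as f:
        records = f.read().splitlines()
    line_number = 1            # first chunk starts at line 1
    is_eof = False
    while not is_eof:          # invoke the "adapter" until end-of-file
        chunk = records[line_number - 1 : line_number - 1 + chunk_size]
        line_number += len(chunk)  # the "returnLineNumber" fed back in
        is_eof = line_number > len(records)
        yield chunk

# A 500-record file with chunk size 100 yields 5 chunks; after the
# first chunk the line pointer is 101, exactly as described above.
```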

Your BPEL process must look like this;

We now have a BPEL process that receives the file reference from MFT and reads the large file in chunks.

Further processing like data shaping, transformation can be done from within the while loop.

Thursday, November 17, 2016

SOA 12c RCU: Oracle XE 11g TNS listener does not currently know of SID

Recently, I installed Oracle XE 11g database on my windows machine to host my SOA 12c RCU.
Note: Although XE is not a certified database for SOA 12c, it works just fine for development purposes.

Strangely enough, my RCU utility was unable to connect to the database instance. I kept getting the error "Unable to connect to the DB. Service not available".
I was pretty sure that all my connect parameters were correct.

Also, worth noting is that, I couldn't connect to the DB apex application running @

The first suspicion was the service name, as sometimes during installation the domain name gets appended to the service name - e.g., instead of orcl, it might be registered as orcl.localdomain.

A quick look at the listener.ora file revealed that the default service name was indeed XE.

However, when I ran the lsnrctl status command, I could see that the XE service was not listed.

Default Service           XE
Listener Parameter File   C:\oraclexe\app\oracle\product\11.2.0\server\network\admin\listener.ora
Listener Log File         C:\oraclexe\app\oracle\diag\tnslsnr\SATANNAM-US\listener\alert\log.xml
Listening Endpoints Summary...
Services Summary...
Service "CLRExtProc" has 1 instance(s).
  Instance "CLRExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully.

This happens because the listener hasn't registered the XE service properly. In my case, restarting the database and listener services didn't help. Remember, as a best practice the listener should always be started before the database, so the database can register its services with it.

The fix is to manually instruct the database to register the XE service. To do this, log in to sqlplus as sysdba and issue the following commands.

> sqlplus / as sysdba

Connected to:
Oracle Database 11g Express Edition Release - 64bit Production

SQL> alter system set LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))' scope=both;
SQL> alter system register;

Exit sqlplus and restart your OracleServiceXE and listener services.
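If you find yourself checking listener registration often, the lsnrctl status output can be parsed mechanically. This is a small, hypothetical Python helper (not an Oracle tool) that maps each service in the output to the status of its first listed instance:

```python
import re

def parse_lsnrctl_services(output):
    """Map each service name in `lsnrctl status` output to the
    status (e.g. READY, UNKNOWN) of its first listed instance."""
    services = {}
    current = None
    for line in output.splitlines():
        line = line.strip()
        # Lines like: Service "xe" has 1 instance(s).
        m = re.match(r'Service "([^"]+)" has \d+ instance\(s\)\.', line)
        if m:
            current = m.group(1)
            continue
        # Lines like: Instance "xe", status READY, has 1 handler(s)...
        m = re.match(r'Instance "[^"]+", status (\w+),', line)
        if m and current:
            services.setdefault(current, m.group(1))
    return services
```

With the output shown below, `parse_lsnrctl_services(output).get("xe") == "READY"` confirms the service is registered.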

Now, lsnrctl status command gives the following output;


Default Service           XE
Listener Parameter File   C:\oraclexe\app\oracle\product\11.2.0\server\network\admin\listener.ora
Listener Log File         C:\oraclexe\app\oracle\diag\tnslsnr\SATANNAM-US\listener\alert\log.xml
Listening Endpoints Summary...
Services Summary...
Service "CLRExtProc" has 1 instance(s).
  Instance "CLRExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
  Instance "xe", status READY, has 1 handler(s) for this service...
Service "xe" has 1 instance(s).
  Instance "xe", status READY, has 1 handler(s) for this service...
The command completed successfully.

You can see that the XE service is now registered and ready. Also note that HTTP port 8080 is up and running, meaning you can now successfully access the APEX URL.

Monday, November 7, 2016

Process Cloud Service (PCS) Integration Options

Process is ubiquitous - be it SaaS process extensions, automation of a manual process, gaining visibility into a process, or simply eliminating human errors.

With a fully visual, browser-based, no-IDE platform that runs in the cloud, Process Cloud Service lends itself as a simple yet powerful tool for citizen developers and LOB users alike to rapidly automate their business processes with little to no dependency on IT/DevOps. Cloud platform (PaaS) offerings such as Process Cloud Service and Integration Cloud Service enable modern enterprises that leverage a range of SaaS applications to extend, automate and integrate back with on-prem systems.

Outside of its own instance data, a business process also needs data from external data sources. Process Cloud Service offers 3 options to seamlessly integrate with external systems / services;

1) SOAP Web Services
2) REST Services
3) ICS (Integration Cloud Service)

To invoke external services using a Service Activity within a business process, we must first create a connector, available under the Integrations section in your process composer.

Out of the box, Process Cloud Service allows connectivity to external services through SOAP / REST protocols. For any other type of integration - e.g., database, file, Oracle/3rd party apps - you have 2 options;

1) Expose them as SOAP/REST APIs either through a middle tier or using natively available options (eg., APEX ORDS for Database) and call them directly from PCS
2) Use Integration Cloud Service (ICS) to quickly interface your target data source as SOAP/REST using a range of technology, application and SaaS adapters
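As a sketch of option 1, a database table exposed through ORDS becomes a plain REST resource that PCS (or any other client) can call. The host, schema alias and resource below are hypothetical; the request is only constructed, never sent:

```python
from urllib import request, parse

# Hypothetical ORDS base URL exposing an ORDERS table as REST;
# host, schema alias ("hr") and module name are illustrative only.
BASE = "https://dbhost.example.com/ords/hr/orders/"

def build_order_lookup(order_id):
    """Build (but do not send) a GET request for a single order."""
    url = BASE + parse.quote(str(order_id))
    return request.Request(url, headers={"Accept": "application/json"})
```

In PCS, the REST connector would be configured with this same base URL, resource and verb; the point is that once ORDS fronts the table, the database looks like any other REST service.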


1) SOAP Integration

With this integration option, you can connect to any SOAP web service that is accessible over the internet. You can either upload a WSDL definition or use a SOAP URL directly.
If you use a URL, notice that all the referenced schema (XSD) files are also imported automatically.

You also have an option to configure the "Read Timeout", "Connection Timeout" and WS-Security parameters for the service.


2) REST Integration

Process Cloud Service offers extensive support for integrating with REST APIs. An intuitive wizard guides you through configuring REST-based services, including the various HTTP verbs, resources and request/response payloads.

3) ICS Integration

Process Cloud Service (PCS) provides tight-integration to Integration Cloud Service (ICS) among other PaaS / IaaS services such as Documents Cloud Service, Business Intelligence Cloud Service, Storage Cloud and Notification Service.

All it requires is a one-time configuration in the PCS workspace; then, while modeling a process, the service connector displays all ICS integrations to choose from.

With all these different integration options, Process Cloud Service not only delivers rapid process automation but also offers extensive connectivity to external systems and services.