Wednesday, February 8, 2017

Zero-2-Eventing in minutes: Dockerize Apache Kafka on Oracle Container Cloud

With extreme scaling, fault tolerance, replication, parallelism, real-time streaming and load balancing, Apache Kafka is arguably the most widely used distributed messaging platform today.

A few weeks ago, I was working with one of my customers on their enterprise cloud strategy. They are one of the largest retail brands in the US. As part of their rationalization exercise and "Pivot to the Cloud" strategy, their Kafka event hub had to be containerized and deployed on cloud infrastructure.

The idea of this blog post is to walk you through running a full-stack, Docker-based Apache Kafka + Zookeeper cluster on Oracle Container Cloud in a matter of minutes, without having to deal with complex infrastructure/network setup or Docker toolset installs, upgrades and maintenance.

If you are new to Oracle Container Cloud, please refer to my earlier blog here.

If you would like to get a feel of the Oracle Container Cloud service, head out to https://cloud.oracle.com/en_US/tryit and request a fully-featured instance.

Once you are logged in as a cloud administrator, click on Container Cloud Service in the list of services on the cloud dashboard.

In the Oracle Container Cloud Service console, click "Create Service" to create a new Container Cloud Service instance.

Define the service details on the "Create Service" page. Click Next and Confirm.

Give it a few minutes and you will find a Container Manager Node and Worker Nodes provisioned for use. Click on the service to explore the service details.

Click on the hamburger menu on the container service and choose "Container Console" to open the service administrator console. Login using the administrator user (provided during the service creation).

If you have run Apache Kafka on Docker before, you will be aware of the dozens of publicly available images.

If you are looking for a simple single-container Kafka service, where Zookeeper and the Kafka broker co-exist in one container, I have found spotify/kafka easy to set up and use (a quick example follows below).
For the more complex multi-tier setup, where Zookeeper and the Kafka brokers run on dedicated container nodes, wurstmeister/kafka is the most popular option.
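
If you go the single-container route, a minimal run looks something like this (a sketch based on my reading of the spotify/kafka README - verify the environment variable names against the image documentation):

# Zookeeper (2181) and the Kafka broker (9092) run inside one container
docker run -d -p 2181:2181 -p 9092:9092 \
  --env ADVERTISED_HOST=<your-host-ip> --env ADVERTISED_PORT=9092 \
  spotify/kafka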

Since I want to demonstrate how to provision a production-grade Kafka stack on Oracle Container Cloud, we will go with the wurstmeister/kafka Docker image.

Go to the Services section and click the "New Service" button. You can see that OCCS offers multiple options to define a service container;

  1. Builder: For the not-so-tech-savvy users - simply enter the service details and OCCS builds the Docker commands for you
  2. Docker Run: If you are a Docker pro, you can head straight to this tab and enter your Docker Run command directly. This is also a great option if you already have an existing Docker setup, since you can simply copy-paste your Docker Run command
  3. YAML: For the YAML lovers, you can also define your service using YAML constructs

The cool thing is that you can use any one, or a combination, of these options to define and create your container service. Any change you make in one of them is reflected immediately in the others, automagically.

Let's use the "Builder" tab to define our first Kafka service.

Service Name: Provide a name for the Kafka service. Notice that the service ID is automatically generated; it is used to uniquely identify our service.

Service Description: Describe the service. E.g., My Kafka Event Hub.
Notice that this automatically creates an environment variable, "occs:description".

Scheduler: Determines how & where containers will be provisioned across hosts.

Availability: Define availability of the service based on pool, host or tags.

Image: Enter "wurstmeister/kafka" (without quotes).
Since this is a public image available on Docker Hub, OCCS can pull it automatically. Remember, you can also pull from any private Docker registries you might have. If so, head over to the "Registries" section on the main console to add your registry.

Command: Any commands you want run on container startup go here.

In the "Available Options" panel, choose "Ports". This adds a new "Ports" section to your Builder panel. Click "Add". This defines the port on which our Kafka service will run.
Leave the IP field empty (it will default to the container IP of whichever host the container lands on, determined dynamically). Enter 9092 for the host port, 9092 for the container port, and choose TCP for the protocol.
Your first Kafka service should look like below.

Now, head over to the "Docker Run" and "YAML" tabs and notice the service definition created automagically in the background while we were defining the service. Click "Save" to exit.
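
For reference, the generated Docker Run command for our service should look roughly like this (a sketch; the exact flags OCCS emits may differ):

docker run -d -p 9092:9092/tcp wurstmeister/kafka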

Let's now create another definition for our Zookeeper service. Kafka uses Zookeeper for cluster and member management.
Follow the same steps as earlier to create the new Zookeeper service which would run on port 2181.
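
Assuming the companion wurstmeister/zookeeper image (any standard Zookeeper image listening on 2181 works), the Docker Run equivalent is simply:

docker run -d -p 2181:2181/tcp wurstmeister/zookeeper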

Your Zookeeper service should look like below. Save & Exit.

Now that we have our Kafka and Zookeeper services ready, time to link them up together for our full-fledged Kafka stack on cloud.

Go to "Stacks" section and click "New Stack".
Note: This is the Docker Compose equivalent. If you already have Docker Compose YAML files, you can simply copy-paste them here to stack up your services.

Provide the new stack a name: MyKafkaStack (This would create a stack id automatically to uniquely identify the stack).

Notice all the services displayed on the right under the "Available Services" section.

Similar to the "Service" definition, "Stacks" offer 2 modes to define and create stacks: either drag & drop services on the UI, or click on "Advanced Editor" to wire services together using YAML constructs. Even better, use a combination of both.
Let's use a combination.

Let's drag & drop MyKafka and MyZookeeper services on to the "Stacks" screen.

Click on "Advanced Editor" to open the YAML composer. Immediately notice that the YAML script is generated based on the service we composed on the UI (drag & drop).

The Kafka service requires a few environment variables to be set to expose itself for external connectivity. Add the following environment variables to the YAML editor under the "MyKafka" service.

- "KAFKA_ADVERTISED_HOST_NAME={{hostip_for_interface .HostIPs \"public_ip\"}}"
- KAFKA_ADVERTISED_PORT=9092
- "KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181"

Note: We want the stack to run on "Any Host" irrespective of IP address changes. The expression above will fetch the IP address of the host dynamically at run-time. You can leverage the "Tips & Tricks" option in the editor to see some tips & examples.

Define the links under the "MyKafka" service to link it to our Zookeeper container, using the YAML construct below;

links:
    - "MyZookeeper:zookeeper"
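
Putting it all together, the stack YAML should end up looking roughly like this (a sketch following Docker Compose conventions; the version header and exact layout OCCS generates may differ):

version: 2
services:
  # service names match the ones we created above
  MyKafka:
    image: "wurstmeister/kafka"
    ports:
      - "9092:9092/tcp"
    environment:
      - "KAFKA_ADVERTISED_HOST_NAME={{hostip_for_interface .HostIPs \"public_ip\"}}"
      - KAFKA_ADVERTISED_PORT=9092
      - "KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181"
    links:
      - "MyZookeeper:zookeeper"
  MyZookeeper:
    image: "wurstmeister/zookeeper"
    ports:
      - "2181:2181/tcp"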

Ensure that your stack editor looks like below and click Done, then Save to exit the stack editor.

Note: A link is shown on the UI indicating the Kafka service is "linked" to the Zookeeper.

OCCS offers a convenient single-click deployment of stacks. Click "Deploy" next to the "MyKafkaStack" to deploy both Kafka and Zookeeper services.

On successful deployment, you should see 2 healthy services running. Note that OCCS allows you to define health checks on containers.

You can also define "webhooks" for your Continuous Integration (CI) / Continuous Delivery (CD) capabilities.

Let's quickly test our new Kafka stack. I am using my local Kafka command-line client to test the Kafka service: first create a new topic, then start the producer and consumer scripts in separate terminals.
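
The test looks like this (replace <occs-host> with the public IP of the worker node running your containers; paths assume a local Kafka 0.10.x distribution):

# create a topic against the Zookeeper service
bin/kafka-topics.sh --create --zookeeper <occs-host>:2181 \
  --replication-factor 1 --partitions 1 --topic test

# produce a few messages to the broker
bin/kafka-console-producer.sh --broker-list <occs-host>:9092 --topic test

# consume them back, in a separate terminal
bin/kafka-console-consumer.sh --bootstrap-server <occs-host>:9092 \
  --topic test --from-beginning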

We just deployed an Apache Kafka + Zookeeper Docker stack on Oracle Cloud. You can now start scaling your Kafka cluster - add more containers/hosts, scale up/down dynamically - and use this as your cloud-based event hub.

In my next blog, we will see how to rapidly deploy a LAMP stack application on Oracle Container Cloud. Stay Tuned!

Tuesday, February 7, 2017

Simplify Cloud Native, Microservices DevOps with Oracle Container Cloud

"Containers" are becoming the new normal and an indispensable part of cloud native / microservices development. If you are new to the concept of containers, open a new tab and google. Container benefits are out of scope of this article.
With respect to cloud-native development, containers provide DevOps two huge benefits;

  • Robust foundation for microservices "style" architecture & scalability
  • Environment parity (Dev-Test-Prod) and seamless hybrid deployment

All things considered, containers are great for "Dev". Are they good for "Ops"?

With even more services to manage, monitor & maintain, containers certainly pose some challenges unless you have a robust, easy-to-provision management and monitoring platform.

Oracle Container Cloud Service aims to solve exactly that problem. With comprehensive tooling to compose, deploy, orchestrate and manage container-based apps, Oracle Container Cloud enables rapid creation, deployment & management of enterprise-grade container infrastructure.

Let's take a peek under the hood;

Spin-up or Tear-down containers at-will:

Whether you are looking to quickly setup an infrastructure for testing your container apps or setting up a production-grade container infrastructure to run your apps, you can do it all with just a few clicks.

Oracle Container Cloud automatically provisions a manager node, which acts as the "container management" server, and gives you the ability to configure the shape and size of the hosts on which your containers run - called "worker nodes".

Group or Assign hosts to different pools for resource segregation using the "Resource Pools" feature.

In addition, the "Tags" feature allows you to tag your resource pools, hosts, services and deployments. Tags provide fine-grained control over which hosts/resource pools a service/stack can be deployed on.

Discover & Manage DNS information of all your running docker containers from the "Service Discovery" page.

BYOD (Bring Your Own Docker containers) or Start with example stacks:

Oracle Container Cloud links to the public Docker Hub registry out-of-the-box, where you can pull from thousands of Docker images. Whether you have a public Docker repository or a private registry, you can add it to the container cloud Docker registry list.

If you are new to Docker containers, you can jumpstart with some of the in-built example services and stacks - Nginx, Apache HTTP Server, Mongo, MySQL, MariaDB, Busybox, HAProxy, Wordpress, etc.

Focus on building your apps and service stacks:

As I mentioned earlier, the operational complexity with containers and microservices lies in the complex orchestration scripts, dependency management, scaling and deployment. Oracle Container Cloud stands out in this respect - providing single-click deployment of an entire stack, built-in service discovery, quick import of existing Docker Run commands / Docker Compose YAML, and one-click scaling - all from a single pane of glass.

When it's time to fly:

After successfully deploying Docker containers, it is critical to gain insight into your container apps and services.
Oracle Container Cloud offers simple yet powerful monitoring & management dashboards to monitor container/host performance, container health and event audit logs. To top it off, OCCS also maintains the desired running state of the app with self-healing application deployments.

Any modern cloud offering is never complete without REST APIs. OCCS offers a complete suite of REST APIs to configure, deploy, administer, monitor, orchestrate & scale your container apps/services.

In my next article here, I will walk you through how to deploy a full-stack Apache Kafka service with Zookeeper in minutes.

Eager to get started? Get a free trial of the Oracle Container Cloud here and let me know your feedback.

Thursday, December 22, 2016

SOA 12c: Process Large Files Using Oracle MFT & File Adapter Chunked Read Option

SOA 12c adds a new ChunkedRead operation to the JCA File Adapter. Prior to this, users had to use a SynchRead operation and then edit the JCA file to achieve a "chunked read". In this blog, I will attempt to explain how to process a large file in chunks using the SOA File Adapter and some best practices around it. One of the major advantages of chunking large files is that it reduces the amount of data that is loaded in memory and makes efficient use of the translator resources.

"File Processing" means, reading, parsing and translating the file contents. If you just want to move/transfer a file consider using MFT for best performance, efficiency & scalability.

In this example, MFT gets a large customer records file from a remote SFTP location and sends it to SOA layer for further processing. MFT configuration is pretty straight-forward and is out of scope in this entry. For more info on Oracle MFT read here.

SOA 12c offers tight integration with Oracle MFT through the simple to use MFT adapter. If the MFT adapter is configured as a service, MFT can directly pass the file either inline or as a reference to the SOA process. If configured as a reference, it enables a SOA process to leverage MFT to transfer a file.

MFT also provides a bunch of useful file metadata (target file name, directory, file size, etc.) as part of the MFT SOAP request header.

Create a File Adapter:

Drag & drop a File adapter to the external references swimlane of our SOA composite. Follow the instructions in the wizard to complete the configuration as shown below. Ensure that you choose the "Chunked Read" operation and define a chunk size - this is the number of records that will be read in each iteration. For e.g., if you have 500 records and a chunk size of 100, the adapter would read the file in 5 chunks.
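
Behind the scenes, the wizard generates a .jca file for the reference; it should look roughly like this (adapter name, directory and file name below are illustrative):

<adapter-config name="ReadCustomerFile" adapter="File Adapter"
    xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/FileAdapter"/>
  <endpoint-interaction portType="ChunkedRead_ptt" operation="ChunkedRead">
    <interaction-spec className="oracle.tip.adapter.file.outbound.ChunkedInteractionSpec">
      <!-- illustrative values; the wizard fills these in from your inputs -->
      <property name="PhysicalDirectory" value="/data/in"/>
      <property name="FileName" value="customers.txt"/>
      <property name="ChunkSize" value="100"/>
    </interaction-spec>
  </endpoint-interaction>
</adapter-config>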

You will have to create an NXSD schema, which can be generated from a sample flat file using the Native Format Builder in the wizard. The file adapter uses the NXSD to read the native flat file and translate it into XML.
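
For illustration, an NXSD for a simple comma-delimited customer file might look like this (namespace and element names are made up for the example):

<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            targetNamespace="http://example.com/customers"
            elementFormDefault="qualified"
            nxsd:version="NXSD" nxsd:stream="chars" nxsd:encoding="UTF-8">
  <xsd:element name="Customers">
    <xsd:complexType>
      <xsd:sequence>
        <!-- one Customer element per line in the flat file -->
        <xsd:element name="Customer" maxOccurs="unbounded">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element name="Name" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy=","/>
              <xsd:element name="Email" type="xsd:string"
                           nxsd:style="terminated" nxsd:terminatedBy="${eol}"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>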

Implementing the BPEL Process:

Now, create a BPEL process using the BPEL 2.0 specification [this is the default option].
As a best practice, ensure the BPEL process is asynchronous - this ensures that the "long running" BPEL process doesn't hog threads.

In this case, since we are receiving a file from MFT, we will choose the "No Service" template to create a BPEL process with no interface. We will define this interface later with the MFT adapter.

Create MFT Adapter:

Drag and drop an MFT adapter to the "Exposed Services" swimlane of your SOA composite application, provide a name and choose "Service". Now, wire the MFT Adapter service and File Adapter reference to the BPEL process we created. Your SOA composite should look like below;

Processing large file in chunks:

In order to process the file in chunks, the BPEL invoke activity that triggers the File Adapter must be placed within a while loop. During each iteration, the file adapter uses the property header values to determine where to start reading.

At a minimum, the following are the JCA adapter properties that must be set;

jca.file.FileName : Send/Receive file name. This property overrides the adapter configuration. Very handy property to set / get dynamic file names
jca.file.Directory : Send/Receive directory location. This property overrides the adapter configuration
jca.file.LineNumber : Set/Get line number from which the file adapter must start processing the native file
jca.file.ColumnNumber : Set/Get column number from which the file adapter must start processing the native file
jca.file.IsEOF : File adapter returns this property to indicate whether end-of-file has been reached or not

Apart from the above, there are 3 other properties that help with error management & exception handling.

jca.file.IsMessageRejected : Returned by the file adapter if a message is rejected (non-conformance to the schema/not well formed)
jca.file.RejectionReason : Returned by the file adapter in conjunction with the above property. Reason for the message rejection
jca.file.NoDataFound : Returned by the file adapter if no data is found to be read

In the BPEL process "Invoke" activity, only the jca.file.FileName and jca.file.Directory properties are available to choose from the properties tab. We will have to configure the other properties manually.

First, let's create a bunch of BPEL variables to hold these properties. For simplicity, just create all variables with a simple XSD string type.

Let's now configure the file adapter properties.

For input, we must first send the file name, directory, line number and column number to the file adapter so the first chunked read can happen. From the return (output) properties, we receive the new line number, column number and end-of-file flag, which are fed back to the adapter within the while loop.

Click on the "source" tab in the BPEL process and configure the following properties. The syntax shown below is for the BPEL 2.0 spec, since we built the BPEL process on BPEL 2.0.
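
A sketch of how the invoke can look in the source view, using the variables we create in this post (partner link, invoke and variable names are illustrative):

<invoke name="InvokeChunkedRead" partnerLink="ChunkedRead"
        portType="ns1:ChunkedRead_ptt" operation="ChunkedRead"
        inputVariable="fileReadInput" outputVariable="fileReadOutput">
  <bpelx:toProperties>
    <!-- tell the adapter where to start reading this iteration -->
    <bpelx:toProperty name="jca.file.FileName" variable="fileName"/>
    <bpelx:toProperty name="jca.file.Directory" variable="directory"/>
    <bpelx:toProperty name="jca.file.LineNumber" variable="lineNumber"/>
    <bpelx:toProperty name="jca.file.ColumnNumber" variable="columnNumber"/>
  </bpelx:toProperties>
  <bpelx:fromProperties>
    <!-- read back where the adapter stopped, to feed the next iteration -->
    <bpelx:fromProperty name="jca.file.LineNumber" variable="returnLineNumber"/>
    <bpelx:fromProperty name="jca.file.ColumnNumber" variable="returnColumnNumber"/>
    <bpelx:fromProperty name="jca.file.IsEOF" variable="returnIsEOF"/>
  </bpelx:fromProperties>
</invoke>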

Note: In the BPEL 1.1 specification, the equivalent syntax is bpelx:inputProperty & bpelx:outputProperty.

Drag & drop an assign activity before the while loop to initialize the variables for the first read of the file (first chunk) - since we know the first chunk of data will start at line 1 and column 1.

lineNumber -> 1
columnNumber -> 1
isEOF -> 'false'

For the while loop condition: the file adapter must be invoked until end-of-file is reached, so enter the following loop condition;
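
Assuming the variable names above (isEOF holds the end-of-file flag), the BPEL 2.0 XPath condition is simply:

$isEOF = 'false'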

Within the while loop, drag & drop another assign activity to re-assign file adapter properties.

returnIsEOF -> isEOF
returnLineNumber -> lineNumber
returnColumnNumber -> columnNumber

This will ensure that in the next iteration, the file adapter starts fetching records from where the previous read ended. For e.g., if you have a file with 500 records and a chunk size of 100, returnLineNumber will have a value of 101 after the first iteration completes. This ensures the file adapter starts reading the file from line number 101 instead of starting over.

Your BPEL process must look like this;

We now have a BPEL process that receives the file reference from MFT and reads the large file in chunks.

Further processing, like data shaping and transformation, can be done from within the while loop.

Thursday, November 17, 2016

SOA 12c RCU: Oracle XE 11g TNS listener does not currently know of SID

Recently, I installed the Oracle XE 11g database on my Windows machine to host my SOA 12c RCU schemas.
Note: Although XE is not a certified database for SOA 12c, it works just fine for development purposes.

Strangely enough, the RCU utility was unable to connect to the database instance. I kept getting the error "Unable to connect to the DB. Service not available".
I was pretty sure that all my connect parameters were correct.

Also worth noting: I couldn't connect to the DB APEX application running @ http://127.0.0.1:8080/apex/f?p=4950

My first suspicion was to check the service name, as sometimes during installation the domain name gets appended to the service name; e.g., instead of orcl, it might be registered as orcl.localdomain.

A quick look at the listener.ora file revealed that the default service name was indeed XE.

However, when I ran the lsnrctl status command, I could see that the XE service was not listed.

Default Service           XE
Listener Parameter File   C:\oraclexe\app\oracle\product\11.2.0\server\network\admin\listener.ora
Listener Log File         C:\oraclexe\app\oracle\diag\tnslsnr\SATANNAM-US\listener\alert\log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1ipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))
Services Summary...
Service "CLRExtProc" has 1 instance(s).
  Instance "CLRExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully.

This is due to the fact that the listener hadn't registered the XE service properly. In my case, restarting the database and listener services didn't help. Remember, as a best practice, the listener must always be started before the database for the database to register its services.

The fix is to manually instruct the database to register the XE service. To do this, login to sqlplus as sysdba and issue the following commands.

> sqlplus / as sysdba

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production

SQL> alter system set LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))' scope=both;
SQL> alter system register;

Exit sqlplus and restart your OracleServiceXE and listener services.

Now, lsnrctl status command gives the following output;

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))

Default Service           XE
Listener Parameter File   C:\oraclexe\app\oracle\product\11.2.0\server\network\admin\listener.ora
Listener Log File         C:\oraclexe\app\oracle\diag\tnslsnr\SATANNAM-US\listener\alert\log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1ipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=8080))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "CLRExtProc" has 1 instance(s).
  Instance "CLRExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
  Instance "xe", status READY, has 1 handler(s) for this service...
Service "xe" has 1 instance(s).
  Instance "xe", status READY, has 1 handler(s) for this service...
The command completed successfully.

You can see that the XE service is now registered and ready. Also note that HTTP port 8080 is up and running - meaning you can now successfully access the APEX URL.

Monday, November 7, 2016

Process Cloud Service (PCS) Integration Options

Process is ubiquitous - be it SaaS process extensions, automation of a manual process, gaining visibility into a process or just eliminating human errors.

With a fully visual, browser-based, no-IDE platform that runs on the cloud, Process Cloud Service lends itself as a simple yet powerful tool for citizen developers and LOB users alike to rapidly automate their business processes with little to no dependency on IT/DevOps. Cloud platform (PaaS) offerings such as Process Cloud Service and Integration Cloud Service enable modern enterprises leveraging a range of SaaS applications to extend, automate and integrate back with on-prem systems.

Outside of their own instance data, business processes also need data from external data sources. Process Cloud Service offers 3 options to seamlessly integrate with external systems / services;

1) SOAP
2) REST
3) ICS (Integration Cloud Service)

To invoke external services using a Service Activity within a business process, we must first create a connector - available under the Integrations section in your process composer.

Out of the box, Process Cloud Service allows connectivity to external services through SOAP / REST protocols. For any other type of integration - e.g., database, file, Oracle/3rd-party apps - you have 2 options (an example follows the list);

1) Expose them as SOAP/REST APIs, either through a middle tier or using natively available options (e.g., APEX ORDS for databases), and call them directly from PCS
2) Use Integration Cloud Service (ICS) to quickly interface your target data source as SOAP/REST using a range of technology, application and SaaS adapters
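
As an example of option 1, if you AutoREST-enable a table through ORDS, the PCS REST connector can then consume an endpoint like the one below (host, schema alias and table name are illustrative):

# ORDS AutoREST endpoint for an EMPLOYEES table in a schema aliased "hr"
curl https://your-db-host/ords/hr/employees/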

1) SOAP

With this integration option, you can connect to any SOAP web service that is accessible over the internet. You have the option to either upload a WSDL definition or use a SOAP URL directly.
If you use a URL, notice that all the referenced schema (XSD) files are also imported automatically.

You also have an option to configure the "Read Timeout", "Connection Timeout" and WS-Security parameters for the service.

2) REST

Process Cloud Service offers extensive support for integrating and connecting to REST APIs. An intuitive wizard guides you through the configuration of REST-based services, including the various HTTP verbs, resources and request/response payloads.

3) ICS Integration

Process Cloud Service (PCS) provides tight-integration to Integration Cloud Service (ICS) among other PaaS / IaaS services such as Documents Cloud Service, Business Intelligence Cloud Service, Storage Cloud and Notification Service.

All it requires is a one-time configuration in the PCS workspace; while modeling a process, the service connector then displays all ICS integrations to choose from.

With all these different integration options, Process Cloud Service not only delivers rapid process automation but also offers extensive connectivity to external systems and services.