Thursday, 27 October 2016

Learn Functional Reactive Programming with Infinispan, Elm and Node.js at Soft-Shake conference

Tomorrow, Friday 28th October, I'll be speaking at the Soft-Shake conference (Geneva, Switzerland) about writing apps in a functional reactive programming style with Infinispan, Elm and Node.js. If you're interested in the topic and live in the area, make sure you come to my talk!

To find out more, head to the Soft-Shake site, where you can find the exact details about the rest of the programme, the location and so on.


Thursday, 13 October 2016

OpenShift and Node Affinity

OpenShift (and Kubernetes) has a great feature called Affinity. In some scenarios, it can be beneficial to configure it together with Infinispan's own topology hints.

Before we start... this tutorial is heavily based on our previous blog posts about deploying Infinispan on OpenShift and about the OpenShift Scheduler. It is highly recommended to read those articles before continuing with this one.

How does the Affinity feature work... in short?

In order to decide which node a given Pod should run on, OpenShift looks at so-called Predicates and Priority Functions. A Predicate must match the Node Selector configured in the DeploymentConfiguration, and the Priority Functions are responsible for choosing the best node for running the Pods.

Let's assume that we have a sample policy (similar to the one provided in the OpenShift manual) that uses site as a Predicate, along with rack and machine as Priority Functions. Now let's assume we have two nodes:
  • Node 1 - site=EU, rack=R1, machine=VM1
  • Node 2 - site=US, rack=R2, machine=VM2
And two DeploymentConfigurations with Node Selectors (these tell OpenShift on which nodes a given DeploymentConfiguration wishes to run) defined as follows:
  • DeploymentConfiguration 1 - site=EU, rack=R5, machine=VM5
  • DeploymentConfiguration 2 - site=JAP, rack=R5, machine=VM5
With the above example, only DeploymentConfiguration 1 will be scheduled (on Node 1), since its site matches the Predicate. The rack and machine parameters are not used in this case, because only a single node passed the Predicate.

Note that OpenShift's default configuration uses region (as a Predicate) and zone (as a Priority Function). However, it can be reconfigured very easily.
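For illustration, a scheduler policy matching the site/rack/machine example above might look roughly like the following. This is only a sketch following the OpenShift scheduler policy file format; the predicate and priority names are my own assumptions:

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "SiteAffinity",
     "argument": {"serviceAffinity": {"labels": ["site"]}}}
  ],
  "priorities": [
    {"name": "RackSpread", "weight": 1,
     "argument": {"serviceAntiAffinity": {"label": "rack"}}},
    {"name": "MachineSpread", "weight": 1,
     "argument": {"serviceAntiAffinity": {"label": "machine"}}}
  ]
}
```

The serviceAffinity predicate keeps Pods of a service on nodes sharing the given label, while the serviceAntiAffinity priorities spread them across different rack and machine label values.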

And I need it because....

Some OpenShift deployments might span multiple racks in a data center, or even multiple sites. It is important to tell Infinispan where the physical machines are located, as this allows it to choose better nodes for backing up your data (in distribution mode).

In fact, Infinispan uses exactly these three hints: site, rack and machine. The main goal is to avoid backing up data on the same host.
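For reference, these hints (Server Hinting) are set on the transport element of the Infinispan configuration. A minimal sketch, where the system-property names and defaults are assumptions:

```xml
<infinispan>
  <cache-container>
    <!-- topology hints picked up from system properties;
         property names here are placeholders -->
    <transport cluster="infinispan-cluster"
               site="${site:EU}"
               rack="${rack:R1}"
               machine="${machine:VM1}"/>
  </cache-container>
</infinispan>
```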

The implementation

The implementation is pretty straightforward but there are several gotchas. 

The first one is that OpenShift uses regions and zones by default, whereas Infinispan uses sites, racks and machines. The good news is that all three hints are optional, so you have two options: reuse the existing region and zone (mapping them to site and rack, for example), or adjust the OpenShift Scheduler settings. In my example I used the former approach.

The second one is the need to hardcode those parameters into the DeploymentConfiguration. Unfortunately, Node Selectors are not exposed through the Downward API, so there's no other way.

So let's have a look at our DeploymentConfiguration:

  • Line 26 - Zone default used as a rack
  • Line 27 - Region primary used as a site
  • Lines 57 - 59 - Node Selector for scheduling Pods
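The original listing is not reproduced in this archive, so here is a hypothetical excerpt illustrating the three spots above. The image name, the system-property names and the label values are all assumptions:

```yaml
spec:
  template:
    spec:
      containers:
      - name: infinispan-server
        image: jboss/infinispan-server
        env:
        # zone label "default" reused as the rack hint,
        # region label "primary" reused as the site hint
        - name: JAVA_OPTS
          value: "-Drack=default -Dsite=primary"
      nodeSelector:
        region: primary
        zone: default
```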


Combining the OpenShift Affinity Service with Infinispan Server Hinting allows you to optimize data distribution across the cluster. Keep in mind that your configuration might be totally different (the OpenShift Scheduler is highly configurable), but once you understand how it works, you can adjust the hinting strategy to your needs.

Happy Scaling!

Tuesday, 11 October 2016

Microservices with Wildfly Swarm and Infinispan

Everybody loves Microservices, right?

Today, each of us has a slightly different understanding of what Microservices are, but among all those definitions and attributes there's probably one thing that fits them all: they need to be simple.

So let's have a look at a practical example of creating a REST service with Infinispan as the storage, wired together using CDI. We will use Wildfly Swarm as the runtime platform.

Bootstrapping new project

A good way to start a new Wildfly Swarm project is to generate it with the project generator. The only requirement here is to add "JAX-RS with CDI" and "JPA" as dependencies.

The next step is to add the infinispan-embedded artifact. The final pom.xml should look like the following:
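The full pom.xml isn't reproduced in this archive; the Infinispan dependency itself is a single artifact, along the lines of (the version is an assumption):

```xml
<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan-embedded</artifactId>
  <version>9.0.0.Beta1</version>
</dependency>
```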

Writing some code

Infinispan CDI Extension will take care of bootstrapping Infinispan, so we can dive directly into JAX-RS code:
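The original listing is missing here, but a resource in this style might look roughly like the following. This is only a sketch: the class name, path and cache key/value types are assumptions, relying on the Infinispan CDI extension to inject a default cache:

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

import org.infinispan.Cache;

@ApplicationScoped
@Path("/cache")
public class CacheResource {

    // injected and bootstrapped by the Infinispan CDI extension
    @Inject
    private Cache<String, String> cache;

    @GET
    @Path("/{key}")
    public String get(@PathParam("key") String key) {
        return cache.get(key);
    }

    @PUT
    @Path("/{key}")
    public void put(@PathParam("key") String key, String value) {
        cache.put(key, value);
    }
}
```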

And that's it!

What's next?

If you'd like to have a look at the complete example, check out my repository. The code is based on a fresh build from the Infinispan master branch, which contains lots of improvements for CDI. You can build it yourself or just wait for 9.0.0.Beta1.

Have a lot of fun with Microservices!

Monday, 3 October 2016

Hotrod clients C++/C# 8.1.0.Alpha1 released!

Dear Infinispan community,

I'm pleased to announce that the 8.1.0.Alpha1 release is out!

You can feel the thrill of an Alpha release by downloading the unstable bits here:

We're trying to keep track of the 8.1 journey at this Jira URL:
Features list for 8.1
Feedback, proposals and hints are welcome!

The Infinispan Team

Friday, 2 September 2016

Configuration management on OpenShift, Kubernetes and Docker

When deploying Infinispan in Docker-based cloud environments, the most critical thing is how to manage configuration. In this blog post we'll explore some of the options.

Extending our Docker image

Creating your own Docker image based on jboss/infinispan-server is quite simple. First you need to prepare a configuration XML file; one is shipped with each Infinispan release. Go to the Infinispan download section and grab a server release corresponding to your chosen Docker image tag. After unpacking it, take the configuration (I use cloud.xml as a template) and introduce all the necessary changes.

Finally, build your image:
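The build command itself is missing from this archive; it might look like the following sketch, where the Dockerfile contents, file names and image tag are assumptions:

```shell
# Hypothetical Dockerfile extending the official image
cat > Dockerfile <<'EOF'
FROM jboss/infinispan-server:9.0.0.Beta1
ADD custom.xml /opt/jboss/infinispan-server/standalone/configuration/custom.xml
EOF

docker build -t my-infinispan-server .
```

You would then run the image pointing the server at the custom configuration; the exact argument form depends on the image's entrypoint.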

Now, that was quick, wasn't it?

Using ConfigMaps with OpenShift

If you're using OpenShift, there's a sophisticated tool for this called a ConfigMap. A ConfigMap can store a configuration file (or a whole configuration directory) and mount it somewhere inside a Pod.

Use the command below to create a ConfigMap based on a configuration file:
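The command is missing from this archive; it would be along these lines, where the ConfigMap and file names are assumptions:

```shell
oc create configmap infinispan-config --from-file=custom.xml
```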

Now create the Infinispan application based on the configuration below (you can use 'oc create -f <file.yaml>' for this):

  • (lines 50 - 52) - ConfigMap volume declaration
  • (lines 45 - 47) - Mounting configuration into /opt/jboss/infinispan-server/standalone/configuration/custom
  • (line 22) - bootstrapping the Infinispan with custom configuration (note there is no xml extension there)
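Since the YAML itself is not reproduced here, a hypothetical fragment covering those three spots might look like this (the names and the argument form are assumptions):

```yaml
spec:
  template:
    spec:
      containers:
      - name: infinispan-server
        image: jboss/infinispan-server
        # start the server with the custom configuration
        # (note: no .xml extension here)
        args: ["custom"]
        volumeMounts:
        - name: config
          mountPath: /opt/jboss/infinispan-server/standalone/configuration/custom
      volumes:
      - name: config
        configMap:
          name: infinispan-config
```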

Using ConfigMaps with Kubernetes

Kubernetes ConfigMaps work exactly the same way as in OpenShift.

The command below creates a ConfigMap:
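The command itself is missing here; on Kubernetes it would be the kubectl equivalent of the OpenShift one, with the ConfigMap and file names as assumptions:

```shell
kubectl create configmap infinispan-config --from-file=custom.xml
```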

The second step is to create a Deployment with ConfigMap:
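The Deployment listing is not reproduced in this archive; a hypothetical fragment, with names as assumptions, might look like this:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: infinispan-server
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: infinispan-server
    spec:
      containers:
      - name: infinispan-server
        image: jboss/infinispan-server
        volumeMounts:
        - name: config
          mountPath: /opt/jboss/infinispan-server/standalone/configuration/custom
      volumes:
      - name: config
        configMap:
          name: infinispan-config
```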


If you're using any Docker orchestration tool, have a look at the configuration facilities it provides. OpenShift and Kubernetes ConfigMaps are really great for this.

However, if you need fine-grained control, either extend our Docker image (this is the preferred way) or simply fork and modify it.

Happy configuring and scaling!

Thursday, 1 September 2016

Hotrod clients C++/C# 8.0.0.Final released!

Dear Infinispan community,
I'm glad to announce the Final release of the C++ and C# clients version 8.0.0.

You can find the download on the Infinispan web site:

Major new features for this release are:
  • queries
  • remote script execution
  • asynchronous operation (C++ only)
plus several minor and internal updates that partially fill the gap between C++/C# and the Java client.

Several posts about the 8 series of the C++/C# clients have already been published on this blog; you can revisit them by clicking through the list below.

The equivalent C# examples are collected here:


Friday, 26 August 2016

Infinispan queries with C++ client

With the C++ Hotrod client version 8, users can now run queries against Infinispan by calling the cache.query() method with the query string as an argument. The query response can contain a result set of entities, projections or aggregate values. The implementation follows the same design as the Java client and is based on Protobuf to define the entity model and to parse the result set.
The following is an example of C++ query code. Get the full project here.

//Client setup

This is the cache setup on the client side. Two caches are used:
  • the __protobuf_metadata cache, where the protobuf description of the model is stored;
  • the application cache, where the instances of the model's entities reside. 
Marshallers for these caches are based on the Protobuf methods SerializeToArray() and ParseFromArray(), which every Protobuf object provides. 

//Cache population

The classes that populate the cache are defined in Google Protobuf syntax. The example application uses the C++ representation generated by the protoc compiler. (In this example, this C++ code is provided; the way it can be generated is described here.)

//Queries execution
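The query listing itself is not reproduced in this archive. A hedged sketch of what the query step might look like follows; the setup calls mirror the client documentation, but the cache name, entity model and the exact query signature are assumptions based on the description above:

```cpp
#include <infinispan/hotrod/ConfigurationBuilder.h>
#include <infinispan/hotrod/RemoteCacheManager.h>
#include <infinispan/hotrod/RemoteCache.h>
#include <string>

using namespace infinispan::hotrod;

int main() {
    // connect to a local Hot Rod server
    ConfigurationBuilder builder;
    builder.addServer().host("127.0.0.1").port(11222);
    RemoteCacheManager manager(builder.build(), false);
    manager.start();

    // application cache holding the Protobuf-marshalled entities
    RemoteCache<std::string, std::string> cache =
        manager.getCache<std::string, std::string>("queryCache");

    // run the query, passing the query string as described above
    auto response = cache.query(
        "from sample_bank_account.User u where u.name = 'John'");

    manager.stop();
    return 0;
}
```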