Friday, 28 October 2016

Infinispan Docker image: custom configuration


In the previous post we introduced the improved Docker image for Infinispan and showed how to run it with different parameters in order to create standalone, clustered and domain mode servers.

This post will show how to address more advanced configuration changes than swapping the JGroups stack, covering cases like creating extra caches or using a pre-existing configuration file.

 

Runtime configuration changes


Since the Infinispan server is based on WildFly, it also supports the Command Line Interface (CLI) to change its configuration at runtime.

Let's consider an example of a custom indexed cache with Infinispan storage. In order to configure it, we need four caches: one cache to hold our data, called testCache, and three other caches to hold the indexes: LuceneIndexesMetadata, LuceneIndexesData and LuceneIndexesLocking.

This is normally achieved by adding this piece of configuration to the server XML:
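
A rough sketch of what that piece of configuration could look like is shown below; the cache types, the indexing properties and their names are illustrative and may differ depending on the server version:

    <!-- Illustrative sketch: cache types and indexing property names may differ per version -->
    <distributed-cache name="testCache">
        <indexing index="LOCAL">
            <property name="default.directory_provider">infinispan</property>
            <property name="default.metadata_cachename">LuceneIndexesMetadata</property>
            <property name="default.data_cachename">LuceneIndexesData</property>
            <property name="default.locking_cachename">LuceneIndexesLocking</property>
        </indexing>
    </distributed-cache>
    <replicated-cache name="LuceneIndexesMetadata"/>
    <distributed-cache name="LuceneIndexesData"/>
    <replicated-cache name="LuceneIndexesLocking"/>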


This is equivalent to the following script:
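
An outline of such a script is shown below; the exact resource addresses (subsystem and cache-container names) and cache attributes depend on the server version, so treat it as a sketch rather than a copy-paste recipe:

    # Outline only - resource addresses and attributes are illustrative
    batch
    /subsystem=datagrid-infinispan/cache-container=clustered/replicated-cache=LuceneIndexesMetadata:add(mode=SYNC)
    /subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=LuceneIndexesData:add(mode=SYNC)
    /subsystem=datagrid-infinispan/cache-container=clustered/replicated-cache=LuceneIndexesLocking:add(mode=SYNC)
    # indexing attributes for testCache omitted for brevity
    /subsystem=datagrid-infinispan/cache-container=clustered/distributed-cache=testCache:add(mode=SYNC)
    run-batch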



To apply it to the server, save the script to a file, and run:
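
Assuming the script was saved locally as indexed-cache.cli, the command could look like this (the CLI wrapper path is the one shipped inside the image, so double-check it for your version):

    # sketch: pipes the local script into the CLI running inside the container
    cat indexed-cache.cli | docker exec -i CONTAINER /opt/jboss/infinispan-server/bin/ispn-cli.sh --connect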

where CONTAINER is the id of the running container.

Everything that is applied using the CLI is automatically persisted in the server. To check what the script produced, use the following command to dump the config to a local file called config.xml:
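
A sketch of such a command is below; the configuration file name depends on the mode the server was started in (clustered.xml is assumed here):

    # sketch: assumes a server started in clustered mode
    docker exec CONTAINER cat /opt/jboss/infinispan-server/standalone/configuration/clustered.xml > config.xml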

Check the file config.xml: it should contain all four caches created via the CLI.

 

Using an existing configuration file


Most of the time changing the configuration at runtime is sufficient, but it may be desirable to run the server with an existing XML file, or to change settings that cannot be applied without a restart. For those cases, the easiest option is to mount a volume in the Docker container and start the container with the provided configuration.

This can be achieved with Docker's volume support. Consider an XML file called customConfig.xml located in a local folder /home/user/config. The following command:
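
(the image name jboss/infinispan-server is assumed here, as in the previous post)

    # sketch - image name assumed from the previous post
    docker run -it -v /home/user/config:/opt/jboss/infinispan-server/standalone/configuration/extra jboss/infinispan-server extra/customConfig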

will create a volume inside the container at the /opt/jboss/infinispan-server/standalone/configuration/extra/ directory, with the contents of the local folder /home/user/config.

The container is then launched with extra/customConfig as its argument, which means it will use a configuration named customConfig located under the extra folder, relative to the directory where configurations usually live (/opt/jboss/infinispan-server/standalone/configuration).

 

Conclusion


And that's all about custom configuration using the Infinispan Docker image.

Stay tuned for the next post where we'll dive into multi-host clusters with the Infinispan Docker image.


Thursday, 27 October 2016

Learn Functional Reactive Programming with Infinispan, Elm and Node.js at Soft-Shake conference

Tomorrow, Friday 28th October, I'll be speaking at the Soft-Shake conference (Geneva, Switzerland) about writing apps in a functional reactive programming style with Infinispan, Elm and Node.js. If you're interested in the topic and live in the area, make sure you come to my talk!

To find out more, head to the Soft-Shake site, where you can find exact details about the rest of the programme, location, etc.

Cheers,
Galder

Thursday, 13 October 2016

OpenShift and Node Affinity

OpenShift (and Kubernetes) has a great feature called Affinity. In some scenarios, it might be beneficial to configure it along with Infinispan Server Hinting.

Before we start... this tutorial is heavily based on our previous blog posts about deploying Infinispan on OpenShift and the OpenShift Scheduler functionality. It is highly recommended to read those articles before continuing with this one.


How does the Affinity feature work... in short?


In order to decide on which node a given Pod should run, OpenShift looks at so-called Predicates and Priority Functions. The labels used by a Predicate must match those configured in the DeploymentConfiguration's Node Selector, and the Priority Functions are responsible for choosing the best node for running the Pods.

Let's assume that we have a sample policy (similar to the one provided in the OpenShift manual) that uses site as a Predicate along with rack and machine as Priority Functions. Now let's assume we have two nodes:
  • Node 1 - site=EU, rack=R1, machine=VM1
  • Node 2 - site=US, rack=R2, machine=VM2
And two DeploymentConfigurations with Node Selectors (these tell OpenShift on which nodes a given DeploymentConfiguration wishes to run) defined as follows:
  • DeploymentConfiguration 1 - site=EU, rack=R5, machine=VM5
  • DeploymentConfiguration 2 - site=JAP, rack=R5, machine=VM5
With the above example, only DeploymentConfiguration 1 will be scheduled (on Node 1), since its site matches the Predicate. In this case the rack and machine parameters are not used (because only one node matches the Predicate).

Note that the default OpenShift configuration uses region (as a Predicate) and zone (as a Priority Function). However, it can be reconfigured very easily.
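
For illustration, a scheduler policy matching the site/rack/machine example above might look roughly like the following sketch (the predicate and priority names, weights and the policy file location all vary between OpenShift versions, so treat it as an assumption rather than a recipe):

    {
      "kind": "Policy",
      "apiVersion": "v1",
      "predicates": [
        {"name": "SiteAffinity", "argument": {"serviceAffinity": {"labels": ["site"]}}}
      ],
      "priorities": [
        {"name": "RackSpread", "weight": 1, "argument": {"serviceAntiAffinity": {"label": "rack"}}},
        {"name": "MachineSpread", "weight": 1, "argument": {"serviceAntiAffinity": {"label": "machine"}}}
      ]
    }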

And I need it because....


Some OpenShift deployments might span multiple racks in a data center or even multiple sites. It is important to tell Infinispan where the physical machines are located, which allows it to choose better nodes for backing up your data (in distribution mode).

As a matter of fact, Infinispan uses the very same site, rack and machine hints. The main goal is to avoid backing up data on the same host.
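
For reference, these hints end up on the transport configuration; in an embedded Infinispan XML it looks roughly like this (the values are illustrative):

    <!-- illustrative values for Server Hinting -->
    <transport cluster="infinispan-cluster" site="EU" rack="R1" machine="VM1"/>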

The implementation


The implementation is pretty straightforward but there are several gotchas. 

The first one is that OpenShift uses regions and zones by default, whereas Infinispan uses sites, racks and machines. The good news is that all three of these are optional, so you have two options - reuse the existing region and zone (mapping zone to rack and region to site, for example), or adjust the OpenShift scheduler settings. In my example I used the former approach.

The second one is the need to hardcode those parameters into the DeploymentConfiguration. Unfortunately, Node Selectors are not exposed through the Downward API, so there's no other way.

So let's have a look at our DeploymentConfiguration:
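
The full file is too long to repeat here, so here is a heavily trimmed sketch of the relevant parts (the line numbers in the list below refer to the complete original configuration; the image, names and the way the hints are passed to the server are assumptions):

    # Trimmed, illustrative sketch - names, image and env variable names are assumptions
    apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: infinispan-server
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            name: infinispan-server
        spec:
          containers:
          - name: infinispan-server
            image: jboss/infinispan-server
            env:
            - name: RACK            # zone "default" reused as the rack
              value: "default"
            - name: SITE            # region "primary" reused as the site
              value: "primary"
          # Node Selector for scheduling Pods
          nodeSelector:
            region: primary
            zone: default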

  • Line 26 - Zone default used as a rack
  • Line 27 - Region primary used as a site
  • Lines 57 - 59 - Node Selector for scheduling Pods


Conclusion


Combining the OpenShift Affinity Service and Infinispan Server Hinting allows you to optimize data distribution across the cluster. Keep in mind that your configuration might be totally different (the OpenShift Scheduler is highly configurable), but once you understand how it works, you can adjust the hinting strategy to your needs.

Happy Scaling!

Tuesday, 11 October 2016

Microservices with WildFly Swarm and Infinispan

Everybody loves Microservices, right?

Today, each of us has a slightly different understanding of what Microservices are, but among all those definitions and attributes there's probably one thing that fits them all - they need to be simple.

So let's have a look at a practical example of creating a REST service with Infinispan as the storage, wired together using CDI. We will use WildFly Swarm as the runtime platform.

Bootstrapping new project


A good way to start a new WildFly Swarm project is to generate it. The only requirement here is to add "JAX-RS with CDI" and "JPA" as dependencies.

The next step is to add the infinispan-embedded artifact. The final pom.xml should look like the following:
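
An abbreviated sketch of what that pom.xml could look like is below; the project coordinates, the fraction artifacts generated for "JAX-RS with CDI" and "JPA", and all version numbers are placeholders to adapt:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Abbreviated, illustrative sketch - coordinates and versions are placeholders -->
    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>infinispan-rest-example</artifactId>
      <version>1.0.0-SNAPSHOT</version>
      <packaging>war</packaging>

      <properties>
        <!-- placeholder WildFly Swarm version -->
        <version.wildfly.swarm>2016.10.0</version.wildfly.swarm>
      </properties>

      <dependencies>
        <!-- fractions generated for "JAX-RS with CDI" and "JPA" -->
        <dependency>
          <groupId>org.wildfly.swarm</groupId>
          <artifactId>jaxrs-cdi</artifactId>
          <version>${version.wildfly.swarm}</version>
        </dependency>
        <dependency>
          <groupId>org.wildfly.swarm</groupId>
          <artifactId>jpa</artifactId>
          <version>${version.wildfly.swarm}</version>
        </dependency>
        <!-- added by hand -->
        <dependency>
          <groupId>org.infinispan</groupId>
          <artifactId>infinispan-embedded</artifactId>
          <version>9.0.0-SNAPSHOT</version>
        </dependency>
      </dependencies>

      <build>
        <plugins>
          <plugin>
            <groupId>org.wildfly.swarm</groupId>
            <artifactId>wildfly-swarm-plugin</artifactId>
            <version>${version.wildfly.swarm}</version>
          </plugin>
        </plugins>
      </build>
    </project>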


Writing some code


The Infinispan CDI extension will take care of bootstrapping Infinispan, so we can dive directly into the JAX-RS code:
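
Below is an illustrative sketch of such a resource (the class name, path and cache key/value types are made up); with the Infinispan CDI extension on the classpath, the default Cache can simply be injected into a JAX-RS resource:

    // Illustrative sketch - class name, path and types are made up.
    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;
    import javax.ws.rs.*;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    import org.infinispan.Cache;

    @ApplicationScoped
    @Path("/entries")
    public class CacheResource {

        // The Infinispan CDI extension bootstraps the cache manager,
        // so the default cache can be injected directly.
        @Inject
        private Cache<String, String> cache;

        @GET
        @Path("/{key}")
        @Produces(MediaType.TEXT_PLAIN)
        public String get(@PathParam("key") String key) {
            return cache.get(key);
        }

        @POST
        @Path("/{key}")
        @Consumes(MediaType.TEXT_PLAIN)
        public Response put(@PathParam("key") String key, String value) {
            cache.put(key, value);
            return Response.ok().build();
        }
    }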



And that's it!

What's next?


If you'd like to have a look at the complete example, check out my repository. The code is based on a fresh build of the Infinispan master branch, which contains lots of improvements for CDI. You can build it yourself or just wait for 9.0.0.Beta1.

Have a lot of fun with Microservices!