Friday, 2 September 2016

Configuration management on OpenShift, Kubernetes and Docker

When deploying Infinispan in Docker-based cloud environments, the most critical question is how to manage configuration. In this blog post we'll explore some of the options.

Extending our Docker image

Creating your own Docker image based on jboss/infinispan-server is quite simple. First you need to prepare a configuration XML file, which ships with the Infinispan release. Go to the Infinispan download section and grab a server release corresponding to your chosen Docker image tag. After unpacking it, grab the configuration (I use cloud.xml as a template) and introduce all necessary changes.

Finally, build your image:
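
As a minimal sketch - assuming you saved the edited file as custom-cloud.xml and picked the 9.0.0.Alpha4 tag (both are assumptions to adjust) - the Dockerfile only needs to layer the configuration on top of the official image:

    # Pin the base image to the tag matching the server release you downloaded
    FROM jboss/infinispan-server:9.0.0.Alpha4
    # Ship the customized configuration inside the image
    ADD custom-cloud.xml /opt/jboss/infinispan-server/standalone/configuration/custom-cloud.xml

Then build and tag it (the image name is just an example):

    docker build -t my-org/infinispan-server:custom .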


Now, that was quick, wasn't it?

Using ConfigMaps with OpenShift

If you're using OpenShift, there's a sophisticated tool for this called a ConfigMap. A ConfigMap can store a configuration file (or a whole configuration directory) and mount it somewhere in the Pod.

Use the command below to create a ConfigMap based on a configuration file:
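
A minimal sketch, assuming the edited configuration file is called cloud.xml and naming the ConfigMap infinispan-config (both names are up to you):

    oc create configmap infinispan-config --from-file=cloud.xml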


Now create the Infinispan application based on the configuration below (you can use 'oc create -f <file.yaml>' for this). Its key parts, sketched right after this list, are:

  • the ConfigMap volume declaration
  • mounting the configuration into /opt/jboss/infinispan-server/standalone/configuration/custom
  • bootstrapping Infinispan with the custom configuration (note there is no .xml extension there)
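
The fragment below is a minimal sketch of just those three parts (the ConfigMap name infinispan-config and the image tag are assumptions; the bootstrap argument follows the image's convention of taking the configuration name relative to the configuration directory):

    spec:
      containers:
      - name: infinispan-server
        image: jboss/infinispan-server:9.0.0.Alpha4
        # Bootstrap with the custom configuration (no .xml extension)
        args: ["custom/cloud"]
        volumeMounts:
        # Mount the ConfigMap under .../standalone/configuration/custom
        - name: config-volume
          mountPath: /opt/jboss/infinispan-server/standalone/configuration/custom
      volumes:
      # ConfigMap volume declaration
      - name: config-volume
        configMap:
          name: infinispan-config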

Using ConfigMaps with Kubernetes

Kubernetes ConfigMaps work exactly the same way as in OpenShift.

The command below creates a ConfigMap:
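
For example, with the same file and ConfigMap names as before:

    kubectl create configmap infinispan-config --from-file=cloud.xml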

The second step is to create a Deployment with ConfigMap:
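
A minimal sketch of such a Deployment, reusing the Pod template from the OpenShift example above (names and image tag remain assumptions):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: infinispan-server
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: infinispan-server
        spec:
          containers:
          - name: infinispan-server
            image: jboss/infinispan-server:9.0.0.Alpha4
            args: ["custom/cloud"]
            volumeMounts:
            - name: config-volume
              mountPath: /opt/jboss/infinispan-server/standalone/configuration/custom
          volumes:
          # The ConfigMap created a moment ago
          - name: config-volume
            configMap:
              name: infinispan-config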

Conclusion

If you're using any Docker orchestration tool, have a look at the configuration tools it provides. OpenShift and Kubernetes ConfigMaps are really great for this.

However, if you need fine-grained control, either extend our Docker image (this is the preferred way) or simply fork and modify it.

Happy configuring and scaling!


Thursday, 1 September 2016

Hotrod clients C++/C# 8.0.0.Final released!

Dear Infinispan community,
I'm glad to announce the Final release of the C++ and C# clients version 8.0.0.

You can find the download on the Infinispan web site:

http://infinispan.org/hotrod-clients/

Major new features for this release are:
  • queries
  • remote script execution
  • asynchronous operation (C++ only)
plus several minor and internal updates that partially fill the gap between C++/C# and the Java client.

Some posts about the 8 series of the C++/C# clients have already been published on this blog; you can revisit them by clicking through the list below.

The equivalent C# examples are collected here:

https://github.com/rigazilla/dotnet-client-examples

Enjoy!

Friday, 26 August 2016

Infinispan queries with C++ client

With version 8 of the C++ Hotrod client, users can now run queries against Infinispan by calling the cache.query() method with the query string as an argument. The query response can contain a result set of entities, projections, or aggregate values. The implementation follows the same design as the Java client and is based on Protobuf, both to define the entity model and to parse the result set.
The following is an example of C++ query code. Get the full project here.

//Client setup

This is the cache setup on the client side. Two caches are used:
  • the ___protobuf_metadata cache, where the Protobuf description of the model is stored;
  • the application cache, where the instances of the model's entities reside.
Marshallers for those caches are based on the Protobuf methods SerializeToArray() and ParseFromArray(), which every Protobuf object provides.
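
As a minimal sketch of the connection setup (the server address is a placeholder and the marshaller wiring is elided; the full project linked above shows the complete version):

    #include <infinispan/hotrod/ConfigurationBuilder.h>
    #include <infinispan/hotrod/RemoteCacheManager.h>

    using namespace infinispan::hotrod;

    int main() {
        // Point the client at a local Hot Rod server
        ConfigurationBuilder builder;
        builder.addServer().host("127.0.0.1").port(11222);
        RemoteCacheManager cacheManager(builder.build(), false);
        cacheManager.start();
        // Here the ___protobuf_metadata cache and the application cache
        // would be obtained, with marshallers built on SerializeToArray()
        // and ParseFromArray() as described above
        cacheManager.stop();
        return 0;
    }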

//Cache population

The classes that populate the cache are defined in the Google Protobuf syntax. The example application uses the C++ representation generated by the protoc compiler. (In this example the generated C++ code is provided; the way it can be generated is described here.)
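
As a purely hypothetical illustration of such a model (not the one from the linked project), a definition like the following, run through 'protoc --cpp_out=. user.proto', yields the C++ representation:

    // user.proto - a toy entity model
    package sample;

    message User {
        required int32 id = 1;
        required string name = 2;
    }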

//Queries execution
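
Continuing the setup sketch above, and going only by the description at the top of this post (the exact signature and the Protobuf-based parsing of the response are shown in the full project):

    // Sketch: run a query by passing the query string; the response
    // carries entities, projections or aggregate values encoded with Protobuf
    auto response = userCache.query("from sample.User u where u.name = 'Tom'");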

Thursday, 18 August 2016

Running Infinispan cluster on Kubernetes

In the previous post we looked at how to run Infinispan on OpenShift. Today, our goal is exactly the same, but we'll focus on Kubernetes.

Running Infinispan on Kubernetes requires using a proper discovery protocol. This blog post uses Kubernetes Ping, but it's also possible to use a Gossip Router.

Our goal

We'd like to build an Infinispan cluster on Kubernetes hosted locally (using Minikube). We will expose a service and route it to our local machine. Finally, we will use it to put data into the grid.




Spinning local Kubernetes cluster

There are many ways to spin up a local Kubernetes cluster. One of my favorites is Minikube. First you will need the 'minikube' binary, which can be downloaded from the Github releases page. I usually copy it into '/usr/bin', which makes it very convenient to use. The next step is to download the 'kubectl' binary. I usually use the Kubernetes Github releases page for this. The 'kubectl' binary is stored inside the release archive under 'kubernetes/platforms/<your_platform>/<your_architecture>/kubectl'. I'm using linux/amd64 since I'm running Fedora F23. I also copy that binary to '/usr/bin'.

We are ready to spin up Kubernetes:
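
With both binaries in place, two commands are enough (the second merely verifies that kubectl can reach the new cluster):

    # Boot a local single-node Kubernetes cluster
    minikube start
    # Check that kubectl points at it
    kubectl cluster-info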


Deploying Infinispan cluster

This time we'll focus on automation, so there will be no 'kubectl edit' commands. Below is the yaml file for creating all necessary components in the Kubernetes cluster. Its key parts, sketched right after this list, are:

  • We added additional arguments to the bootstrap script
  • We used the Downward API to pass the current namespace to Infinispan
  • We defined all ports used by the Pod
  • We created a Service for port 8080 (the REST interface)
  • We used the NodePort service type, which we will expose via Minikube in the next paragraph
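
A minimal sketch covering those points (the image tag, the node port number and the bootstrap argument form are assumptions; the Downward API and NodePort parts are the essential bits):

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: infinispan-server
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: infinispan-server
        spec:
          containers:
          - name: infinispan-server
            image: jboss/infinispan-server:9.0.0.Alpha4
            # Additional bootstrap arguments: cloud profile + Kubernetes discovery stack
            args: ["cloud", "-Djboss.default.jgroups.stack=kubernetes"]
            env:
            # Downward API: pass the current namespace to Infinispan
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            ports:
            - containerPort: 8080   # REST
            - containerPort: 11222  # Hot Rod
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: infinispan-server
    spec:
      # NodePort lets Minikube expose the service on the node's IP
      type: NodePort
      selector:
        app: infinispan-server
      ports:
      - port: 8080
        nodePort: 30080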

Save it somewhere on the disk and execute the 'kubectl create' command:
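
Assuming you saved the descriptor as infinispan.yaml:

    kubectl create -f infinispan.yaml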


Exposing the service port

One of Minikube's limitations is that it can't use the Ingress API to expose services to the outside world. Thankfully there's another way: the NodePort service type. With this simple trick we will be able to access the service using '<minikube_ip>:<node_port_number>'. The port number was specified in the yaml file (we could also leave it blank and let Kubernetes assign a random one). The node port can easily be checked using the following command:
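
For example, with the service name used in the sketch above:

    kubectl describe service infinispan-server | grep NodePort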


In order to obtain the Kubernetes node IP, use the following command:
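
Minikube has a dedicated command for exactly that:

    minikube ip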


Testing the setup

Testing is quite simple and the only thing to remember is to use the proper address - <minikube_ip>:<node_port>:
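
A quick smoke test over REST, assuming the node port 30080 from the sketch above and the server's default cache:

    # Store a value under key 'hello', then read it back
    curl -X PUT -H 'Content-type: text/plain' -d 'world' \
        http://$(minikube ip):30080/rest/default/hello
    curl http://$(minikube ip):30080/rest/default/hello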


Clean up

Minikube has an all-in-one command to do the clean up:
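
That command stops the cluster and deletes the local VM:

    minikube delete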


Conclusion

Kubernetes setup is almost identical to the OpenShift one but there are a couple of differences to keep in mind:
  • OpenShift's DeploymentConfigurations are similar to Kubernetes Deployments with ReplicaSets
  • OpenShift's Services work the same way as in Kubernetes
  • OpenShift's Routes are similar to Kubernetes' Ingresses
Happy scaling and don't forget to check if Infinispan formed a cluster (hint - look into the previous post).

Friday, 12 August 2016

Infinispan Cloud Cachestore 8.0.1.Final

After bringing the MongoDB cache store up to date a few days ago, this time it's the turn of the Cloud Cache Store, our JClouds-based store which allows you to use any of the JClouds BlobStore providers to persist your cache data. This includes AWS S3, Google Cloud Storage, Azure Blob Storage and Rackspace Cloud Files.
In a perfect world this would have been 8.0.0.Final, but Sod's law rules, so I give you 8.0.1.Final instead :) So head on over to our store download page and try it out.

The actual configuration of the cachestore depends on the provider, so refer to the JClouds documentation. The following is a programmatic example using the "transient" provider:
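
A sketch of that programmatic setup (the builder methods shown here follow the cloud store's configuration API; treat the exact names and values as assumptions and check them against the release you use):

    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.persistence.cloud.configuration.CloudStoreConfigurationBuilder;

    public class CloudStoreExample {
        public static void main(String[] args) {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.persistence()
                   .addStore(CloudStoreConfigurationBuilder.class)
                   .provider("transient")        // any JClouds BlobStore provider id
                   .identity("me")               // provider-specific identity (e.g. access key)
                   .credential("s3cr3t")         // provider-specific credential (e.g. secret key)
                   .container("test-container"); // the blob container backing the store
            Configuration configuration = builder.build();
        }
    }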
 

And this is how you'd configure it declaratively:
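
A declarative sketch of the same setup (the element and namespace names are my best guess at the store's 8.0 schema; verify them against the bundled XSD):

    <local-cache name="cloudCache">
       <persistence>
          <cloud-store xmlns="urn:infinispan:config:store:cloud:8.0"
                       provider="transient"
                       identity="me"
                       credential="s3cr3t"
                       container="test-container"/>
       </persistence>
    </local-cache>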

This will work with any Infinispan 8.x release.

Enjoy !

Infinispan 8.2.4.Final released!

Dear Infinispan community,

We are proud to announce a new micro release of our stable 8.2 branch. Download it here and try it out!

This maintenance release includes a handful of bug fixes and a bonus new feature. If you are using any other 8.x release, we recommend to upgrade to 8.2.4.Final.

Check out the fixed issues, download the release and tell us all about it on the forum, on our issue tracker or on IRC on the #infinispan channel on Freenode.

We are currently busy working on the upcoming beta release of the 9.0 stream.

Cheers,
The Infinispan team

Tuesday, 9 August 2016

Running Infinispan cluster on OpenShift

Did you know that it's extremely easy to run Infinispan on OpenShift? Infinispan 9.0.0.Alpha4 adds out-of-the-box support for OpenShift (and Kubernetes) discovery!

Our goal

We'd like to build an Infinispan cluster on top of OpenShift and expose a Service for it (you may think of Services as load balancers). A Service can be exposed to the outside world using Routes. Finally, we will use the REST interface to PUT and GET some data from the cluster.


Accessing the OpenShift cloud

Of course, before playing with Infinispan you will need an OpenShift cluster. There are a number of options you can investigate. I will use the simplest path - a local OpenShift cluster.

The first step is to download OpenShift Client Tools for your platform. You can find them on OpenShift releases Github page. Once you download and extract the 'oc' binary, make it accessible in your $PATH. I usually copy such things into my '/usr/bin' directory (I'm using Fedora F23). 

Once everything is set up, spin up the cluster:
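
With recent client tools this is a single command:

    # Boot a local all-in-one OpenShift cluster
    oc cluster up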


Note that you have been automatically logged in as 'developer' and your project has been automatically set to 'myproject'. 

Spinning up an Infinispan cluster

The first step is to create an Infinispan app:
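
One command is enough (pinning a concrete image tag is optional; 9.0.0.Alpha4 here is an assumption matching this post):

    oc new-app jboss/infinispan-server:9.0.0.Alpha4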


Now you need to modify the Deployment Configuration (use 'oc edit dc/infinispan-server' for this) and tell Infinispan to boot up with the Kubernetes discovery protocol stack, using the proper namespace to look up other nodes (unfortunately this step cannot be automated, otherwise a newly created Infinispan node might try to join an existing cluster, and this is something you might not want). Here's the relevant fragment of my modified Deployment Configuration:
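
Only the container spec is sketched below (the argument form follows the image's convention of passing the configuration name plus extra JVM properties; the namespace value matches the 'myproject' project from earlier):

    spec:
      template:
        spec:
          containers:
          - name: infinispan-server
            image: jboss/infinispan-server:9.0.0.Alpha4
            # Boot the cloud profile with the Kubernetes JGroups stack
            args: ["cloud", "-Djboss.default.jgroups.stack=kubernetes"]
            env:
            # Namespace in which the discovery protocol looks up other nodes
            - name: KUBERNETES_NAMESPACE
              value: myproject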


There is one final step: Kubernetes' PING protocol uses the API to look up other nodes in the Infinispan cluster. By default API access is disabled in OpenShift and needs to be enabled. This can be done with this simple command:
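
The command grants the default service account permission to view Pods through the API (project name as set above):

    oc policy add-role-to-user view system:serviceaccount:myproject:default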


Now we can redeploy the application (to ensure that all changes were applied) and scale it out (to 3 nodes):
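
Two commands do the trick:

    # Roll out the latest Deployment Configuration, then scale out
    oc deploy infinispan-server --latest
    oc scale dc/infinispan-server --replicas=3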


Now let's check if everything looks good - you can do it either through the OpenShift web console or by using the 'oc get pods' and 'oc logs' commands:
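
For example (the pod name below is illustrative; take a real one from the 'oc get pods' output):

    oc get pods
    oc logs infinispan-server-2-xxxxx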


Accessing the cluster

In order to access the Infinispan cluster from the outside world we need a Route:
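
Exposing the existing service creates one:

    oc expose svc/infinispan-server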


The newly created Route needs one small change - the target port must point to 8080 (this is the REST service). The 'oc edit route/infinispan-server' command is perfect for this. Below is my updated configuration:

  • the target port, modified to the 8080 TCP port
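
The relevant fragment of the Route spec (the port name 8080-tcp is the one 'oc new-app' typically generates for the service; adjust it if yours differs):

    spec:
      port:
        # Point the route at the REST interface
        targetPort: 8080-tcp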

Testing the setup

You can easily see how to access the cluster by describing the Route:
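
The Route created above can be described like this:

    oc describe route/infinispan-server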


Now let's try to play with the data:
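
Using the host name reported for the Route (the address below is illustrative) and the server's default cache over REST:

    # Store a value, then read it back
    curl -X PUT -H 'Content-type: text/plain' -d 'world' \
        http://infinispan-server-myproject.192.168.1.10.xip.io/rest/default/hello
    curl http://infinispan-server-myproject.192.168.1.10.xip.io/rest/default/hello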

Cleaning up

Finally, when you are done with experimenting, you can remove everything using the 'oc delete' command:
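
Everything created by 'oc new-app' carries an app label, so a single selector cleans it all up (label value assumed to match the app name):

    oc delete all -l app=infinispan-server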

Conclusion

Running an Infinispan cluster inside an OpenShift cloud is really simple. Just 3 steps to remember:
  1. Create an Infinispan app ('oc new-app')
  2. Tell it to use the Kubernetes JGroups stack and in which project to look for other cluster members ('oc edit dc/infinispan-server')
  3. Allow access to the OpenShift API ('oc policy add-role-to-user')
Happy scaling!