Wednesday, 20 September 2017

Infinispan 9.1.1.Final is out

Dear Infinispan users,

after 9.1.0.Final we decided to go on a bug-squashing spree before embarking on the feature work that will be part of 9.2.

So, fresh out of the build baking machine is 9.1.1.Final. You can peruse the list of fixed bugs and you can get the bits from our download page.

Enjoy!

Friday, 15 September 2017

DevNation Live video and slides available now!


Last week I gave a talk for DevNation Live on how to handle big data with Infinispan. The recording of the talk is available now from here, and the slides are here too.

In case you missed the talk, don't miss the opportunity to see what Infinispan can do with big data!

Cheers,
Galder

Sunday, 10 September 2017

Multi-tenancy - Infinispan as a Service (also on OpenShift)

In recent years the Software as a Service concept has gained a lot of traction. I'm pretty sure you've used such services many times before. Let's take a look at a practical example and explain what's going on behind the scenes.

Practical example - photo album application

Imagine a very simple photo album application hosted in the cloud. On first use you are asked to create an account. Once you sign up, a new tenant is created for you in the application, with all the necessary details and some dedicated storage just for you. From this point on you can start using the album, downloading and uploading photos.

The software provider that created the photo album application can also celebrate. They have a new client! But with each new client the system needs to increase its capacity to ensure it can store all those lovely photos. There are other concerns too: how do you prevent photos and other data from leaking from one account into another? And finally, since all the content will be transferred over the Internet, how do you secure the transmission?

As you can see, multi-tenancy is not as easy as it might seem. The good news is that, when properly configured and secured, it can be beneficial both for the client and for the software provider.

Multi-tenancy in Infinispan

Let's think again about our photo album application for a moment. Whenever a new client signs up, we need to create a new account for them and dedicate some storage. Translated into Infinispan concepts, this means creating a new CacheContainer. Within a CacheContainer we can create multiple Caches for user details, metadata and photos.

You might be wondering why creating a new Cache is not sufficient. It turns out that when a Hot Rod client connects to a cluster, it connects to a CacheContainer exposed via a Hot Rod Endpoint, and such a client has access to all Caches within it. Considering our example, your friends could possibly see your photos. That's definitely not good! So we need to create a CacheContainer per tenant.

Before we introduced multi-tenancy, you could expose each CacheContainer on a separate port (using a separate Hot Rod Endpoint for each of them). In many scenarios this is impractical because of the proliferation of ports, so we introduced the Router concept. It allows multiple clients to access their own CacheContainers through a single endpoint and prevents them from accessing data which doesn't belong to them. The final piece of the puzzle is transmitting sensitive data over an unsecured channel such as the Internet, which is solved by TLS encryption. The final outcome should look like the following:


The Router component on the diagram above is responsible for recognizing data from each client and redirecting it to the appropriate Hot Rod endpoint. As the name implies, the router inspects incoming traffic and reroutes it to the appropriate underlying CacheContainer. To do this it can use two different strategies depending on the protocol: TLS/SNI for the Hot Rod protocol (matching each server certificate to a specific cache container) and path prefixes for REST.

The SNI strategy detects the SNI host name (which is used as the tenant name) and also requires the TLS certificates to match. By creating proper trust stores we can control which tenant can access which CacheContainers.

The URL path prefix strategy is very easy to understand, but it is also less secure unless you enable authentication. For this reason it should not be used in production unless you know what you are doing (an SNI strategy for the REST endpoint will be implemented in the near future). Each client has its own unique REST path prefix that needs to be used for accessing the data (e.g. http://127.0.0.1:8080/rest/client1/fotos/2).

Confused? Let's clarify this with an example.

Foto application sample configuration

The first step is to generate proper key/trust stores for the server and client:


The next step is to configure the server. The snippet below shows only the most important parts:


Let's analyze the most critical lines:
  • 7, 15 - We need to add the generated key stores to the server identities.
  • 25, 30 - It is highly recommended to use separate CacheContainers.
  • 38, 39 - A Hot Rod connector (but without a socket binding) is required to provide the proper mapping to a CacheContainer. You can also apply many useful settings at this level (like ignored caches or authentication).
  • 42 - The Router definition, which binds to the default Hot Rod and REST ports.
  • 44 - 46 - The most important bit, which states that only a client using SSLRealm1 (which uses the trust store corresponding to client_1_server_keystore.jks) and TLS/SNI host name client-1 can access the Hot Rod endpoint named multi-tenant-hotrod-1 (which points to CacheContainer multi-tenancy-1). A matching client-side configuration is sketched below.
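
On the client side, the tenant is selected via TLS/SNI. Here is a minimal sketch of a matching Hot Rod client configuration (the host, port, trust store file name and password are assumptions; the SNI host name mirrors the example above):

    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    ConfigurationBuilder builder = new ConfigurationBuilder();
    builder.addServer().host("127.0.0.1").port(11222)
           .security().ssl().enable()
              .sniHostName("client-1")                        // selects the tenant
              .trustStoreFileName("client_1_truststore.jks")  // assumed file name
              .trustStorePassword("secret".toCharArray());
    RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());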

Improving the application by using OpenShift

Hint: You might be interested in looking at our previous blog posts about hosting Infinispan on OpenShift. You may find them at the bottom of the page.

So far we've learned how to create and configure a new CacheContainer per tenant. But we also need to remember that system capacity has to grow with each new tenant. OpenShift is a perfect tool for scaling the system up and down. The configuration we created in the previous step almost matches our needs, but requires some tuning.

As we mentioned earlier, we need to encrypt the transport between the client and the server. The main disadvantage is that the OpenShift Router will not be able to inspect the traffic and make routing decisions. A passthrough Route fits perfectly in this scenario, but requires the TLS/SNI host names to be Fully Qualified Application Names. So if you start OpenShift locally (using oc cluster up), the tenant names will look like the following: client-1-fotoalbum.192.168.0.17.nip.io

We also need to think about how to store the generated key stores. The easiest way is to use Secrets:


Finally, a full DeploymentConfiguration:



If you're interested in playing with the demo yourself, you can find a working example here. It mainly targets OpenShift, but the concept and configuration are also applicable to a local deployment.

Links

Wednesday, 6 September 2017

Join us online on 7th September for DevNation talk on Infinispan!


Tomorrow, the 7th of September at 12:00pm EDT, I will be giving an online live tech talk for Red Hat DevNation, showing how to handle big data with Infinispan.

If you didn't manage to attend my talk at Great Indian Developer Summit or Berlin Buzzwords, this is your chance to see it live and direct, with live coding demos included!

You can register to see the talk live by visiting the Red Hat DevNation Live site.

Cheers,
Galder

Friday, 21 July 2017

Cluster Counter

In Infinispan 9.1 we introduce clustered counters: counters that are distributed and shared among all nodes in the cluster. Today we are going to get to know them better.

Installation

To use a counter in your Infinispan cluster, you first have to add the Maven dependency to your project. As you can see, it is as simple as this:
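
A sketch of the required dependency (the version shown is an assumption, matching the 9.1.0.Final release discussed in this series):

    <dependency>
       <groupId>org.infinispan</groupId>
       <artifactId>infinispan-clustered-counter</artifactId>
       <version>9.1.0.Final</version>
    </dependency>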

After adding the module, you can retrieve the CounterManager and start creating and using counters.

CounterManager

Each CounterManager is associated with a CacheManager. But before showing how to use it, there is first some configuration to be done.

There are two attributes that you can configure: num-owners, which represents the number of copies of the counter's value in the cluster, and reliability, which represents the behaviour of the counters in case of partitions.

Below is the configuration example with the default values.
XML:
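A sketch, assuming the 9.1 counters schema (the defaults shown are assumptions):

    <!-- declared inside the <cache-container> element -->
    <counters xmlns="urn:infinispan:config:counters:9.1"
              num-owners="2" reliability="AVAILABLE"/>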
Programmatically:
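A sketch using the counters configuration builder (method names assumed from the 9.1 API, classes from org.infinispan.counter.configuration):

    GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
    global.addModule(CounterManagerConfigurationBuilder.class)
          .numOwner(2)                         // copies of each counter's value
          .reliability(Reliability.AVAILABLE); // behaviour under partitions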
Then, you can retrieve the CounterManager from the CacheManager, as shown below, and start using the counters!
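A minimal sketch for embedded mode:

    // Obtain the CounterManager associated with a running EmbeddedCacheManager.
    CounterManager counterManager = EmbeddedCounterManagerFactory.asCounterManager(cacheManager);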

Counter

A counter is identified by its name. It is initialized with an initial value (0 by default) and it can be persistent, if the value needs to survive a cluster restart.

There are 2 types of counters: strong and weak counters.

Strong Counters

The strong counter provides higher consistency. Its value is known during updates and its updates are applied atomically. This makes it possible to set boundaries and provides conditional operations (such as compare-and-set).

Configuration

A strong counter can be configured in the configuration file or programmatically. It can also be created dynamically at runtime. The snippets below show how it can be done:

XML:
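A sketch of a bounded strong counter (names and values are illustrative):

    <counters xmlns="urn:infinispan:config:counters:9.1" num-owners="2" reliability="AVAILABLE">
       <strong-counter name="my-strong-counter" initial-value="0" storage="VOLATILE">
          <upper-bound value="100"/>
       </strong-counter>
    </counters>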
Programmatically:
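The programmatic equivalent (a sketch; builder names assumed from the 9.1 API):

    global.addModule(CounterManagerConfigurationBuilder.class)
          .addStrongCounter()
             .name("my-strong-counter")
             .initialValue(0)
             .upperBound(100)           // optional bound
             .storage(Storage.VOLATILE);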
Runtime:
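And dynamically at runtime, through the CounterManager (a sketch):

    counterManager.defineCounter("my-strong-counter",
          CounterConfiguration.builder(CounterType.BOUNDED_STRONG)
                .initialValue(0)
                .upperBound(100)
                .build());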

Use Case

The strong counter fits the following use cases:
  • Global Id Generator
Due to its strong consistency, it can be used as a global identifier generator, as in the example below:
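A sketch, assuming an unbounded strong counter named "id-generator" has been defined:

    StrongCounter idGenerator = counterManager.getStrongCounter("id-generator");
    long newId = idGenerator.incrementAndGet().get(); // waits on the CompletableFuture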

  • Rate Limiter
If bounded, it can be used as a simple rate limiter. Just don't forget to invoke reset()...
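A sketch, assuming a bounded strong counter named "rate-limiter":

    StrongCounter rateLimiter = counterManager.getStrongCounter("rate-limiter");
    try {
       rateLimiter.incrementAndGet().get(); // completes exceptionally once the bound is reached
       // handle the request
    } catch (InterruptedException | ExecutionException e) {
       // bound reached: reject the request
    }
    // and, from time to time, open a new window:
    rateLimiter.reset();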

  • Simply count "stuff"
Well, it is a counter after all...

Weak Counters

The weak counter provides eventual consistency and its value is not known during updates. In return, it provides faster writes than the strong counter.

Configuration

As with the strong counter, the weak counter's name and initial value can be configured. In addition, a concurrency-level can be configured to set the number of concurrent updates that can be handled in parallel. The snippets below show how to configure it:

XML:
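A sketch (names and values are illustrative):

    <counters xmlns="urn:infinispan:config:counters:9.1" num-owners="2" reliability="AVAILABLE">
       <weak-counter name="my-weak-counter" initial-value="0" concurrency-level="16"/>
    </counters>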
Programmatically:
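The programmatic equivalent (a sketch):

    global.addModule(CounterManagerConfigurationBuilder.class)
          .addWeakCounter()
             .name("my-weak-counter")
             .initialValue(0)
             .concurrencyLevel(16);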
Runtime:
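And at runtime (a sketch):

    counterManager.defineCounter("my-weak-counter",
          CounterConfiguration.builder(CounterType.WEAK)
                .initialValue(0)
                .concurrencyLevel(16)
                .build());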

Use Case

The main use case for the weak counter includes all scenarios where its value isn't needed while updating the counter. For example, it can be used to count the number of visits to some resource:
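
A sketch, assuming a weak counter named "visits" has been defined:

    WeakCounter visits = counterManager.getWeakCounter("visits");
    visits.increment();                   // fast write, returns CompletableFuture<Void>
    long approximate = visits.getValue(); // may lag behind concurrent updates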

For more information, take a look at the documentation. If you have any feedback, would like to request new features, or have found an issue, let us know via the forum, the issue tracker or the #infinispan channel on Freenode.

Monday, 17 July 2017

Conflict Management and Partition Handling

In Infinispan 9.1.0.Final we have overhauled the behaviour and configuration of partition handling in distributed and replicated caches. Partition handling is no longer simply enabled/disabled; instead, a partition strategy is configured. This allows for more fine-grained control of a cache's behaviour when a split brain occurs. Furthermore, we have created the ConflictManager component so that conflicts on cache entries can be resolved on-demand by users and/or automatically during partition merges.


Conflict Manager


During a cache's lifecycle it is possible for inconsistencies to appear between replicas of a cache entry due to a variety of reasons (e.g. replication failures, incorrect use of flags, etc.). The conflict manager is a tool that allows users to retrieve all stored replica values for a cache entry, as well as to process a stream of cache entries whose stored replicas have conflicting values. Furthermore, by utilising implementations of the EntryMergePolicy interface it is possible for said conflicts to be resolved deterministically.

EntryMergePolicy


In the event of conflicts arising between one or more replicas of a given CacheEntry, it is necessary for a conflict resolution algorithm to be defined; therefore we provide the EntryMergePolicy interface. This interface consists of a single method, "merge", whose output is utilised as the "resolved" CacheEntry for a given key. A non-null return value is put to all replicas of the CacheEntry in question, whereas a null return value results in all replicas being removed from the cache.

The merge method takes two parameters: the "preferredEntry" and "otherEntries". In the context of a partition merge, the preferredEntry is the CacheEntry associated with the partition whose coordinator is conducting the merge (or if multiple entries exist in this partition, it’s the primary replica). However, in all other contexts, the preferredEntry is simply the primary replica. The second parameter, "otherEntries" is simply a list of all other entries associated with the key for which a conflict was detected.
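
For illustration, a minimal custom policy could look like this (a sketch; since the interface has a single method, it can be written as a lambda):

    // Prefer the preferred entry when present, otherwise fall back to the first other entry.
    EntryMergePolicy<String, String> mergePolicy = (preferredEntry, otherEntries) ->
          preferredEntry != null ? preferredEntry
                                 : otherEntries.isEmpty() ? null : otherEntries.get(0);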

Currently Infinispan provides the following implementations of EntryMergePolicy:


  • MergePolicies.PREFERRED_ALWAYS - Always utilise the "preferredEntry".
  • MergePolicies.PREFERRED_NON_NULL - Utilise the "preferredEntry" if it is non-null, otherwise utilise the first entry from "otherEntries".
  • MergePolicies.REMOVE_ALL - Always remove a key from the cache when a conflict is detected.

Application Usage


For conflict resolution during partition merges, once an EntryMergePolicy has been configured for the cache, no additional actions are required by the user. However, if an Infinispan user would like to utilise the ConflictManager explicitly in their application, it should be retrieved by passing an AdvancedCache instance to the ConflictManagerFactory:
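
A sketch of explicit usage (method names assumed from the 9.1 API):

    ConflictManager<String, String> conflictManager =
          ConflictManagerFactory.get(cache.getAdvancedCache());
    // Stream all entries whose replicas hold conflicting values.
    conflictManager.getConflicts().forEach(replicaMap -> System.out.println(replicaMap));
    // Or resolve them using the cache's configured EntryMergePolicy.
    conflictManager.resolveConflicts();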

Note that, depending on the number of entries in the cache, the getConflicts and resolveConflicts methods are expensive operations, as they both depend on a spliterator which lazily loads cache entries on a per-segment basis. Consequently, when operating in distributed mode, if many conflicts exist, it is possible for an OutOfMemoryError to occur on the node searching for conflicts.


Partition Handling Strategies


In 9.1.0.Final the partition handling enabled/disabled option has been deprecated and users must now configure an appropriate PartitionHandling strategy for their application. A partition handling strategy determines which operations can be performed on a cache when a split brain event occurs. Ultimately, in terms of Brewer's CAP theorem, the configured strategy determines whether the cache sacrifices availability or consistency in the presence of partition(s). Below are the provided strategies and their characteristics:


  • DENY_READ_WRITES - If the partition does not have all owners for a given segment, both reads and writes are denied for all keys in that segment. This is equivalent to setting partition handling to true in Infinispan 9.0. (CAP: Consistency)
  • ALLOW_READS - Allows reads for a given key if it exists in this partition, but only allows writes if this partition contains all owners of a segment. (CAP: Availability)
  • ALLOW_READ_WRITES - Allows entries on each partition to diverge, with conflicts resolved during merge. This is equivalent to setting partition handling to false in Infinispan 9.0. (CAP: Availability)

Conflict Resolution on Partition Merge


When utilising the ALLOW_READ_WRITES partition strategy it is possible for the values of cache entries to diverge between competing partitions. Therefore, when the two partitions merge, it is necessary for these conflicts to be resolved. Internally Infinispan utilises a cache's ConflictManager to search for cache entry conflicts and then applies the configured EntryMergePolicy to automatically resolve said conflicts before rebalancing the cache. This conflict resolution is completely automatic and does not require any additional code or input from Infinispan users.

Note that if you do not want conflicts to be resolved automatically during a partition merge, i.e. the behaviour before 9.1.x, you can set the merge-policy to null (or NONE in XML).


Configuration

Programmatic
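
A sketch of the new builder methods (assuming a distributed cache):

    ConfigurationBuilder builder = new ConfigurationBuilder();
    builder.clustering().cacheMode(CacheMode.DIST_SYNC)
           .partitionHandling()
              .whenSplit(PartitionHandling.ALLOW_READ_WRITES)
              .mergePolicy(MergePolicies.PREFERRED_NON_NULL);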



XML
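
The XML equivalent (a sketch; the cache name is an assumption):

    <distributed-cache name="myCache" mode="SYNC">
       <partition-handling when-split="ALLOW_READ_WRITES" merge-policy="PREFERRED_NON_NULL"/>
    </distributed-cache>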




Conclusion


Partition handling has been overhauled in Infinispan 9.1.0.Final to allow for increased control over a cache's behaviour. We have introduced the ConflictManager which enables users to inspect and manage the consistency of their cache entries via custom and provided merge policies.

If you have any feedback on the partition handling changes, or would like to request new features/optimisations, let us know via the forum, the issue tracker or the #infinispan channel on Freenode.

Friday, 14 July 2017

Infinispan 9.1 "Bastille"

Dear Infinispan users,

after 3½ months, we are proud to present to you our latest stable release, Infinispan 9.1, codenamed "Bastille".

While minor releases are traditionally evolutionary rather than revolutionary, this release still comes loaded with a number of great features:

Scattered cache

A new clustered cache, similar to a distributed cache, but with a higher write throughput.

Consistency Checker, Conflict Resolution and Automatic merge policies

An overhaul of partition handling which allows much finer control over whether to allow reads and writes in split clusters, and over how data is reconciled when partitions are merged.

Clustered Counters

An implementation of clustered counters with both strong and weak semantics, threshold events, optional persistence and bounding. Currently these are only available in embedded mode, but they will be usable over Hot Rod in Infinispan 9.2.

Locked Streams

Locked streams allow you to run stream processing operations knowing that no other update can be performed while the Consumer is executed on an entry. Note that this only works in non-transactional and pessimistic transactional caches (optimistic transactional caches are not supported).
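
A sketch of the API (assuming a String-keyed, String-valued cache):

    // Each entry stays locked while the BiConsumer runs, so no concurrent update can interleave.
    cache.getAdvancedCache().lockedStream()
         .forEach((c, entry) -> c.put(entry.getKey(), entry.getValue() + "-processed"));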

API improvements

The compute(), computeIfPresent() and computeIfAbsent() methods on the Cache interface are now implemented as proper distributed operations so that they run local to the entries.
The DeltaAware interface for supporting granular clustered operations has been deprecated in favour of functional commands.

Persistence improvements

The CacheStore SPI now supports write batching. The JDBC, JPA, RocksDB, Remote and File stores have been modified to take advantage of this. You should see great benefits when using write-behind or when using putAll operations.

Remote query with JBoss Marshalling

Remote query now also works with Java entities annotated with Hibernate Search annotations and JBoss Marshalling without requiring ProtoBuf.

HTTP/2 and ALPN support on the REST endpoint

The REST endpoint has been completely rewritten so that it now supports both HTTP/1.1 and HTTP/2 as well as ALPN (even on Java 8). The new endpoint is also 30% faster during reads and 6% faster during writes.

Hot Rod Java client improvements

The Java Hot Rod client now has proper entrySet(), keySet() and values() implementations which iterate over the remote data instead of pulling it all locally.
It is now also finally possible to create and remove caches directly from the client.

Server Administration console improvements

The console has received a number of updates for usability and consistency. It is also finally possible to configure and manage the remote endpoints.

Component upgrades

Hibernate Search 5.8, JGroups 4.0.4, KUBE Ping 1.0.0.Beta1

Bug fixes

We have also dropped the guillotine on a large number of bugs.

If all goes well, we plan to release Infinispan 9.2 at the end of October, with lots of great updates.

So, head over to our download page, consult the upgrading guide and let us know about how you use Infinispan.

Cheers!

The Infinispan team