Friday, 23 January 2015

Infinispan 7.1.0.CR2 released

Dear Infinispan community,

In the spirit of "release early, release often" we are proud to announce the Infinispan 7.1.0.CR2 release. We skipped CR1 because our Nexus repository experienced transient issues after we published the artifacts.

CR2 is mostly a maintenance release. Beta1 brought us a lot of major improvements, which we further ironed out in the CR2 release. There are a few component upgrades and one new feature worth mentioning: exposing cluster- and node-based statistics via DMR.

Feel free to join us and shape the future releases on our forums, our mailing lists or our #infinispan IRC channel.

For a complete list of features and bug fixes included in this release, please refer to the release notes. Visit our downloads section to find the latest release.

Infinispan is back at FOSDEM!


Please join us in Brussels next Saturday, 31st January, to learn more about Infinispan's advanced querying capabilities in the session "Querying your datagrid with Lucene, Hadoop and Spark".


Wednesday, 14 January 2015

A Factory of Atomic Objects

Distributed systems aggregate large numbers of heterogeneous components that are subject to failures and asynchrony. To tame such a capricious nature, systems designers resort to non-blocking techniques such as state machine replication. This approach provides consistent non-blocking operations to a shared object replicated at a quorum of machines. State machine replication is a classical paradigm to consistently orchestrate concurrency between remote processes in a distributed system, and as such a weapon of choice to manage metadata operations. This approach is at work in many services such as Apache ZooKeeper, Google Chubby, or Open Replica.

The (experimental) atomic object factory module is an implementation of the state machine replication paradigm on top of Infinispan. Using the factory is as simple as using the synchronized keyword in Java: it suffices to call it with a Serializable class, and it handles for you the dependability, consistency and liveness guarantees of the instantiated object over multiple Infinispan servers. The factory is universal in the sense that it can instantiate an object of any (serializable) class atop an Infinispan cache, transparently making the object replicated and durable, while ensuring strong consistency despite concurrent access.

Basic Usage

Using the AtomicObjectFactory is fairly simple. We illustrate a basic use case below; additional examples are provided in the Maven test directories.

AtomicObjectFactory factory = new AtomicObjectFactory(c1); // c1 is a cache
Set set = (Set) factory.getInstanceOf(HashSet.class, "k"); // "k" is the storage key
set.add("something"); // some example calls
System.out.println(set.toString());
set.addAll(set);
factory.disposeInstanceOf(HashSet.class, "k", true); // true to persistently store the object

Limitations & Guarantees

The implementation requires that the object itself, as well as all the arguments of its methods, are Serializable. An object created by the factory is atomic provided that the cache backing it is both synchronous and transactional.
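For illustration, here is a minimal sketch of a programmatic configuration that satisfies these requirements (synchronous replication and transactional mode); the cache name "atomic-objects" is an arbitrary assumption, not part of the factory's API:

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.transaction.TransactionMode;

// A clustered cache manager hosting a synchronous, transactional replicated cache.
DefaultCacheManager manager = new DefaultCacheManager(
        GlobalConfigurationBuilder.defaultClusteredBuilder().build());

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.clustering().cacheMode(CacheMode.REPL_SYNC)                   // synchronous
       .transaction().transactionMode(TransactionMode.TRANSACTIONAL); // transactional

manager.defineConfiguration("atomic-objects", builder.build()); // hypothetical name
Cache<Object, Object> c1 = manager.getCache("atomic-objects");  // pass to the factory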

Going Further

White Paper.
The factory is described in Section 4 of the paper titled "On the Support of Versioning in Distributed Key-Value Stores" published at the 33rd IEEE Symposium on Reliable Distributed Systems (SRDS'14). A preprint version of this paper is available at the following location.

High-level Implementation Details.
We built the factory on top of Infinispan's transactional facilities. In more detail, when the object is created, we store both a local copy and a proxy registered as a cache listener. We serialize every call in a transaction consisting of a single put operation. When the call is de-serialized, it is applied to the local copy and, in case the calling process was local, the response value is returned.
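As a rough illustration of the call-shipping idea, here is a minimal sketch based on java.lang.reflect.Proxy; the class and method names are hypothetical and only convey the mechanism, not the actual factory internals:

import java.io.Serializable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: every method call on the proxy would be serialized and
// shipped through the cache as a single transactional put; each listener then
// applies it to its node-local copy.
class CallShippingHandler implements InvocationHandler, Serializable {

    private final Object localCopy; // the node-local replica of the object

    CallShippingHandler(Object localCopy) {
        this.localCopy = localCopy;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
        // The real module marshals the call into a transactional put; here we
        // only apply it to the local copy to keep the sketch self-contained.
        return method.invoke(localCopy, args);
    }
}

// Usage: the factory hands back a proxy implementing the requested interface.
Set<String> set = (Set<String>) Proxy.newProxyInstance(
        Set.class.getClassLoader(),
        new Class<?>[] { Set.class },
        new CallShippingHandler(new HashSet<String>()));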

Friday, 9 January 2015

Infinispan 7.1.0.Beta1

Dear Infinispan community,

We're proud to announce the first Beta release of Infinispan 7.1.0.

Infinispan 7.1.0.Beta1 brings the following major improvements:
  • Near-Cache support for Remote HotRod caches
  • Annotation-based generation of ProtoBuf serializers which removes the need to write the schema files by hand and greatly improves usability of Remote Queries
  • Cluster Listener Event Batching, which coalesces events for better performance
  • Cluster- and node-wide aggregated statistics
  • Vast improvements to the indexing performance
  • Support for domain mode and the security vault in the server
  • Further improvements to partition handling, with many stability fixes and the removal of the Unavailable mode: a cluster can now be either Available or Degraded.
Of course there's also the usual slew of bug fixes, performance and memory usage improvements and documentation cleanups.

Feel free to join us and shape the future releases on our forums, our mailing lists or our #infinispan IRC channel.

For a complete list of features and bug fixes included in this release please refer to the release notes. Visit our downloads section to find the latest release.

Thanks to everyone for their involvement and contribution!

Monday, 5 January 2015

Infinispan 7.0.3.Final released!

Dear community,

the new year brings a new release of the stable Infinispan branch. Infinispan 7.0.3.Final is a bug-fix release with a particular focus on partition handling stability. The release is a drop-in replacement for previous 7.0.x releases; however, be aware that we reverted org.infinispan.commons.util.FileLookup to its 6.0.x behaviour to ease the upgrade for Hibernate 2nd-level cache and WildFly. Sorry about the breakage. Please consult the release notes for details.

Thanks to everyone involved in this release! 

Visit our downloads section to find the latest release.
If you have any questions please check our forums, our mailing lists or ping us directly on IRC.

Thursday, 18 December 2014

Infinispan 7.0.2.Final is a certified JSR-107 1.0 implementation

The infinispan-jcache module in Infinispan 7.0.2.Final has been certified as a compatible implementation of the JSR-107 1.0 specification. To get started with Infinispan's JSR-107 implementation, check the Infinispan documentation section on the topic, and remember that Infinispan also implements the JSR-107 annotations for CDI injection of cached values.
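As a quick taste, here is a minimal sketch of using Infinispan through the standard JCache API; only the javax.cache calls are part of the specification, and the cache name "example" is an arbitrary illustration:

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

// Obtain the default provider (infinispan-jcache on the classpath) and a manager.
CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();

// Create a cache with the portable JCache configuration API.
MutableConfiguration<String, String> config = new MutableConfiguration<>();
Cache<String, String> cache = cacheManager.createCache("example", config);

cache.put("hello", "world");
System.out.println(cache.get("hello")); // prints "world"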

Thanks to everyone who has contributed to the module, and to Greg Luck and Brian Oliver for their help in completing the certification.

Cheers,
Galder

Monday, 15 December 2014

Hot Rod Remote Events #4: Clustering and Failover

This blog post is the last in a series that looks at the Hot Rod Remote Events functionality included in Infinispan 7.0. The first article focused on how to get started receiving remote events from Hot Rod servers. The second article looked at how Hot Rod remote events can be filtered, and the third one showed how to customize the contents of events.

In this last article, we'll be focusing on how remote events are fired in a clustered environment and how failover situations are dealt with.

The most important thing to know about remote events in a clustered environment is that when a client adds a remote listener, it is installed on a single node in the cluster, and that node is in charge of sending events back to the client for all affected operations happening cluster-wide.

As a result of this, when filtering or event customization is applied, the org.infinispan.filter.KeyValueFilter and/or org.infinispan.filter.Converter instances must be marshallable. This is necessary because when the client listener is installed in a cluster, the filter and/or converter instances are sent to the other nodes in the cluster so that filtering and conversion can happen right where the event originates, hence improving efficiency. These classes can be made marshallable by having them implement Serializable, or by providing and registering a custom Externalizer for them.
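For example, here is a minimal sketch of a filter that can travel across the cluster because it implements Serializable; the filtering condition itself is an arbitrary illustration:

import java.io.Serializable;

import org.infinispan.filter.KeyValueFilter;
import org.infinispan.metadata.Metadata;

// Serializable, so the server can ship it to the nodes where events originate.
public class StaticKeyFilter implements KeyValueFilter<Integer, String>, Serializable {
    @Override
    public boolean accept(Integer key, String value, Metadata metadata) {
        return Integer.valueOf(1).equals(key); // example: only pass events for key 1
    }
}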

Under normal circumstances, the code and examples shown in previous blog posts work the same way in a clustered environment. However, in a clustered environment a decision needs to be made about how to deal with situations where nodes go down: if a node goes down that does not have the client listener installed, nothing happens. However, when the node containing the client listener goes down, the Hot Rod client implementation transparently fails over the client listener registration to a different node. As a result of this failover, there could be a gap in the event consumption. This gap can be bridged with one of the following solutions:

State Delivery


The @ClientListener annotation has an optional parameter called includeCurrentState. When this is enabled and the client listener is registered, before receiving any events for ongoing operations the server sends ClientCacheEntryCreatedEvent instances (for methods annotated with @ClientCacheEntryCreated) for all existing cache entries to the client. This offers the client an opportunity to construct some state or computation based on the contents of the clustered cache. When the Hot Rod client transparently fails over registered listeners, it re-registers them on a different node and, if includeCurrentState is enabled, clients can recompute their state or computation to reinstate it to what it was before the failover. The downside of includeCurrentState is that its performance is heavily dependent on the cache size, and hence it is disabled by default.
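A minimal sketch of a listener using this option; the entry-counting logic is just an illustrative assumption:

import java.util.concurrent.atomic.AtomicLong;

import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;

@ClientListener(includeCurrentState = true)
public class EntryCountingListener {
    private final AtomicLong entries = new AtomicLong();

    // On registration (and after a failover re-registration), the server replays
    // a created event per existing entry, so this count can be rebuilt.
    @ClientCacheEntryCreated
    public void handleCreated(ClientCacheEntryCreatedEvent<String> event) {
        entries.incrementAndGet();
    }
}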

@ClientCacheFailover


Alternatively, instead of relying on receiving state, users can define a method with @ClientCacheFailover annotation that receives ClientCacheFailoverEvent as parameter inside the client listener implementation:
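A minimal sketch of such a listener; the cache-clearing body is an illustrative assumption:

import org.infinispan.client.hotrod.annotation.ClientCacheFailover;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheFailoverEvent;

@ClientListener
public class FailoverAwareListener {
    // Invoked when the node holding this listener goes down and the
    // registration fails over to another node.
    @ClientCacheFailover
    public void handleFailover(ClientCacheFailoverEvent event) {
        // e.g. clear a near or L1 cache that events had been keeping up to date
    }
}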


This method would be called back whenever the node that had this client listener has gone down. This can be handy in situations where the end user just wants to clear up some local state as a result of the failover, e.g. clear a near or L1 cache. When events are received again, the near or L1 cache can be repopulated.

This callback method of dealing with client listener failover offers a simple, efficient solution for handling cluster topology changes that affect client listeners. Depending on the remote event use case, this method might be better suited than state delivery.

Final Words


This post marks the end of the remote event series. In future Infinispan versions, we'll continue improving the technology, adding some extra features, and, more importantly, we'll start building higher-level abstractions on top of remote events, such as Hot Rod client near caches.

Cheers,
Galder