Tuesday, 22 December 2009

JBoss Asylum podcast on Infinispan

So for those hankering to hear my voice again, check out JBoss Asylum Episode 7 - a podcast of yours truly talking about Infinispan and my views on cloud data storage. This was recorded at Devoxx 2009 in Antwerp, with Max Andersen and Emmanuel Bernard.


Friday, 18 December 2009

New video demo: Monitoring Infinispan with Jopr console

Over the past few weeks, I've been working on improving how Infinispan is managed and/or monitored and I can finally share some of the results of that effort with you.

In the coming weeks, I'll be sharing some in-depth flash movies explaining everything from installing Jopr, our enterprise management solution for JBoss middleware, to installing the Infinispan Jopr plugin, discovering Infinispan instances automatically or manually, and more.

However, before that, I'd like to share a video demo with you in which I briefly show a three-node Infinispan cluster being monitored. It demonstrates graphical measurements and non-graphical information of running Infinispan instances, addition and removal of monitored metrics, and finally, execution of management operations on an Infinispan instance.

The Infinispan version used in the video was a snapshot of Infinispan 4.0.0, but you should be able to replicate what's shown in the video with Infinispan 4.0.0.CR3 or higher.

Enjoy :)

Friday, 11 December 2009

Infinispan's third release candidate

I'm happy to announce that after a long wait, we've finally released Infinispan 4.0.0.CR3 - the latest and greatest in a series of release candidates, which will hopefully be the last before we cut the final release. As such, it is really important that you try out CR3 - test it, stress it out, and provide as much feedback as possible on the user forums.

In this release, we've:
  • Fixed the dependency issues on RHQ snapshots, which were causing problems for several people
  • Added the ability to configure the cache instance used by the REST server
  • Updated the query API so it is not restricted to String keys
  • Loads more stuff
As always, the full set of changes is on JIRA. Download this release here, and as always, provide feedback here.


Monday, 23 November 2009

Article: "Introducing the Infinispan Data Grid Platform"

I've written the first article in a two-part series introducing Infinispan as a data grid platform, including some basic usage examples and demos. Have a look, it's on DZone: http://java.dzone.com/articles/infinispan-data-grid-platform


Devoxx 2009 recap

We're back from Devoxx 2009, and thanks to Stephan et al for organising an excellent event. There were some brilliant talks, especially on performance tuning, JDK7 and Scala. And of course Infinispan. :)

Once the recording of my talk on Infinispan is up on Parleys.com, I'll link to it here. But for now, you should check out this podcast interview I did with Chariot Solutions' Ken Rimple a couple of weeks back - http://techcast.chariotsolutions.com/index.php?post_id=550487

To see what folks have been tweeting about regarding my talk at Devoxx, check out this search on twitter.com!


Friday, 13 November 2009

Second release candidate for 4.0.0

Hi all

I'm pleased to announce a second release candidate for Infinispan 4.0.0. CR2 builds on CR1, fixing a whole pile of issues reported - thanks for the feedback, everyone! In addition, we have started benchmarking and profiling Infinispan using the CacheBenchFwk project, and based on our findings have tweaked and tuned things accordingly. We will publish results of these tests soon.

This release also brings along another tech preview - the Lucene Directory Provider, courtesy of Google Summer of Code student Lukasz Moren and frequent contributor Sanne Grinovero. Excellent work, guys, finally a distributed, in-memory store for Lucene indexes! This provider is bundled in the Infinispan distro, as is a demo showing off the capabilities of such a directory provider. More details on this wiki page.

For full details on what's changed, have a look at the release notes report in JIRA.

As always, we need feedback, especially as close as we are to a final release. Download this release (or add it as a Maven dependency), and report feedback!


Wednesday, 28 October 2009

First release candidate now available

Infinispan 4.0.0.CR1 is now available for download. This is an important release, containing several critical bug fixes on the last beta. Performance has improved too, with a better default JGroups stack. Many thanks to the multitude of contributors and committers who have worked hard to make this release a possibility.

A full change log is available on JIRA. Downloads and documentation are in the usual place. Please test this release in anger; feedback is critical to a high-quality final release. Please use the user forums to provide such feedback.


Tuesday, 20 October 2009

Follow-up on Infinispan Community Day at Devoxx

A follow-up on the joint SEAM and Infinispan community event at Devoxx: this year, the event will be held on Friday, 20 November, at ViaVia Antwerp (http://www.viaviacafe.com/), with the doors opening at 16:30. The first talk will begin at 17:05, and the second will end by 19:30. Food will follow, along with lots of interesting discussion, Q&A, etc. We look forward to seeing you all there!


Infinispan based Hibernate Cache Provider available now!

Update (2009/11/13)! The Infinispan 4.0.0.Beta2-based Hibernate second level cache provider is now available in Hibernate 3.5 Beta2. However, neither Infinispan 4.0.0.Beta2 nor the Infinispan cache provider jar is available in the zip distribution. Instead, please download Infinispan 4.0.0.Beta2 from our download site and the Infinispan cache provider from our Maven repository.

I've just finished the development of an Infinispan 4.0 based Hibernate second level cache provider. This will be included from the next Hibernate 3.5 release onwards, but if you cannot wait and want to play with it in the meantime, just check out Hibernate trunk from our SVN repository and run 'mvn install'.

I've also written a wiki called "Using Infinispan as JPA/Hibernate Second Level Cache Provider" that should help users understand how to configure the Infinispan cache provider and how to make the most of it!

So, what's good about it? Why should I use it? First of all, since the cache provider is based on Infinispan, you benefit from all the improvements we've made to Infinispan in terms of performance and memory consumption so far, and there are more to come!

On top of that, starting with this cache provider, we're aiming to reduce the number of files users need to modify in order to define the most commonly tweaked parameters. So, for example, by configuring eviction/expiration either for a generic Hibernate data type or for a particular entity/collection type via hibernate.cfg.xml or persistence.xml, users no longer have to touch Infinispan's cache configuration file. You can find detailed information on how to do this in the "Using Infinispan as JPA/Hibernate Second Level Cache Provider" wiki
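For illustration, overriding eviction settings for all entities might look like the following persistence.xml fragment. The exact hibernate.cache.infinispan.* property names and values here are indicative only; please treat them as assumptions and check the wiki page for the definitive list:

```xml
<persistence-unit name="myapp">
  <properties>
    <!-- Illustrative property names: eviction and expiration settings
         applied to the generic 'entity' cache, without editing
         Infinispan's own configuration file -->
    <property name="hibernate.cache.infinispan.entity.eviction.strategy" value="LRU"/>
    <property name="hibernate.cache.infinispan.entity.eviction.max_entries" value="10000"/>
    <property name="hibernate.cache.infinispan.entity.expiration.lifespan" value="60000"/>
  </properties>
</persistence-unit>
```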

Please direct any feedback to the Infinispan user forum.


Wednesday, 14 October 2009

Infinispan Community Day at Devoxx

We're hosting an Infinispan community day in Antwerp in November, just after Devoxx 2009. This will be on Friday, just after the main conference finishes. Expect core Infinispan committers to be around, running demos and answering specific questions. Buy a beer, get a question answered. :-) This session will also be co-hosted alongside the SEAM community session, so you get 2 interesting techs in one go! The venue is TBD but I will post more details on this shortly. It will be close enough to the main conference venue though.

This is a free event, but please register so we can keep track of numbers.

And FYI, I will be speaking at the conference on Infinispan and the future of data grids - I do hope to see you there!


Monday, 5 October 2009

Another beta for Infinispan

Rather than releasing a CR1 so soon, I have decided to cut another beta instead, mainly due to the sheer volume of changes since the last beta. So here is what you have to look forward to. A pile of bugs fixed thanks to reports made by you, the community. A couple of packages upgraded, including JGroups (to 2.8.0.CR2) and JBoss Marshalling (to 1.2.0.CR4). And two new modules introduced: a tech preview of the query API, and the new RESTful server module, thanks to Navin and Michael respectively. A full list of issues addressed in this release is available on JIRA.

We're pushing hard for a series of CRs now, to solidify the codebase and make sure it exceeds expectations in performance, scalability, stability and ease of use. Your feedback has always been valuable, please keep it coming!

This release is available for download in the usual place, and the wiki should be used as your primary port-of-call for documentation. Discuss issues on the user forum, and don't forget to tweet about Infinispan! :-)


Wednesday, 23 September 2009

Infinispan Query breaks into 4.0.0.CR1

Hello all,

Querying is an important feature for Infinispan, so we've decided to include a technology preview of this for 4.0.0.CR1 and 4.0.0.GA, even though it is only really scheduled for Infinispan 4.1.0.

Browse to this wiki page to see how the new API works for querying, along with usage examples.

Some of the API has come from JBoss Cache Searchable, but it has been enhanced and runs more smoothly. A lot more work is being done under the hood to make things easier for users. For example, QueryFactory.getBasicQuery() just needs two Strings and builds a basic Lucene Query instance, as opposed to forcing the user to create a Lucene query manually. Creating queries manually is still possible, however, should a user want a more complex query.

The indexing for Lucene is now done through interceptors as opposed to listeners, and hence more tightly integrated into Infinispan's core.

You can also choose how indexes are maintained. If indexes are shared (perhaps stored on a network-mounted drive), then you only want nodes to index changes made locally. On the other hand, if each node maintains its own indexes (either in-memory or on a local filesystem), then you want each node to index all changes, regardless of where they are made. This behaviour is controlled by a system property: -Dinfinispan.query.indexLocalOnly=true. However, this system property is temporary and will be replaced with a proper configuration property once the feature is out of technology preview.

What's coming up?
Future releases of Hibernate Search and Infinispan will have improvements that will change the way querying works. The QueryHelper class - as documented in the wiki - is temporary and will eventually be removed, as you will not need to provide the class definitions of the types you wish to index upfront. We will be able to detect this on the fly (see HSEARCH-397).

There will be a better system for running distributed queries. And the system properties will disappear in favour of proper configuration attributes.

And also, GSoC student Lukasz Moren's work involving an Infinispan-based Lucene Directory implementation will allow indexes to be shared cluster-wide by using Infinispan itself to distribute these indexes. All very clever stuff.

Thanks for reading!


Tuesday, 22 September 2009


I will be presenting on Infinispan at Devoxx in Antwerp this November. For details, see:

Remember that you can track where the core Infinispan team will be making public appearances on the Infinispan Talks calendar!

So, see you in Antwerp!

Comparing JBoss Cache, Infinispan and Gigaspaces

Chris Wilk has posted a detailed blog comparing features in JBoss Cache, Infinispan and Gigaspaces.

This well-written article is available here:

and is a good starting point for more in-depth analysis and comparison.


Tuesday, 15 September 2009

Introducing the Infinispan (REST) server

Introducing the Infinispan RESTful server!

The Infinispan RESTful server combines the whole grain goodness of RESTEasy (JAX-RS, or JSR-311) with Infinispan to provide a web-ready RESTful data grid.

Recently I (Michael) spoke to Manik about an interesting use case, and he indicated great interest in such a server. It wasn't a huge amount of work to do the initial version - given that JAX-RS is designed to make things easy.

For those that don't know: RESTful design uses the well-proven and established HTTP/web standards for providing services (as a simple alternative to WS-*) - if that still isn't enough, you can read more here. For Infinispan, this means that any type of client can place data in the Infinispan grid.

So what would you use it for?
For non-Java clients, or clients where you need to use HTTP as the transport mechanism for your caching/data grid needs. Perhaps a content delivery network - push data into the grid, let Infinispan spread it around, and serve it out via the nearest server. See here for details on using HTTP and URLs with it.

In terms of clients - you only need HTTP - no binary dependencies or libraries needed (the wiki page has some samples in ruby/python, also in the project source).

Where does it live?
The server is a module in Infinispan under /server/rest (for the moment, we may re-arrange the sources at a later date).

Getting it.
Currently you can download the war from the wiki page, or build it yourself (it is still new - early days). At present this is a war file (tested on JBoss AS and Jetty) which should work in most containers - we plan to deliver a standalone server (with an embedded JBoss AS) Real Soon Now.

Questions: (find me on the dev list, or poke around the wiki).

Implemented in Scala: after chatting with Manik and co, we decided this would serve as a good test bed to "test the waters" with Scala - so this module is written in Scala. It worked just fine with RESTEasy and Infinispan (which one would reasonably expect, but it's nice when things do work as advertised!).

Friday, 28 August 2009

Podcast on Infinispan

So a lot of folks have asked me for a downloadable slide deck from my recent JUG presentations on Infinispan. I've gone a step further and have recorded a short 5 minute intro to data grids and Infinispan as a podcast.


Tuesday, 25 August 2009

First beta now available!

So today I've finally cut the much-awaited Infinispan 4.0.0.BETA1. Codenamed Starobrno - after the Czech beer that was omnipresent during early planning sessions of Infinispan - Beta1 is finally feature-complete. This is also the first release where distribution is complete, with rehashing on joins and leaves implemented as well. In addition, a number of bugs reported on previous alpha releases have been tended to.

This is a hugely important release for Infinispan. No more features will be added to 4.0.0, and all efforts will now focus on stability, performance and squashing bugs. And for this we need your help! Download, try out, feedback. And you will be rewarded with a rock-solid, lightning-fast final release that you can depend on.

Some things that have changed since the alphas include a better mechanism of naming caches and overriding configurations, and a new configuration XML reference guide. Don't forget the 5-minute guide for the impatient, and the interactive tutorial to get you started as well.

There are a lot of folk to thank - way too many to list here, but you all know who you are. For a full set of release notes, visit this JIRA page.

Download, try it out, and feedback as much as possible. We'd love to hear from you!


Friday, 21 August 2009

Distribution instead of Buddy Replication

People have often commented on Buddy Replication (from JBoss Cache) not being available in Infinispan, and have asked how Infinispan's far superior distribution mode works. I've decided to write this article to discuss the main differences from a high level. For deeper technical details, please visit the Infinispan wiki.

Scalability versus high availability
These two concepts are often at odds with one another, even though they are commonly lumped together. What is usually good for scalability isn't always good for high availability, and vice versa. When it comes to clustering servers, high availability often means simply maintaining more copies, so that if nodes fail - and with commodity hardware, this is expected - state is not lost. An extreme case of this is replicated mode, available in both JBoss Cache and Infinispan, where each node is a clone of its neighbour. This provides very high availability, but unfortunately, this does not scale well. Assume you have 2GB per node. Discounting overhead, with replicated mode, you can only address 2GB of space, regardless of how large the cluster is. Even if you had 100 nodes - seemingly 200GB of space! - you'd still only be able to address 2GB since each node maintains a redundant copy. Further, since every node needs a copy, a lot of network traffic is generated as the cluster size grows.
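The capacity arithmetic above can be sketched in a few lines. This is a back-of-the-envelope illustration only; the choice of 2 owners (replicas) per entry in distributed mode is an assumption, as the replica count is configurable:

```java
public class CapacitySketch {
    // Total addressable space (GB) in replicated mode: every node holds a full
    // copy, so capacity is capped at a single node's capacity.
    static long replicatedCapacity(long nodeCapacityGb) {
        return nodeCapacityGb;
    }

    // Total addressable space (GB) in distributed mode: each entry lives on a
    // fixed number of owner nodes, so capacity grows linearly with cluster size.
    static long distributedCapacity(long nodeCapacityGb, int nodes, int numOwners) {
        return nodes * nodeCapacityGb / numOwners;
    }

    public static void main(String[] args) {
        // The 100-node, 2 GB-per-node example from the text, assuming 2 owners per entry
        System.out.println("replicated: " + replicatedCapacity(2) + " GB");           // 2 GB
        System.out.println("distributed: " + distributedCapacity(2, 100, 2) + " GB"); // 100 GB
    }
}
```

The same 100 nodes that address only 2GB under replication address 100GB under distribution with two copies of each entry.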

Enter Buddy Replication
Buddy Replication (BR) was originally devised as a solution to this scalability problem. BR does not replicate state to every other node in the cluster. Instead, it chooses a fixed number of 'backup' nodes and replicates only to those backups. The number of backups is configurable, but the point is that it is fixed: it does not grow with the cluster. BR improved scalability significantly and showed near-linear scalability with increasing cluster size. This means that as more nodes are added to a cluster, the available space grows linearly, as does the available computing power measured in transactions per second.

But Buddy Replication doesn't help everybody!
BR was specifically designed around the HTTP session caching use-case for the JBoss Application Server, and heavily optimised accordingly. As a result, session affinity is mandated, and applications that do not use session affinity can be prone to a lot of data gravitation and 'thrashing' - data is moved back and forth across a cluster as different nodes attempt to claim 'ownership' of state. Of course this is not a problem with JBoss AS and HTTP session caching - session affinity is recommended, available on most load balancer hardware and/or software, is taken for granted, and is a well-understood and employed paradigm for web-based applications.

So we had to get better
Just solving the HTTP session caching use-case wasn't enough. A well-performing data grid needs to do better, and crucially, session affinity cannot be taken for granted. This was the primary reason for not porting BR to Infinispan. As such, Infinispan does not and will not support BR, as it is too restrictive.

Distribution is a new cache mode in Infinispan. It is also the default clustered mode - as opposed to replication, which isn't scalable. Distribution makes use of familiar concepts in data grids, such as consistent hashing, call proxying and local caching of remote lookups. What this leads to is a design that does scale well - fixed number of replicas for each cache entry, just like BR - but no requirement for session affinity.
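To make the consistent-hashing idea concrete, here is a toy sketch. This is not Infinispan's actual DefaultConsistentHash implementation; the class and method names are invented for illustration. Nodes sit on a hash ring, and an entry's owners are the next numOwners nodes clockwise from the entry's hash:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Toy consistent hash: any node can compute the same fixed-size owner list for
// a key without coordination, which is why no session affinity is needed.
public class ToyConsistentHash {
    private final SortedMap<Integer, String> ring = new TreeMap<>();
    private final int numOwners;

    public ToyConsistentHash(List<String> nodes, int numOwners) {
        this.numOwners = numOwners;
        for (String node : nodes) ring.put(hash(node), node);
    }

    private int hash(Object o) {
        return o.hashCode() & 0x7FFFFFFF; // non-negative position on the ring
    }

    public List<String> locate(Object key) {
        List<String> owners = new ArrayList<>();
        // Walk clockwise from the key's position...
        for (String node : ring.tailMap(hash(key)).values()) {
            if (owners.size() == numOwners) break;
            owners.add(node);
        }
        // ...wrapping around the start of the ring if needed.
        for (String node : ring.values()) {
            if (owners.size() == numOwners) break;
            if (!owners.contains(node)) owners.add(node);
        }
        return owners;
    }
}
```

Every caller computes the same owner set for a given key, so lookups can be proxied to an owner from anywhere in the cluster, and only the fixed number of owners ever stores the entry.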

What about co-locating state?
Co-location of state - moving entries about as a single block - was automatic and implicit with BR. Since each node always picked a backup node for all its state, one could visualize all of the state on a given node as a single block. Thus, colocation was trivial and automatic: whatever you put in Node1 would always be together, even if Node1 eventually died and the state was accessed on Node2. However, this meant that state could not be evenly balanced across a cluster, since the data blocks are very coarse-grained.
With distribution, colocation is not implicit. Partly due to the use of consistent hashing to determine where each cached entry resides, and partly due to the finer-grained cache structure of Infinispan - key/value pairs instead of a tree structure - individual entries become the granularity of state blocks. This means nodes can be far better balanced across a cluster. However, it also means that certain optimizations which rely on co-location - such as keeping related entries close together - are a little more tricky.

One approach to co-locate state would be to use containers as values. For example, put all entries that should be colocated together into a HashMap, then store the HashMap in the cache. But that is a coarse-grained and ugly approach, and it means that the entire HashMap needs to be locked and serialized as a single atomic unit, which can be expensive if the map is large.

Another approach is to use Infinispan's AtomicMap API. This powerful API lets you group entries together so they will always be colocated and locked together, while replication is much finer-grained, allowing only deltas to the map to be replicated. That makes replication fast and performant, but it still means everything is locked as a single atomic unit. While this is necessary for certain applications, it isn't always desirable.

One more solution is to implement your own ConsistentHash algorithm - perhaps extending DefaultConsistentHash. This implementation would have knowledge of your object model and would hash related instances such that they are located together in the hash space. This is by far the most complex mechanism, but if performance and co-location really are hard requirements, you cannot do better than this approach.

In summary:

Buddy Replication
  • Near-linear scalability
  • Session affinity mandatory
  • Co-location automatic
  • Applicable to a specific set of use cases due to the session affinity requirement

Distribution
  • Near-linear scalability
  • No session affinity needed
  • Co-location requires special treatment, ranging in complexity based on performance and locking requirements. By default, no co-location is provided
  • Applicable to a far wider range of use cases, and hence the default highly scalable clustered mode in Infinispan
Hopefully this article has sufficiently interested you in distribution, and has whetted your appetite for more. I would recommend the Infinispan wiki which has a wealth of information including interactive tutorials and GUI demos, design documents and API documentation. And of course you can't beat downloading Infinispan and trying it out, or grabbing the source code and looking through the implementation details.


Defining cache configurations via CacheManager in Beta1

Infinispan's first beta release is just around the corner and in preparation, I'd like to introduce to the Infinispan users an important API change in org.infinispan.manager.CacheManager class that will be part of this beta release.

As a result of the development of the Infinispan second level cache provider for Hibernate, we have discovered that the CacheManager API for definition and retrieval of Configuration instances was a bit limited. So, for this coming release, the following method has been deleted:
void defineCache(String cacheName, Configuration configurationOverride)

And instead, the following two methods have been added:
Configuration defineConfiguration(String cacheName, Configuration configurationOverride);
Configuration defineConfiguration(String cacheName, String templateCacheName,
Configuration configurationOverride);

The primary driver for this change has been the development of the Infinispan cache provider, where we wanted to enable users to configure or override the most commonly modified Infinispan parameters via the Hibernate configuration file. This avoids users having to modify different files for the most commonly modified parameters, hence improving the usability of the Infinispan cache provider. However, to be able to implement this, we needed the CacheManager API to be enhanced so that:

- Existing defined cache configurations can be overridden. This enables use cases like the following: a sample Infinispan cache provider configuration contains a generic cache definition to be used for entities. Via the Hibernate configuration file, users can redefine the maximum number of entries allowed before eviction kicks in for all entities. The code would look something like this:
// Assume that 'cache-provider-configs.xml' contains
// a named cache for entities called 'entity'
CacheManager cacheManager = new DefaultCacheManager("cache-provider-configs.xml");
Configuration overridingConfiguration = new Configuration();
overridingConfiguration.setEvictionMaxEntries(20000); // max entries set to 20,000
// Override the existing 'entity' configuration so that eviction max entries is 20,000
cacheManager.defineConfiguration("entity", overridingConfiguration);

- New cache configurations can be defined based on the configuration of a given cache instance, optionally applying some overrides. This enables use cases like the following: a user wants to define an eviction wake-up interval for a specific entity that differs from the wake-up interval used for the rest of the entities.
// Assume that 'cache-provider-configs.xml' contains
// a named cache for entities called 'entity'
CacheManager cacheManager = new DefaultCacheManager("cache-provider-configs.xml");
Configuration overridingConfiguration = new Configuration();
overridingConfiguration.setEvictionWakeUpInterval(240000); // wake up interval set to 240 seconds
// Create a new cache configuration for the com.acme.Person entity,
// based on the 'entity' configuration, overriding the wake up interval to 240 seconds
cacheManager.defineConfiguration("com.acme.Person", "entity", overridingConfiguration);

Another limitation of the previous API, which we've solved with this API change, is that previously the only way to get a cache's Configuration required the cache to be started, because the only way to get the Configuration instance was via the Cache API. With this API change, we can now retrieve a cache's Configuration instance via the CacheManager API. Example:
// Assume that 'cache-provider-configs.xml' contains
// a named cache for entities called 'entity'
CacheManager cacheManager = new DefaultCacheManager("cache-provider-configs.xml");
// Pass a brand new Configuration instance without overrides
// and it will return the given cache name's Configuration
Configuration entityConfiguration =
    cacheManager.defineConfiguration("entity", new Configuration());

If you would like to provide any feedback on this post, either respond to this blog entry or go to Infinispan's user forums.

Wednesday, 12 August 2009

Coalesced Asynchronous Cache Store

As we prepare for Infinispan's beta release, let me introduce to you one of the recent enhancements implemented which improves the way the current asynchronous (or write-behind) cache store works.

Until now, the asynchronous cache store simply queued modifications while a set of threads applied them. However, if the queue contained N put operations on the same key, these threads would apply each and every modification one after the other, which is not very efficient.

Thanks to the excellent feedback from the Infinispan community, we've now improved the asynchronous cache store so that it coalesces changes and only applies the latest modification on a key. So, if N put operations on the same key are queued, only the last modification will be applied to the cache store.

Internally, the asynchronous queueing mechanism performs in O(1) by keeping a map with the latest value for each key. This map acts like the queue, but there's no need for an actual queue: we only care about making sure the latest values are stored, so order is not important.
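A minimal sketch of this coalescing idea follows. This is not the actual Infinispan implementation - the class and method names are invented for illustration - but it shows how a map keyed by cache key keeps only the newest queued value, so the store-writer threads touch each key once:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy write-behind coalescing: enqueueing a modification for a key simply
// overwrites any pending value for that key, so the async store thread only
// ever applies the latest modification per key.
public class CoalescingQueue<K, V> {
    private final Map<K, V> pending = new ConcurrentHashMap<>();

    public void enqueue(K key, V value) {
        pending.put(key, value); // O(1); any older pending value for key is dropped
    }

    // Called by the async store thread: drain whatever is pending right now.
    // Note: a real implementation must handle the race between snapshotting
    // and removal (e.g. with conditional removes); this sketch ignores that.
    public Map<K, V> drain() {
        Map<K, V> snapshot = new ConcurrentHashMap<>(pending);
        snapshot.keySet().forEach(pending::remove);
        return snapshot;
    }
}
```

Queue three puts to the same key, and a drain yields a single entry holding the last value - the first two writes never reach the store.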

Note that these threads start working as soon as any changes are available, so to see changes coalesced, the system needs to be relatively busy, or a lot of changes to the same key need to happen in a relatively short period of time. We could have made these threads work periodically, i.e. every X seconds, but by doing that, we would be letting modifications pile up, and the time between operations and cache store updates would grow, increasing the chance of the cache store being outdated.

Finally, no configuration modifications are required to get the asynchronous cache store to work in this coalesced way; it just works like this out of the box. Example:

<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:infinispan:config:4.0">
   <namedCache name="persistentCache">
      <loaders passivation="false" shared="false" preload="true">
         <loader class="org.infinispan.loaders.file.FileCacheStore" fetchPersistentState="true" ignoreModifications="false" purgeOnStartup="false">
            <property name="location" value="/tmp"/>
            <async enabled="true" threadPoolSize="10"/>
         </loader>
      </loaders>
   </namedCache>
</infinispan>

Thursday, 30 July 2009

More JUG talks

Adrian Cole and I recently presented at JUGs in Krakow and Dublin, on Infinispan and JClouds.

Krakow was great - and hot: 39 degree weather! (Don't we just love global warming?) Anna Kolodziejczyk at Proidea (who also organises JDD) organised the event, which attracted an excited and interactive audience of about 40 people. Dublin, beautiful as always, was a polar opposite in terms of weather: a cloudy, windy and wet day, living up to its reputation. Luan O'Carroll of DUBJUG organised the event, at the plush Odessa Club. Again, an inquisitive audience of about 35 people attended, with a lot of questions on the future of data storage in clouds. Thomas Diesler of JBoss OSGi fame made a surprise guest appearance too.

In general, the talks have been very well received and have provoked thought and discussion. As requested by many, I will soon be recording a podcast of this talk and will make it available on this blog.

Apart from a JUG I hope to organise soon in London, the next time I speak about Infinispan publicly will be at JBoss World in September. Hope to see you there!


Monday, 27 July 2009

Increase transactional throughput with deadlock detection

Deadlock detection is a new feature in Infinispan, aimed at increasing the number of transactions that can be processed concurrently. Let's start with the problem first (the deadlock), then discuss some design details and performance.

So, the by-the-book deadlock example is the following:
  • Transaction one (T1) performs the following operation sequence: (write key_1, write key_2)
  • Transaction two (T2) performs the following sequence: (write key_2, write key_1)
Now, if T1 and T2 happen at the same time and both have executed their first operation, they will wait for each other virtually forever to release the locks they own on those keys. In the real world, the waiting period is bounded by a lock acquisition timeout (LAT) - which defaults to 10 seconds - that allows the system to overcome such scenarios and respond to the user one way (success) or the other (failure): after a period of LAT, one (or both) of the transactions will roll back, allowing the other to continue working.

Deadlocks are bad for both the system's throughput and the user experience. System throughput is affected because during the deadlock period (which might extend up to the LAT) no other thread can update either key_1 or key_2. Even worse, access to any other keys modified by T1 or T2 is similarly restricted. User experience suffers because the call(s) will freeze for the entire deadlock period, and there's also a chance that both T1 and T2 will roll back by timing out.

As a side note, in the previous example, if the code running the transactions would (and can) enforce some ordering on the keys accessed within a transaction, then the deadlock would be avoided. E.g., if the application code ordered operations by the lexicographic order of the keys, both T1 and T2 would execute the sequence (write key_1, write key_2), and no deadlock would result. This is a best practice and should be followed whenever possible.
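The lock-ordering best practice can be sketched as follows. This uses plain java.util.concurrent locks purely to illustrate the idea at the application level; inside Infinispan, locking is handled by its own lock manager, and the class here is invented for illustration:

```java
import java.util.Map;
import java.util.SortedSet;
import java.util.TreeSet;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Acquiring per-key locks in a globally consistent (lexicographic) order means
// two transactions touching {key_1, key_2} can never deadlock: both always
// lock key_1 before key_2, regardless of the order their code names the keys.
public class OrderedLocking {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    public void runWithKeys(SortedSet<String> keys, Runnable work) {
        // SortedSet iteration is lexicographic, so lock order is globally consistent.
        keys.forEach(k -> locks.computeIfAbsent(k, x -> new ReentrantLock()).lock());
        try {
            work.run();
        } finally {
            keys.forEach(k -> locks.get(k).unlock());
        }
    }

    public static void main(String[] args) {
        OrderedLocking o = new OrderedLocking();
        // T2 names its keys as (key_2, key_1), but the TreeSet still yields
        // the locking order (key_1, key_2), matching T1.
        o.runWithKeys(new TreeSet<>(java.util.Arrays.asList("key_2", "key_1")),
                      () -> System.out.println("T2 done"));
    }
}
```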
Enough with the theory! The way Infinispan performs deadlock detection is based on an algorithm designed by Jason Greene and Manik Surtani, which is detailed here. The basic idea is to split the LAT into smaller cycles, as follows:

lock(int lockAcquisitionTimeout) {
   while (currentTime < startTime + lockAcquisitionTimeout) {
      if (acquire(smallTimeout)) break;
      testForDeadlock(globalTransaction, key);
   }
}

What testForDeadlock(globalTransaction, key) does is check whether there is another transaction that satisfies both conditions:
  1. holds a lock on key, and
  2. intends to lock a key that is currently locked by this transaction.
If such a transaction is found then this is a deadlock, and one of the running transactions will be interrupted. The decision of which transaction to interrupt is based on a coin toss: a random number associated with each transaction. This ensures that only one transaction rolls back, and the decision is deterministic: nodes and transactions do not need to communicate with each other to determine the outcome.
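The deterministic "coin toss" can be sketched as follows - a hypothetical illustration of the idea, not Infinispan's actual code. Each transaction draws a random number once, at start; both deadlocked parties then evaluate the same comparison locally, so they agree on the victim without exchanging any messages:

```java
import java.util.Random;

public class CoinToss {
   static final Random RND = new Random();

   static class Tx {
      final long coinToss = RND.nextLong(); // drawn once, at tx start
      final String id;
      Tx(String id) { this.id = id; }
   }

   // Both deadlocked transactions evaluate this locally and reach the same
   // answer: the tx with the smaller value rolls back (ties broken by id,
   // so the outcome is always deterministic).
   static Tx victim(Tx a, Tx b) {
      if (a.coinToss != b.coinToss)
         return a.coinToss < b.coinToss ? a : b;
      return a.id.compareTo(b.id) < 0 ? a : b;
   }
}
```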

Deadlock detection in Infinispan comes in two flavors: detecting deadlocks in transactions that span several caches, and in transactions running on a single (local) cache.

Let's see some performance figures as well. A class for benchmarking performance of deadlock detection functionality was created and can be seen here. Test description (from javadoc):

We use a fixed-size pool of keys (KEY_POOL_SIZE) on which each transaction operates. A number of threads (THREAD_COUNT) repeatedly start transactions and try to acquire locks on a random subset of this pool, by executing put operations on each key. If all locks were successfully acquired then the tx tries to commit: only if it succeeds is this tx counted as successful. The number of elements in this subset is the transaction size (TX_SIZE). The greater the transaction size, the higher the chance of a deadlock occurring. On each thread these transactions are executed repeatedly (each time on a different, random key set) for a given time interval (BENCHMARK_DURATION). At the end, the number of successful transactions from each thread is summed, and this defines the throughput (successful tx) per time unit (by default one minute).
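The shape of that benchmark can be sketched like this - plain Java locks with a timed tryLock standing in for Infinispan transactions; the constants and names are mine, not those of the actual benchmark class:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class ContentionBenchmark {
   static final int KEY_POOL_SIZE = 10, THREAD_COUNT = 4, TX_SIZE = 3;
   static final long DURATION_MS = 200;

   public static int run() throws InterruptedException {
      final List<ReentrantLock> pool = new ArrayList<ReentrantLock>();
      for (int i = 0; i < KEY_POOL_SIZE; i++) pool.add(new ReentrantLock());
      final AtomicInteger successful = new AtomicInteger();
      final long deadline = System.currentTimeMillis() + DURATION_MS;
      ExecutorService exec = Executors.newFixedThreadPool(THREAD_COUNT);
      for (int t = 0; t < THREAD_COUNT; t++) {
         exec.submit(new Runnable() {
            public void run() {
               Random rnd = new Random();
               while (System.currentTimeMillis() < deadline) {
                  // pick a random TX_SIZE-element subset of the key pool
                  List<ReentrantLock> keys = new ArrayList<ReentrantLock>();
                  while (keys.size() < TX_SIZE) {
                     ReentrantLock k = pool.get(rnd.nextInt(KEY_POOL_SIZE));
                     if (!keys.contains(k)) keys.add(k);
                  }
                  List<ReentrantLock> held = new ArrayList<ReentrantLock>();
                  boolean ok = true;
                  for (ReentrantLock k : keys) {
                     try {
                        if (k.tryLock(10, TimeUnit.MILLISECONDS)) held.add(k);
                        else { ok = false; break; } // "lock timeout": tx rolls back
                     } catch (InterruptedException e) { ok = false; break; }
                  }
                  if (ok) successful.incrementAndGet(); // all locks held: "commit"
                  for (ReentrantLock k : held) k.unlock();
               }
            }
         });
      }
      exec.shutdown();
      exec.awaitTermination(10, TimeUnit.SECONDS);
      return successful.get();
   }
}
```

Raising TX_SIZE relative to KEY_POOL_SIZE drives up contention, which is exactly the knob the disclaimer below suggests tuning to match your application.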

Disclaimer: the following figures are for a scenario especially designed to force very high contention. This is not typical, and you shouldn't expect this level of performance increase for applications with lower contention (which is most likely the case). Please feel free to tune the above benchmark class to fit the contention level of your application; sharing your experience would be very useful!

The following diagram shows the performance degradation resulting from running the deadlock detection code by itself, in a scenario where no contention/deadlocks are present.
Some clues on when to enable deadlock detection: a high number of transactions rolling back due to org.infinispan.util.concurrent.TimeoutException is an indicator that this functionality might help. TimeoutException can have other causes as well, but deadlocks will always result in this exception being thrown. Generally, when you have high contention on a set of keys, deadlock detection may help. But the best approach is not to guess at the performance improvement but to benchmark and monitor it: statistics (e.g. the number of deadlocks detected) are exposed over JMX via the DeadlockDetectingLockManager MBean.

Monday, 20 July 2009

Berlin and Stuttgart say hello to Infinispan

Last week I finally put together my presentation on cloud computing and Infinispan. To kick things off, I presented it at two JUG events in Germany.

Berlin's Brandenburg JUG organised an event at the NewThinking Store in Berlin's trendy Mitte district. Thanks to Tobias Hartwig and Ralph Bergmann for organising the event, which drew an audience of about 35 people. Cloud computing was the focus of the evening, and I started the event with my rather lengthy presentation on cloud computing and specific issues around persisting data in a cloud. The bulk of the presentation focused on Infinispan, what it provides as a data grid platform, and what's on the roadmap. After a demo and a short break, Infinispan committer Adrian Cole then spoke about JClouds, demonstrating Infinispan's use of JClouds to back cached state onto Amazon's S3. You can read more about Adrian's presentation on his blog.

Two days later, the Stuttgart JUG arranged for me to speak to their JBoss Special Interest Group on Infinispan. Thanks to Tobias Frech and Heiko Rupp for organising this event, which was held in one of Red Hat's training rooms in Stuttgart. The presentation followed a similar pattern to what was presented in Berlin, to an audience of about 15 people.

In both cases, there was an overwhelming interest in Infinispan as a distributed storage engine. The JPA interface, which is on our roadmap, generated a lot of interest, as did the query API and, to a lesser extent, the asynchronous API - which could benefit from a better example in my presentation to demonstrate why it really is a powerful thing.

Overall, it is good to see that folks are interested in and are aware of the challenges involved in data storage on clouds, where traditional database usage is less relevant.

Many people have asked me for downloadable versions of my slides. Rest assured I will put them up - either as PDFs or better still, as a podcast - over the next 2 weeks.

Coming up, I will be in Krakow speaking at their JUG on Thursday the 23rd, and then in Dublin on Tuesday the 29th. Details of these two events are on the Infinispan Talks Calendar. Hope to see you there!


Friday, 17 July 2009

First experiences presenting Infinispan

Last week was one of the most exciting weeks for me since joining the Infinispan team because for the very first time, I was going to present Infinispan to the world :)

First, last Tuesday I introduced Infinispan to Switzerland's Java User Group, where a crowd of around 20 people learned about the usability improvements introduced, the performance and memory consumption enhancements, and forthcoming new features. To finish the presentation, I showed the audience a demo of 3 distributed Infinispan instances connected to an Amazon S3 cache store via JClouds. I received some very positive feedback from the attendees who, in particular, were interested in finding out the differences between grid and cloud computing.

Two days later I went to Brussels to give the same presentation for Belgium's JBoss User Group, and the reaction was even better there! The presentation drew a lot of Spring developers who were very keen on integrating Infinispan into their own projects.

From here I'd like to thank all the people who attended these two sessions and in particular the organizers, Jakob Magun and Phillip Oser from Switzerland's Java User Group and Joris De Winne from Belgium's JBoss User Group.


Monday, 13 July 2009

4.0.0.ALPHA6 - another alpha for Infinispan.

Yes, we've felt the need for one more Alpha. This alpha contains a number of bug fixes over Alpha5, as well as some new minor features. Please have a look at the release notes for details.

In addition to code changes, Vladimir Blagojevic has contributed a Doclet to generate a configuration reference. Check this out here. While not all config elements are properly annotated in this release - and as such the configuration reference is somewhat sparse - thanks to this tool, a more complete and up-to-date configuration reference is something you can look forward to in future releases.

Further, Alejandro Montenegro has started compiling steps for an interactive tutorial. Making use of a Groovy shell, this tutorial guides readers through most of Infinispan's APIs in an interactive manner that will hopefully make it easy to learn about Infinispan. Please do give this a try and provide Alejandro with feedback!

Please download and try out this release, and feed back with your thoughts and experiences!


Friday, 10 July 2009


I will be presenting on Infinispan, data grids and the data fabric of clouds at JBoss World Chicago, in September 2009. I will cover a brief history of Infinispan and the motivations behind the project, and then talk in a more abstract manner about data grids and the special place they occupy in providing data services for clouds.

In addition, I expect to pontificate on my thoughts on clouds and the future of computing in general to anyone who buys me a coffee/beer! :-)

So go on, convince your boss to let you go, and attend my talk, and hopefully I'll see you there!


Monday, 6 July 2009

Upcoming JUG and JBUG talks on Infinispan

I will be speaking at a number of JUGs around Europe this month. In addition, other core Infinispan devs will also be making JUG and conference appearances. I've put together a calendar of events which you can track, or add to your Google calendar to monitor.


This is a great chance for folks to learn about Infinispan, cloud and distributed computing, and where the project is headed. Hope to see you at one or more of these events!


Thursday, 18 June 2009

Executing TestNG tests relying on @Parameters from Eclipse

If you want to run TestNG tests that rely on @Parameters, like the one below, from Eclipse, you need to pass some value for the 'basedir' parameter, otherwise Eclipse will complain:

@Test(groups = "unit", enabled = true, testName = "loaders.bdbje.BdbjeCacheStoreIntegrationTest")
public class BdbjeCacheStoreIntegrationTest extends BaseCacheStoreTest {

   private String tmpDirectory;

   @Parameters("basedir")
   protected void setUpTempDir(String basedir) {
      tmpDirectory = basedir + TestingUtil.TEST_PATH + File.separator + getClass().getSimpleName();
   }
}

Having looked around on the web, it's not clear how to do this, and some people even seem to claim that it's not doable. However, looking at how Maven deals with this, you simply have to pass the parameter as a system property and it will work. So, if you want to run BdbjeCacheStoreIntegrationTest from Eclipse, simply pass a system property as a VM argument in the run configuration, e.g. (any writable directory will do):

-Dbasedir=/tmp
Otherwise, Eclipse will moan with a message like this:
Parameter 'basedir' is required by @Configuration on method setUpTempDir

Saturday, 13 June 2009

High-five for Alpha5

I've just released Infinispan 4.0.0.ALPHA5. Yes, I know you were expecting Beta1 already, but it is taking a little longer than anticipated. Anyway, Alpha5 has got some cool new stuff you'd definitely want to check out.
  • Migration scripts for EHCache
  • Internal performance improvements
  • Newer, faster JBoss Marshalling
As you know, your feedback is important to us, so please do download and try out what will hopefully be the last Alpha before we start releasing Betas.

A full changelog is on JIRA, and this release can be downloaded on our downloads page.


Tuesday, 9 June 2009

Blogger and syntax highlighting of code

If you happen to write a lot of tech articles on Blogger and frequently use code examples, it can be frustrating that Blogger has no built-in support for syntax highlighting. After some frustrating initial experiments with my own styles for code snippets, I found this excellent article:


In no time flat, I have converted all of my previous blog posts and they are now properly highlighted, complete with line numbers. :-) Lovely stuff.


Tuesday, 2 June 2009

Pimp your desktop with an Infinispan wallpaper!

The boys and girls on JBoss.org's creative team have come up with a kick-ass desktop and iPhone wallpaper for Infinispan. Check these out, pimp your desktop today!



Another alpha for Infinispan

Yes, Infinispan 4.0.0.ALPHA4 is ready for a sound thrashing.

What's new? Galder Zamarreño's recent contribution - ripping out the marshalling framework Infinispan "inherited" from JBoss Cache and replacing it with JBoss Marshalling - has made the marshalling code much leaner, more modular and more testable, and comes with a nifty performance boost too. What's also interesting is that he has overcome issues with object stream caching (see my blog on the subject) by using JBoss Marshalling streams which can be reset. This too provides a very handy performance boost for short lived streams. (See ISPN-42, ISPN-84)

Mircea Markus has put together a bunch of migration scripts to migrate your JBoss Cache 3.x configuration to Infinispan. More migration scripts are on their way. (See ISPN-53, ISPN-54)

Vladimir Blagojevic has contributed the new lock() API - which allows for explicit, eager cluster-wide locks. (See ISPN-48)

Heiko Rupp has contributed the early days of a JOPR plugin, allowing Infinispan instances to be managed by JBoss AS 5.1.0's embedded console as well as other environments. Read his guide to managing Infinispan with JOPR for more details.

And I've implemented some enhancements to the Future API. Now, rather than returning Futures, the xxxAsync() methods return a NotifyingFuture. NotifyingFuture extends Future, adding the ability to register a notifier such that the caller can be notified when the Future completes. Note that Future completion could mean any of successful completion, exception or cancellation, so the listener should check the state of the Future using get() on notification. For example:

NotifyingFuture<Void> f = cache.clearAsync().attachListener(new FutureListener<Void>() {
   public void futureDone(Future<Void> f) {
      try {
         f.get(); // throws if the operation failed or was cancelled
         System.out.println("clear operation succeeded");
      } catch (Exception e) {
         System.out.println("clear operation failed or was cancelled: " + e);
      }
   }
});
The full change log for this release is available on JIRA. Download this release, and provide feedback on the Infinispan user forums.

Onward to Beta1!


Thursday, 14 May 2009

Alpha3 ready to rumble!

So I've just tagged and cut Infinispan 4.0.0.ALPHA3. (Why are we starting with release 4.0.0? Read our FAQs!)

As I mentioned recently, I've implemented an uber-cool new asynchronous API for the cache and am dying to show it off/get some feedback on it. Yes, Alpha3 contains the async APIs. Why is this so important? Because it allows you to get the best of both worlds when it comes to synchronous and asynchronous network communications, and harnesses the parallelism and scalability you'd naturally expect from a halfway-decent data grid. And, as far as I know, we're the first distributed cache - open or closed source - to offer such an API.

The release also contains other fixes, performance and stability improvements, and better javadocs throughout. One step closer to a full release.

Enjoy the release - available on our download page - and please do post feedback on the Infinispan User Forums.


Wednesday, 13 May 2009

What's so cool about an asynchronous API?

Inspired by some thoughts from a recent conversation with JBoss Messaging's Tim Fox, I've decided to go ahead and implement a new, asynchronous API for Infinispan.

To sum things up, this new API - additional methods on Cache - allow for asynchronous versions of put(), putIfAbsent(), putAll(), remove(), replace(), clear() and their various overloaded forms. Unimaginatively called putAsync(), putIfAbsentAsync(), etc., these new methods return a Future rather than the expected return type. E.g.,

V put(K key, V value);
Future<V> putAsync(K key, V value);

boolean remove(K key, V value);
Future<Boolean> removeAsync(K key, V value);

void clear();
Future<Void> clearAsync();

// ... etc ...

You guessed it, these methods do not block. They return immediately, and how cool is that! If you care about return values - or indeed simply want to wait until the operation completes - you do a Future.get(), which will block until the call completes. Why is this useful? Mainly because, in the case of clustered caches, it allows you to get the best of both worlds when it comes to synchronous and asynchronous mode transports.

Synchronous transports are normally recommended because of the guarantees they offer - the caller always knows that a call has properly propagated across the network, and is aware of any potential exceptions. However, asynchronous transports give you greater parallelism. You can start on the next operation even before the first one has made it across the network. But this is at a cost: losing out on the knowledge that a call has safely completed.

With this powerful new API though, you can have your cake and eat it too. Consider:

Cache<String, String> cache = getCache();
Future<String> f1 = cache.putAsync(k1, v1);
Future<String> f2 = cache.putAsync(k2, v2);
Future<String> f3 = cache.putAsync(k3, v3);

f1.get();
f2.get();
f3.get();


The network calls - possibly the most expensive part of a clustered write - involved for the 3 put calls can now happen in parallel. This is even more useful if the cache is distributed, and k1, k2 and k3 map to different nodes in the cluster - the processing required to handle the put operation on the remote nodes can happen simultaneously, on different nodes. And all the same, when calling Future.get(), we block until the calls have completed successfully. And we are aware of any exceptions thrown. With this approach, elapsed time taken to process all 3 puts should - theoretically, anyway - only be as slow as the single, slowest put().

This new API is now in Infinispan's trunk and yours to enjoy. It will be a main feature of my next release, which should be out in a few days. Please do give it a spin - I'd love to hear your thoughts and experiences.

Tuesday, 12 May 2009

Implementing a performant, thread-safe ordered data container

To achieve efficient ordering of entries in the DataContainer interface for configurations that support eviction, there was a need for a linked HashMap implementation that was thread-safe and performant. Below, I specifically discuss the implementations of the FIFODataContainer and LRUDataContainer in Infinispan 4.0.x. Wherever this document references FIFODataContainer, this also applies to LRUDataContainer - which extends FIFODataContainer. The only difference is that LRUDataContainer updates links whenever an entry is visited as well as added.

After analysing and considering a few different approaches, the one I settled on is a subset of the algorithms described by H. Sundell and P. Tsigas in their 2008 paper titled Lock-Free Deques and Doubly Linked Lists, combined with the approach used by Sun's JDK6 for reference marking in ConcurrentSkipListMap's implementation.

Reference marking? What's that?

Compare-and-swap (CAS) is a common technique today for atomically updating a variable or a reference without the use of a mutual exclusion mechanism like a lock. But this only works when you modify a single memory location at a time, be it a reference or a primitive. Sometimes you need to atomically update two separate bits of information in a single go, such as a reference, as well as some information about that reference. Hence reference marking. In C, this is sometimes done by making use of the assumption that an entire word in memory is not needed to store a pointer to another memory location, and some bits of this word can be used to store additional flags via bitmasking. This allows for atomic updates of both the reference and this extra information using a single CAS operation.

This is possible in Java too using AtomicMarkableReference, but is usually considered overly complex, slow and space-inefficient. Instead, what we do is borrow a technique from ConcurrentSkipListMap and use an intermediate, delegating entry. While this adds a little more complexity in traversal (you need to be aware of the presence of these marker entries when traversing the linked list), this performs better than an AtomicMarkableReference.
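For comparison, this is what the AtomicMarkableReference approach looks like - a standard JDK class, shown here only to illustrate what the marker-entry trick replaces:

```java
import java.util.concurrent.atomic.AtomicMarkableReference;

public class MarkedRefDemo {
   public static void main(String[] args) {
      // a reference to the next entry plus a boolean mark, updated atomically;
      // here the mark would mean "the entry owning this reference is being deleted"
      AtomicMarkableReference<String> next =
            new AtomicMarkableReference<String>("succ", false);

      // the deleting thread marks the reference first
      boolean marked = next.compareAndSet("succ", "succ", false, true);

      // a concurrent insert that expected an unmarked reference now fails its CAS
      boolean stale = next.compareAndSet("succ", "newEntry", false, false);

      System.out.println(marked + " " + stale); // prints: true false
   }
}
```

Both the reference and the mark change in a single atomic step, but each AtomicMarkableReference costs an extra wrapper object and an extra dereference on every read - which is exactly the overhead the delegating marker entry avoids.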

In this specific implementation, the 'extra information' stored in a reference is the fact that the entry holding the reference is in the process of being deleted. It is a common problem with lock-free linked lists, when you have concurrent insert and delete operations, that the newly inserted entry gets deleted as well, since it attaches itself to the entry being concurrently deleted. When the entry to be removed marks its references, however, other threads become aware of this, causing CAS operations on the reference to fail and be retried.


Aside from maintaining order of entries and being thread-safe, performance was one of the other goals. The target is to achieve constant-time performance - O(1) - for all operations on DataContainer.

The class diagram depicts the FIFODataContainer class. At its heart the FIFODataContainer mimics a JDK ConcurrentHashMap (CHM), making use of hashtable-like lockable segments. Unlike the segments in CHM, however, these segments are simpler as they support a much smaller set of operations.

Retrieving data from the container
The use of segments allows for constant-time, thread-safe get() and containsKey() operations. Iterators obtained from the DataContainer - which implements Iterable, and hence is usable in for-each loops - and from keySet() are immutable, thread-safe and efficient, traversing the linked list using the getNext() and getPrev() helpers. See below for details. Each traversal step is efficient and constant-time.

Updating the container
When removing an entry, remove() locks the segment in question, removes the entry, and unlinks the entry. Both operations are thread-safe and constant-time. Locking the segment and removing the entry is pretty straightforward. Unlinking involves marking references, and then an attempt at CAS'ing next and previous references to bypass the removed entry. Order here is important - updates to the next reference need to happen first; read on for more details as to why.

When performing a put(), the entry is created, the segment locked, and the entry inserted into the segment. The entry is then inserted at the tail of the linked list. Again, both operations are thread-safe and constant-time. Linking at the tail involves careful CAS'ing of references on the new entry, the tail dummy entry and the former last entry in the list.

Maintaining a lock-free, concurrent doubly linked list
It is important to note that the entries in this implementation are doubly linked. This is critical since, unlike the JDK's ConcurrentSkipListMap, we use a hashtable to look up entries to be removed, to achieve constant-time lookup. Locating the previous entry in order to update its reference needs to be constant-time as well, and hence the need for a previous reference. Doubly-linked lists make things much trickier though, as there are two references to update atomically (yes, that sounds wrong!)

Crucially, what we do not care about - and do not support - is reverse-order traversal. This means that we only really care about maintaining accuracy in the forward direction of the linked list, and treat the previous reference as an approximation to an entry somewhere behind the current entry. Previous references can then be corrected - using the correctPrev() helper method described below - to locate the precise entry behind the current entry. By placing greater importance on the forward direction of the list, this allows us to reliably CAS the forward reference even if the previous reference CAS fails. It is hence critical that whenever any references are updated, the next reference is CAS'd first, and only on this success the previous reference CAS is attempted. The same order applies with marking references. Also, it is important that any process that touches an entry that observes that the next pointer is marked but the previous pointer is not, attempts to mark the previous pointer before attempting any further steps.
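A much-simplified, single-threaded sketch of the marker-entry idea follows (the names are mine; the real LinkedEntry implementation also maintains previous references and retries failed CASes):

```java
import java.util.concurrent.atomic.AtomicReference;

public class MarkerList {
   static class Entry {
      final String key;      // null for marker / dummy entries
      final boolean marker;  // true = delegating marker entry
      final AtomicReference<Entry> next = new AtomicReference<Entry>();
      Entry(String key, boolean marker) { this.key = key; this.marker = marker; }
   }

   // Traversal must go through getNext() so markers are stepped over: a marker
   // only says "my predecessor is being deleted", it carries no data itself.
   static Entry getNext(Entry current) {
      Entry n = current.next.get();
      while (n != null && n.marker) n = n.next.get();
      return n;
   }

   // Deleting e: first CAS a marker in after e (so a concurrent insert that
   // expected e.next to be unmarked fails and retries), then bypass e.
   static void unlink(Entry prev, Entry e) {
      Entry succ = e.next.get();
      Entry marker = new Entry(null, true);
      marker.next.set(succ);
      if (e.next.compareAndSet(succ, marker))
         prev.next.compareAndSet(e, succ);
   }
}
```

Note how unlink() updates the next reference (via the marker) before touching the previous side - the ordering rule described above.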

The specific functions we need to expose, to support DataContainer operations, are:

void linkAtEnd(LinkedEntry entry);

void unlink(LinkedEntry entry);

LinkedEntry correctPrev(LinkedEntry suggestedPrev, LinkedEntry current);

LinkedEntry getNext(LinkedEntry current);

LinkedEntry getPrev(LinkedEntry current);

These are exposed as protected final methods, usable by FIFODataContainer and its subclasses. The implementations themselves use a combination of CAS's on a LinkedEntry's next and previous pointers, marking references, and helper correction of previous pointers when using getNext() and getPrev() to traverse the list. Note that it is important that only the last two methods are used when traversing rather than directly accessing a LinkedEntry's references - since markers need to be stepped over and links corrected.

Please refer to Lock-Free Deques and Doubly Linked Lists for details of the algorithm steps.

Sunday, 10 May 2009

jclouds-s3 beta released

jclouds-s3 is the glue between Infinispan and Amazon S3. jclouds provides a cache store plugin that allows you to persist your Infinispan cluster to S3.

Over the last few months, jclouds has evolved with Infinispan's concurrent context.

Under the hood, jclouds is made for Infinispan. Its non-blocking engine and FutureCommand design were crafted to meet the challenge of Infinispan's grueling integration tests.

jclouds-s3 is also quite user-friendly, exposing services to S3 in a simple Map interface.

As we are now in beta, please do try out jclouds-s3 and let us know where we can improve. If you have some spare cycles, feel free to lend a hand :)

Regardless, we hope you enjoy the product.


Saturday, 2 May 2009

Keep it in the cloud

Corporate slaves often spend months of paperwork only to find their machine obsolete before it's powered on. Forward-thinking individuals need playgrounds to try out new ideas. Enterprise 2.0 projects have to scale with their user base. In short, there's a lot of demand for flexible infrastructure. Cloud infrastructure is one way to fill that order.

One popular cloud infrastructure provider is Amazon EC2.  EC2 is basically a pay-as-you-go datacenter.  You pay for CPU, storage, and network resources.  Using the open-source Infinispan data grid, you have a good chance of linear performance as your application needs change.  Using EC2, you can instantly bring on hosts to support that need.  Great match, right?  What's next?

Assuming your data is important, you will need to persist your Infinispan cluster somewhere. However, Amazon charges you for traffic that goes in and out of their network... this could get expensive. So, the next bit is controlling these costs.

Amazon offers a storage service called S3.  Transferring data between EC2 and S3 is free; you only pay for data parking.  In short, there is a way to control these I/O costs: S3. 

Infinispan will save your cluster to S3 when configured with its high-performance JClouds plug-in.  You specify the S3 Bucket and your AWS credentials and Infinispan does the rest.

In summary, not only does Infinispan shred your license costs, but we also help cut your persistence costs, too!

So, go ahead: Keep it in the cloud!

Friday, 1 May 2009

Tuesday, 28 April 2009

Infinispan: the Start of a New Era in Open Source Data Grids

Over the past few months we've been flying under the radar preparing for the launch of a new, open source, highly scalable distributed data grid platform. We've finally got to a stage where we can announce it publicly and I would like to say that Infinispan is now ready to take on the world!

The way we write computer software is changing. The demise of the Quake Rule has made hardware manufacturers cram more cores onto a CPU, and more CPUs into a server. To achieve the levels of throughput and resilience that modern applications demand, compute grids are becoming increasingly popular. All this serves to exacerbate existing database bottlenecks; hence the need for a data grid platform.

So why is Infinispan sexy?

1. Massive heap - If you have 100 blade servers, and each node has 2GB of space to dedicate to a replicated cache, you end up with 2 GB of total data. Every server is just a copy. On the other hand, with a distributed grid - assuming you want 1 copy per data item - you get a 100 GB memory backed virtual heap that is efficiently accessible from anywhere in the grid. Session affinity is not required, so you don't need fancy load balancing policies. Of course you can still use them for further optimisation. If a server fails, the grid simply creates new copies of the lost data, and puts them on other servers. This means that applications looking for ultimate performance are no longer forced to delegate the majority of their data lookups to a large single database server - that massive bottleneck that exists in over 80% of enterprise applications!

2. Extreme scalability - Since data is evenly distributed, there is essentially no major limit to the size of the grid, except group communication on the network - which is minimised to just discovery of new nodes. All data access patterns use peer-to-peer communication where nodes directly speak to each other, which scales very well.

3. Very fast and lightweight core - The internal data structures of Infinispan are simple, very lightweight and heavily optimised for high concurrency. Early benchmarks have indicated 3-5 times less memory usage, and around 50% better CPU performance than the latest and greatest JBoss Cache release. Unlike other popular, competing commercial software, Infinispan scales when there are many local threads accessing the grid at the same time. Even though non-clustered caching (LOCAL mode) is not its primary goal, Infinispan still is very competitive here.

4. Not Just for Java (PHP, Python, Ruby, C, etc.) - The roadmap has a plan for a language-independent server module. This will support both the popular memcached protocol - with existing clients for almost every popular programming language - as well as an optimised Infinispan-specific protocol. This means that Infinispan is not just useful to Java. Any major website or application that wants to take advantage of a fast data grid will be able to do so.

5. Support for Compute Grids - Also on the roadmap is the ability to pass a Runnable around the grid. You will be able to push complex processing towards the server where data is local, and pull back results using a Future. This map/reduce style paradigm is common in applications where a large amount of data is needed to compute relatively small results.

6. Management is key! - When you start thinking about running a grid on several hundred servers, management is no longer an extra, it becomes a necessity. This is on Infinispan's roadmap. We aim to provide rich tooling in this area, with many integration opportunities.

7. Competition is Proprietary - All of the major, viable competitors in the space are not open-source, and are very expensive. Enough said. :-)

What are data grids?

Data grids are, to put it simply, highly concurrent distributed data structures. Data grids typically allow you to address a large amount of memory and store data in a way that it is quick to access. They also tend to feature low latency retrieval, and maintain adequate copies across a network to provide resilience to server failure.

As such, at its core, Infinispan presents a humble data structure. But it is also a highly specialised data structure, tuned to and geared for a great degree of concurrency - especially on multi-CPU/multi-core architectures. Most of the internals are essentially lock- and synchronization-free, favouring state-of-the-art non-blocking algorithms and techniques wherever possible. This translates to a data structure that is extremely quick even when dealing with a large number of concurrent accesses.

Beyond this, Infinispan is also a distributed data structure. It farms data out across a cluster of in-memory containers. It does so with a configurable degree of redundancy and various parameters to tune the performance-versus-resilience trade-off. Local "L1" caches are also maintained for quick reads of frequently accessed data.

Further, Infinispan supports JTA transactions. It also offers eviction strategies to ensure individual nodes do not run out of memory and passivation/overflow to disk. Warm-starts using preloads are also supported.

JBoss Cache and Infinispan

So where does Infinispan stand against the competition? Let's start with JBoss Cache. It is no surprise that there are many similarities between JBoss Cache and Infinispan, given that they share the same minds! Infinispan is an evolution of JBoss Cache in that it borrows ideas, designs and some code, but for all practical purposes it is a brand new project and a new, much more streamlined codebase.

JBoss Cache has evolved from a basic replicated tree structure to include custom, high performance marshalling (in version 1.4), Buddy Replication (1.4), a new simplified API (2.X), high concurrency MVCC locking (3.0.X) and a new non-blocking state transfer mechanism (3.1.X). These were all incremental steps, but it is time for a quantum leap.

Hence Infinispan. Infinispan is a whole new project - not just JBoss Cache 4.0! - because it is far wider in scope and goals - not to mention target audience. Here are a few points summarising the differences:
  • JBoss Cache is a clustering library. Infinispan's goal is to be a data grid platform, complete with management and migration tooling.
  • JBoss Cache's focus has been on clustering, using replication. This has allowed it to scale to several 10s (occasionally even over 100) nodes. Infinispan's goals are far greater - to scale to grids of several 100's of nodes, eventually exceeding 1000's of nodes. This is achieved using consistent hash based data distribution.
  • Infinispan's data structure design is significantly different to that of JBoss Cache. This is to help achieve the target CPU and memory performance. Internally, data is stored in a flat, map-like container rather than a tree. That said, a tree-like compatibility layer - implemented on top of the flat container - is provided to aid migration from JBoss Cache.
  • JBoss Cache traditionally competed against other frameworks like EHCache and Terracotta. Infinispan, on the other hand, goes head to head against Oracle's Coherence, Gemfire and Gigaspaces.
I have put up some FAQs on the project. A project roadmap is also available, as well as a 5-minute guide to using Infinispan.

Have a look at JIRA or grab the code from our Subversion repository to see where we are with things. If you are interested in participating in Infinispan, be sure to read our community page.

I look forward to your feedback!