Friday, 17 December 2010

Another X'mas present from Infinispan: 4.2.0.FINAL

So Christmas is meant to be full of presents, right?  Yep, you heard that right - two releases in one day  :-)  Hot on the heels of Galder's 5.0.0.ALPHA1 release, here's the much-awaited 4.2.0.FINAL.  This is a big release.  Although it only contains a handful of new features - including ISPN-180 and ISPN-609 - it contains a good number of stability and performance improvements over 4.1.0, a complete list of which is available here.  Yes, that is over 75 bugs fixed since 4.1.0!!

This really is thanks to the community, who have worked extremely hard testing, benchmarking and pushing 4.1.0 - and the subsequent betas and release candidates of 4.2 - to get us here.  This is helping the project mature very, very fast.  And of course thanks to the core Infinispan dev team, who've pulled off some incredible feats to get this release to completion.  You know who you are.  :-)

I'd also like to reiterate the availability of Maven Archetypes to jump-start your Infinispan project - read all about that here.

So with that, I'd like to leave you with 4.2.0, Ursus, and say that we are full steam ahead with 5.0 Pagoa now.  :-)  As usual, download 4.2.0 here, read about it here, provide feedback here.

Enjoy, and Happy Holidays!

Xmas arrives early for Infinispan users! 5.0.0.ALPHA1 is out!

Just in time for Christmas, the first release in the 5.x series, 5.0.0.ALPHA1, is out. This release implements one of the most requested features: the ability to store non-Serializable objects in Infinispan data grids! You can now do so thanks to the ability to plug Infinispan with Externalizer implementations that know how to marshall/unmarshall a particular type. To find out more about how to implement these Externalizers and how to plug them in, check the following article, which explains this in great detail.

A very important benefit of using Infinispan's Externalizer framework is that user classes now benefit from a lightning fast marshalling framework based on JBoss Marshalling. Back in the 4.0.0 days when we switched from JDK serialization to JBoss Marshalling, we saw a performance improvement of around 10-15% and we're confident that 5.x user applications will see a similar performance increase once they start providing Externalizer implementations for their own types.

At this stage, it's very important that Infinispan users have a go at implementing their own Externalizer implementations so that we have enough time to make adjustments based on feedback provided. Your input is crucial!!
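To make the pattern concrete, here is a minimal, self-contained sketch of what an Externalizer-style round trip looks like. Note that the `Externalizer` interface below is a local stand-in so the example compiles without Infinispan on the classpath; the real contract ships with 5.0.0.ALPHA1 and is described in the article linked above, so treat the names here as illustrative:

```java
import java.io.*;

// Stand-in for Infinispan's Externalizer contract (a simplification for this
// sketch; the real interface is part of Infinispan 5.0).
interface Externalizer<T> {
    void writeObject(ObjectOutput out, T obj) throws IOException;
    T readObject(ObjectInput in) throws IOException, ClassNotFoundException;
}

// A user type that deliberately does NOT implement Serializable.
class Book {
    final String title;
    final int pages;
    Book(String title, int pages) { this.title = title; this.pages = pages; }
}

// The externalizer writes only the fields it needs, field by field.
class BookExternalizer implements Externalizer<Book> {
    public void writeObject(ObjectOutput out, Book b) throws IOException {
        out.writeUTF(b.title);
        out.writeInt(b.pages);
    }
    public Book readObject(ObjectInput in) throws IOException {
        return new Book(in.readUTF(), in.readInt());
    }
}

public class ExternalizerSketch {
    // Round-trip a Book through the externalizer using plain JDK streams.
    static Book roundTrip(Book b) throws Exception {
        BookExternalizer ext = new BookExternalizer();
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            ext.writeObject(out, b);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return ext.readObject(in);
        }
    }

    public static void main(String[] args) throws Exception {
        Book copy = roundTrip(new Book("Infinispan in Action", 320));
        System.out.println(copy.title + ", " + copy.pages + " pages");
    }
}
```

The key point is that marshalling logic lives outside the user class, so the class itself needs no Serializable marker and no serialization baggage.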

Staying with the marshalling topic, another novelty included in this release is the ability to plug Hot Rod clients with portable serialization based on Apache Avro. This is not tremendously important right now, but once Hot Rod protocol clients have been written in other languages, they'll be able to share data seamlessly! In case you're not aware, a Python Hot Rod client is already in the making...

Finally, details of all issues fixed can be found here, the download is here, and please report issues here. :-)

Enjoy and Merry Christmas to all :)

Announcing project Radargun

Hi all,

Radargun is a tool we've developed and used for benchmarking Infinispan's performance, both between releases and against other similar products. Initially it shipped under the (poorly named) Cache Benchmark Framework.
Due to increased community interest, and the fact that the tool has reached a certain maturity (we've used it for benchmarking clusters of 100+ nodes), we decided to revamp it a little and give it a new name: Radargun.
You can read more about it here. A good start is the 5MinutesTutorial.


Tuesday, 14 December 2010

4.2.0.CR4 is out

Yep, I've just cut Infinispan 4.2.0.CR4.  This is the last release candidate before a final release, so here is your chance to have your say.  :-)  A few stability issues around rehashing and distributed locking/transactions have been resolved, so please do try this out.

Details of issues fixed are here, the download is here, and please report issues here. :-)


Maven Archetypes

To help you jump-start a new project using Infinispan, we now have Maven Archetypes you can use.  If you don't know what a Maven Archetype is, you should read this article which explains archetypes in more detail.  Of course, this assumes that you are using Maven as a build tool.

We've created two separate archetypes for Infinispan.  The first sets you up with a clean, new directory structure and sample code for a new Infinispan project, including sample configuration files and skeleton code, as well as a Maven pom.xml descriptor containing all necessary dependencies.

The second archetype is targeted at people who use Infinispan and want to report bugs or contribute tests to the project.  It sets up a new project with a skeleton functional test, including all of Infinispan's test helper utilities to simulate network setup and failure from within a test.  More importantly, the generated skeleton test is structured such that it can easily be assimilated into Infinispan's core test suite if necessary.

For more information on these archetypes, including a simple step-by-step guide, read


Infinispan gains another team member

Infinispan's been fortunate enough to have Pete Muir - of Seam, Weld and JSR-299/CDI spec fame - join us.  More details are on Pete's blog on the subject.  Pete's work on Seam has been legendary, particularly with regard to fostering and encouraging a very active community while at the same time pushing the boundaries of Java EE.  With Pete's interest in distributed NoSQL databases and in-memory data grids, expect to see the same for Infinispan!

Welcome aboard, Pete - we're all very excited to have you on board! :-)


Friday, 3 December 2010

4.2.0.CR3 released

Another day another release.  :-)  I've just cut 4.2.0.CR3.  This release contains a number of bug fixes and stability improvements, including ISPN-777 and a whole bunch of memcached server fixes thanks to Galder.  In addition, Tristan's CassandraCacheStore now also supports Key2StringMappers just like the JdbcCacheStores, as per ISPN-809.

For a full list of changes, see the release notes.  As always, download, try out and provide feedback!

Onwards to a final release...


Wednesday, 1 December 2010

Infinispan and JBoss AS 5.x

A lot of people have asked about being able to use Infinispan as a second level cache for Hibernate within JBoss AS 5.x (and its EAP 5.x cousins).

While Infinispan can be used as a Hibernate second level cache with Hibernate 3.5 onwards, Bill deCoste has written a guide to getting Infinispan to work in older versions of Hibernate, specifically with JBoss AS 5.x.  Hope you find this useful!


Thursday, 25 November 2010

Infinispan 4.2.0.CR2 "Ursus" is out!


The CR2 release fixes a critical issue from CR1: a NumberFormatException when creating new GlobalConfiguration objects (ISPN-797). Besides this, the following two issues were fixed in the client/server modules:
- In Memcached, negative items sizes in set/add should return CLIENT_ERROR (ISPN-784)
- In REST, HEAD on nonexistent cache still produces status 500 (ISPN-795)
Big thanks to Galder for fixing these so quickly!


Wednesday, 24 November 2010

Infinispan 4.2.0.CR1 "Ursus" is out!


4.2.0.CR1, code named "Ursus", has just been released! It contains multiple fixes and improvements, mainly in the memcached server and in rehashing when the cluster topology changes. The JGroups version was also upgraded to 2.11. Another thing worth mentioning is that the default port for the Hot Rod server has changed from 11311 to 11222 (here's why).
You can download it from here. Please share your thoughts with us!


Tuesday, 9 November 2010

Infinispan Community Meetup @ Devoxx 2010

Infinispan and Seam are doing it again, and this year Arquillian will be there to keep things in check: a community social in the midst of Europe's premier Java conference, Devoxx in Antwerp. If you plan to be at Devoxx - or just happen to be nearby - and fancy an evening of in-depth conversation about three open source projects set to rock the enterprise, come on by and partake in some of the finest beer in the world with the project core developers. Demos, design discussions, architectural questions are all fair game. Join our pilgrimage to Kulminator at 20:00 on Monday the 15th of November (near the Groenplaats Station).

And a quick note, just because open source is all about Free as in Speech doesn't mean we can't have free beer. Yes, we're sponsoring beer as well as other project-related goodies, but these are limited, first come first served.

Where: Kulminator, close to Groenplaats Station, Antwerp
When: Monday the 15th of November, 2010
What time: 20:00


Hope to see many of you there!


Thursday, 4 November 2010

Infinispan likes GitHub

It had to happen someday - Infinispan's primary source code repository has now moved to GitHub.  We have abandoned Subversion as a version control tool for the far superior distributed VCS which has found favour in many large and complex open source projects.  I've been experimenting with a GitHub setup for the past few weeks, with a snapshot of the Infinispan repository, and git is a sheer joy to use - a sublime experience, once you get your head around the concepts of distributed version control.  GitHub makes things even sweeter, with an awesome web based UI.

Anyway, a quick summary:
  • Infinispan sources are no longer in Subversion
  • The primary repository for Infinispan is now on GitHub
    • Clone this repository at will!
    • Contributions should take the form of pull requests on GitHub
  • Infinispan's Hudson, release tooling and other systems have been updated to reflect this change
For those used to working with Infinispan's Subversion-based setup, or those new to git, I've put together a short wiki page on getting started with Infinispan on GitHub.

Oh, and I'd love to hear your feedback on this move.

Happy cloning!

BETA is out!!

I've just released the long-awaited first beta of Infinispan 4.2.0, codenamed Ursus.  The blocker that's been keeping Ursus in Alpha for so long - ISPN-180 - is now complete and ready for you to take out for a spin.  Thanks to the work put in by Mircea and Vladimir, ISPN-180 adds support in the distribution algorithm to detect whether Infinispan instances are co-located on the same physical server (or even the same rack) and pick secondary owners of data with this knowledge in mind. This helps ensure maximum durability of data, so if physical machines - or even an entire rack - were to fail, data is not lost.  For more details on ISPN-180, have a look at this wiki page which details its use.
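As an illustration, the server hints described above might be declared on the transport configuration roughly like this. This is a sketch only: the element and attribute names here are assumptions, so check the wiki page mentioned above for the authoritative syntax.

```xml
<global>
   <!-- Location hints (illustrative values): tell the distribution algorithm
        where this node physically lives, so that secondary owners of data are
        picked on a different machine, rack or site. -->
   <transport clusterName="demo-cluster"
              machineId="machine-01"
              rackId="rack-a"
              siteId="london" />
</global>
```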

The JOPR/RHQ plugin now works in multi-cache-manager environments (ISPN-675), and thanks to ISPN-754, cache manager instances can be easily identified when using the JOPR/RHQ management GUI.  As a result of ISPN-754, JMX object names now follow best practices set by Sun/Oracle, which means that object names exported to JMX have changed from this version onwards.  See this wiki page for detailed information.

For a list of all fixes since Alpha5, have a look at the release notes in JIRA, and as always, download the release here, and let us know what you think using the user forums.

Onward to release candidate and final release phases!  :-)


Friday, 29 October 2010

4.2.0.ALPHA5 is out!

Yes, yet another alpha for Infinispan 4.2.0 codenamed Ursus.  What's new in this alpha?  A lot of bug fixes, reported by you the community, on earlier alphas as well as on 4.1.0 Radegast.  For a full list of what's changed, please consult these release notes.

Please do give this release a spin and provide as much feedback as you can.  A feature-complete beta is imminent and we expect it to be very stable - as far as betas go - and all thanks to your feedback.

Download the release here, and talk to us about it on the forums here.


Wednesday, 27 October 2010

Data-as-a-Service: a talk by yours truly

Last month, at the JavaOne conference in San Francisco, I spoke about data grids: a BOF session on cloud-ready data stores using data grids, and a conference session on measuring performance and benchmarking data grids.  But in addition to the official JavaOne talks, I also did two short, 20-minute mini-sessions at the Red Hat booth in the JavaOne pavilion, titled Data-as-a-Service using Infinispan.  The good folks at the Red Hat booth even recorded it and put it online on Vimeo, where it is accessible on-demand.

Data-as-a-Service using Infinispan from JBoss Developer on Vimeo.


Friday, 22 October 2010

Infinispan 4.2.0.ALPHA4 "Ursus" is out!


You can find the release notes here.
Try it and let us know what you think!


Tuesday, 19 October 2010

Welcome Trustin Lee

I'd like to welcome Infinispan's newest full-time core engineer, Trustin Lee.  Trustin's no stranger to open source software, being the founder and project lead of Apache MINA and Netty - the latter of which is used in Infinispan's memcached and Hot Rod server endpoints.  Trustin will be working on all things Infinispan, adding muscle to the core development effort.  Trustin's based in Seoul, South Korea.

Welcome aboard, Trustin!  :)


Wednesday, 13 October 2010

Infinispan 4.2Alpha3 "Ursus" is out!


Infinispan 4.2.ALPHA3 "Ursus" has just been released!
Besides other fixes and improvements, it also contains a Cassandra cache store - thanks to Tristan Tarrant for contributing this!
For a complete list of the released features, refer to the release notes.
Download it and let us know what you think!


Thursday, 30 September 2010

Want to learn about Infinispan at Devoxx 2010?

Yes, I will be speaking at the Devoxx conference this year.  Devoxx - probably the biggest Java/Middleware conference outside of JavaOne - is always awesome with a strong tech focus.

I'll be talking about Infinispan as a NoSQL solution, so this should be interesting to anyone trying to build a data service around Infinispan.

Check out the details here.  See you in Antwerp!


Friday, 24 September 2010

Infinispan’s been harvested, time to evangelise!

Here @ Infinispan we've been extremely busy over the summer baking Infinispan 4.1.0.FINAL and after releasing it just over a couple of weeks ago, it's time to go out and tell the world about it!

As far as I'm concerned, I'll be at the JAOO conference in Aarhus, Denmark, where I'll be speaking about the importance of the brand new Infinispan Server modules, emphasizing the motivation for developing them and showcasing some really exciting use cases.

Just a few days later I'll be in Berlin for the European edition of JBoss' JUDCon, a developer conference by developers, where I'll be introducing a brand-new, innovative data eviction algorithm included in Infinispan 4.1, which increases eviction precision and reduces overhead. I will also be talking about the Infinispan Server modules at this conference. On top of that, Mircea Markus will be explaining how transactions are handled within Infinispan and how to do continuous querying of Infinispan data grids. In total, Infinispan has 4 talks @ JUDCon, which is very exciting for us!!

To round things off, on the 14th of October I'll be in Lausanne, where I'll be introducing the local JUG's audience to Infinispan, talking about our motivations for creating Infinispan, how it differs from JBoss Cache, etc.

So, if you happen to be around, make sure you come! It'll be fun :)

Friday, 17 September 2010

4.2.ALPHA2 "Ursus" is out!


First of all, A BIG THANKS to our very active community for all the input on the previous release!
The second alpha release of 4.2 "Ursus" is out today. It contains a set of bug fixes and improvements - see the detailed list here. For download information, go here. And please share your thoughts on our forums!


JavaOne 2010

So there's quite a lot going on at JavaOne 2010 - next week, already! - with regards to Infinispan.  Firstly, I'm running a session and a BOF both related to data grids and Infinispan.  Mark these in your calendar!

The BOF is on Tuesday, at 21:00, titled "A new era for in-memory data grids".  Click on the link for more details.

My main conference session is on Wednesday, at 13:00, titled "Measuring Performance and Looking for Bottlenecks in Java-Based Data Grids".  This should be a fun and interesting talk!

Further, the Red Hat/JBoss booth in the pavilion will be running a series of "mini-sessions", with a chance to meet and interface with the core R&D folk from JBoss including myself.  I'm running a mini-session on building a Data-as-a-Service (DaaS) grid using Infinispan on Tuesday and Wednesday, along with similarly exciting talks by other Red Hatters.  More details here.

And finally, the JBoss party.  JBoss parties at JavaOne have become something of an institution and you sure wouldn't want to be left out!  Details here, make sure you get your invitations early as these always run out fast.

See you in San Francisco!


Thursday, 16 September 2010

Want to become a full-time dev on Infinispan?

From time to time, we do recruit active, valuable community members, to work on Infinispan and related projects on a full-time basis.  If you are interested in working for Infinispan full-time, you should email jobs AT infinispan DOT org and include the following:

  • A résumé, in plain text or as a PDF document (or a link to an online résumé)
  • Details of your contribution to Infinispan or other JBoss projects
    • Including links to JIRAs, changelogs if relevant
    • Links to community documentation, wikis, etc that may be relevant
  • Details of your contribution to other open source projects
    • Including links to issue trackers, source code, etc.
    • Links to community documentation, wikis, etc that may be relevant
  • Your current time zone
Benefits, in addition to the usual corporate benefits of being employed by Red Hat Inc. (One of the 100 best places to work in IT), include:
  • Working with an extremely active open source community
  • Ability to work remotely, communicating via email/IRC
  • Work on some of the most exciting tech, alongside some of the leading engineers, in the Java landscape today
  • Help shape cloud-based data storage for the next generation of applications
While we don't have specific timescales for any specific positions vacant, do drop us an email if this is of interest to you.


Monday, 13 September 2010

Infinispan Refcard

DZone today published a Refcard on Infinispan.  For those of you not familiar with DZone's Refcardz, these are quick-lookup "cheat sheets" on various technologies, targeted at developers.

I hope you find the Infinispan Refcard useful.  Download it, save it, pass it on to friends.  Oh yeah, and tweet about it too - tell the world!  :)


Tuesday, 7 September 2010

4.2.ALPHA1 "Ursus" is out!

Hi *,

4.2.ALPHA1 has just been released!
Besides other things, it contains the following two features:
- support for deadlock detection for eagerly locking transactions (new)
- a very interesting optimisation for eager locking, which allows one to benefit from eager locking semantics with the same performance as "lazy" locking. You can read more about this here.
For download information go here. For a detailed list of features refer to the release notes.

Enjoy and please share your thoughts on our forums!

Thursday, 2 September 2010

4.1.0.FINAL is out - and announcing 4.2.0

Yes, things have been quiet on this blog as of late, but a lot has been going on.  Let's start with the big news.  After much work on feedback reported against the last Radegast release candidate, Infinispan 4.1.0.FINAL is finally ready.  Many thanks to the community, who have worked tirelessly testing and reporting issues.

This is a very important release.  If you are using 4.0.0 (Starobrno), I strongly recommend upgrading to Radegast as we have a whole host of bug fixes, performance improvements and new features for you.  A full changelog is available on JIRA, but a few key features to note are server endpoints, a Java-based client for the Hot Rod protocol, and the new LIRS eviction algorithm.

Download it, give it a go, and talk about it on the forums.  Tell your friends about it, tweet about it.

The other interesting piece of news is the announcement of a 4.2.0 release.  We've decided to take a few key new features from 5.0.0 and release them earlier, as 4.2.0 - codenamed Ursus.  If you are interested in what's going to be in Ursus, have a look at this feature set, and expect a beta on Ursus pretty soon now!


Wednesday, 18 August 2010

Radegast ever closer to a final release - CR3 is released.

As much as I expected 4.1.0.CR2 to be the last release candidate before a final release, we've decided to release another candidate for you to try out before pushing out a final release.  CR3 is now out and ready to roll.

There has been a lot of activity in the Infinispan community over the last 3 weeks, with lots of people putting CR2 through its paces, and reporting everything from the trivial through to the critical.  This is awesome stuff, folks - keep it up!

This release has fixed a whole bunch of things you guys have reported.  Many thanks to Galder, Mircea, Vladimir and Sanne, working hard around vacations to get this release out.

Detailed release note reports are on JIRA, and it can be downloaded in the usual place.  Use the forums and report stuff - push this as hard as you've been pushing CR2 and we will have that rock-solid final release we all want!


Tuesday, 20 July 2010

Infinispan 4.1.0 "Radegast" 2nd release candidate just released!

I've just released Infinispan 4.1.0.CR2, codenamed Radegast.  Why is this release so important?  Because it is very close to the final version of the Hot Rod wire protocol, and the client and server modules that sit on either end of Hot Rod, allowing remote and non-JVM access to the data grid.  Further, the memcached protocol - along with the ability to make use of any existing memcached client - is also supported.

Since the last release candidate, a number of important bugs - as reported by you, the community - have been addressed, all details available on this release note report.

Thanks go out to the community for contributions, lots of testing and feedback, and given that I hope this to be the last release candidate before a final release of 4.1.0, I'm counting on even more feedback, etc. for this release.  Keep 'em coming, people! :-)

The release is available on Sourceforge, please use the user forums for questions and JIRA to report issues.


Tuesday, 6 July 2010

Infinispan 4.1.0.CR1 is now available!

After a very busy few weeks with JUDCon and JBoss World/Red Hat Summit, we're proud to release Infinispan 4.1.0.CR1, the first candidate release of the Infinispan 4.1 series. The release is downloadable in the usual place.

A lot of work has gone into this release, primarily with the aim of stabilising new functionality written in previous beta/alpha versions. Here are some of the highlights included in this release:
  • A fantastic demo showing how to run Infinispan in EC2. Check Noel O'Connor's blog post from last month for more detailed information.
  • Hot Rod servers can now run behind a proxy in environments such as EC2, and TCP buffers and the TCP no-delay flag are now configurable for both the server and the client.
  • Important performance improvements for the Infinispan-based Lucene directory and the Hot Rod client and server.
  • To avoid confusion, the single jar distribution has been removed. The two remaining distributions are: the bin distribution, containing the Infinispan modules and documentation, and the all distribution, which adds demos on top of that.
A more detailed changelog can be found here.

Finally, if you're a user of Infinispan 4.0 or 4.1, please make sure you download and try this release out so that any outstanding issues are fixed in time for the final release. Also, if you're interested in finding out more about Infinispan's architecture, don't miss Manik's latest article explaining Infinispan's 'nuts and bolts'.

Galder Zamarreño

Monday, 28 June 2010

JBossWorld and JUDCon post-mortem

Wow, what a week.  We all know Infinispan is sexy and gets a lot of attention, but the last week has been unprecedented!

The first-ever JUDCon, the developer conference that took place the day before JBoss World and Red Hat Summit, was great, and I look forward to future JUDCons around the world.  Pics from the event are now online, along with some video interviews with Jason Greene and Pete Muir.

Some of the great presentations at JUDCon include Galder Zamarreño's talk on Infinispan's Hot Rod protocol (slides here) and a talk I did with Mircea Markus on the cache benchmarking framework and benchmarking Infinispan (slides here).

JBoss World/Red Hat Summit was also very interesting.  There is clearly a lot of excitement around Infinispan, and we heard about interesting deployments and use cases, lots of ideas and thoughts for further improvement from customers, contributors and partners.

From JBoss World, there were three talks on Infinispan, including Storing Data on Cloud Infrastructure in a Scalable, Durable Manner which I presented along with Mircea Markus (slides), Why RESTful Design for the Cloud is Best by Galder Zamarreño (slides) and Using Infinispan for High Availability, Load Balancing, & Extreme Performance which I presented along with Galder Zamarreño (slides).

In addition to the slides, the first talk was even recorded so if you missed it, you can watch it below:

[Embedded video: recording of the talk]

Further, Infinispan was showcased on Red Hat CTO Brian Stevens' keynote speech (about 28:15 into the video) where Brian talks about data grids and their importance, and I demonstrate Infinispan.

[Embedded video: Brian Stevens' keynote]

We even had an open roadmap and design session for Infinispan 5.0, which included not just core Infinispan engineers, but contributors, end-users and anyone who had any sort of interest.  I'll post again later with details of 5.0 and what our plans for it will be.

For those of you who couldn't make it to JUDCon and JBoss World, hope the slides and videos on this post will help give you an idea of what went on.


Wednesday, 23 June 2010

JUDCon and the JBoss Community Awards

The first-ever JUDCon kicked off on Monday, and was very well received. It was good to see so many from the Infinispan community, and be able to discuss ideas and thoughts in detail.  I'm hoping to see even more Infinispan-related activity at future JUDCon events.

The JBoss Community Recognition Award winners were also announced at JUDCon, and I was really surprised to find that 4 of the 5 winners were Infinispan contributors.  Sanne Grinovero, Alex Kluge, Phil van Dyck and Amin Abbaspour - thanks for your participation in Infinispan, your peers have recognised your contributions and have voted with mouse clicks!  Congrats!

Given how many Infinispan engineers and contributors are at JBoss World this week, we are having an open Infinispan 5.0 planning and roadmap session.  So if you are around and would like to join in, this will be at 4:00pm on Thursday, in Campground 1.  For those of you not able to make it, discussions will continue via the usual channels of IRC and the developer mailing list.

Now to prepare for my next talk ... :-)


Friday, 28 May 2010

Infinispan 4.1Beta2 released


The second and hopefully last beta for 4.1 has just been released. Thanks to excellent community feedback, several Hot Rod client/server issues were fixed. Besides this and other bug fixes (check this for the complete list), the following new features were added:
- a key affinity service that generates keys to be distributed to specific nodes
- a RemoteCacheStore that allows an Infinispan cluster to be used as a remote data store


Tuesday, 25 May 2010

Infinispan EC2 Demo

Infinispan's distributed mode is well suited to handling large datasets and scaling the clustered cache by adding nodes as required. These days when inexpensive scaling is thought of, cloud computing immediately comes to mind.

One of the largest providers of cloud computing is Amazon, with its Amazon Web Services (AWS) offering. AWS provides computing capacity on demand with its EC2 service, and storage on demand with its S3 and EBS offerings. EC2 provides just an operating system to run on, and it is a relatively straightforward process to get an Infinispan cluster running on EC2. However, there is one gotcha: EC2 does not currently support UDP multicast, which is the default node discovery approach used by Infinispan to detect nodes running in a cluster.

Some background on network communications

Infinispan uses the JGroups library to handle all network communications. JGroups enables cluster node detection, a process called discovery, and reliable data transfer between nodes. JGroups also handles the process of nodes entering and exiting the cluster and master node determination for the cluster.

Configuring JGroups in Infinispan
The JGroups configuration details are passed to Infinispan via the Infinispan configuration file:

<transport clusterName="infinispan-cluster" distributedSyncTimeout="50000">
   <properties>
      <property name="configurationFile" value="jgroups-s3_ping-aws.xml" />
   </properties>
</transport>
Node Discovery

JGroups has three discovery options which can be used for node discovery on EC2.

The first is to statically configure the addresses of all the nodes in the cluster in each node's configuration. This simplifies discovery, but it is not suitable when the IP addresses of the nodes are dynamic or when nodes are added and removed on demand.

The second method is to use a Gossip Router. This is an external Java process which runs and waits for connections from potential cluster nodes. Each node in the cluster needs to be configured with the IP address and port that the Gossip Router is listening on. At node initialization, the node connects to the Gossip Router and retrieves the list of other nodes in the cluster.

Example JGroups gossip router configuration

<TCP bind_port="7800" />
<TCPGOSSIP timeout="3000" initial_hosts="[12000]" num_initial_members="3" />
<MERGE2 max_interval="30000" min_interval="10000" />
<FD_SOCK start_port="9777" />

The infinispan-4.1.0-SNAPSHOT/etc/config-samples/ directory has sample configuration files for use with the Gossip Router. The approach works well but the dependency on an external process can be limiting.

The third method is to use the new S3_PING protocol that has been added to JGroups. With this, the user configures an S3 bucket (location) where each node in the cluster stores its connection details, and upon startup each node can see the other nodes in the cluster. This avoids having to run a separate process for node discovery and gets around static configuration of nodes.

Example JGroups configuration using the S3_PING protocol:

<TCP bind_port="7800" />
<S3_PING secret_access_key="secretaccess_key" access_key="access_key"
location="s3_bucket_location" />
<MERGE2 max_interval="30000" min_interval="10000" />
<FD_SOCK start_port="9777" />

EC2 demo

The purpose of this demo is to show how an Infinispan cache running on EC2 can easily form a cluster and retrieve data seamlessly across the nodes in the cluster. The addition of subsequent Infinispan nodes to the cluster automatically redistributes the existing data and offers higher availability in the case of node failure.

To demonstrate Infinispan, data needs to be added to nodes in the cluster. We will use one of the many public datasets that Amazon hosts on AWS: the influenza virus dataset.

This dataset has a number of components which make it suitable for the demo. First of all, it is not a trivial dataset - there are over 200,000 records. Secondly, there are internal relationships within the data which can be used to demonstrate retrieving data from different cache nodes. The data is made up of viruses, nucleotides and proteins; each influenza virus has a related nucleotide, and each nucleotide has one or more proteins. Each is stored in its own cache instance.

The caches are populated as follows :

  • InfluenzaCache - populated with data read from the Influenza.dat file, approx 82,000 entries
  • ProteinCache - populated with data read from the Influenza_aa.dat file, approx 102,000 entries
  • NucleotideCache - populated with data read from the Influenza_na.dat file, approx 82,000 entries
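For illustration, the relationships between the three caches can be sketched with plain Maps standing in for the cache instances (the IDs and value shapes here are hypothetical, not the demo's actual classes):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the demo's data relationships, with plain Maps standing in for
// the three Infinispan caches (IDs and value shapes are hypothetical):
// a virus maps to one nucleotide, a nucleotide to one or more proteins.
public class InfluenzaLookup {
    static final Map<String, String> influenzaCache = new HashMap<>();        // virus id -> nucleotide id
    static final Map<String, List<String>> nucleotideCache = new HashMap<>(); // nucleotide id -> protein ids
    static final Map<String, String> proteinCache = new HashMap<>();          // protein id -> sequence

    /** Follow the virus -> nucleotide -> proteins relationships. */
    static List<String> proteinsForVirus(String virusId) {
        String nucleotideId = influenzaCache.get(virusId);
        List<String> sequences = new ArrayList<>();
        for (String proteinId : nucleotideCache.getOrDefault(nucleotideId, List.of())) {
            sequences.add(proteinCache.get(proteinId));
        }
        return sequences;
    }
}
```

In the actual demo, each Map would be a separate Infinispan cache instance, so the two lookups may be served by different nodes of the cluster.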

The demo requires 4 small EC2 instances running Linux: one instance for each cache node and one for the JBoss application server which runs the UI. Each node must have Sun JDK 1.6 installed in order to run the demos. In order to use the Web-based GUI, JBoss AS 5 should also be installed on one node.

In order for the nodes to communicate with each other the EC2 firewall needs to be modified. Each node should have the following ports open:

  • TCP 22 – For SSH access
  • TCP 7800 to TCP 7810 – used for JGroups cluster communications
  • TCP 8080 – Only required for the node running the AS5 instance in order to access the Web UI.
  • TCP 9777 - Required for FD_SOCK, the socket based failure detection module of the JGroups stack.

To run the demo, download the Infinispan "all" distribution to a directory on each node and unzip the archive.

Edit the etc/config-samples/ec2-demo/jgroups-s3_ping-aws.xml file to add the correct AWS S3 security credentials and bucket name.

Start one of the cache instances on each node using one of the following scripts from the bin directory:


Each script will start up and display the following information:

[tmp]$ ./runEC2Demo-nucleotide.sh
CacheBuilder called with /opt/infinispan-4.1.0-SNAPSHOT/etc/config-samples/ec2-demo/infinispan-ec2-config.xml
GMS: address=redlappie-37477, cluster=infinispan-cluster, physical address=
Caches created....
Starting CacheManagerCache
Parsing files....Parsing [/opt/infinispan-4.1.0-SNAPSHOT/etc/Amazon-TestData/influenza_na.dat]
About to load 81904 nucleotide elements into NucleiodCache
Added 5000 Nucleotide records
Added 10000 Nucleotide records
Added 15000 Nucleotide records
Added 20000 Nucleotide records
Added 25000 Nucleotide records
Added 30000 Nucleotide records
Added 35000 Nucleotide records
Added 40000 Nucleotide records
Added 45000 Nucleotide records
Added 50000 Nucleotide records
Added 55000 Nucleotide records
Added 60000 Nucleotide records
Added 65000 Nucleotide records
Added 70000 Nucleotide records
Added 75000 Nucleotide records
Added 80000 Nucleotide records
Loaded 81904 nucleotide elements into NucleotidCache
Parsing files....Done
Protein/Influenza/Nucleotide Cache Size-->9572/10000/81904
Protein/Influenza/Nucleotide Cache Size-->9572/20000/81904
Protein/Influenza/Nucleotide Cache Size-->9572/81904/81904
Protein/Influenza/Nucleotide Cache Size-->9572/81904/81904

Items of interest in the output are the Cache Address lines, which display the addresses of the nodes in the cluster. Also of note is the Protein/Influenza/Nucleotide line, which displays the number of entries in each cache. As other caches start up, these numbers will change as cache entries are dynamically moved around throughout the Infinispan cluster.

To use the web-based UI we first need to let the server know where the Infinispan configuration files are kept. To do this, edit the jboss-5.1.0.GA/bin/run.conf file and add the line

JAVA_OPTS="$JAVA_OPTS -DCFGPath=/opt/infinispan-4.1.0-SNAPSHOT/etc/config-samples/ec2-demo/"
at the bottom, replacing the path as appropriate.

Now start the JBoss application server using the default profile, e.g. -c default -b <public-ip-address>, where <public-ip-address> is the public IP address of the node that the AS is running on.

Then drop the infinispan-ec2-demoui.war into the jboss-5.1.0.GA/server/default/deploy directory.

Finally point your web browser to http://public-ip-address:8080/infinispan-ec2-demoui and the following page will appear.

The search criteria are the values in the first column of the /etc/Amazon-TestData/influenza.dat file, e.g. AB000604, AB000612, etc.

Note that this demo will be available in Infinispan 4.1.0.BETA2 onwards. If you are impatient, you can always build it yourself from Infinispan's source code repository.


Thursday, 13 May 2010

Client/Server architectures strike back, Infinispan 4.1.0.Beta1 is out!

I’m delighted to announce the release of Infinispan 4.1.0.BETA1. For this, our first beta release of the 4.1 series, we’ve finished the Hot Rod and Memcached protocol based server implementations, and a Java-based Hot Rod client has been developed as a reference implementation. Starting with 4.1.0.BETA1 as well, thanks to the help of Tom Fenelly, Infinispan caches can be exposed over a WebSocket interface via a very simple Javascript “Cache” API.

A detailed change log is available and the release is downloadable from the usual place.

For the rest of the blog post, we’d like to share some of the objectives of Infinispan 4.1 with the community. Here at ‘chez Infinispan’ we’ve been repeating the same story over and over again: ‘Memory is the new Disk, Disk is the new Tape’ and this release is yet another step to educate the community on this fact. Client/Server architectures based around Infinispan data grids are key to enabling this reality but in case you might be wondering, why would someone use Infinispan in a client/server mode compared to using it as peer-to-peer (p2p) mode? How does the client/server architecture enable memory to become the new disk?

Broadly speaking, there are three areas where an Infinispan client/server architecture might be chosen over a p2p one:

1. Access to Infinispan from a non-JVM environment

Infinispan’s roots can be traced back to JBoss Cache, a caching library developed to provide J2EE application servers with data replication. As such, the primary way of accessing Infinispan or JBoss Cache has always been via direct calls coming from the same JVM. However, as we have said before, Infinispan’s goal is to provide much more than that: it aims to provide data grid access to any software application that you can think of, and this obviously requires Infinispan to enable access from non-Java environments.

Infinispan comes with a series of server modules that enable precisely that. All you have to do is decide which API suits your environment best. Do you want to enable direct access to Infinispan via HTTP? Just use our REST or WebSocket modules. Or are you looking to expand the capabilities of your Memcached-based applications? Start an Infinispan-backed Memcached server and your existing Memcached clients will be able to talk to it immediately. Or maybe you’re interested in accessing Infinispan via Hot Rod, our new, highly efficient binary protocol which supports smart clients? Then give us a hand developing non-Java clients that can talk the Hot Rod protocol! :)

2. Infinispan as a dedicated data tier

Quite often, applications running in a p2p environment have caching requirements larger than the available heap size, in which case it makes a lot of sense to move caching into a separate dedicated tier.

It’s also very common to find businesses with work loads that vary over time, where there’s a need to start business processing servers to deal with increased load, or stop them when load is reduced to lower power consumption. When Infinispan data grid instances are deployed alongside business processing servers, starting/stopping these can be a slow process due to state transfer, or rehashing, particularly when large data sets are used. Separating Infinispan into a dedicated tier provides faster and more predictable server start/stop procedures – ideal for modern cloud-based deployments where elasticity in your application tier is important.

It’s common knowledge that optimizations for large memory usage systems compared to optimizations for CPU intensive systems are very different. If you mix both your data grid and business logic under the same roof, finding a balanced set of optimizations that keeps both sides happy is difficult. Once again, separating the data and business tiers can alleviate this problem.

You might be wondering whether, with Infinispan moved to a separate tier, access to data now requires a network call and hence hurts performance in terms of time per call. However, separating tiers gives you a much more scalable architecture, and your data is never more than one network call away. Even if the dedicated Infinispan data grid is configured with distribution, a Hot Rod smart-client implementation - such as the Java reference implementation shipped with Infinispan 4.1.0.BETA1 - can determine where a particular key is located and hit a server that contains it directly.
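As a rough sketch of that smart-client idea (this is not the actual Hot Rod hashing scheme or wire format, just the principle), client and servers agree on a hash function over keys, so the client can route each request straight to the owning server:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of a "smart client" (NOT the real Hot Rod hash):
// client and servers share a hash function over keys, so the client can
// contact the node owning a key directly, in a single network hop.
public class SmartClientRouter {
    private final List<String> servers; // e.g. hypothetical "host:port" strings

    public SmartClientRouter(List<String> servers) {
        this.servers = servers;
    }

    /** Pick the server that owns this key. */
    public String serverFor(byte[] key) {
        int h = Arrays.hashCode(key) & Integer.MAX_VALUE; // non-negative hash
        return servers.get(h % servers.size());
    }
}
```

Because every client computes the same mapping, a get or put for a given key always lands on the node holding it, avoiding an extra redirect hop.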

3. Data-as-a-Service (DaaS)

Increasingly, we see scenarios where environments host a multitude of applications that share the need for data storage, for example in Platform-as-a-Service (PaaS) cloud-style environments (whether public or internal). In such configurations, you don’t want to be launching a data grid per application since it’d be a nightmare to maintain – not to mention resource-wasteful. Instead you want deployments or applications to start processing as soon as possible. In these cases, it’d make a lot of sense to keep a pool of Infinispan data grid nodes acting as a shared storage tier. Isolated cache access could easily be achieved by making sure each application uses a different cache name (i.e. the application name could be used as the cache name). This is easily achieved with protocols such as Hot Rod, where each operation requires a cache name to be provided.

Regardless of the scenarios explained above, there are some common benefits to separating an Infinispan data grid from the business logic that accesses it. In fact, these are very similar to the benefits achieved when application servers and databases don’t run under the same roof. By separating the layers, you can manage each layer independently, which means that adding/removing nodes, maintenance, upgrades, etc. can be handled independently. In other words, if you want to upgrade your application server or servlet container, you don’t need to bring down your data layer.

All of this is available to you now, but the story does not end here. Bearing in mind that these client/server modules are based around reliable TCP/IP, using Netty, the fast and reliable NIO library, they could also in the future form the base of new functionality. For example, client/server modules could be linked together to connect geographically separated Infinispan data grids and enable different disaster recovery strategies.

So, download Infinispan 4.1.0.BETA1 right away, give a try to these new modules and let us know your thoughts.

Finally, don't forget that we'll be talking about Hot Rod in Boston at the end of June for the first ever JUDCon. Don't miss out!


Wednesday, 28 April 2010

Infinispan WebSocket Server

The HTML 5 WebSocket Interface seems like a nice way of exposing an Infinispan Cache to web clients that are WebSocket enabled.

I just committed a first cut of the new Infinispan WebSocket Server to Subversion.

You get a very simple Cache object in your web page's Javascript that supports:
  1. put/get/remove operations on your Infinispan Cache.
  2. notify/unnotify mechanism through which your web page can manage Cache entry update notifications, pushed to the browser.
Take a look at:

Friday, 23 April 2010

4.1.0.ALPHA3 is out

I've just cut Infinispan 4.1.0.ALPHA3, codenamed Radegast.  This release contains a number of fixes for bugs reported in 4.0.0 Starobrno as well as in earlier alphas, and is quite likely to be the last alpha before a feature-complete 4.1.0.BETA1 is released.

A detailed changelog is available.  The release is downloadable in the usual place.

If you use Maven, please note that we now use the new JBoss Nexus-based Maven repository.  The Maven coordinates for Infinispan are still the same (group id org.infinispan, artifact id infinispan-core, etc.) but the repository you need to point to has changed.  Setting up your Maven settings.xml is described here.


Tuesday, 13 April 2010

Boston, are you ready for Infinispan?

The JBoss World/Red Hat Summit organisers have asked me to put together a short note on what to expect at the conference with regards to Infinispan, and this has been published on the conference website.  I thought I'd share this with you as well.

In addition to the conference, this is also the first time JBoss is running JUDCon (JBoss Users and Developers Conference), a free event (limited on a first-come first-served basis) for the community, also in Boston.  Be sure to sign up as places are limited!

Look forward to seeing you there!

Tuesday, 6 April 2010

Infinispan 4.1.0.Alpha2 is out!

We've just released Infinispan 4.1.0.Alpha2 with even more new functionality for the community to play with. Over the past few weeks we've been going backwards and forwards on the Infinispan development list discussing Infinispan's binary client/server protocol, called Hot Rod, and in 4.1.0.Alpha2 we're proud to present the first versions of the Hot Rod server and Java client implementations. Please visit this wiki to find out how to use Hot Rod's Java client and server. Please note that certain functionality, such as clients receiving topology and hashing information, has not yet been implemented.

In addition, Infinispan 4.1.0.Alpha2 is the first release to feature the new LIRS eviction policy and the new eviction design that batches updates, which in combination should provide users with more efficient and accurate eviction functionality.

Another cool feature added in this release is GridFileSystem: a new, experimental API that exposes an Infinispan-backed data grid as a file system. Specifically, the API works as an extension to the JDK's File, InputStream and OutputStream classes. You can read more on GridFileSystem here.

Finally, you can find the API docs for 4.1.0.Alpha2 here and again, please consider this an unstable release that is meant to gather feedback on the Hot Rod client/server modules and the new eviction design.

Galder & Mircea

Tuesday, 30 March 2010

Infinispan eviction, batching updates and LIRS

The DataContainer abstraction is the heart of Infinispan. It is the container structure where the actual cache data resides. Every put, remove, get and other invoked cache operation eventually ends up in the data container. Therefore, it is of utmost importance that the data container is implemented in a way that does not impede overall system throughput. Also recall that the data container's memory footprint cannot grow indefinitely, because we would eventually run out of memory; we have to periodically evict certain entries from the data container according to a chosen eviction algorithm.

The LRU eviction algorithm, although simple and easy to understand, underperforms in cases of weak access locality (one-time-access entries are not replaced in a timely manner, entries to be accessed soonest are unfortunately replaced, and so on). Recently a new eviction algorithm, LIRS, has gathered a lot of attention because it addresses the weak access locality shortcomings of LRU yet retains LRU's simplicity.
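The weak-locality problem is easy to reproduce with a toy LRU cache built on java.util.LinkedHashMap's access-order mode (a stand-in for illustration only, not Infinispan's data container): a burst of one-time "scan" accesses evicts the frequently used entries, which is exactly the case LIRS is designed to handle better.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy LRU cache via LinkedHashMap's access-order mode, illustrating weak
// access locality: a scan of one-time accesses evicts the hot entries.
public class LruScanDemo {
    static <K, V> Map<K, V> lruCache(final int capacity) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = lruCache(4);
        cache.put("hot1", "v");
        cache.put("hot2", "v");
        cache.get("hot1");              // hot entries are accessed repeatedly...
        cache.get("hot2");
        for (int i = 0; i < 4; i++) {
            cache.put("scan" + i, "v"); // ...but a one-time scan evicts them
        }
        System.out.println(cache.containsKey("hot1")); // prints "false"
    }
}
```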

However, no matter what eviction algorithm is used, if eviction is not implemented in a scalable, low-lock-contention way, it can seriously degrade overall system performance. In order to do any meaningful selection of entries for eviction, we have to lock the data container until the appropriate eviction entries are selected. Such a lock-protected data container in turn causes high lock contention, offsetting any eviction precision gained by sophisticated eviction algorithms. To get superior throughput while retaining high eviction precision we need both a low-lock-contention data container and a high-precision eviction algorithm implementation – a seemingly impossible feat.

Instead of making a trade-off between the high-precision eviction algorithm and low lock contention, there is a third approach: we keep the lock-protected data container but amortize the locking cost by batching updates. The basic idea is to wrap any eviction algorithm with a framework that keeps track of cache accesses per thread (i.e. in a ThreadLocal) in a simple queue. For each cache hit associated with a thread, the access is recorded in the thread’s queue. If the thread's queue is full, or the number of accesses recorded in it reaches a certain pre-determined threshold, we acquire a lock and then execute the operations defined by the eviction algorithm – once for all the accesses in the queue. A thread is thus allowed to access many cache items without requesting a lock to run the eviction replacement algorithm, and without paying the lock acquisition cost. We fully exploit non-blocking lock APIs like tryLock. As you may recall, tryLock makes an attempt to get the lock and, if the lock is currently held by another thread, fails without blocking its caller. Although tryLock is cheap, it is not used for every cache access for obvious reasons, but rather at certain pre-determined thresholds. When a thread's queue is completely full, a lock must be explicitly requested. Therefore, using the batching updates approach, we significantly lower the cost of lock contention, streamline access to locked structures, and retain the precision of eviction algorithms such as LIRS. The key insight is that batching the updates on the eviction algorithm doesn't materially affect the accuracy of the algorithm.
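The batching idea can be sketched as follows (class and method names here are illustrative, not Infinispan's actual implementation):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of batching eviction updates: cache hits are recorded
// in a lock-free queue, and only when a threshold is reached do we try to
// take the lock and replay the accesses against the eviction algorithm.
public class BatchingRecorder<K> {
    static final int BATCH_THRESHOLD = 64;
    private final Queue<K> accessQueue = new ConcurrentLinkedQueue<>();
    private final ReentrantLock lock = new ReentrantLock();

    /** Called on every cache hit; cheap in the common case. */
    public void recordAccess(K key) {
        accessQueue.add(key);
        // (A real implementation would keep a counter; size() is O(n) here.)
        if (accessQueue.size() >= BATCH_THRESHOLD && lock.tryLock()) {
            // tryLock: if another thread is already draining, simply move on
            try {
                K k;
                while ((k = accessQueue.poll()) != null) {
                    onAccess(k); // eviction-algorithm bookkeeping, e.g. LIRS/LRU
                }
            } finally {
                lock.unlock();
            }
        }
    }

    /** Stub for the eviction algorithm's per-access bookkeeping. */
    protected void onAccess(K key) { }
}
```

Most calls to recordAccess only touch the lock-free queue; the lock is contended for at most once per batch, which is how the locking cost is amortized.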

How are these ideas implemented in Infinispan? We introduced the BoundedConcurrentHashMap class, based on Doug Lea's ConcurrentHashMap. BoundedConcurrentHashMap hashes entries, based on their keys, into lock-protected segments. Instead of recording accessed entries per thread, we record them in a lock-free queue at the segment level. The main reason not to use a ThreadLocal is that we could potentially have hundreds of threads hitting the data container, some of them very short-lived and thus possibly never reaching the batching thresholds. When the pre-determined thresholds are reached, the eviction algorithm is executed at the segment level. Would running the eviction algorithm at the segment level, rather than over the entire data container, impact overall eviction precision? In our performance tests we have not found any evidence of that.

Infinispan's eviction algorithm is specified using the strategy attribute of the eviction XML element. In addition to the old eviction approaches, starting with release 4.1.0.ALPHA2 you can now select the LIRS eviction algorithm; LRU remains the default. Also note that starting with 4.1.0.ALPHA2 there are two distinct approaches to actually evicting entries from the cache: piggyback, and the default approach using a dedicated EvictionManager thread. The piggyback eviction thread policy, as its name implies, does eviction by piggybacking on the user threads that are hitting the data container. The dedicated EvictionManager thread is unchanged from the previous release and remains the default option. In order to support these two eviction thread policies, a new threadPolicy attribute has been added to the eviction element of the Infinispan configuration schema.
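As a concrete illustration, an eviction element selecting LIRS with the piggyback thread policy might look like the following (the maxEntries value is just an example, and the PIGGYBACK spelling of the threadPolicy value is an assumption here):

```xml
<eviction strategy="LIRS" threadPolicy="PIGGYBACK" maxEntries="10000"/>
```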

Does the eviction redesign based on batching updates live up to its expectations? Ding et al., authors of the original batching proposal, found that their framework increased throughput nearly twofold in comparison with unmodified eviction in PostgreSQL 8.2.3. We do not have any numbers of our own to share yet; however, initial testing of BoundedConcurrentHashMap has been very promising. One of our partner companies replaced a crucial caching component with BoundedConcurrentHashMap and realized a 54% performance improvement on the Berlin SPARQL benchmark for their flagship product. Stay tuned for more updates.


Friday, 12 March 2010

No time to rest, 4.1.0.Alpha1 is here!

"Release quick, release often", that's one of our mottos at Infinispan. Barely a couple of weeks after releasing Infinispan 4.0.0.Final, here comes 4.1.0.Alpha1 with new goodies. The main star for this release is the new server module implementing Memcached's text protocol.

This new module enables you to use Infinispan as a replacement for any of your Memcached servers, with the added bonus that Infinispan's Memcached server module allows you to start several instances forming a cluster, so that they replicate, invalidate or distribute data between themselves – a feature not present in the default Memcached implementation.

On top of the clustering capabilities, the Infinispan Memcached server module gets built-in eviction, cache store support, JMX/Jopr monitoring, etc. for free.

To get started, first download Infinispan 4.1.0.Alpha1. Then go to the "Using Infinispan Memcached Server" wiki and follow the instructions there. If you're interested in finding out how to set up multiple Infinispan Memcached servers in a cluster, head to the "Talking To Infinispan Memcached Servers From Non-Java Clients" wiki, where you'll also find out how to access our Memcached implementation from non-Java clients.

Finally, you can find the API docs for 4.1.0.Alpha1 here, and note that this is an unstable release that is meant to gather feedback on the Memcached server module as early as possible.


Tuesday, 23 February 2010

Infinispan 4.0.0.Final has landed!

It is with great pleasure that I'd like to announce the availability of the final release of Infinispan 4.0.0. Infinispan is an open source, Java-based data grid platform that I first announced last April, and since then the codebase has been through a series of alpha and beta releases, and most recently 4 release candidates which generated a lot of community feedback.

It has been a long and wild ride, and the very active community has been critical to this release. A big thank you to everyone involved, you all know who you are.

I recently published an article about running Infinispan in local mode - as a standalone cache - compared to JBoss Cache and EHCache. The article took readers through the ease of configuration and the simple API, and then demonstrated some performance benchmarks using the recently-announced Cache Benchmarking Framework. We've been making further use of this benchmarking framework in the recent weeks and months, extensively testing Infinispan on a large cluster.

Here are some simple charts, generated using the framework. The first set compares Infinispan against the latest and greatest JBoss Cache release (3.2.2.GA at this time), using both synchronous and asynchronous replication. But first, a little bit about our test lab, comprising a large number of nodes, each with the following configuration:
  • 2 x Intel Xeon E5530 2.40 GHz quad core, hyperthreaded processors (= 16 hardware threads per node)
  • 12GB memory per node, although the JVM heaps are limited at 2GB
  • RHEL 5.4 with Sun 64-bit JDK 1.6.0_18
  • InfiniBand connectivity between nodes
And a little bit about the way the benchmark framework was configured:
  • Run from 2 to 12 nodes in increments of 2
  • 25 worker threads per node
  • Writing 1kb of state (randomly generated Strings) each time, with a 20% write percentage

As you can see, Infinispan significantly outperforms JBoss Cache, even in replicated mode. The large gain in read performance, as well as asynchronous write performance, demonstrates the minimally locking data container and new marshalling techniques in Infinispan. But you also notice that with synchronous writes, performance starts to degrade as the cluster size increases. This is a characteristic of replicated caches, where you always have fast reads and all state available on each and every node, at the expense of ultimate scalability.

Enter Infinispan's distributed mode. The goal of data distribution is to maintain enough copies of state in the cluster so it can be durable and fault tolerant, but not so many copies that Infinispan cannot scale, with linear scalability being the ultimate prize. In the following runs, we benchmark Infinispan's synchronous, distributed mode, comparing 2 different Infinispan configurations. The framework was configured with:
  • Run from 4 to 48 nodes, in increments of 4 (to better demonstrate linear scalability)
  • 25 worker threads per node
  • Writing 1kb of state (randomly generated Strings) each time, with a 20% write percentage


As you can see, Infinispan scales linearly as the node count increases. Regarding the different configurations tested, "lazy" stands for enabling lazy unmarshalling, which allows state to be stored in Infinispan as byte arrays rather than deserialized objects. This has advantages for certain access patterns, for example where remote lookups are very common and local lookups are rare.

How does Infinispan compare against ${POPULAR_PROPRIETARY_DATAGRID_PRODUCT}?
Due to licensing restrictions on publishing benchmarks of such products, we are unfortunately not at liberty to make such comparisons public - although we are very pleased with how Infinispan compares against popular commercial offerings, and plan to push the performance envelope even further in 4.1.

And just because we cannot publish such results does not mean that you cannot run such comparisons yourself. The Cache Benchmark Framework has support for different data grid products, including Oracle Coherence, and more can be added easily.

Aren't statistics just lies?
We strongly recommend running the benchmarks yourself. Not only does this prove things for yourself, but it also allows you to benchmark behaviour on your specific hardware infrastructure, using the specific configurations you'd use in real life, and with your specific access patterns.

So where do I get it?
Infinispan is available on the Infinispan downloads page. Please use the user forums to communicate with us about the release. A full change log of features in this release is on JIRA, and documentation is on our newly re-organised wiki. We have put together several articles, chapters and examples; feel free to suggest new sections for this user guide - topics you may find interesting or bits you feel we've left out or not addressed as fully.

What's next?
We're busy hacking away on Infinispan 4.1 features. Expect an announcement soon on this, including an early alpha release for folks to try out. If you're looking for Infinispan's roadmap for the future, look here.

Cheers, and enjoy!

Tuesday, 16 February 2010

Benchmarking Infinispan and other Data Grid software

Why benchmarking?
Benchmarking is important for us: we want to monitor our performance improvements between releases and compare ourselves with other products as well. Benchmarking a data grid product such as Infinispan is not a trivial task: one needs to start multiple processes over multiple machines, coordinate them so that everything runs at once, and centralize the reports. Then there is the question of what access patterns the benchmark should stress.

Introducing the cache benchmarking framework (CBF)
What we've come up with is a tool to help us run our benchmarks and generate reports and charts. And more:
- simple to configure (see config sample below)
- simple to run. We supply a set of .sh scripts that connect to remote nodes and start cluster instances for you.
- open source. Everybody can download it, read the code and run the benchmarks by themselves. Published results can be easily verified and validated.
- extensible. It's easy to extend the framework in order to benchmark against additional products. It's also easy to write different data access patterns to be tested.
- scalable. At this moment we've used CBF for benchmarking up to 62 nodes.
- users can test products, configurations and access patterns on their own hardware and network. This is crucial, since it means educated decisions can be made based on relevant, use-case-specific statistics and measurements. Further, the benchmark can even be used to compare the performance of different configurations and tuning parameters of a single data grid product, to help users choose a configuration that works best for them.

Below is a sample configuration file and generated report.


<master bindAddress="${}" port="${2103:master.port}"/>

<benchmark initSize="2" maxSize="${4:slaves}" increment="1">
  <DestroyWrapper runOnAllSlaves="true"/>
  <ClusterValidation partialReplication="false"/>
  <Warmup operationCount="1000"/>
  <WebSessionBenchmark numberOfRequests="2500" numOfThreads="2"/>

  <config name="mvcc/mvcc-repl-sync.xml"/>
  <config name="repl-sync.xml"/>
  <config name="dist-sync.xml"/>
  <config name="dist-sync-l1.xml"/>

  <report name="Replicated">
    <item product="infinispan4" config="repl-sync.xml"/>
    <item product="jbosscache3" config="mvcc/mvcc-repl-sync.xml"/>
  </report>
  <report name="Distributed">
    <item product="infinispan4" config="dist-*"/>
  </report>
  <report name="All" includeAll="true"/>
</benchmark>


And this is what a generated chart looks like:

Where can you find CBF?
CBF can be found here. For a quick way of getting up to speed with it we recommend the 5 minutes tutorial.