Friday, 25 November 2011

Infinispan @Devoxx

Compared with previous editions, this year's Devoxx was not that different: well organised, packed with interesting presentations, and with full rooms. And plenty of Belgian beer :)
Pete Muir, Sanne Grinovero and I were also given the chance to speak. And we did take our time: a three-hour deep dive into the Infinispan ecosystem, with plenty of live demos and good discussions.
If you couldn't make it and can't wait for the video to be published, don't worry: the demo is available online here. Give it a spin and let us know what you think!


Wednesday, 23 November 2011

More on transaction performance: use1PcForAutoCommitTransactions

What's use1PcForAutoCommitTransactions all about?

Don't be scared by the name. use1PcForAutoCommitTransactions is a new feature (5.1.CR1) that does quite a cool thing: it increases your transactions' performance.
Let me explain.
Before Infinispan 5.1 you could access the cache both transactionally and non-transactionally. Naturally, non-transactional access is faster and offers fewer consistency guarantees. But we don't support mixed access in Infinispan 5.1, so what's to be done when you need the speed of non-transactional access and you are ready to trade some consistency guarantees for it?
Well, here is where use1PcForAutoCommitTransactions helps you. What it does is force an induced (autoCommit=true) transaction to commit in a single phase: only one RPC instead of the two needed for a full two-phase commit (2PC).

At what cost?

You might end up with inconsistent data if multiple transactions modify the same key concurrently. But if you know that's not the case, or you can live with it, then use1PcForAutoCommitTransactions will help your performance considerably.

An example

Let's say you do a simple put operation outside the scope of a transaction:
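A minimal sketch of such a call (assuming cacheManager is an already-running cache manager whose default cache is transactional):

    import org.infinispan.Cache;

    // no tm.begin()/tm.commit() around the call: with autoCommit enabled,
    // the cache wraps the put in an "induced" transaction for you
    Cache<String, String> cache = cacheManager.getCache();
    cache.put("k", "v");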

Now let's see how this would behave under several different transaction configurations:

Not using 1PC...

The put will happen in two RPCs/steps: a prepare message is sent around and then a commit.
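For reference, a sketch of such a configuration using the new builder API (the attribute is simply left at its default, false):

    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.transaction.TransactionMode;

    // transactional cache, autoCommit on, full 2PC for induced transactions
    Configuration config = new ConfigurationBuilder()
       .transaction()
          .transactionMode(TransactionMode.TRANSACTIONAL)
          .autoCommit(true)
          .use1PcForAutoCommitTransactions(false)
       .build();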

Using 1PC...

The put happens in one RPC as the prepare and the commit are merged. Better performance.
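The single-phase variant, again as a sketch (same imports as above):

    // induced transactions commit in a single RPC: prepare and commit merged
    Configuration config = new ConfigurationBuilder()
       .transaction()
          .transactionMode(TransactionMode.TRANSACTIONAL)
          .autoCommit(true)
          .use1PcForAutoCommitTransactions(true)
       .build();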

Not using autoCommit...

An exception is thrown, as this is a transactional cache and invocations must happen within the scope of a transaction.
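Disabling autoCommit would look roughly like this (same imports as above); with such a configuration, the bare cache.put("k", "v") from the example fails:

    // every invocation must now run inside an explicitly started transaction
    Configuration config = new ConfigurationBuilder()
       .transaction()
          .transactionMode(TransactionMode.TRANSACTIONAL)
          .autoCommit(false)
       .build();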


Tuesday, 22 November 2011

Infinispan 5.1.0.BETA5 is out!

Infinispan 5.1.0.BETA5 has just been released with a few interesting additions and important fixes:
  • Locks acquired within a transaction are now reordered in order to avoid deadlocks. There's no new configuration required to take advantage of this feature. More information on how lock reordering works can be found here.
  • One of the aims of the Infinispan 5.1 'Brahma' series is to move away from JAXB and instead use StAX-based XML parsing. Ahead of that, a new configuration API based on builders has been developed (see the sketch right after this list). Expect to hear more about it, with examples of using the API, in the next few days.
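In the meantime, here's a rough idea of what the builder-based API looks like; a sketch only, as names may still shift before the final release:

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.Configuration;
    import org.infinispan.configuration.cache.ConfigurationBuilder;

    // fluently build a distributed, synchronous cache configuration
    Configuration config = new ConfigurationBuilder()
       .clustering()
          .cacheMode(CacheMode.DIST_SYNC)
       .build();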
Amongst the fixes included in this release, it's worth mentioning:
  • The demo paths that were broken in 5.1.0.BETA4 have now been fixed.
  • Some of the Infinispan jars in 5.1.0.BETA4 were showing duplicate classes. This was the result of an OSGi bundle generation bug, so to avoid the issue, OSGi bundle generation has been disabled in 5.1.0.BETA5. This functionality will be re-enabled once the issue has been fixed in the Maven Felix plugin.
As always, please keep the feedback coming. You can download the release from here and you can find further details on the issues addressed in the changelog.


Friday, 11 November 2011

Some noteworthy improvements for pessimistic transactions

Pessimistic transactions were added in 5.1 as a "rebranding" of eager transactions from previous Infinispan releases. But besides the re-branding, the code also brought some noteworthy performance optimisations:
  • a single RPC happens for acquiring a lock on a key, regardless of the number of invocations. So if you call cache.put(k, v) in a loop, within the scope of the same transaction, there is only one remote call to the owner of k (see the sketch after this list).
  • if the key you want to lock/write maps to the local node, then no remote locks are acquired. In other words, there won't be any RPCs for writing to a key that maps locally. This can be very powerful when used in conjunction with the KeyAffinityService, as it allows you to control the locality of your keys.
  • during the two-phase commit (2PC), the prepare phase doesn't perform any RPCs: this optimisation is based on the fact that locks are already acquired on each write. This means that the number of RPCs during a transaction's lifespan is reduced by one.
  • for some writes to the cache (e.g. cache.put(k, v)), two RPCs were performed: one to acquire the remote lock and one to fetch the previous value. The obvious optimisation in this case was to make a single RPC for both operations - which we do starting with 5.1.
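A minimal sketch of the loop scenario from the first bullet (assuming cache is a transactional cache configured with pessimistic locking):

    import javax.transaction.TransactionManager;

    TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
    tm.begin();
    for (int i = 0; i < 100; i++)
       cache.put("k", "v" + i); // the remote lock on "k" is acquired only once, on the first put
    tm.commit(); // the prepare phase sends no lock RPCs: the locks are already held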

Thursday, 10 November 2011

Fewer deadlocks, higher throughput

Here's the problem: the first transaction (T1) writes to keys a and b, in this order. The second transaction (T2) writes to keys b and a - again, the order is relevant. Now with some "right timing", T1 manages to acquire the lock on a and T2 acquires the lock on b. And then each waits for the other to release its lock so that it can progress. This is what is called a deadlock, and it is really bad for your system throughput - but I won't insist on this aspect, as I've mentioned it a lot in my previous posts.

What I want to talk about though is a way to solve this problem. Quite a simple way: just force an order on your transactions' writes and you're guaranteed not to deadlock. If both T1 and T2 write to a then b (lexicographical order), there won't be any deadlock. Ever.
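A minimal sketch of this manual fix (tm and cache as in the earlier examples; the key names are illustrative):

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    // both T1 and T2 sort their keys first, so locks are always
    // acquired in the same (lexicographical) order: a, then b
    List<String> keys = Arrays.asList("b", "a");
    Collections.sort(keys);
    tm.begin();
    for (String k : keys)
       cache.put(k, "value-for-" + k);
    tm.commit();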
But there's a catch: it's not always possible to define this order, simply because you can't, or because you don't know all your keys at the very beginning of the transaction.

Now here's the good news: Infinispan orders the keys touched in a transaction for you, so you won't have to do it yourself. Actually, you don't have to do anything, not even enable this feature, as it is already enabled by default.
Does it sound too good to be true? That's because it's only partially true: lock reordering only works if you're using optimistic locking. For pessimistic locking you still have to do it the old way - order your locks (that is, of course, if you can).

Wanna know more about it? Read this.

Expect and enjoy this feature in our next release 5.1.0.BETA5.

Stay tuned!

Wednesday, 9 November 2011

Single lock owner: an important step forward

The single lock owner is a highly requested Infinispan improvement. The basic idea behind it is that, when writing to a key, locks are no longer acquired on all the nodes that own that key, but only on a single designated node (named "main owner").

How does it help me?

Short version: if you use transactions that concurrently write to the same keys, this improvement significantly increases your system's throughput.

Long version: if you're using Infinispan with transactions that modify the same key(s) concurrently, then you can easily end up in a deadlock. A deadlock can even occur when two transactions modify just one and the same key at the same time - which is both inefficient and counter-intuitive. Such a deadlock means that at least one transaction (possibly both) will eventually roll back, but also that the lock on the key is held for the duration of the lockAcquisitionTimeout config option (defaults to 10 seconds). These deadlocks reduce throughput significantly, as transaction threads are held inactive during the deadlock time. On top of that, other transactions that want to operate on that key are also delayed, potentially resulting in a cascade effect.
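For reference, that timeout is configurable; a sketch using the builder-style configuration API (value in milliseconds):

    // how long a competing transaction waits on a held lock before giving up
    Configuration config = new ConfigurationBuilder()
       .locking()
          .lockAcquisitionTimeout(10000) // the 10-second default
       .build();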

What's the added performance penalty?

The only performance penalty is encountered during cluster topology changes. At that point the cluster needs to perform some additional computation (no RPCs involved) to fail over the acquired locks from previous to new owners.
Another noticeable aspect is that locks are now released asynchronously, after the transaction commits. This doesn't add any burden to the transaction duration, but it means that locks are held slightly longer. That's not something to be concerned about if your transactions don't compete for the same locks, though.
We plan to benchmark this feature using the Radargun benchmarking tool - we'll report back!

Want to know more?

You can read the single lock owner design wiki and/or follow the JIRA discussions.

More locking improvements in Infinispan 5.1.0.BETA4

The latest beta in the Infinispan 5.1 "Brahma" series is out. So, what's in Infinispan 5.1.0.BETA4? Here are the highlights:
  • A hugely important lock acquisition improvement has been implemented that results in locks being acquired on only a single node in the cluster. This means that deadlocks as a result of multiple nodes updating the same key are no longer possible. Concurrent updates on a single key will now be queued in the node that 'owns' that key. For more info, please check the design wiki and keep an eye on this blog because Mircea Markus, who's the author of this enhancement, will be explaining it in more detail very shortly. Please note that you don't need to make any configuration or code changes to take advantage of this improvement.

  • A bunch of classes and interfaces in the core/ module have been migrated to api/ and commons/ modules in order to reduce the size of the dependencies that the Hot Rod Java client had. As a result, there's been a change in the hierarchy of the Cache and CacheContainer classes, with the introduction of BasicCache and BasicCacheContainer, which are parent classes of the existing Cache and CacheContainer respectively. What's important is that Hot Rod clients must now code against BasicCache and BasicCacheContainer rather than Cache and CacheContainer. So previous code that was written like this will no longer compile:
    import org.infinispan.Cache;
    import org.infinispan.manager.CacheContainer;
    import org.infinispan.client.hotrod.RemoteCacheManager;

    // fails to compile: RemoteCacheManager is no longer a CacheContainer
    CacheContainer cacheContainer = new RemoteCacheManager();
    Cache cache = cacheContainer.getCache();
    Instead, if Hot Rod clients want to continue using interfaces higher up the hierarchy from the remote cache/container classes, they'll have to write:
    import org.infinispan.BasicCache;
    import org.infinispan.manager.BasicCacheContainer;
    import org.infinispan.client.hotrod.RemoteCacheManager;

    // compiles: RemoteCacheManager implements BasicCacheContainer
    BasicCacheContainer cacheContainer = new RemoteCacheManager();
    BasicCache cache = cacheContainer.getCache();
    Previous code that interacted directly with RemoteCache and RemoteCacheManager works as it used to:
    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;

    // unchanged: the concrete Hot Rod classes keep their API
    RemoteCacheManager cacheContainer = new RemoteCacheManager();
    RemoteCache cache = cacheContainer.getCache();
    We apologise for any inconvenience caused, but we think that Hot Rod clients will benefit hugely from this change, which vastly reduces the number of dependencies they need.

  • Finally, a few words about the ZIP distribution file. In BETA4 we've added some cache store implementations that were missing from previous releases, such as the RemoteCacheStore that talks to Hot Rod servers, and we've added a brand new demo application that implements a near-caching pattern using JMS. Please be aware that this demo is just a simple prototype of how near caches could be built using Infinispan and HornetQ.

As always, please keep the feedback coming. You can download the release from here and you can find further details on the issues addressed in the changelog.