Tuesday, 14 August 2018

Hotrod clients C++ and C# 8.3.0.Beta1 are out!

Dear Infinispanners,

The C++ and C# 8.3.0.Beta1 releases are available!

The main feature of this release is transactions. Clients can now run a sequence of Hot Rod operations in a transactional way. Basic methods are provided to begin, commit or roll back a transaction over a Hot Rod connection (Hot Rod protocol 2.7 and Infinispan 9.3+ are required).
The API is quite easy to use: start a transaction, perform a batch of cache operations, then commit or roll back over the same connection.
Source code, binaries and docs are at the usual place. Thank you for following us!
The Infinispan Team
[1] Release notes
[2++] C++ code for 8.3.0.Beta1
[2#] C# code for 8.3.0.Beta1
[3] Downloads

Monday, 13 August 2018

Node.js client 0.5.0 released with improved stability and better OSX integration

Infinispan Node.js client 0.5.0 was released last week. It comes with much improved stability under heavy load conditions and hence it's a recommended upgrade for any current users.

On top of that, a configuration option called topologyUpdates (true (default) / false) has been added to disable topology updates. This can be useful when trying to access an Infinispan server running within a Docker container on macOS. Without this option to disable topology updates, the Node.js client receives internal Docker IP addresses on first contact, which cannot be accessed from outside Docker on macOS. See this previous blog post for more details.

If you're a Node.js user and want to store data remotely in Infinispan server instances, please give the client a go and tell us what you think of it via our forum, via our issue tracker or via Zulip on the Infinispan channel.

Cheers,
Galder



Friday, 3 August 2018

Infinispan 9.4.0.Beta1 is out!

Infinispan users,

We have just released 9.4.0.Beta1 which includes bug fixes and improvements. Highlights of this release include:

  • Removal of WebSocket server support (ISPN-9386);
  • One step closer to removing compatibility mode, by dropping it from Remote Queries, Tasks and Scripts (ISPN-9180, ISPN-9182)
  • Recovery Support for Hot Rod client transactions (ISPN-9261)
  • Fixed issue with Hot Rod client near cache for async operations (ISPN-9393)
  • Improvements in Ickle (ISPN-9378)
  • Additional Segmented Stores:
    • RocksDB supports single database segmentation (ISPN-9375)
    • RemoteStore segmented for additional stream performance (ISPN-9376)
  • RocksDB now allows for properties to be provided to configure underlying database (ISPN-9371)
  • Component Upgrades:
    • Protostream upgraded to version 4.2.1.Final (ISPN-9399)
    • Hibernate ORM upgraded to version 5.3.4.Final (ISPN-9406)
  • Other bug fixes.

The full list of 9.4.0.Beta1 fixes is here.

You can find both releases on our download page. Please report any issues in our issue tracker and join the conversation in our Zulip Chat to shape up our next release.

Enjoy,
The Infinispan Team

Tuesday, 24 July 2018

Infinispan Spark connector 0.8 released


The Infinispan Spark connector version 0.8 has been released and is available in Maven central and SparkPackages.

This is a maintenance only release to bring compatibility with Spark 2.3 and Infinispan 9.3.

For more information about the connector, please consult the documentation and also try the docker based sample.

For feedback and general help, please use the Infinispan chat.



Monday, 16 July 2018

Infinispan 9.3.1.Final and 9.4.0.Alpha1

We have 2 new releases to announce today:

9.3.1.Final includes some important bug fixes, and we recommend all users of 9.3.0.Final to upgrade:
  • Fix for CVE-2018-1131, which allowed unchecked deserialization in the server from binary Java, XML and JSON payloads
  • Fixed transcoding from JSON/XML to java objects with deployed entities (ISPN-9336)
  • Look up key in cache loader if the entry has expired but hasn't yet been removed from the data container (ISPN-9370)
  • Avoid circular references in exceptions, as they were causing stack overflows with logback 1.2.x (ISPN-9362)
See the full list of bug fixes here


9.4.0.Alpha1 is the first iteration towards our next big release. Highlights include:
  • The Spring Cache provider now supports two configuration properties with which you can determine how long to wait for read and write operations respectively (ISPN-9301).
  • You can now obtain nanosecond-resolution statistics for average read/write/remove time (ISPN-9352).
  • Queries now throw an AvailabilityException if the cache is in degraded mode and the partition mode isn’t ALLOW_READ_WRITES (ISPN-9340)
  • Admin Console: You can now delete caches from the Administration console (ISPN-7291).
  • Following up on the segmented data container in 9.3.0.Final, cache stores can now be segmented as well, allowing for better performance for bulk operations (e.g. cache.size(), cache.entrySet().stream())
  • The server-side Hot Rod parser is now generated automatically (ISPN-8981)
The full list of 9.4.0.Alpha1 fixes is here.


You can find both releases on our download page. Please report any issues in our issue tracker and join the conversation in our Zulip Chat to shape up our next release.
 

Monday, 2 July 2018

Hotrod clients C++ and C# 8.3.0.Alpha1 are out!

Dear Infinispanners,

The C++ and C# 8.3.0.Alpha1 releases are available!

Both clients come with these new features:
  • counter operations, to use cluster distributed counters [1]
  • admin operations, to create/remove cache programmatically at runtime [2]
For the .NET Core lovers, work is in progress to implement the .NET Core build for the C# client [3].
Features list, code and bits are available as usual: [4] [5] [6].

Cheers,
The Infinispan Team

[1] Clustered Counters
[2] Hot Rod Admin Tasks
[3] How to build à la .NET Core manière
[4] Release notes
[5++] C++ code for 8.3.0.Alpha1
[5#] C# code for 8.3.0.Alpha1
[6] Downloads

Wednesday, 27 June 2018

Making Java objects queryable by Infinispan remote clients

The following is a common question amongst Infinispan community users:
How do I make my Java objects queryable by remote clients? 

Annotation Method


The simplest way is to take advantage of Infinispan Protostream annotations to mark your objects as queryable and decide how each object field should be indexed. Example:
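Below is a minimal sketch of what such an annotated class could look like; the class name, fields and indexing options are hypothetical and only meant to show where the Protostream annotations go:

import org.infinispan.protostream.annotations.ProtoDoc;
import org.infinispan.protostream.annotations.ProtoField;

// Hypothetical entity; @ProtoDoc pseudo-annotations mark it as indexed/queryable.
@ProtoDoc("@Indexed")
public class Book {

   @ProtoDoc("@Field(index = Index.YES, analyze = Analyze.YES, store = Store.NO)")
   @ProtoField(number = 1, required = true)
   String title;

   @ProtoDoc("@Field(index = Index.YES, store = Store.NO)")
   @ProtoField(number = 2, required = true)
   String author;

   @ProtoDoc("@Field(index = Index.YES, store = Store.NO)")
   @ProtoField(number = 3, defaultValue = "0")
   int publicationYear;
}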

Then, the ProtoSchemaBuilder can inspect the annotated class and derive a Google Protocol Buffers schema file from it. Example:
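As a rough sketch, assuming the hypothetical Book class above and an existing RemoteCacheManager named remoteCacheManager:

import org.infinispan.client.hotrod.marshall.ProtoStreamMarshaller;
import org.infinispan.protostream.SerializationContext;
import org.infinispan.protostream.annotations.ProtoSchemaBuilder;

// Derive a .proto schema from the annotated class and register it in the
// client's serialization context at the same time.
SerializationContext serCtx = ProtoStreamMarshaller.getSerializationContext(remoteCacheManager);
ProtoSchemaBuilder protoSchemaBuilder = new ProtoSchemaBuilder();
String bookSchema = protoSchemaBuilder
      .fileName("book.proto")
      .packageName("book_sample")
      .addClass(Book.class)
      .build(serCtx);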

Finally, the schema file needs to be registered in the “___protobuf_metadata” cache:
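For instance (the file name matches what was chosen above; the cache name is the fixed ___protobuf_metadata cache):

import org.infinispan.client.hotrod.RemoteCache;

// Register the generated schema with the server so it can index and query the data.
RemoteCache<String, String> metadataCache = remoteCacheManager.getCache("___protobuf_metadata");
metadataCache.put("book.proto", bookSchema);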

Although this is by far the easiest way to make your Java objects queryable, this method might not always be viable. For example, you might not be able to modify the Java object classes to add the annotations. For such use cases, a more verbose method is available that does not require modifying the source code of the Java object.

Plain Object Method


For example, given this Java object:
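For illustration, assume a simple class along these lines (the name and fields are hypothetical):

// A plain Java object whose source we cannot (or do not want to) annotate.
public class Movie {
   private final String title;
   private final String director;
   private final int year;

   public Movie(String title, String director, int year) {
      this.title = title;
      this.director = director;
      this.year = year;
   }

   public String getTitle() { return title; }
   public String getDirector() { return director; }
   public int getYear() { return year; }
}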

A Protocol Buffers schema must be defined where comments are used to define the object as queryable and decide how each field is indexed:
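A sketch of such a schema for the hypothetical Movie class, with the indexing pseudo-annotations placed in comments (the exact indexing options are illustrative):

/* @Indexed */
message Movie {
   /* @Field(index = Index.YES, analyze = Analyze.YES, store = Store.NO) */
   required string title = 1;
   /* @Field(index = Index.YES, store = Store.NO) */
   required string director = 2;
   /* @Field(index = Index.YES, store = Store.NO) */
   optional int32 year = 3;
}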

This method also requires a Protostream message marshaller to be defined which specifies how each field is serialized/deserialized:
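A minimal sketch of such a marshaller for the Movie class above:

import java.io.IOException;
import org.infinispan.protostream.MessageMarshaller;

public class MovieMarshaller implements MessageMarshaller<Movie> {

   @Override
   public String getTypeName() {
      // Must match the message name declared in the .proto schema.
      return "Movie";
   }

   @Override
   public Class<? extends Movie> getJavaClass() {
      return Movie.class;
   }

   @Override
   public Movie readFrom(ProtoStreamReader reader) throws IOException {
      String title = reader.readString("title");
      String director = reader.readString("director");
      Integer year = reader.readInt("year");
      return new Movie(title, director, year == null ? 0 : year);
   }

   @Override
   public void writeTo(ProtoStreamWriter writer, Movie movie) throws IOException {
      writer.writeString("title", movie.getTitle());
      writer.writeString("director", movie.getDirector());
      writer.writeInt("year", movie.getYear());
   }
}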

This method still requires the Protocol Buffers schema to be registered remotely, but on top of that, the schema and marshaller need to be registered in the client:
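A rough sketch of the client-side registration, assuming the schema file is on the classpath and the remote registration was done as shown earlier:

import org.infinispan.client.hotrod.marshall.ProtoStreamMarshaller;
import org.infinispan.protostream.FileDescriptorSource;
import org.infinispan.protostream.SerializationContext;

// Register the schema and the marshaller with the client's serialization context.
SerializationContext serCtx = ProtoStreamMarshaller.getSerializationContext(remoteCacheManager);
serCtx.registerProtoFiles(FileDescriptorSource.fromResources("movie.proto"));
serCtx.registerMarshaller(new MovieMarshaller());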

Clearly, this second method is a lot more verbose and more laborious when refactoring. If any changes are made to the Java object, the marshaller and Protocol Buffer schema need to also be changed accordingly. This is done automatically in the first method.

Both methods are demonstrated in full in the queryable-pojos demo.

Cheers
Galder

Tuesday, 26 June 2018

Infinispan 9.3.0.Final is out!

We're delighted to announce the release of Infinispan 9.3.0.Final, which is a culmination of several months of hard work by the entire Infinispan community. Here's a summary of what you can find within it:

  • First final release to work with both Java 8 and Java 10. Note that Infinispan only works in classpath mode.
  • Transaction support in Hot Rod. The Java Hot Rod client can participate in Java transactions via Synchronization or XA enlistment. Note that recovery isn't supported yet.
  • Caches can now configure the maximum number of attempts to start a CacheWriter/CacheLoader on startup before cache creation fails.
  • Write-behind stores are now fault-tolerant by default.
  • Segmented On Heap Data Container. It improves performance of stream operations.
  • Server upgraded to Wildfly 13.
  • We have introduced several WildFly feature packs to make it easier for Infinispan to be utilised on WildFly instances via the Server Provisioning Plugin. The following feature packs have been created, most notably:
    • infinispan-feature-pack-client
      • All of the modules required to connect to a hotrod server via the client
    • infinispan-feature-pack-embedded
      • The modules required for embedded instances of Infinispan
    • infinispan-feature-pack-embedded-query
      • The same as above but with query capabilities
    • infinispan-feature-pack-wf-modules
      • This is equivalent to the Wildfly-modules.zip
  • Hibernate second-level cache provider works with Hibernate ORM 5.3.
  • The Hot Rod server now allows multiple protocols over a single port. The initial version supports HTTP/1.1, HTTP/2 and Hot Rod. Switching protocols can be done using TLS/ALPN or the HTTP/1.1 Upgrade header.
  • Admin console - improved all editors (schema, scripts, JSON data) to include syntax highlighting.
  • Several enhancements in the Java Hot Rod client allowing it to read and write data in different formats such as JSON, for cache operations and deployed filters/converters.
  • Cluster wide max idle expiration.
  • Component Upgrades
    • Hibernate Search 5.10
    • Hibernate ORM 5.3
  • Numerous bug fixes which improve stability
For more details, please check our issue tracking release notes.

Thanks to everyone involved in this release! Onward to Infinispan 9.4!

Cheers,
Galder

Tuesday, 5 June 2018

Thanks Great Indian Developer Summit & Voxxed Days Zurich

A few days after Devoxx France, I headed for Great Indian Developer Summit in Bangalore where I spoke about handling streaming data on top of a Kubernetes platform. This was a very similar talk to the one I gave at JFokus but with some important changes. Together with Clement we created a small RxJava 2 façade for Infinispan. When combined with Vert.x RxJava 2 API, we could finally have an idiomatic way of handling streaming data asynchronously and coordinating events purely using RxJava 2 APIs. This is crucial for working with streaming data in an efficient way. On top of that, I made some changes to push the binary data used by the demo outside of the deployment.

Unfortunately Murphy struck during the presentation and I was unable to run the live coding demo. A problem with Docker image sizes during preparation, combined with a cleanup I ran before the talk, meant some of the images had to be re-downloaded. Neither the wireless internet connection at the conference nor the mobile connection was good enough for me to recover them. Once back in the hotel, where I had a stable connection, I was able to record a screencast of the steps I would have followed during the Great Indian Developer Summit talk. You can find this screencast below:



The code from the demo can be found here. The live coding steps I followed are defined here. Finally the slides can be found here:

Finally, for something slightly different, back in March I joined Ray Tsang for a talk at Voxxed Days Zurich. This was a really fun talk to be part of! We combined past stories of my time at JBoss support with Ray's Kubernetes troubleshooting experience to create an engaging talk :). You can find the video below:



The week after Sebastian Łaskawiec and I travelled to Red Hat Summit as part of the work we did to integrate Red Hat Data Grid (Infinispan product version) into the Scavenger Hunt game presented on the keynote of the last day. Both Sebastian and I have been working on a blog series which will be published very soon.

Cheers,
Galder

Monday, 4 June 2018

Infinispan 9.3.0.CR1

Dear Infinispan Community,

we're glad to announce that 9.3.0.CR1 is out!

This is the first release which works with both Java 8 and Java 10. Pre-releases of Java 11 work too. Note that Infinispan still only works in classpath mode.

Highlights of this release include:
  • Expanded transaction support in Hot Rod, which can now participate in Java transactions via Synchronization or XA enlistment. Transaction recovery isn't supported yet.
  • Caches can now configure the maximum number of attempts to start a CacheWriter/CacheLoader on startup before cache creation fails.
  • Write-behind stores are now fault-tolerant by default
  • Segmented On-Heap Data Container improves stream operation performance
  • We have introduced several WildFly feature packs to make it easier for Infinispan to be utilised on WildFly instances via the Server Provisioning Plugin. The following feature packs have been created:
    • infinispan-feature-pack-client: all of the modules required to connect to a hotrod server via the client
    • infinispan-feature-pack-embedded: the modules required for embedded instances of Infinispan
    • infinispan-feature-pack-embedded-query: the same as above but with query capabilities
    • infinispan-feature-pack-wf-modules: this is equivalent to the wildfly-modules.zip
  • The Hibernate second-level cache provider now works with Hibernate ORM 5.3
  • The server now allows multiple protocols over a single port. The initial version supports HTTP/1.1, HTTP/2 and Hot Rod. Switching protocols can be done using TLS/ALPN or the HTTP/1.1 Upgrade header.
  • Admin console - improved all editors (schema, scripts, JSON data) to include syntax highlighting
  • Component Upgrades: Hibernate Search 5.10 and Hibernate ORM 5.3
Numerous bug fixes which improve stability are also included (here is the full list of the solved issues).

As usual, you can find all the bits on our website. If you find any issues, don't hesitate to report them on our issue tracker.

Friday, 11 May 2018

Infinispan 9.3.0.Beta1

Infinispan users,

We have just released 9.3.0.Beta1 which includes 38 fixes. Highlights of this release include:
  • Conflict Resolution Improvements
    • MergePolicy.NONE is now the default merge-policy
    • Conflict Resolution during a merge is now non-blocking and tolerant of node failures
  • Reactive Streams based Cache Loader SPI available
  • Infinispan can now be built and tested with Java 10/11
  • Max Idle expiration is now cluster-wide including events
  • The Java Hot Rod client can handle data in multiple formats
  • Improved merge after long GC pauses avoiding data loss
  • Admin console supports counters in standalone mode
  • Lots of bug fixes, test fixes, and documentation improvements
As usual, you can find all the bits on our website. If you find any issues, don't hesitate to report them on our issue tracker.

Enjoy,
The Infinispan Team

Wednesday, 2 May 2018

Infinispan chat moves to Zulip

For over 9 years Infinispan has used IRC for real-time interaction between the development team, contributors and users. While IRC has served us well over the years, we decided that the time has come to start using something better. After trying out a few "candidates" we have settled on Zulip.



Zulip gives us many improvements over IRC and over many of the other alternatives out there. In particular:
  • multiple conversation streams
  • further filtered with the use of topics
  • organization management to organize users into groups
  • it's open source

So, if you want to chat with us, join us on the Infinispan Zulip Organization.

Infinispan 9.2.2.Final and 9.3.0.Alpha1 are out

We have two releases to announce:

First of all is 9.2.2.Final, which introduces a second-level cache provider for the upcoming Hibernate ORM 5.3 as well as numerous bugfixes. [1]

Next is 9.3.0.Alpha1 which is the first iteration of our next release.
The main item here, aside from bugfixes and preparation work for upcoming features, is the upgrade of our server component to WildFly 12.

Go and get them on our download page [3]

Monday, 30 April 2018

Danke Javaland, Merci Devoxx France!

Ever since we started the Infinispan project, I simply don't recall a time busier than the current one! As usual we're working on new features and supporting our community and enterprise users, but on top of that we're working on a big scale demo that we hope you are going to like!

All of that has been happening while at the same time juggling one of the busiest periods in Infinispan evangelisation that I can remember! Here's what I have been up to over the past month and a half:


On 18th of March I gave a brand new presentation at Javaland where I explored the Java RPC framework landscape. Since 2010, Infinispan has offered a binary remote, client/server API for interacting with the data. As more clients have been developed, we've been noticing that we're spending more time than we'd like implementing features across different clients. In this talk I looked at existing Java RPC frameworks from the point of view of Infinispan's remote API requirements. If we were to implement our binary protocol again, which option would fit best? The spectrum can be very vast, so I limited myself to some known options (HTTP 1.1 REST), some options we have expertise on (Netty), some upcoming players (gRPC) and some lesser known but very powerful players (Aeron). The talk was not recorded but the slides can be found here:



Javaland was a very enjoyable conference set in a theme park near Cologne. The sessions were a mix of German and English. Although my German is not very good, I was able to follow some of the sessions.
After Javaland, I turned my attention to Devoxx France, which was held in Paris mid-April. I had a couple of sessions at Devoxx France. The first was a streaming data analysis 3 hour university talk delivered along with Clement Escoffier from the Vert.x team. We already delivered this session at JFokus earlier this year, but at Devoxx France it was recorded, so you'll soon be able to watch it! In the meantime, slides can be found below:

On top of that, Ray Tsang from Google joined Clement and me to deliver a 3 hour streaming data hands-on lab at Devoxx France. This was the same workshop we delivered at Devoxx Belgium and Codemotion Madrid in 2017. The big difference was that instead of having the users run it locally on their laptops within a virtual machine, they could run OpenShift and all the components inside of Google Cloud Platform. The users had a better experience as a result of not having to deal with a virtual machine :). I'd like to say special thanks to Ray for the Google Cloud Platform test accounts and the support during the workshop.


Finally, many thanks to all attendees that came to the sessions, and to the organisers/sponsors for creating two outstanding events!

In the next blog post I'll be talking about Great Indian Developer Summit and Voxxed Days Zurich. Stay tuned! :)

Cheers,
Galder

Thursday, 29 March 2018

Infinispan Spark connector 0.7 released!

A new version of the connector that integrates Infinispan and Apache Spark has just been released!

This release brings compatibility with Infinispan 9.2.x and Spark 2.3.0.

Also included is a new feature that allows creating and deleting caches on demand, passing custom configurations when required. For more details, please consult the documentation.

To quickly try the connector, make sure to check the Twitter demo, and for any issues or suggestions, please report them on our JIRA.

Cheers!


Tuesday, 27 March 2018

Infinispan 9.2.1.Final

Infinispan users,

we have just released 9.2.1.Final which includes 65 fixes. Highlights of this release include:
  • Many fixes/improvements to the REST endpoint
    • Configurable CORS settings
    • HTTP/2 now works
    • Accept-Encoding and Content-Encoding handling
  • It is now possible to retrieve the list of cache names over Hot Rod
  • Substantial performance improvements when iterating over the file store
  • Lots of bug fixes, test fixes and documentation improvements
As usual you can find all the bits on our website. If you find any issues, don't hesitate to report them on our issue tracker.

Enjoy

The Infinispan Team

Wednesday, 21 March 2018

Clustering Vert.x with Infinispan

Welcome to the third in a multi-part series of blog posts about creating Eclipse Vert.x applications with Infinispan. In the previous blog posts we have seen how to create REST and PUSH APIs using the Infinispan Server. The purpose of this tutorial is to showcase how to create clustered Vert.x applications using Infinispan in embedded mode.


Why Infinispan?


Infinispan can be used for several use cases. Among them, it can be used as the underlying framework to cluster your applications. Infinispan uses peer-to-peer communication between nodes, so the architecture is not based on a master/slave model and there is no single point of failure. Infinispan supports replication and resilience across data centers, and is fast and reliable. All the features that make this datagrid a great product make it a great cluster manager. If you need to create clustered applications or microservices, this can be achieved with Vert.x using the Vert.x-Infinispan cluster manager.


Creating a clustered application


The code of this tutorial is available here.

Dummy Application


Let’s start with a dummy clustered system with 3 verticles.


WebService Status Producer

Produces a random value from [0, 1, 2] every 1000 milliseconds and sends it to the event bus "ids" address.
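A minimal sketch of what this verticle could look like (class and variable names are assumptions):

import java.util.Random;
import io.vertx.core.AbstractVerticle;

public class WebServiceStatusProducerVerticle extends AbstractVerticle {

   private final Random random = new Random();

   @Override
   public void start() {
      // Every second, publish a random status (0, 1 or 2) to the "ids" address.
      vertx.setPeriodic(1000, timerId ->
            vertx.eventBus().publish("ids", random.nextInt(3)));
   }
}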



Reboot Consumer

Consumes messages from the event bus "ids" address, and launches a "reboot" that lasts for 3000 milliseconds whenever the value is 0. If a reboot is already happening, we don’t need to relaunch any new reboot. When a reboot starts or ends, a message is sent to the event bus to the "reboot" address.
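A sketch of the consumer, with assumed names, that matches the points below:

import java.util.UUID;
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;

public class RebootConsumerVerticle extends AbstractVerticle {

   private final String id = "ID-" + UUID.randomUUID().toString().substring(0, 4).toUpperCase();
   private boolean rebooting; // safe: a verticle always runs on the same event-loop thread

   @Override
   public void start() {
      vertx.eventBus().<Integer>consumer("ids", message -> {
         if (message.body() == 0 && !rebooting) {
            rebooting = true;
            vertx.eventBus().publish("reboot",
                  new JsonObject().put("id", id).put("status", "reboot started"));
            // The "reboot" lasts 3000 ms, then we notify again and accept new reboots.
            vertx.setTimer(3000, timerId -> {
               rebooting = false;
               vertx.eventBus().publish("reboot",
                     new JsonObject().put("id", id).put("status", "reboot finished"));
            });
         }
      });
   }
}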

Notice that:
  • We use a simple boolean to check if there is a reboot going on. This is safe because every verticle is executed from a single event loop thread, so there won’t be multiple threads executing the code at the same time.
  • An ID is generated to identify the Verticle. The message sent to the event bus is a JsonObject



Monitoring

Consumes monitoring messages from the event bus "reboot" address and logs them.



Clustering the dummy Application


To create a cluster of these applications, we just need to do 2 things:

  1. Add the cluster manager maven dependency.  
  2. Run and deploy each verticle in cluster mode. Each Verticle class has a main method that deploys each verticle separately. Example for the Monitoring verticle:
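A rough sketch of such a main method, assuming the io.vertx:vertx-infinispan cluster manager dependency is on the classpath and a MonitoringVerticle class exists:

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class MonitoringVerticleMain {

   public static void main(String[] args) {
      // Start Vert.x in clustered mode; the Infinispan cluster manager is picked up
      // automatically from the classpath (io.vertx:vertx-infinispan).
      Vertx.clusteredVertx(new VertxOptions(), result -> {
         if (result.succeeded()) {
            result.result().deployVerticle(new MonitoringVerticle());
         } else {
            result.cause().printStackTrace();
         }
      });
   }
}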

Running the application, we can monitor the logs
Each clustered application contains - or embeds - an Infinispan instance. Under the hood, the 3 Infinispan instances will form a cluster.


What if I need to scale?


Imagine you need to scale the Reboot Consumer application. We can run it multiple times, let's say 2 more times. The two new instances will join the cluster. In this case, we have "ID-93EB", "ID-45B8" and "ID-247A", so now we have a cluster of 5. It's very simple, but if we have a look at the monitoring console, we will notice reboots are now happening in parallel.

3 Reboot Consumers


As I mentioned before, this example is a dummy application. But in real life you might need to trigger a process from a verticle that runs multiple times and need to be sure this process happens only once at a time. How can we fix this? We can use the Vert.x Shared Data API.

Shared Data API to the rescue


In this particular case, we are going to use a clustered lock. Using the lock, we can now synchronise the reboots among the 3 nodes.
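A minimal sketch of how the lock could be acquired around the "reboot" in the consumer verticle (names and the acquisition timeout are assumptions):

// Acquire a cluster-wide lock before starting the "reboot"; release it when done.
vertx.sharedData().getLockWithTimeout("reboot.lock", 10000, lockResult -> {
   if (lockResult.succeeded()) {
      io.vertx.core.shareddata.Lock lock = lockResult.result();
      vertx.eventBus().publish("reboot",
            new JsonObject().put("id", id).put("status", "reboot started"));
      vertx.setTimer(3000, timerId -> {
         vertx.eventBus().publish("reboot",
               new JsonObject().put("id", id).put("status", "reboot finished"));
         lock.release();
      });
   }
});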


Using Shared Data API, one reboot at a time

The Vert.x clustered lock in this example uses an emulated version of the new Clustered Lock API introduced in the freshly released Infinispan 9.2. I will come back to this API in future blog posts. You can read about it in the documentation or run the infinispan-simple-tutorial.

One node at a time


When clustering applications with Vert.x, there is something you need to take care of. It is important to understand that each node contains an instance of the datagrid. This means that scaling up and down needs to be done one node at a time. Infinispan, like other datagrids, reshuffles the data when a node joins or leaves a cluster. This process follows a distributed hashing algorithm, so not all data is moved around, just the data that is supposed to live on the new node, or the data owned by a leaving node. If we just kill a bunch of nodes without taking care of the cluster, the consequences can be harmful! This is something quite obvious when dealing with databases: we just don't kill a bunch of database instances without taking care of each instance at a time. Even though Infinispan data is only in memory, we need to take care of it in the same way. OpenShift, which is built on top of Kubernetes, helps deal properly and safely with these scale-up and scale-down operations.


Conclusions


As you have seen, creating clustered applications with Vert.x and Infinispan is very straightforward. The clustered event bus is very powerful. In this example we have seen how to use a clustered lock, but other shared data structures built on top of Counters are available.


About the Vert.x Infinispan Cluster Manager status


At the time of this writing, Infinispan 9.2.0.Final has been released. From the vert.x-infinispan cluster manager point of view, before Vert.x 3.6 (which is not out yet) the cluster manager uses Infinispan 9.1.6.Final and an emulation layer for locks and counters. In this tutorial we are using Vert.x 3.5.1.

This tutorial will be updated with the version using Infinispan 9.2 as soon as the next vert.x-infinispan is released, which will happen in a few months. Meanwhile, stay tuned!

Tuesday, 13 March 2018

Final releases for Hotrod clients C++ and C# 8.2.0 are out!

We're pleased to announce the availability of the 8.2.0.Final release of the C++ and C# Hotrod clients.
Here is what happened in the 8.2.0 episode:

C++
  • SASL: PLAIN, MD5, EXTERNAL, GSSAPI (linux only)
  • Continuous Queries
  • getAll operation
  • simplified remote exec API

C#
  • SASL: PLAIN, MD5, EXTERNAL
  • Continuous Queries
  • GetAll operation
  • simplified remote exec API

You can find more info and even the binaries at the usual places [1][2][3][4]

In the backstage, people are already working on the 8.3.0 episode; you can participate by expressing your opinion or adding your ideas here [5][6].

Thank you for reading.

The Infinispan Team

[1] Project Issues
[2] C++ Source
[3] C# Source
[4] Download
[5] C++ Features List for 8.3.0
[6] C# Features List for 8.3.0

Wednesday, 7 March 2018

REST with HTTP/2

HTTP has become one of the most successful and heavily used network protocols around the world. Version 1.0 was created in 1996 and received a minor update 3 years later. But it took more than a decade to create HTTP/2 (which was approved in 2015). Why did it take so long? Well, I wouldn't be telling you the whole truth if I didn't mention an experimental protocol called SPDY. SPDY was primarily focused on improving performance. The initial results were very promising and inside Google's lab, the developers measured a 55% speed improvement. This work and experience was converted into the HTTP/2 proposal back in 2012. A few years later, we can all use HTTP/2 (sometimes called h2) along with its older brother - HTTP/1.1.

Main differences between HTTP/1.1 and HTTP/2


HTTP/1.1 is a text-based protocol. Sometimes this is very convenient, since you can use low level tools, such as Telnet, for hacking. But it doesn't work very well for transporting large, binary payloads. HTTP/2 solves this problem by using a completely redesigned architecture. Each HTTP message (a request or a response) consists of one or more frames. A frame is the smallest portion of data travelling through a TCP connection. A set of messages is aggregated into a so-called stream.



HTTP/2 lowers the number of physical connections between the server and the client by multiplexing logical connections into one TCP connection. Streams allow the server to recognize which frame belongs to which conversation.

How to connect using HTTP/2?

There are two ways for starting an HTTP/2 conversation.

The first one, and the most commonly used one, is TLS/ALPN. During the TLS handshake the server and the client negotiate the protocol for further communication. Unfortunately, JDKs below 9 don't support it by default (there are a couple of workarounds, but please refer to your favorite HTTP client's manual to find some suggestions).

The second one, much less popular, is the so-called plain-text upgrade. During an HTTP/1.1 conversation, the client issues an HTTP/1.1 Upgrade header and proposes a new conversation protocol. If the server agrees, they start using it. If not, they stick with HTTP/1.1.

The good news is that Infinispan supports both those upgrade paths. Thanks to the ALPN Hack Engine (the credit goes to Stuart Douglas from the Wildfly Team), we support TLS/ALPN without any bootstrap classpath modification.

Configuring Infinispan server for HTTP/2

Infinispan's REST server already supports plain-text upgrades out of the box. TLS/ALPN, however, requires additional configuration since the server needs to use a keystore. In order to make it even more convenient, we support generating keystores automatically when needed. Here's an example showing how to configure a security realm:
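A sketch of what such a realm could look like in the server configuration; the attribute names follow the WildFly security realm schema, the keystore path and password are placeholders, and the self-signed generation attribute only kicks in when the keystore file does not exist yet:

<security-realm name="SSLRealm">
    <server-identities>
        <ssl>
            <keystore path="application.keystore" relative-to="jboss.server.config.dir"
                      keystore-password="secret"
                      generate-self-signed-certificate-host="localhost"/>
        </ssl>
    </server-identities>
</security-realm>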

The next step is to bind the security realm to a REST endpoint:
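Roughly along these lines; the element and attribute names here are my best reading of the Infinispan server endpoint subsystem schema, and the socket binding and realm names are placeholders:

<rest-connector socket-binding="rest-ssl" cache-container="local">
    <authentication security-realm="SSLRealm" auth-method="BASIC"/>
    <encryption security-realm="SSLRealm"/>
</rest-connector>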

You may also use one of our configuration examples. The easiest way to get it working is to use our Docker image:
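Something along these lines should work; the exposed TLS port shown here is an assumption and must match the socket binding used in that configuration file:

docker run -it -p 8443:8443 -e "APP_USER=test" -e "APP_PASS=test" \
    jboss/infinispan-server:9.1.5.Final \
    ../../docs/examples/configs/standalone-rest-ssl.xml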

Let’s explain a couple of things from the command above:
  • -e "APP_USER=test" - This is a user name we will be used for REST authentication.
  • -e "APP_PASS=test" - Corresponding password.
  • ../../docs/examples/configs/standalone-rest-ssl.xml - Here is a ready-to-go configuration with REST and TLS/ALPN support
Unfortunately, HTTP/2 functionality has been broken in 9.2.0.Final. But we promise to fix it as soon as we can :) Please use 9.1.5.Final in the meantime.

Testing using CURL

Curl is one of my favorite tools. It's very simple, powerful, and… it supports HTTP/2. Assuming that you already started the Infinispan server using the `docker run` command above, you can put something into the cache:
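For example (cache name, key and port are illustrative; curl negotiates HTTP/2 via ALPN automatically when built with HTTP/2 support):

curl -k -v -u test:test -d test https://localhost:8443/rest/default/hello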

Once it's there, let's try to get it back:
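Again with illustrative values:

curl -k -v -u test:test -H "Accept: text/plain" https://localhost:8443/rest/default/hello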

Let’s analyze CURL switches one by one:
  • -k - Ignores certificate validation. All automatically generated certificates are self-signed and not trusted by default.
  • -v - Debug logging.
  • -u test:test - Username and password for authentication.
  • -d test - This is the payload when invoking HTTP POST.
  • -H “Accept: text/plain” - This tells the server what type of data we’d like to get in return.

Conclusions and links

I hope you enjoyed this small tutorial about HTTP/2. I highly encourage you to have a look at the links below to learn some more things about this topic. You may also measure the performance of your app when using HTTP/1.1 and HTTP/2. You will be surprised!

Tuesday, 6 March 2018

Accessing Infinispan inside Docker for Mac

Connecting to Infinispan instances that run inside Docker for Mac using the Java Hot Rod client can be tricky. In this blog post we'll be analyzing what makes this environment tricky and how to get around the issue.

The tricky thing about Docker for Mac is that internal container IP addresses are not accessible externally. This is a known issue and it can be hard to work around. In container orchestrators such as OpenShift, you can use Routes to allow external access to the containers. However, if running vanilla Docker for Mac, the simplest option is to map ports over to the local machine.

Why is this important? When someone connects using the Hot Rod protocol, the server returns the current topology to the client. When Infinispan runs inside of Docker, this topology by default contains internal IP addresses. Since those are not accessible externally in Docker for Mac, the client won't be able to connect.

To work around the issue, the Infinispan server Hot Rod endpoint can be configured with an external host/port combination, but doing this would require modifying the server's configuration. A simpler method to get around the issue is to configure the client's intelligence to be Basic. By doing this the server won't send topology updates, nor will the client be able to locate where keys are located using hashing. This has a negative performance impact, since all requests to a single Infinispan server or server cluster would need to go over the same IP and port. However, for demo or sample applications on Mac environments, this is a reasonable thing to do.

So, how do we do all of this?

First, start Infinispan server and map Hot Rod's default port 11222 to the local 11222 port:

docker run -it -p 11222:11222 jboss/infinispan-server:9.2.0.Final

Open your IDE and create a project with the following dependencies:
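At a minimum you'll need the Hot Rod Java client; the version shown matches the server image used above:

<dependency>
    <groupId>org.infinispan</groupId>
    <artifactId>infinispan-client-hotrod</artifactId>
    <version>9.2.0.Final</version>
</dependency>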



Finally, create a class that connects to Infinispan and does a simple put/get sequence:
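A minimal sketch of such a class; the important part is setting the client intelligence to BASIC (the cache name, key and value are illustrative):

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ClientIntelligence;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class DockerForMacClient {

   public static void main(String[] args) {
      ConfigurationBuilder builder = new ConfigurationBuilder();
      builder.addServer().host("127.0.0.1").port(11222);
      // No topology updates and no hash-aware routing: all requests go to the mapped port.
      builder.clientIntelligence(ClientIntelligence.BASIC);
      RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
      RemoteCache<String, String> cache = cacheManager.getCache(); // default cache
      cache.put("hello", "world");
      System.out.println(cache.get("hello"));
      cacheManager.stop();
   }
}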



Cheers,
Galder

Monday, 5 March 2018

A SWIG based framework to build Hotrod client prototype in your preferred language

If you are working on a non-Java/C++/C#/JS application and you need to interact with Infinispan via Hotrod, you may be interested in the idea behind the HotSwig[1] project.

HotSwig proposes a framework to build Hotrod client prototypes quickly for a generic SWIG[2]-supported language.
As people familiar with the C++ and C# Infinispan native clients know, SWIG plays a role in both projects:

  • it is used to build the base of the C# client, wrapping the C++ core with a C# layer;
  • it is used in the C++ project to run (part of) the Java test suite against the client, in this way: a Java wrapper is built via SWIG to make the C++ client look like its Java big brother so it can be tested with the Java test suite.

The main goal was to produce, for a specific language, an almost complete client reusing the C++ core features, and the following workflow has been set up to do that:

  • the whole C++ interface is processed by SWIG. The resulting wrapper exposes almost all the C++ functions;
  • a user-friendly adaptation layer is built on top of the SWIG result.

This approach doesn't work for the HotSwig goal, mainly because the effort needed by the second step is usually non-negligible and prevents the rapid development of prototypes in a generic language.

In the HotSwig approach, this limitation is removed by moving the adaptation layer from the target language to the C++ side and then letting SWIG generate a ready-to-use client prototype. So the HotSwig workflow is the following:

  • build an adaptation facade around the C++ core to make it SWIG friendly (do the adaptation work once and for all on the C++ side);
  • explicitly define what we want in the produced SWIG wrapper (keep things simple by excluding everything by default);
  • run SWIG to produce the client.

At the moment HotSwig is just a proof of concept, but you can try to run it and produce a ready-to-work Infinispan client for the language you need. Examples are already provided for Python, Ruby and Octave, but HotSwig should work with all the SWIG-supported languages. If you get it to run in your preferred programming language, please share your experience with us.

I've listed here[3] some tasks for the roadmap, with the idea of testing the flexibility of the framework by trying to extend it in different directions. Maybe the idea is good and it can grow from a PoC to something that can really help devs. You can add your ideas of course.

So if you need to do math against your Infinispan data set, why don't you try the Octave client? Or maybe you want to do analytics with R, or presentations with PHP. Or you just like parentheses and you want to use Lisp. Or you're working for the Klingon empire and you must use ylDoghQo'[4]... well ok, just joking now...

Thanks for reading!

Cheers
The Infinispan Team


[1] https://github.com/rigazilla/hotswig
[2] http://www.swig.org/
[3] https://github.com/rigazilla/hotswig/issues
[4] https://www.kli.org/about-klingon/klingon-phrases

Wednesday, 28 February 2018

Infinispan 9.2.0.Final


Infinispan 9.2.0.Final "Gaina" is out !


Our three-month time-boxing plan for a minor release got a little bit skewed this time in order to accommodate some additional overhauls. This also means that, for a minor release, this is much meatier than usual.

Core improvements

  • Conflict resolution
    Automatic conflict resolution after a partition merge is now supported for all partition handling strategies and is enabled by default. Furthermore, it is now possible to deploy custom EntryMergePolicy implementations to the server
  • Reactive streams-based distributed Iteration improvements
    Distributed iterator now uses less threads and allows for efficient parallel retrieval providing for improved throughput
  • Biased reads for scattered caches
    The originator can read the ‘backup’ copy locally until the data gets overwritten again. Together with improved read performance, this migrates data to the nodes that use it.
  • Off-heap sizing
    Off-heap requires less overhead per entry and provides for more accurate sizing allowing you to maximize your memory used
  • Exception based eviction
    A new "eviction" strategy that, instead of removing old entries, prevents new entries from being inserted (supported by all memory storage and eviction types)

API improvements

  • Multimap caches
    Available both for Embedded and for Hot Rod, these are maps which can store multiple values for the same key
  • Clustered Counters
    Clustered counters are now available for Hot Rod and in non-clustered deployments.
  • Clustered Locks
    Available in embedded mode. They allow concurrent synchronization between nodes in the same cluster
  • Wildcard configurations
    Implicitly use a predefined configuration for all caches whose name matches a wildcard. This is particularly useful when using Infinispan through an API which doesn't allow for additional configuration properties (such as JCache).
  • Cluster-wide cache admin with optional persistence
    The CacheManager API has been enhanced with methods to create/destroy caches across a cluster, in both Embedded and Hot Rod scenarios (REST will come in 9.3). Optionally, configurations can be made persistent across restarts.
  • Cache Stream
    Overloaded collect() method to take Supplier so that collect() in clustered environments is more user-friendly.

Data Interoperability


Transcoding is a powerful new feature which allows for transparent conversion between a number of formats across different endpoints. For example, it is now possible to write ProtoBuf-encoded data through the Hot Rod endpoint and retrieve that same data as a JSON document through the REST endpoint and vice versa. Additionally, such data is also indexable and queryable.

Indexing and Query

  • POJO queries over Hot Rod
    It is now possible to directly use Hibernate Search-annotated objects through JBoss Marshalling/Java serialization without the need for ProtoBuf over Hot Rod.
  • Broadcast queries
    Clustered queries have been unified with non-clustered queries under a single API, making their use transparent.

 

Infinispan Server

  • Rebased on WildFly 11
    The server baseline has been updated to WildFly 11
  • Async Hot Rod server
    The Hot Rod server now uses async ops, sparing CPU cycles from context-switching and reducing the latency.
  • Queries over REST
    The REST endpoint now supports running Ickle queries. This is fully integrated with the above-mentioned JSON support, so your results will be returned to you as JSON documents.
  • Netty Hot Rod Client
    The Hot Rod Java client network layer has been completely rewritten to use Netty, bringing true asynchronous calls and some performance benefits.

 

Management, monitoring and logging

  •     Console support for counters
  •     Improved remote protocol access logging
  •     Jolokia is now integrated as a part of the server.

 

Infinispan on OpenShift


We have been doing a lot of work in making Infinispan a first-class citizen of OpenShift. Check out the OpenShift templates for more details.

Integrations

  • JCache 1.1
    This release is now aligned with JCache 1.1.
  • Hibernate second-level cache provider
    Traditionally shipped by our friends on the Hibernate ORM team, this component has now changed ownership over to us. This release includes a provider for both Hibernate 5.1 and 5.2.
  • Azure cloud discovery
    Courtesy of JGroups' extras, we now support discovery in Azure.

 

The codename


In the grand-old tradition of giving major and minor Infinispan releases a beer-themed codename, 9.2 is no exception.

"Gaina", which means "chicken" in the milanese dialect, also happens to be one of the great beers of the Birrificio Lambrate in Milan.

 

Onwards to 9.3


We have already started working on our next release, 9.3, which should be with you at the end of May. This will continue the work to make Infinispan fully asynchronous inside out, reducing resource usage and increasing performance. We are also working on a new modular API which will improve usability, increase interoperability between embedded and remote scenarios and take advantage of reactive designs. Transactions should finally make their appearance in Hot Rod, and security will be greatly enhanced by taking advantage of the great work done by our friends over on the Elytron team. We have much more planned, so please consult our roadmap for details.

 

Download, learn and play


You will find downloads, documentation, tutorials, quickstarts and demos over on our website.

Please let us know on our forum, on IRC, on our issue tracker if you have any issues with this release, if there is any feature you would like to see in the future, or just to chat.


Wednesday, 21 February 2018

Infinispan 9.2.0.CR3

This should have been the announcement for Final, but we discovered a number of performance regressions as well as a few important bugs that needed fixing. We also slipped in a few features and improvements. So, without further ado, here's what is new and noteworthy in Infinispan 9.2.0.CR3:
  • Various component upgrades
    • Netty 4.1.21
    • Hibernate Search 5.9.0.Final
    • Protostream to 4.2.0.CR1
  • Features/Enhancements
    • Azure discovery
    • Use async ops in the Hot Rod server
    • Simplified client configuration when security is enabled
  • Lots of documentation updates
    • REST server changes
    • Data Encoding
    • Server tasks
  • And many bugfixes

Get your artifacts from maven, the distributions from our download page, the fixed issues from our issue tracker and read the updated documentation. Come and talk to us on IRC (#infinispan on Freenode) or ask questions on the forum.

Monday, 19 February 2018

Distributed iteration improvements

Infinispan hasn't always provided a way for iterating upon entries in a distributed cache. In fact the first iteration wasn't until Infinispan 7. Then in Infinispan 8, with the addition of Java 8, we fully integrated this into distributed streams, which brought some minor iteration improvements in performance.

We are proud to announce that with Infinispan 9.2 there are even more improvements. This contains no API changes, although those will surely come in the future. This one is purely for performance and utilization.

New implementation details

 

There are a few different aspects that have been changed. A lot of these revolve around the number of entries being retrieved at once, which, if you are familiar with distributed streams, can be configured via the distributedBatchSize method. Note that if this is not specified it defaults to the chunk size in state transfer.
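For example, a sketch of setting the batch size when iterating, assuming an embedded cache manager named cacheManager (cache name and batch size are illustrative):

import java.util.Iterator;
import java.util.Map;
import org.infinispan.Cache;
import org.infinispan.CacheStream;

// Fetch entries in batches of 128 from the remote nodes while iterating.
Cache<String, String> cache = cacheManager.getCache("myCache");
CacheStream<Map.Entry<String, String>> stream = cache.entrySet().stream().distributedBatchSize(128);
Iterator<Map.Entry<String, String>> iterator = stream.iterator();
while (iterator.hasNext()) {
   System.out.println(iterator.next().getKey());
}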

Entry retrieval is now pull based instead of push

Infinispan core (embedded) has added rxjava2 and reactive streams as dependencies and rewrote all of the old push style iterator code over to pull style to fully utilize the Publisher and Subscriber interfaces.

With this we only pull up to batchSize entries at a time from any set of nodes. The old style utilized push with call stack blocking, which could return up to two times that number of entries. Also, since we aren't performing call stack blocking, we don't have to waste threads, as these calls to retrieve entries are done asynchronously and finish very quickly irrespective of user interaction. The old method required multiple threads to be reserved for this purpose.

Streamed batches

The responses from a remote node are written directly to the output stream so there are no intermediate collections allocated. This means we only have to iterate upon the data once as we retain the iterator between requests. On the originator we still have to store the batches in a collection to be enqueued for the user to pull.

Rewritten Parallel Distribution

Great care was taken to implement parallel distribution in a way to vastly reduce contention and ensure that we properly follow the batchSize configuration.

When parallel distribution is in use, the new implementation will start 4 remote node requests sharing the batch size (so each one gets 1/4). This way we can guarantee that we only have the desired size irrespective of the number of nodes in the cluster. The old implementation would request batchSize from all nodes at the same time. So not only did it reserve a thread per node, it could also easily swamp your JVM memory, causing OutOfMemoryErrors (which no one likes). The latter alone made us force the default to be sequential distribution when using an iterator.

The old implementation would write entries from all nodes (including local) to the same shared queue. The new implementation has a different queue for each request, which allows for faster queues with no locking to be used.

Due to these changes and other isolations between threads, we can now make parallel distribution the default setting for the iterator method. And as you will see this has improved performance nicely.

Performance


We have written a JMH test harness specifically for this blog post, testing 9.1.5.Final build against latest 9.2.0.SNAPSHOT. The test runs by default with 4GB of heap with 6 nodes in a distributed cache with 2 owners. It has varying entry count, entry sizes and distributed batch sizes.

Due to the variance in each test, a large number of tests were run with different permutations to make sure they covered a large number of test cases. The JMH test that was run can be found on GitHub. All the default settings were used for the run, except that -t4 (runs with 4 worker threads) was provided. This was all run on my measly laptop (i7-4810MQ and 16 GB) - maxing out the CPU was not a hard task.

CAVEAT: The tests don't do anything with the iterator and just try to pull them as fast as they can. Obviously if you have a lot of processing done between iterations you will likely not see as good of a performance increase.

The entire results can be found here. It shows each permutation and how many operations per second and finds the difference (green shows 5% or more and red shows -5% or less).


Operation                   | Average Gain | Code
Specified Distribution Mode | 3.5%         | .entrySet().stream().sequentialDistribution().iterator()
Default                     | 11%          | .entrySet().iterator()
No Rehash                   | 14%          | .entrySet().stream().disableRehashAware().iterator()

The above 3 rows show a few different ways you could have been invoking the iterator method. The second row is probably by far the most common case. In this case you should see around an 11% increase in performance (results will vary). This is due to the new pulling method as well as parallel distribution becoming the new default running mode. It is unlikely a user was using the other 2 methods, but they are provided for a more complete view.

If you were specifying a distribution mode manually, either sequential or parallel, you will only see a run that is a few percent faster (3.5%), but every little bit helps! Also, if you can switch to parallel you may want to think about doing so.

Also, you can see that if you were running with rehash disabled before, there are even more gains (14%). That doesn't even include the fact that no rehash was already 28% faster than with rehash before (which means it is about 32% faster in general now). So if you can get away with an at-most-once guarantee, disabling rehash will provide the best throughput.

What's next?


As was mentioned this is not exposed to the user directly. You still interact with the iterator as you would normally. We should remedy this at some point.

Expose new method

We would love to eventually expose a method to return a Publisher directly to the user so that they can get the full benefits of having a pull based implementation underneath.

This way any intermediate operations applied to the stream before would be distributed and anything applied to the Publisher would be done locally. And just like the iterator method this publisher would be fully rehash aware if you have it configured to do so and would make sure you get all entries delivered in an exactly once fashion (rehash disabled guarantees at most once).

Another side benefit is that the Subscriber methods could be called on different threads so there is no overhead required on the ISPN side for coordinating these into queue(s). Thus the Subscriber should be able to retrieve all entries faster than just doing an iterator.

Java 9 Flow

Also, many of you may be wondering why we aren't using the new Flow API introduced in Java 9. Luckily the Flow API is a 1:1 conversion of reactive streams. So whenever Infinispan starts supporting Java 9 interfaces/classes, we hope to properly expose these as the JDK classes.

Segment Based Iteration 

With Infinispan 9.3, we hope to introduce data container and cache store segment aware iteration. This means when iterating over either we would only have to process entries that map to a given segment. This should reduce the time and processing for iteration substantially, especially for cache stores. Keep your eyes out for a future blog post detailing these as 9.3 development commences.

Give us Feedback

We hope you find a bit more performance when working with your distributed iteration. We also value any feedback on what you want our APIs to look like, and please tell us if you find any bugs. As always, let us know at any of the places listed here.

Sunday, 18 February 2018

Thanks JFokus!!

We're now back from JFokus and we'd like to thank organizers, attendees, volunteers and sponsors for making JFokus a very enjoyable experience! :)

From an Infinispan perspective, we started the week with a Streaming Data deep-dive session presented together with Clement Escoffier. This was a 3h long session, so there was plenty to go through, but we managed to do it on time. The final demo did not fully work, but this is something we will be improving in the near future. Slides can be found in [1] [2] [3] [4] [5] [6] and the code can be found here. This session was not recorded.

The next day I had a talk on streaming data analysis on top of Kubernetes where I went through some of the topics explained in the deep dive. This was mostly a live coding session showing how to work with streaming data on top of OpenShift/Kubernetes running on Google Cloud. This session was recorded. I'll keep an eye out for when the video becomes available and share it here. The code from this session can be found here, slides here and the live coding instructions here.

The rest of the conference was a blast, with many networking opportunities. During this networking I started working on an RxJava2 API facade for the Infinispan remote API, which would make it easier to fit with other reactive toolkits out there, such as Vert.x :). More news on this soon.

Cheers,
Galder

Thursday, 15 February 2018

Hotrod clients C++/C# 8.2.0.Beta1 are out!

Dear Infinispanners,
C++ and C# 8.2.0.Beta1 releases are available!

These releases contain all the 8.2.0 features.

Worth a mention is the improvement in the remote execution API: we moved the basic JBossMarshaller implementation from the tests to the distribution in order to simplify data management on the application side. Test examples [1] and [2] have been updated accordingly.

Next step will be a CR release containing improvements on API docs (doxygen)

Check the release notes, browse the source code (C++, C#) or download the releases!

Cheers,
The Infinispan Team

Friday, 9 February 2018

RESTful queries coming to Infinispan 9.2


One of the interesting features in the upcoming Infinispan 9.2 release is the possibility to execute queries over the REST endpoint, enabling users to take advantage of the ease of use and expressiveness of the Ickle query language, which combines a subset of JP-QL with full-text features. You can learn more about Ickle in a previous post.

Besides exposing query over REST, Infinispan 9.2 also adds support for mapping between JSON and Protobuf formats, allowing an efficient storage in binary format while exposing queries, reading and writing content as JSON documents.

To illustrate those new capabilities, this post will walk you through a sample app from scratch!


Sample app


Running the server

We start by running the Infinispan Server 9.2.0.CR2 (the latest release candidate):
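For example, using the official Docker image; the credentials and port mapping match what is described below:

docker run -it -p 8080:8080 -e "APP_USER=user" -e "APP_PASS=user" jboss/infinispan-server:9.2.0.CR2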


This will get you a fresh instance of Infinispan running, with login and password 'user' and the REST port 8080 mapped to localhost. TIP: if you run more than one container, they'll form a cluster automatically.

Creating an indexed cache

The next step is to create an indexed cache called 'pokemon'. We make use of the CLI (Command Line Interface) to create this cache. In the future, with ISPN-8529, we'll also be able to create caches with arbitrary configurations using REST, but for now we execute a CLI recipe:


Creating the schema

In order to be able to query, we need to define a protobuf schema for our data. The schema follows the Protobuf 2 format (Protobuf 3 support is coming) and allows for extensions to define indexing properties (analyzers, storage, etc).

Here's how it looks:
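A sketch of the schema; the field list is reconstructed from the queries used later in this post, and the exact types and indexing options are assumptions:

/* @Indexed */
message Pokemon {
   /* @Field(index = Index.YES, store = Store.YES) */
   required string name = 1;
   /* @Field(index = Index.YES, store = Store.YES) */
   optional int32 pokedex_number = 2;
   /* @Field(index = Index.YES, store = Store.YES) */
   optional int32 speed = 3;
   /* @Field(index = Index.YES, store = Store.YES) */
   optional double against_fire = 4;
   /* @Field(index = Index.YES, store = Store.YES) */
   optional int32 generation = 5;
}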


The protobuf schema can contain some comments on top of fields and messages with "annotations" to control indexing. Hibernate Search users will recognize some of the pseudo annotations we are using here: they closely resemble their counterparts.


Registering the schema

Once we have our schema, we can easily register it via REST:
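For example, by writing the schema file into the ___protobuf_metadata cache through the REST endpoint (file name and credentials are the ones used in this walkthrough):

curl -u user:user -X POST --data-binary @pokemon.proto http://localhost:8080/rest/___protobuf_metadata/pokemon.proto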



Populating the cache

We're now ready to put some data in the cache. As mentioned earlier, ingesting can be done by sending JSON documents directly. Once Infinispan receives those documents, it will convert them to protobuf, index and store them.

In order to match a particular inbound document to an entity in the schema, Infinispan uses a special meta field called _type that must be provided in the document. Here's an example of a JSON document that conforms to our schema:
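For instance (the values are purely illustrative):

{
   "_type": "Pokemon",
   "name": "Charizard",
   "pokedex_number": 6,
   "speed": 100,
   "against_fire": 0.5,
   "generation": 1
}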

Writing the document is easy:
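For example, posting the JSON above under an illustrative key:

curl -u user:user -X POST -H "Content-Type: application/json" -d @charizard.json http://localhost:8080/rest/pokemon/6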


We can retrieve the content by key as JSON:
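Again with an illustrative key:

curl -u user:user -H "Accept: application/json" http://localhost:8080/rest/pokemon/6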


Querying


The new query endpoint can be called by appending the "action" URL parameter with the value "search" after the cache name. The simplest query, which returns all data, can be done with:

http://localhost:8080/rest/pokemon?action=search&query=from Pokemon


If you do not want to return all the fields, use a Select clause:

http://localhost:8080/rest/pokemon?action=search&query=Select name, speed from Pokemon


Pagination can be controlled with the offset, max_results URL parameters:

http://localhost:8080/rest/pokemon?action=search&query=from Pokemon&offset=2&max_results=20


Grouping is also possible:

http://localhost:8080/rest/pokemon?action=search&query=select count(p.name) from Pokemon p group by generation


Example of a query result:

http://localhost:8080/rest/pokemon?action=search&query=select name,pokedex_number,against_fire from Pokemon order by against_fire asc&max_results=5

Results:




Conclusion

 

Infinispan 9.2 makes it easier to quickly ingest and query datasets using the ubiquitous JSON format, without sacrificing type safety and storage size.

By storing Protobuf, this will also enable other clients like the Hot Rod C#/C++ clients to query, read and write data simultaneously with REST clients.

The full source code for the demo, along with instructions on how to populate the whole dataset can be found at Github.

Finally, please try out this new feature in your own dataset and let us know how it goes!




Wednesday, 7 February 2018

Data Container Changes Part 3

Just over a year ago we detailed some improvements to the data container, including the availability of Off Heap storage in part 2. There have been quite a few fixes for Off Heap especially around memory size estimations with Infinispan 9.2. There is also a brand new "eviction" strategy that has a bit of a twist.

Eviction Strategy Resurrected


Some of you may have remembered that Infinispan used to have an eviction strategy. This was originally used to decide what eviction algorithm was used, such as LRU or LIRS. This was removed when the new data container was introduced. Well... it is back again, but it will be used for a slightly different purpose.

The eviction strategy still has NONE & MANUAL which are exactly the same as before.

Remove strategy


There is a new REMOVE strategy that is configured by default if eviction size is greater than 0. This strategy essentially enables eviction and removes old entries as new ones are inserted.

Exception strategy


We have a brand new "eviction" strategy that provides new functionality. It is unique in that it doesn't really evict, but rather prevent entries from being inserted.  This is the EXCEPTION strategy which blocks new entries from being inserted (or updated if they exceed memory size) by throwing a ContainerFullException when the size is reached.

This strategy only works on transactional caches that always have two-phase commit enabled. It can be useful if you want to always have only so many entries and to give priority to already inserted entries. This strategy has better performance than REMOVE, since it doesn't have to do the bookkeeping of all entries to know which to remove.

Note this strategy works across all storage types (OBJECT, BINARY and OFFHEAP) and with both MEMORY and SIZE based "eviction" types. This makes it just as flexible as the REMOVE eviction strategy, and we hope it finds some uses.

How to Configure EXCEPTION Strategy


This is how you can enable MEMORY based EXCEPTION "eviction" using xml configuration.
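A sketch under the 9.2 configuration schema; the element and attribute names (in particular the strategy attribute on the memory storage element) are my best reading of that schema, and the size is illustrative. Note the cache must be transactional:

<local-cache name="bounded">
    <transaction mode="NON_XA" locking="OPTIMISTIC"/>
    <memory>
        <binary size="1048576" eviction="MEMORY" strategy="EXCEPTION"/>
    </memory>
</local-cache>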
This is how you configure the same thing but programmatically.
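And a rough programmatic equivalent; the builder method names are my best recollection of the 9.2 API and the size is illustrative:

import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.cache.StorageType;
import org.infinispan.eviction.EvictionStrategy;
import org.infinispan.eviction.EvictionType;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;

ConfigurationBuilder builder = new ConfigurationBuilder();
// The EXCEPTION strategy requires a transactional cache.
builder.transaction()
       .transactionMode(TransactionMode.TRANSACTIONAL)
       .lockingMode(LockingMode.OPTIMISTIC);
builder.memory()
       .storageType(StorageType.BINARY)
       .evictionType(EvictionType.MEMORY)
       .evictionStrategy(EvictionStrategy.EXCEPTION)
       .size(1_048_576);
Configuration configuration = builder.build();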

Off Heap Memory Size Allocations & Estimations


Before, the off-heap memory-based eviction only counted the allocated memory chunks for the stored entries themselves. This unfortunately meant that the size estimate was a bit less than what it should have been. There are a few things that we have improved since then, including reducing the overhead of our allocations. Note that all of the changes below require no configuration changes; users should just get the benefits.

Reduced per object overhead


Previously, for immutable entries with eviction, Infinispan itself used to allocate 2 chunks of memory, one being 28 bytes, and added 8 bytes to the actual object. Now we only allocate an additional 16 bytes on the object memory block itself (saving the extra allocation and requiring less on the object) when using eviction. Due to memory allocation overhead this saves much more than the 20 bytes, as the allocator also has its own overhead.

We also shaved 4 bytes off all entries when expiration is not used, meaning an immutable cache entry without eviction only requires 21 bytes of overhead from Infinispan when using off heap (retained in the same allocation block).

Per allocation memory sizing estimations


Internally, Infinispan allocates a new chunk of memory for each object. This is done currently to leverage the underlying OS allocator to handle features such as fragmentation or compaction (if the allocator does so). Unfortunately this means that each object has its own overhead from the allocator. Thus we now take that into account when estimating the memory used, by adding 8 bytes of overhead and aligning to 16 bytes. This seems to be a pretty common way for allocators to work. If possible we could allow for tweaking these values, but they are hard coded currently.

Accounting for Address Count


As was mentioned in the prior blog post about off heap, we allocate a single block of memory to hold address counters for our lookups when using Off Heap. Unfortunately we didn't account for that in the memory eviction count. We now account for that in the eviction mechanism, thus your memory eviction size must be greater than the address count rounded up to the nearest power of 2, multiplied by 8. What a mouthful...

Wrap up


Off heap has been overhauled quite a bit to try to reduce memory usage, fix bugs and more accurately estimate the memory used. We hope that these changes, along with the new eviction strategy, are welcome additions to various applications.

Please make sure to contact us if you have any feedback, find any bugs or have any questions! You can get in contact with various places listed on our website.