Synchronizing Neo4j causal cluster bookmarks

Synchronizing Neo4j causal cluster bookmarks between application instances, so that you can scale your applications and still read your own writes.
July 12, 2021 by Michael

Neo4j clustering is available in the Neo4j Enterprise Edition and, of course, in Neo4j Aura.

A cluster provides causal consistency via the concept of bookmarks:

On executing a transaction, the client can ask for a bookmark which it then presents as a parameter to subsequent transactions. Using that bookmark the cluster can ensure that only servers which have processed the client’s bookmarked transaction will run its next transaction. This provides a causal chain which ensures correct read-after-write semantics from the client’s point of view.

When you use a Session from any of the official drivers directly (for example from the Neo4j Java Driver), or at least version 6 of Spring Data Neo4j, you hardly ever see bookmarks yourself, and there's mostly no need to.

While Spring Data Neo4j 5.x still required some manual setup, SDN 6 does not need this anymore.

So in theory, all applications running against Neo4j should be fine in terms of “reading their own writes”.

But what about multiple instances of an application running against a cluster, for example to scale out the application itself?

This does actually require some work. In the following example I refer to the Java driver, but the APIs of the other drivers and, of course, the behavior of the server are identical.

When creating a new session, the driver allows you to pass in a collection or iterable of Bookmark objects, not just a single one. All of those bookmarks are then taken to the cluster. As soon as the first transaction in this session starts, the routing will make sure that the request goes to a cluster member that has at least reached the latest transaction referenced by the collection of bookmarks. There is no need to keep the bookmarks in order. That allows us to just collect new bookmarks and pass them on; we don't have to do any sorting on the client side.
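To make that routing rule concrete, here is a minimal stand-alone sketch with bookmarks modeled as opaque strings (real bookmarks are opaque tokens, too). It is not the driver's actual routing code, just the eligibility check it implies: a cluster member can serve the session only if it has applied every transaction the bookmarks refer to, and the client-side collection needs no particular order.

```java
import java.util.List;
import java.util.Set;

public class BookmarkRouting {

    // The member is eligible iff it is at least as far along as every bookmark.
    static boolean canServe(Set<String> applied, List<String> bookmarks) {
        return applied.containsAll(bookmarks);
    }

    public static void main(String[] args) {
        Set<String> laggingFollower = Set.of("tx:100", "tx:101");
        Set<String> currentFollower = Set.of("tx:100", "tx:101", "tx:102");

        // Deliberately unordered: the newest bookmark comes first.
        List<String> bookmarks = List.of("tx:102", "tx:100");

        System.out.println(canServe(laggingFollower, bookmarks)); // false
        System.out.println(canServe(currentFollower, bookmarks)); // true
    }
}
```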

That work is already done internally by Spring Data's implementations of bookmark managers.

But what can we do with that information to make sure multiple instances of the same application read the latest writes?

As soon as SDN becomes aware of a new bookmark, we must grab it and push it onto an exchange. A Redis pub/sub topic, a JMS destination configured for pub/sub, or even some Kafka setup will do.
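The exchange itself can be pictured with a small in-memory stand-in (a Redis pub/sub topic takes this role in the example projects). Every application instance subscribes; when one instance receives a new bookmark from the driver, it publishes the bookmark, and all other instances add it to their local seed. All names here are illustrative, not from any library:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class BookmarkExchange {

    private final List<Consumer<String>> subscribers = new CopyOnWriteArrayList<>();

    void subscribe(Consumer<String> subscriber) {
        subscribers.add(subscriber);
    }

    void publish(String bookmark) {
        subscribers.forEach(s -> s.accept(bookmark));
    }

    public static void main(String[] args) {
        var exchange = new BookmarkExchange();

        Set<String> seedOfInstanceA = new HashSet<>();
        Set<String> seedOfInstanceB = new HashSet<>();
        exchange.subscribe(seedOfInstanceA::add);
        exchange.subscribe(seedOfInstanceB::add);

        // Instance A commits a write and publishes the resulting bookmark.
        exchange.publish("tx:4711");

        // Instance B can now seed its next session with it and read A's write.
        System.out.println(seedOfInstanceB.contains("tx:4711")); // true
    }
}
```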

I have created two projects to demonstrate such a setup, with both SDN 6 and the prior version, SDN 5 + OGM:

Common setup

Both projects require a locally running Redis instance. Please consult the Redis documentation for your OS or use a Docker container.
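If Docker is available, a throwaway Redis instance can be started like this (the image tag and container name are just examples):

```shell
docker run --name bookmark-sync-redis -p 6379:6379 -d redis:6
```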

Conceptually, every messaging system that supports pub/sub should do.

Spring Data Neo4j 6

The example project is here: bookmark-sync-sdn6.

SDN 6 publishes bookmarks received from the driver as ApplicationEvents, so we can listen for them via an ApplicationListener. For the other way round, the Neo4jTransactionManager can be seeded with a Supplier&lt;Set&lt;Bookmark&gt;&gt;.

The whole setup is as follows; please read the JavaDoc comments:
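The real configuration lives in the linked project; as a stand-in, this stdlib-only sketch mirrors its two halves without Spring on the classpath. `onBookmarksUpdated` plays the role of an ApplicationListener on SDN's bookmark event, and `bookmarkSupplier` plays the role of the supplier handed to the Neo4jTransactionManager. Bookmarks are plain strings here, and all names are illustrative.

```java
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;
import java.util.function.Supplier;

public class BookmarkSync {

    private final Set<String> latest = Collections.synchronizedSet(new HashSet<>());

    // Outbound half: SDN told us about new bookmarks. In the real setup this is
    // also where they would be published to the exchange.
    void onBookmarksUpdated(Collection<String> bookmarks) {
        latest.addAll(bookmarks);
    }

    // Inbound half: seed new transactions with everything we know so far.
    Supplier<Set<String>> bookmarkSupplier() {
        return () -> Set.copyOf(latest);
    }

    public static void main(String[] args) {
        var sync = new BookmarkSync();
        sync.onBookmarksUpdated(Set.of("tx:1")); // from a local commit
        sync.onBookmarksUpdated(Set.of("tx:2")); // e.g. received via the exchange

        System.out.println(new TreeSet<>(sync.bookmarkSupplier().get())); // [tx:1, tx:2]
    }
}
```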

Spring Data Neo4j 5 + Neo4j-OGM

The example project is here: bookmark-sync-sdn5.

In SDN5+OGM we can use the BookmarkManager interface provided by SDN 5. We plug in a completely custom (and also better) implementation than the default one, which relies on a Caffeine cache (not necessary at all for this purpose).

The principle is the same, though: when new bookmarks are received via the transaction system, they are published; new bookmarks received on the exchange become the new seed. Please note that in SDN 5, bookmark support must be enabled with @EnableBookmarkManagement.
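That principle can be sketched stand-alone as well. SDN 5's real BookmarkManager interface lives in Spring Data Neo4j; this class only mirrors the two responsibilities described above without that dependency, and the `published` list stands in for the Redis publish call. All names are illustrative.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SyncingBookmarkManager {

    private final Set<String> seed = Collections.synchronizedSet(new HashSet<>());
    final List<String> published = new ArrayList<>();

    // Transaction side: a local commit produced a new bookmark.
    void storeBookmark(String bookmark) {
        seed.add(bookmark);
        published.add(bookmark);
    }

    // Exchange side: bookmarks from other instances become the new seed.
    void acceptNewSeed(Collection<String> bookmarks) {
        synchronized (seed) {
            seed.clear();
            seed.addAll(bookmarks);
        }
    }

    Collection<String> getBookmarks() {
        return Set.copyOf(seed);
    }

    public static void main(String[] args) {
        var manager = new SyncingBookmarkManager();
        manager.storeBookmark("tx:1");         // local write, gets published
        manager.acceptNewSeed(Set.of("tx:2")); // remote write, replaces the seed

        System.out.println(manager.published);      // [tx:1]
        System.out.println(manager.getBookmarks()); // [tx:2]
    }
}
```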

Note: Don't combine the SDN 6 and SDN 5 configurations! They belong to two separate example projects.

Photo by Chiara F on Unsplash
