Yet another incarnation of my ongoing scrobbles

These days my computer work is mostly concerned with all things Neo4j. Be it Spring Data Neo4j, the Cypher DSL, our integration with other frameworks and tooling, or general internal things, among them being part of the Cypher language group.

In contrast to a couple of years ago, I don’t spend that much time around a computer in non-working hours anymore. My sleep got way better and I feel better in general. For reference, see the slides of a talk I wanted to give in 2020.

And I have to be honest: I feel distanced and tired of a couple of things I used to enjoy more a while back.

Last week however Hendrik Schreiber published japlscript and a collection of derived work: Java libraries that allow scripting of Apple applications on OS X and macOS.

As it happens, I have – as part of my daily picture project – a service that receives everything I play on iTunes and logs it. I have been doing this since 2005. The data accumulated with that service led to several variations of this article and talk about database centric applications with Spring Boot and jOOQ. Here is the latest English version of it (in the speaker deck are more variations).

The sources behind that talk are in my repository bootiful-database. The repo started off with an Oracle database, which I eventually replaced with Postgres. In both databases I can use modern SQL: window functions, all kinds of analytics, common table expressions and so on.

The actual live data is still in Daily Fratze. What that is? See DailyFratze: Ein tägliches Foto Projekt oder wie es zu “Macht was mit SQL und Spring” kam. However, there’s a caveat: the database of the service has been an older MySQL version for way too long. While it has charts like this visible to my users:

the queries are not as nice as the ones in the talks.

When I wrote this post at the end of 2020, I had a look at MariaDB 10.5. It was super easy to import my old MySQL 5.6 data into it, and to my positive surprise, all SQL features I used in the talk could be applied.

So last week, the first step of building something new was migrating from MySQL 5.6 to the latest MariaDB, and to my positive (and big) surprise again: it was a walk in the park. Basically replacing the repositories, updating the mysql-server package and accepting the new implementation. Hardly any downtime, and even the old JDBC connector I use here and there can be reused. That’s developer friendly. My daily picture project just keeps running as is.

Now what?

  • Scrobbling with Hendrik’s library. Ideally in a modular way, having separate sources and sinks and an orchestrating application. It basically screams Java modules.
  • Finally putting the queries and stuff I have talked about so often into a live application

I created scrobbles4j. The idea is to have the model represented in both SQL and Java records, a client module and a fresh server.

My goal is to keep the client dependency-free (apart from the modules integrating with Apple programs) and later use the fantastic JDK 11+ HTTP client. For the server I picked Quarkus. Why? It has been a breath of fresh air since 2019, a really pleasant project and community to work with. I was able to contribute several Neo4j-related things to it (and they even sent me shirts for that, how awesome!), but I never had the chance to really use it.
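For the client side, building such a request with the JDK’s built-in HTTP client could be sketched like this. The endpoint URL and payload below are made up for illustration; the real scrobbles4j client may look entirely different:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class ScrobbleRequests {

	// Builds a POST request against a hypothetical scrobble endpoint
	public static HttpRequest newScrobbleRequest(String baseUrl, String payload) {
		return HttpRequest.newBuilder(URI.create(baseUrl + "/api/scrobbles"))
			.timeout(Duration.ofSeconds(10))
			.header("Content-Type", "application/json")
			.POST(HttpRequest.BodyPublishers.ofString(payload))
			.build();
	}

	public static void main(String... args) {
		var request = newScrobbleRequest("https://music.example.com", "{\"track\": \"Africa\"}");
		// Sending it would be a one-liner:
		// HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
		System.out.println(request.method() + " " + request.uri());
	}
}
```

No third-party dependency needed, which is exactly the point for a lean client.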

Java modules

Once you get the idea of modules, they help a lot at the scale of libraries and small applications like the one I want to build to keep things organized. Have a look at the sources api. Its main export is this, and implementations like the one for Apple Music can provide it like that. You can see in the last link that the package is not exported, so the service implementation can just stay public and is still not accessible from the outside. Neat. The clients of these services need to know only the APIs: requiring service APIs and loading them.
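To illustrate the idea, here is a sketch of what such module descriptors might look like. The module and package names below are simplified placeholders, not the actual ones from the repository:

```java
// module-info.java of a (hypothetical) sources API module
module scrobbles4j.client.sources.api {
	exports scrobbles4j.client.sources.api;
}

// module-info.java of an implementation module; note that the
// implementation package itself is not exported, only the service is provided
module scrobbles4j.client.sources.apple.music {
	requires scrobbles4j.client.sources.api;
	provides scrobbles4j.client.sources.api.Source
		with scrobbles4j.client.sources.apple.music.AppleMusicSource;
}

// module-info.java of the orchestrating client: it only requires the API
// and declares that it uses the service, loading implementations via ServiceLoader
module scrobbles4j.client {
	requires scrobbles4j.client.sources.api;
	uses scrobbles4j.client.sources.api.Source;
}
```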

Things you can explore: how I added a “bundle” project, combining the launcher and all sources and sinks, and using jlink to create a custom runtime image.

Testing in a modular world is sadly still problematic. Maven / Surefire will put everything in test on the module path, IDEA on the class path. The latter is easier, the former next to impossible without patching the module path (if you don’t want to have JUnit and friends in your main dependencies). Why? There’s only one module descriptor per artifact. As main and test sources are eventually combined, having an open module in test is forbidden.

There are a couple of posts like this, but to be honest, I wasn’t able to make this fly. Eventually, I just opened my module explicitly in the Surefire setup, which works for me with pure Maven and IDEs. Probably another approach, like having explicit test modules, is the way to go, but I find that overkill for white box testing (aka plain unit tests).


One fact that is maybe not immediately obvious: Quarkus projects don’t have a default parent POM. They require you to import their dependency management and configure a custom Maven plugin. Nothing hard, see for yourself. You probably won’t even notice it when creating a new project. However, it really helps you in a multi-module setup such as scrobbles4j. Much easier than when one module wants to have a separate parent.

I went the imperative way, mainly because I want to use Flyway for database migrations without additional fuss. As I wanted to focus on the queries and their results, and also because I like it: server side rendering it is. Here I picked Qute.

And about that SQL: to jOOQ or not to jOOQ? Well, I have only a handful of queries, nothing too dynamic, and I just want to have some fun and keep it simple. So no jOOQ, no code generation step this time. And also: JDK 17 text blocks. I love them.
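As a small illustration of why text blocks and plain SQL are such a nice fit: the query below is a made-up example in the spirit of the talks (table and column names are illustrative), using a window function to rank tracks per year:

```java
public class Queries {

	// A window-function example as it might appear in the application;
	// the schema here is invented for illustration
	static final String TOP_TRACKS_PER_YEAR = """
		SELECT year, artist, track, cnt
		FROM (
		  SELECT year(played_on) AS year, artist, track, count(*) AS cnt,
		         dense_rank() OVER (PARTITION BY year(played_on) ORDER BY count(*) DESC) AS rnk
		  FROM plays
		  GROUP BY year(played_on), artist, track
		) ranked
		WHERE rnk = 1
		""";

	public static void main(String... args) {
		System.out.println(TOP_TRACKS_PER_YEAR);
	}
}
```

No escaping, no string concatenation, and the SQL can be copied back and forth between the IDE and a database console as is.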

What about executing that SQL? If I had chosen the reactive way, I could have used the Quarkus reactive client. I haven’t. But nobody in their right mind will work with JDBC directly. Hmm… Jdbi to the rescue – the core module alone, as I didn’t want any mapping. In the Spring world, I would have picked the JdbcTemplate.
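With Jdbi’s core module alone, executing one of those text-block queries stays short. A sketch, assuming the jdbi3-core dependency and an existing DataSource (class, table and column names are illustrative, not taken from scrobbles4j):

```java
import java.util.List;
import java.util.Map;
import javax.sql.DataSource;
import org.jdbi.v3.core.Jdbi;

public class ChartService {

	private final Jdbi jdbi;

	public ChartService(DataSource dataSource) {
		this.jdbi = Jdbi.create(dataSource);
	}

	// Executes the query and returns rows as simple maps, no mapping framework involved
	public List<Map<String, Object>> topTracks(int year) {
		return jdbi.withHandle(handle -> handle
			.createQuery("SELECT artist, track, count(*) AS cnt FROM plays "
				+ "WHERE year(played_on) = :year GROUP BY artist, track ORDER BY cnt DESC LIMIT 10")
			.bind("year", year)
			.mapToMap()
			.list());
	}
}
```

`withHandle` takes care of acquiring and releasing the connection, which is most of what I’d otherwise write by hand with raw JDBC.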

Deploying with fun

One good decision in the Quarkus world is that they don’t create fat jars by default, but a runnable jar in a file structure that feels like a fat jar. Fat jars had their time; they were necessary and they solved real issues. But a structure that you can just rsync somewhere in seconds, combined with the quick restart times, makes it feel like you’re editing PHP files again.

I was only half joking here:

It is actually what I am doing: I created the application in such a way that it is runnable on a fresh schema and usable by other people too. But I configured Flyway in such a way that it will baseline an existing schema and hold back any migration up to a given version, so I can use the app with my original schema.
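In Quarkus terms, the relevant Flyway settings look roughly like this (the baseline version is a placeholder; check the Quarkus Flyway guide for your setup):

```properties
quarkus.flyway.migrate-at-start=true
# Baseline an existing, non-empty schema instead of failing on it
quarkus.flyway.baseline-on-migrate=true
# Migrations up to and including this version are considered already applied
quarkus.flyway.baseline-version=10
```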

However, I am not stupid. I am not gonna share the schema between two applications directly. I created another database user with read rights only (apart from Flyway’s schema history) and a dedicated schema, and just created views in that schema for the original tables in the other one. The views do some prepping as well and are basically an API contract. Jordi here states:
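Such a view might look roughly like this. All names are made up for illustration; the real schemas are of course different:

```sql
-- A read-only API contract for the new application:
-- it prepares and renames columns from the original tables
CREATE VIEW scrobbles.plays AS
SELECT t.name      AS track,
       a.name      AS artist,
       p.played_on AS played_on
FROM dailyfratze.plays p
JOIN dailyfratze.tracks t ON t.id = p.track_id
JOIN dailyfratze.artists a ON a.id = p.artist_id;
```

If the underlying tables change, only the view definitions need to follow; the consuming application keeps its contract.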

I think he’s right. This setup is an implementation of the Command Query Responsibility Segregation (CQRS) pattern. Some non-database folks will argue that this is maybe the Wish variant, but I am actually a fan and I like it that way.


I needed that: a simple-to-deploy, non-over-engineered project with some actually fun data.
There’s a ton of things I want to explore in JDK 17, and now I have something to entertain me with in the coming winter evenings without much cycling.

As always, a sane mix is important. I hadn’t been in the mood to build something for a while, but these days I am OK with that, and I can totally accept that my life consists of a broad range of topics and not only programming, IT and speaking about those topics. It doesn’t make anyone a bad developer if they don’t work day and night. With that nice nudge and JDK 17 just released, it really got me going.

If you’re interested in the actual application, I am running it here. Nope, it isn’t styled, and no, I am not gonna change that in the foreseeable future. That’s not part of my current focus.

Next steps will be replacing my old AppleScript based scrobbler with a source/sink pair. And eventually, I will add a writing endpoint to the new application.

| Comments (2) »


Multiple instances of the same configuration-properties-class in Spring Boot

Spring Boot’s @ConfigurationProperties is a powerful annotation. Read the full story in the documentation.

These days it seems widely known that you can put it at type level on one of your property classes and bind external configuration to that class.

What is less known is that you can use it on @Bean annotated methods as well. These methods can return more or less arbitrary classes, which should ideally behave like Java beans.

I have been propagating this solution for JDBC databases in my Spring Boot Buch already. Here’s a configuration class for using multiple datasources:

import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class MultipleDataSourcesConfig {

	@Primary @Bean
	@ConfigurationProperties("app.datasource-pg")
	public DataSourceProperties dataSourceProperties() {
		return new DataSourceProperties();
	}

	@Primary @Bean
	public DataSource dataSource(final DataSourceProperties properties) {
		return properties.initializeDataSourceBuilder().build();
	}

	@Bean
	@ConfigurationProperties("app.datasource-h2")
	public DataSourceProperties dataSourceH2Properties() {
		return new DataSourceProperties();
	}

	@Bean
	public DataSource dataSourceH2(
		@Qualifier("dataSourceH2Properties") final DataSourceProperties properties
	) {
		// Alternatively, you can call dataSourceH2Properties()
		// directly instead of using a qualifier
		return properties.initializeDataSourceBuilder().build();
	}
}

The above configuration provides two beans of type DataSourceProperties, and both can be configured with a structure familiar to people working with JDBC. The main benefits I see: reuse of existing property classes, a familiar structure, and automatic data conversion based on Spring Boot mechanics.

app.datasource-pg.url = whatever
app.datasource-pg.username = spring_postgres
app.datasource-pg.password = spring_postgres
app.datasource-pg.initialization-mode = ALWAYS
app.datasource-h2.url = jdbc:h2:mem:test_mem
app.datasource-h2.username = test_mem
app.datasource-h2.password = test_mem

The bean method names define the bean names, but that could also be done explicitly on the @Bean annotation.

As you have multiple beans of the same type, you must use @Primary to mark one as the default so that you don’t have to qualify it everywhere. All non-primary ones must be injected with a qualifier.

The above example then creates data sources accordingly.
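On the consuming side, the two data sources would be injected like in this sketch (the repository class is hypothetical):

```java
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Repository;

@Repository
public class AuditRepository {

	private final DataSource primaryDataSource;
	private final DataSource h2DataSource;

	// The primary data source is injected without further ado;
	// the second one needs a qualifier matching the bean (method) name
	public AuditRepository(
		DataSource primaryDataSource,
		@Qualifier("dataSourceH2") DataSource h2DataSource
	) {
		this.primaryDataSource = primaryDataSource;
		this.h2DataSource = h2DataSource;
	}
}
```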

While I introduced new namespaces for the properties above, you can also use an existing one, like I am doing with Spring Data Neo4j 5 + OGM: Domain 1 uses the default properties, while Domain 2 uses different ones. The project in question is an example of using different connections for different Spring Data (Neo4j) repositories; read the full story here.

A very similar project, but for Spring Data Neo4j 6, exists as well: dn6-multidb-multi-connections.

In the latter example these properties


are mapped to two different instances of org.springframework.boot.autoconfigure.neo4j.Neo4jProperties via this configuration:

import org.springframework.boot.autoconfigure.data.neo4j.Neo4jDataProperties;
import org.springframework.boot.autoconfigure.neo4j.Neo4jProperties;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration(proxyBeanMethods = false)
public class Neo4jPropertiesConfig {

	// The property prefixes are illustrative; see the linked project for the actual ones

	@Primary @Bean
	@ConfigurationProperties("movies")
	public Neo4jProperties moviesProperties() {
		return new Neo4jProperties();
	}

	@Primary @Bean
	@ConfigurationProperties("movies.data")
	public Neo4jDataProperties moviesDataProperties() {
		return new Neo4jDataProperties();
	}

	@Bean
	@ConfigurationProperties("fitness")
	public Neo4jProperties fitnessProperties() {
		return new Neo4jProperties();
	}

	@Bean
	@ConfigurationProperties("fitness.data")
	public Neo4jDataProperties fitnessDataProperties() {
		return new Neo4jDataProperties();
	}
}

and can be processed further down the road. Here: creating the necessary beans for SDN 6. Note that the injected properties are not qualified; the default or primary ones will be used:

@Configuration(proxyBeanMethods = false)
@EnableNeo4jRepositories(
	basePackageClasses = MoviesConfig.class,
	neo4jMappingContextRef = "moviesContext",
	neo4jTemplateRef = "moviesTemplate",
	transactionManagerRef = "moviesManager"
)
public class MoviesConfig {

	@Primary @Bean
	public Driver moviesDriver(Neo4jProperties neo4jProperties) {
		var authentication = neo4jProperties.getAuthentication();
		return GraphDatabase.driver(neo4jProperties.getUri(), AuthTokens.basic(
			authentication.getUsername(), authentication.getPassword()));
	}

	@Primary @Bean
	public Neo4jClient moviesClient(Driver driver, DatabaseSelectionProvider moviesSelection) {
		return Neo4jClient.create(driver, moviesSelection);
	}

	@Primary @Bean
	public Neo4jOperations moviesTemplate(
		Neo4jClient moviesClient,
		Neo4jMappingContext moviesContext
	) {
		return new Neo4jTemplate(moviesClient, moviesContext);
	}

	@Primary @Bean
	public DatabaseSelectionAwareNeo4jHealthIndicator movieHealthIndicator(Driver driver,
		DatabaseSelectionProvider moviesSelection) {
		return new DatabaseSelectionAwareNeo4jHealthIndicator(driver, moviesSelection);
	}

	@Primary @Bean
	public PlatformTransactionManager moviesManager(Driver driver, DatabaseSelectionProvider moviesSelection) {
		return new Neo4jTransactionManager(driver, moviesSelection);
	}

	@Primary @Bean
	public DatabaseSelectionProvider moviesSelection(Neo4jDataProperties dataProperties) {
		return () -> DatabaseSelection.byName(dataProperties.getDatabase());
	}

	@Primary @Bean
	public Neo4jMappingContext moviesContext(ResourceLoader resourceLoader, Neo4jConversions neo4jConversions)
		throws ClassNotFoundException {
		Neo4jMappingContext context = new Neo4jMappingContext(neo4jConversions);
		return context;
	}
}

I do actually work for Neo4j; part of my job is the integration of our database connector with Spring Boot, and we contributed the auto-configuration of the driver.

A lot of effort goes into sane defaults and an automatic configuration that doesn’t do anything surprising. We sometimes feel irritated when we find custom config that replicates what the built-in one does, but with different types and names.

This is not necessary, not even in a complex scenario of multiple connections for different repositories as shown above, and especially not when dealing with properties: when you want to have the same set of properties multiple times in different namespaces, do yourself a favor and use the combination of @Bean methods returning existing property classes, mapped to the namespace in question via @ConfigurationProperties.

If you have those configuration property classes at hand, use them for the actual bean creation: the auto-configuration will back off if it sees your instances.

| Comments (1) »


Review: Sketchnotes in der IT

Lisa Maria Moritz and dpunkt.verlag were so kind as to send me a review copy of “Sketchnotes in der IT”. This review was originally written in German, as Lisa Maria’s book is also written in German.

Lisa Maria is a senior consultant at INNOQ; she not only runs her blog, but also regularly accompanies Software Architektur im Stream with sketchnotes.

For several years now, sketchnotes have been created with growing success, especially for talks, and shared on social media. They really caught my attention back in 2017, when I was also working for INNOQ. My esteemed former colleague Joy used them and accompanied many talks and workshops with them. What are sketchnotes? Wikipedia writes on the topic:

Sketchnotes are notes consisting of text, images and structures. The term is a combination of sketch (English sketch, ‘a rough drawing’) and note (English note, from Latin notitia, ‘knowledge, message’).

Creating sketchnotes is called “sketchnoting” or “visual note taking”. Sketchnotes are often created as an alternative to conventional notes. In contrast to texts, sketchnotes are rarely structured linearly.


In “Sketchnotes in der IT”, Lisa Maria Moritz describes and visualizes, on roughly 170 pages, her approach to using visual notes in talks, meetings or for everyday tasks. She gives a short overview of the beginnings of sketchnoting as well as a very sensible description of basic layouts. The tips on choosing tools, both analog and digital, are short and concise, but sufficient. The book is useful for people like me who want to understand how the thinking behind sketchnoting works, but especially for those who want to create notes in this form themselves. The extensive symbol library in particular helps with that.

The book is divided into a text-heavy “instructional” part and a symbol library. The latter takes up about half of the almost 170 pages. The symbol library is helpful for building up a “language” of your own for first attempts at sketchnoting.

After an introduction and a sketchnote about sketchnotes, the text part explains the basics, the tools and various usage scenarios of sketchnotes. The section on tools is pleasantly short and avoids a “material battle”: it does not start by telling you to buy a multitude of pens, paper, devices and software. Of course, analog or digital writing and drawing material should be available, but that seems to me to lie in the nature of things.

Personally, I found the sketchnote about sketchnotes the most interesting. Why? Frankly, because I find most sketchnotes pretty to look at, but not really accessible to me. Through the book I learned how people structure their sketchnotes, and how these sketchnotes in turn help them to remember or deepen what they have heard, worked out or read.

My head doesn’t work that way: back in school in the 1990s and later at university, I enjoyed taking lots of notes. My handwriting was and is terrible; I myself have trouble deciphering my notes after a few days. That’s why I usually revisit my notes shortly after taking them, transfer them into a (digital) fair copy and thereby rework them, learning and internalizing what I have heard. As I understand the book, this process is supposed to happen during sketchnoting already, with the finished sketchnote later serving only as an anchor for what has been internalized.

“Sketchnotes in der IT” contains many beautiful examples of sketchnotes, and while reading I tried to retrace what was visualized in each one. The symbol library helps with that. I only manage with effort, if at all. I don’t mean that as a negative statement about the book, quite the opposite: it has given me a good incentive to finally engage seriously with the topic.

“Sketchnotes in der IT” is available in print for 22.90 € from dpunkt.verlag and of course from other retailers.

Lisa Maria and Eberhard have published another book, “Sketchnote zu Software Architektur im Stream”. That book is available for free as a digital edition on Leanpub, or in print on Amazon. But I’ll let the two of them speak for themselves:

| Comments (0) »


Neo4j, Java and GraphQL

Recently, I realized I am an old person:

But I can even add more to it: until 2021, I managed to make my way around GraphQL almost everywhere, except for one tiny thing I made for our family site.

I mean, I got the idea of GraphQL, but it never clicked with me. I totally love declarative query languages such as SQL and Cypher. GraphQL never felt like one to me, despite its claim at the time of writing: “GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API”. I perceive it more as a schema declaration that happens to be usable as a query language too, which is at least aligned with GraphQL’s own description:

Anyway, just because it didn’t click with me doesn’t mean it’s not fulfilling a need other people do have.

I happen to work at a company that creates a world famous graph database named “Neo4j”. Now, “Graph” is about 71% of “GraphQL”, so the suspicion suggests itself that GraphQL is a query language for a graph database. It isn’t, at least not natively to Neo4j. A while back I needed to explain this to several people when I was approached by our friends at VMware discussing their new module spring-graphql, and I saw more than one surprised face.

Now, what does Neo4j have? First and foremost, it has Cypher. Which is great. But it also has neo4j/graphql.

Neo4j GraphQL

Neo4j’s officially supported product is neo4j/graphql.

The Neo4j GraphQL Library, released in April this year, builds on top of Cypher. My colleagues Darrell and Daniel have blogged a lot about it (here and here) and of course there are great talks.

Technically, the Neo4j GraphQL library is a JavaScript library, usable in a Node environment, together with something like the Apollo GraphQL server. Under the hood, the library and its associated object mapper translate the GraphQL schema into Cypher queries.

Architecturally, you set up a native graph database (Neo4j) together with a middleware that you can either access directly or via other applications. This is as far as I can dive into Neo4j GraphQL. I wholeheartedly recommend following Oskar Hane, Dan Starns and team for great content about it.

Neo4j GraphQL satisfies the providers of APIs in the JavaScript space. For client applications (regardless of whether they are actual clients or other server-side applications), the runtime behind that API doesn’t matter. But what about old people like me, doing Java in the backend?

Neo4j, GraphQL and Java

Actually, there are a ton of options. Let me walk you through:


This is the one that comes closest to what neo4j/graphql does: it takes in a schema definition and builds the translation to Cypher for you. Your job is to provide the runtime around it. You’ll find it here: neo4j-graphql/neo4j-graphql-java.

This library parses a GraphQL schema and uses the information of the annotated schema to translate GraphQL queries and parameters into Cypher queries and parameters.

Those Cypher queries can then be executed via the Neo4j Java Driver against the graph database, and the results can be returned directly to the caller.

The library makes no assumptions about the runtime or the JVM language: you are free to run it in Kotlin with Ktor, or in Java with… actually, whatever server is able to return a JSON-ish structure.

As of now (July 2021), the schema augmented by neo4j-graphql-java differs from the one augmented by neo4j/graphql, but according to the readme, work is underway to support the same thing.

What I like a lot about the library: it uses the Cypher-DSL for building the underlying Cypher queries. Why does that matter to me? We – Gerrit and I – always hoped that it would prove useful to someone outside our own object mapping.

How to use it? I created a small gist, Neo4jGraphqlJava, that uses Javalin as a server and JBang to run. Assuming you have JBang installed, just run:

jbang -uneo4j -psecret -abolt://localhost:7687

and point it to the Neo4j instance of your choice. It comes with a predefined GraphQL schema:

type Person {
	name: ID!
	born: Int
	actedIn: [Movie] @relation(name: "ACTED_IN")
}

type Movie {
	title: ID!
	released: Int
	tagline: String
}

type Query {
	person: [Person]
}

but in the example gist, you can use --schema to point it to another file. The whole translation happens in a couple of lines:

var graphql = new Translator(SchemaBuilder.buildSchema(getSchema()));
var yourGraphQLQuery = "// Something something query";
graphql.translate(yourGraphQLQuery).stream().findFirst().ifPresent(cypher -> {
	// Do something with the generated Cypher
});

Again, find the whole and executable program here.

The benefit: it’s all driven by the GraphQL schema. I get that there are many people out there who find a full-blown query language intimidating and prefer using GraphQL for this whole area. And I don’t mean this in any way ironic or pejorative.

To put something like this into production, not much more is needed: of course, authentication is a pretty good idea, and maybe restricting GraphQL query complexity (while nobody would write a super deeply nested query by hand, it’s easy enough to generate one).

Some ideas could be found here and here.

Using Spring Data Neo4j as a backend for GraphQL

Wait, what? Isn’t the “old” enterprise stuff made obsolete by GraphQL? Not if you ask me. On the contrary, I think in many situations these things can benefit from each other. Why else do you think VMware created this?

When I started playing around with Spring Data Neo4j as a backend for a GraphQL schema, VMware’s implementation wasn’t quite there yet, so I went with Netflix DGS; the result can be found at michael-simons/neo4j-aura-sdn-graphql. It has “Aura” in the name as it additionally demonstrates Spring Data Neo4j’s compatibility with Aura, the hosted Neo4j offering.

Netflix DGS – as well as spring-graphql – follows a schema-first design, so the project contains a GraphQL schema too.

Schema-First vs Object-First

As the nice people at VMWare wrote: “GraphQL provides a schema language that helps clients to create valid requests, enables the GraphiQL UI editor, promotes a common vocabulary across teams, and so on. It also brings up the age old schema vs object-first development dilemma.”

I wouldn’t speak of a dilemma, but of a choice. Both are valid choices, and sometimes one fits better than the other.

Both Netflix DGS and spring-graphql will set you up with infrastructure based on graphql-java/graphql-java, and your task is to bring it to life.

My example looks like this:

import com.netflix.graphql.dgs.DgsComponent;
import com.netflix.graphql.dgs.DgsData;
import com.netflix.graphql.dgs.DgsQuery;
import com.netflix.graphql.dgs.InputArgument;
import graphql.schema.DataFetchingEnvironment;
import graphql.schema.SelectedField;
import java.util.List;
import java.util.concurrent.CompletableFuture;

@DgsComponent
public class GraphQLApi {

	private final MovieService movieService;

	public GraphQLApi(final MovieService movieService) {
		this.movieService = movieService;
	}

	@DgsQuery
	public List<?> people(@InputArgument String nameFilter, DataFetchingEnvironment dfe) {
		return movieService.findPeople(nameFilter, withFieldsFrom(dfe));
	}

	@DgsData(parentType = "Person", field = "shortBio")
	public CompletableFuture<String> shortBio(DataFetchingEnvironment dfe) {
		return movieService.getShortBio(dfe.getSource());
	}

	private static List<String> withFieldsFrom(DataFetchingEnvironment dfe) {
		return dfe.getSelectionSet().getImmediateFields().stream()
			.map(SelectedField::getName).toList();
	}
}

This is an excerpt. The MovieService being called in the example is of course backed by a Spring Data Neo4j repository.

The whole application is running here:

I like the flexibility the federated approach brings: when someone queries your API and asks for a short biography of a person, the service goes to Wikipedia, fetches it and transparently returns it via the GraphQL response.

Of course, this requires a good deal more knowledge of Java.
However, if I wanted to do something similar in a NodeJS / Apollo environment, I think that would be absolutely possible, and that I would have to acquire some knowledge there, too.

Using SmallRye GraphQL with Quarkus and custom Cypher queries

SmallRye GraphQL is an implementation of Eclipse MicroProfile GraphQL and GraphQL over HTTP. It’s the guided option when you want to do GraphQL with Quarkus.

My experiments on that topic are presented here: michael-simons/neo4j-aura-quarkus-graphql, with a running instance at Heroku too. I liked that approach so much I even did a front end for it.

Which approach exactly? Deriving the GraphQL schema from Java classes. The BooksAndMovies class shows how:

import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.eclipse.microprofile.graphql.GraphQLApi;
import org.eclipse.microprofile.graphql.Name;
import org.eclipse.microprofile.graphql.Query;
import graphql.schema.DataFetchingEnvironment;
import io.smallrye.graphql.api.Context;

@GraphQLApi
public class BooksAndMovies {

	private final Context context;
	private final PeopleService peopleService;

	public BooksAndMovies(Context context, PeopleService peopleService) {
		this.context = context;
		this.peopleService = peopleService;
	}

	@Query("people")
	public CompletableFuture<List<Person>> getPeople(@Name("nameFilter") String nameFilter) {
		var env = context.unwrap(DataFetchingEnvironment.class);
		return peopleService.findPeople(nameFilter, null, env.getSelectionSet());
	}
}

and a simple POJO or a JDK 16+ record (I’m running the app as a native application and there we’re still on JDK 11, so the linked class is not a record yet):

import java.util.List;
import org.eclipse.microprofile.graphql.Description;
import org.eclipse.microprofile.graphql.Id;

@Description("A person has some information about themselves and maybe played in a movie or is an author and wrote books.")
public class Person {

	@Id
	private String name;
	private Integer born;
	private List<Movie> actedIn;
	private List<Book> wrote;

	// Getters and setters omitted for brevity
}

In this example I don’t use any data mapping framework, but the Cypher-DSL and the integration with the Java driver alone to query, map and return everything (in an asynchronous fashion):

public CompletableFuture<List<Person>> findPeople(String nameFilter, Movie movieFilter,
	DataFetchingFieldSelectionSet selectionSet) {

	var returnedExpressions = new ArrayList<Expression>();
	var person = node("Person").named("p");
	var match = match(person).with(person);

	if (movieFilter != null) {
		var movie = node("Movie").named("m");
		match = match(person.relationshipTo(movie, "ACTED_IN"))
			.where(movie.property("title").isEqualTo(anonParameter(movieFilter.getTitle())))
			.with(person);
	}
	if (selectionSet.contains("actedIn")) {
		var movie = node("Movie").named("m");
		var actedIn = name("actedIn");
		match = match
			.optionalMatch(person.relationshipTo(movie, "ACTED_IN"))
			.with(person, collect(movie).as(actedIn));
		returnedExpressions.add(actedIn);
	}
	if (selectionSet.contains("wrote")) {
		var book = node("Book").named("b");
		var wrote = name("wrote");
		var newVariables = new HashSet<>(returnedExpressions);
		newVariables.addAll(List.of(person.getRequiredSymbolicName(), collect(book).as("wrote")));
		match = match
			.optionalMatch(person.relationshipTo(book, "WROTE"))
			.with(newVariables.toArray(new Expression[0]));
		returnedExpressions.add(wrote);
	}

	Stream.concat(Stream.of("name"), selectionSet.getImmediateFields().stream().map(SelectedField::getName))
		.filter(n -> !("actedIn".equals(n) || "wrote".equals(n)))
		.map(n -> person.property(n).as(n))
		.forEach(returnedExpressions::add);

	var condition = Optional.ofNullable(nameFilter).map(String::trim)
		.filter(Predicate.not(String::isEmpty))
		.map(v -> person.property("name").contains(anonParameter(v)))
		.orElseGet(Conditions::noCondition);

	var statement = makeExecutable(match
		.where(condition)
		.returning(returnedExpressions.toArray(new Expression[0]))
		.build());
	return executeReadStatement(statement, Person::of);
}

The Cypher-DSL really shines when building queries in an iterative way. The whole PeopleService is here.

Restricting query complexity

In anything that generates fetchers via GraphQL-Java you should consider using an instance of graphql.analysis.MaxQueryComplexityInstrumentation.

In a Spring application you would do this like so:

import graphql.analysis.MaxQueryComplexityInstrumentation;
import graphql.execution.instrumentation.SimpleInstrumentation;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyConfig {

	@Bean
	public SimpleInstrumentation max() {
		return new MaxQueryComplexityInstrumentation(64);
	}
}

in Quarkus like this:

import graphql.GraphQL;
import graphql.analysis.MaxQueryComplexityInstrumentation;
import javax.enterprise.event.Observes;

// Also remember to configure quarkus.smallrye-graphql.events.enabled=true
public final class GraphQLConfig {

	public GraphQL.Builder configureMaxAllowedQueryComplexity(@Observes GraphQL.Builder builder) {
		return builder.instrumentation(new MaxQueryComplexityInstrumentation(64));
	}
}

This prevents the execution of arbitrary deep queries.


There is a plethora of options to use Neo4j as a native graph database backing your GraphQL API. If you are happy running a Node based middleware, you should definitely go with neo4j/graphql. I don’t have an example myself, but the repo has a lot of them.

A similar approach is feasible on the JVM with neo4j-graphql/neo4j-graphql-java. It is super flexible regarding the actual runtime. Here is my example:

Schema-first GraphQL and a more static approach with a Spring Data based repository – or, for that matter, any other JVM OGM – is absolutely possible. Find my example here: michael-simons/neo4j-aura-sdn-graphql. Old school enterprise Spring based data access frameworks are not mutually exclusive with GraphQL at all.

Last but not least, Quarkus and Smallrye-GraphQL offer a beautiful object-first approach; find my example here: michael-simons/neo4j-aura-quarkus-graphql. While I wanted to use “my” baby, the Cypher-DSL, to handcraft my queries, I do think that this approach will work just as well with Neo4j-OGM.

In the end, I am quite happy to have made up my mind a bit more about GraphQL and especially the accessibility it brings to the table. Many queries translate wonderfully to Neo4j’s native Cypher.

I hope you enjoyed reading this post as much as I enjoyed writing it and the examples along with it. Please make sure you visit the Medium pages of my colleagues, shared first here.

Happy coding and a nice summer.

The feature image on this post has been provided by Gerrit… If you get the English/German pun in this, here’s one more in the same ballpark 😉

| Comments (2) »


Synchronizing Neo4j causal cluster bookmarks

Neo4j’s causal cluster is available in the Neo4j enterprise edition and of course in Neo4j Aura.

A cluster provides causal consistency via the concept of bookmarks:

On executing a transaction, the client can ask for a bookmark which it then presents as a parameter to subsequent transactions. Using that bookmark the cluster can ensure that only servers which have processed the client’s bookmarked transaction will run its next transaction. This provides a causal chain which ensures correct read-after-write semantics from the client’s point of view.

When you use a Session from any of the official drivers directly (for example from the Neo4j Java Driver), or at least version 6 of Spring Data Neo4j, you hardly ever see bookmarks yourself, and there’s mostly no need to.

While Spring Data Neo4j 5.x still required some manual setup, SDN 6 does not need this anymore.

So in theory, all applications running against Neo4j should be fine in terms of “reading their own writes”.

But what about multiple instances of an application running against a cluster, for example to scale the application, too?

This does actually require some work. In the following example I refer to the Java driver, but the APIs of the other drivers and, of course, the behavior of the server are identical.

When creating a new session, the driver allows you to pass in a collection or iterable of bookmarks, not just a single object. All of those bookmarks are then sent to the cluster. As soon as the first transaction in this session starts, the routing will make sure that the requests go to a cluster member that has reached at least the latest transaction defined by the collection of bookmarks. There is no need to keep the bookmarks in order. That allows us to just collect new bookmarks and pass them on; we don’t have to do anything on the client side about sorting.
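Because ordering doesn’t matter, the client-side bookkeeping boils down to an unordered, thread-safe set handed over as a whole. A minimal sketch, assuming bookmarks are opaque strings (the real driver type is org.neo4j.driver.Bookmark, and the class and bookmark values here are invented for illustration):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal sketch of client-side bookmark bookkeeping
public class BookmarkStore {

	private final Set<String> bookmarks = ConcurrentHashMap.newKeySet();

	// Called whenever a committed transaction yields a new bookmark
	public void receive(String bookmark) {
		bookmarks.add(bookmark);
	}

	// Handed to new sessions: the whole set, in no particular order
	public Supplier<Set<String>> asSupplier() {
		return () -> Set.copyOf(bookmarks);
	}

	public static void main(String[] args) {
		var store = new BookmarkStore();
		store.receive("bookmark-tx-1");
		store.receive("bookmark-tx-2");
		System.out.println(store.asSupplier().get().size()); // 2
	}
}
```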

That work is already done internally by Spring Data’s implementations of bookmark managers.

But what can we do with that information to make sure multiple instances of the same application read the latest writes?

As soon as SDN becomes aware of a new bookmark, we must grab it and push it onto an exchange. A Redis pub/sub topic, a JMS destination configured as pub/sub, or even some Kafka setup will do.
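The mechanics can be sketched without any messaging library: each application instance publishes the bookmarks it receives locally and merges whatever arrives on the exchange into its own seed set. In the real projects Redis plays the role of the exchange; all class names here are invented for the sketch:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// In-process stand-in for a pub/sub exchange such as a Redis topic
class Exchange {
	private final List<Consumer<String>> subscribers = new CopyOnWriteArrayList<>();
	void subscribe(Consumer<String> s) { subscribers.add(s); }
	void publish(String bookmark) { subscribers.forEach(s -> s.accept(bookmark)); }
}

// Each application instance seeds its new sessions from this set
class AppInstance {
	final Set<String> seenBookmarks = ConcurrentHashMap.newKeySet();
	AppInstance(Exchange exchange) { exchange.subscribe(seenBookmarks::add); }
	void onNewLocalBookmark(String bookmark, Exchange exchange) { exchange.publish(bookmark); }
}

public class BookmarkSyncDemo {
	public static void main(String[] args) {
		var exchange = new Exchange();
		var a = new AppInstance(exchange);
		var b = new AppInstance(exchange);
		// Instance a commits a transaction and receives a bookmark from the driver
		a.onNewLocalBookmark("bookmark-tx-42", exchange);
		// Instance b now seeds its next session with that bookmark, too
		System.out.println(b.seenBookmarks.contains("bookmark-tx-42")); // true
	}
}
```

With that in place, instance b’s next read is routed to a cluster member that has seen instance a’s write.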

I have created two projects to demonstrate such a setup, with both SDN 6 and the prior version, SDN 5 + OGM:

Common setup

Both projects require a locally running Redis instance. Please consult the Redis documentation for your OS or use a Docker container.

Conceptually, any messaging system that supports pub/sub should do.

Spring Data Neo4j 6

The example project is here: bookmark-sync-sdn6.

SDN 6 publishes bookmarks received from the driver as ApplicationEvents, so we can listen to them via an ApplicationListener. For the other way round, the Neo4jTransactionManager can be seeded with a Supplier<Set<Bookmark>>.

The whole setup is as follows, please read the JavaDoc comments:

Spring Data Neo4j 5 + Neo4j-OGM

The example project is here: bookmark-sync-sdn5.

In SDN5+OGM we can use the BookmarkManager interface provided by SDN5. We run a completely custom (and also better) implementation than the default one, which relies on a Caffeine cache (which is not necessary at all).

The principle is the same, though: When new bookmarks are received via the transaction system, they will be published; new bookmarks received on the exchange will be the new seed. Please note that in SDN5, bookmark support must be enabled with @EnableBookmarkManagement.

Note: Don’t combine SDN6 and SDN5 config! Those configurations above are for two separate example projects.

Photo by Chiara F on Unsplash

| Comments (1) »