Review: DevOps Tools for Java Developers

Note: Both JFrog and O’Reilly sent me a paper copy of DevOps Tools for Java Developers for review (or my reading pleasure, or hopefully both). The copies came with no strings attached and this article is my honest opinion.

The book is written by Ixchel Ruiz, Melissa McKay, Stephen Chin and Baruch Sadogursky. Three of them I have met personally, and all of them come very much from the developer side of things and are well-known people in the Java world. All of them work at JFrog these days.

The latter is an interesting fact: JFrog is a public company that positions itself these days as the one-stop solution for software supply chain management, with the punchline “manage your binaries from developer to production”. A (long) while back I got to know JFrog primarily for its Maven-compatible software repository solution Artifactory, which has evolved into a central solution to manage binaries, container images and much more. Together with build pipelines, distribution and a bunch of other things, they target the complete requirements of developers and operations from build to production.

On that premise it only makes sense that JFrog supports its people in publishing about tooling for developers, and it is a great experience reading such a book written by developers.

The introductory chapter: DevOps for (or Possibly Against) Developers

That part really got me quickly. Of course Baruch refers to both The Phoenix Project by Gene Kim and others and The DevOps Handbook by the same authors: Both are written from the operations side of things, not the developers'. So the question is implied: Is DevOps something coming from operations against developers? Spoiler: It is not.

I really love how that chapter makes it clear that DevOps is very much not an engineering discipline per se, but is about collaboration and a shared endeavor. There are no DevOps engineers, but site reliability, production, infrastructure, QA and several more disciplines of engineering, and together they build an amalgam of development and operations that eventually fosters better quality, produces savings, deploys features faster and strengthens security.

Funnily enough, in these odd times we are living in, Baruch also addresses the fact that many companies – including modern software companies, not only companies of the past century – are cutting costs everywhere. They can do so by layoffs, salary and benefits reductions, or by doing better with the same resources, without grinding them down. This is also what DevOps is: working together.


The table of contents reads like a proper build chain, really. Taken verbatim from here, you'll find:

  • (One introductory chapter I covered explicitly)
  • The System of Truth
  • An Introduction to Containers
  • Dissecting the Monolith
  • Continuous Integration
  • Package Management
  • Securing Your Binaries
  • Deploying for Developers
  • Mobile Workflows
  • Continuous Deployment Patterns and Antipatterns

I like that order a lot! It starts with source control and I think it makes total sense to do so: There are people new to the field every day and we just cannot assume that everyone has an idea what that is. In addition, I would bet personal money that in way too many enterprise organizations source control management is still neglected, even in 2022. I skipped over most of the Git-specific things, but the history and intent of these topics are written up well by Stephen.

Containers and their terminology are ubiquitous but seldom distinctly explained. That chapter by Melissa covers not only the what, but also the how, and then the necessary best practices. It's written in a refreshingly different way than the usual “you must / should do this” approach, either in written words or especially in the style of conference-driven development. Excellent content.

What I like most is how she explains that once secrets and the like have made it into an image, there's no way to get them out again, due to the layered nature and layered filesystem of images. Therefore I am gonna repeat this here: Be careful when crafting your first container image, and don't delay proper secrets management to a later stage.

That monolith chapter by Ixchel: Lucky me, in my day job I am a library maintainer and can enjoy the luxury of watching the space from the outside. For better or worse. I was consulting in a previous life and have seen the heights of microservice architecture. These days, the pendulum seems to swing in the other direction again. I read the antipatterns section closely and I violently agree. It's good to start with them, especially if you hand this book to a junior person. I'm not so much in agreement with Spring Boot, Quarkus, Micronaut and Helidon being called dedicated microservice frameworks, though. They can be used in such a fashion, but you can also build (good or bad) monoliths with them. I recommend having an intensive look at the Moduliths effort by Mr. Drotbohm. There's middle ground (of course, with different tradeoffs).

Can we assume that continuous integration (and later continuous deployment) is a given? I am not sure… Hearing how many manual steps it took companies to react to Log4Shell back at the end of 2021, I have my doubts. Anyway, I really dig the reference to eXtreme Programming (XP) practices in the introduction of that chapter: “Code integration should be regular and rarely complicated”. If it is, you are already screwed. The chapter goes on stressing the importance of build and test failures and how they should be easy to investigate and fix. I would like to add that the day you opt to live with one red or one flaky test is the second opportunity to shoot yourself in the foot. And with little surprise, I agree with the scripted builds, too. Nice UIs will only help you so much… And not everyone is equally fluent in or able to navigate a UI and its patterns.

I did expect a bit more about central solutions in Package Management, but that chapter is a pretty solid introduction to build tools, especially how Maven handles package resolution. It is an important topic, though. An interesting note here is the rather short section about Docker: Here package management is complicated, as Docker tags are fluid and prone to change… How do you achieve reproducible builds with that? There seems to be no perfect answer.

Securing Your Binaries: If you are familiar with any of the previous chapters and think the book isn't for you, it might still be: Supply chain attacks and everything related to them are on the rise. This chapter by Stephen, written together with my long-time good acquaintance Sven, is an essential read. It starts with the SolarWinds case and takes it from there, covering static and dynamic security analysis, the different roles of different people, and even interactive application security testing and self-protection.

In addition, the chapter includes a proper introduction to the CVSS (Common Vulnerability Scoring System), which comes in quite handy. The importance of the section about the fact that vulnerabilities can be combined into different and new attack vectors cannot be overstated.
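CVSS base scores are usually communicated as vector strings such as CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H. As a small illustration (mine, not the book's), such a vector can be taken apart into its individual metrics:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// The standard CVSS v3.1 vector format: a version prefix followed by
// slash-separated metric:value pairs.
String vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H";

Map<String, String> metrics = new LinkedHashMap<>();
for (String part : vector.substring(vector.indexOf('/') + 1).split("/")) {
    String[] pair = part.split(":");
    metrics.put(pair[0], pair[1]);
}

// AV:N means the attack vector is "Network", one of the reasons this
// particular vector scores as critical (9.8).
assert "N".equals(metrics.get("AV"));
assert metrics.size() == 8;
```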

The chapter ends with quality management, using the quality gate method both as a barrier and as a point of action for risk assessment, countermeasures and risk tracking. I was a bit irritated, for various reasons, that “Clean Code” by Robert Martin is mentioned here, but it's about making clear that “clean” code (whatever that is anyway) does not equal secure code. I do however disagree that only “clean implementations” open up the full potential for security measures.

I’m very pleased to see Ana-Maria Mihalceanu in this book, too. She wrote Deploying for Developers. This chapter contains the culmination of several other chapters in the introduction of Kubernetes and how you can programmatically deploy from your build descriptor your application into a cluster, packaged as a OCI compatible container image. Monitoring, Tracing, Logging etc are all covered.

Maybe it’s my restricted experience in that special area, but I miss the mention of one-stop-solutions like Heroku or Pivotal Cloud Foundry of old where many of the things I would need to do for / in a cluster are done for me. Or maybe something like an assessment analogue to “make or buy”. Different tradeoffs, I know, but till to this day, not all applications are Netflix scale (neither in number of clients nor data produced. The latter being relevant in the context of the first chapter that speaks about the explosion of data created in the last years).

I guess a book covering the whole value chain of things these days isn't complete without covering Mobile Workflows. I only briefly skimmed this, though. A proper collaborative effort to be able to deploy new versions continuously is very important in this area: You want to grow and retain your user base with new features, keeping apps secure but also relevant to the algorithms of the app stores. It's fun to read about device farms in this chapter… A category of problems I have luckily never run into, I guess.

The final chapter, Continuous Deployment Patterns and Antipatterns, is a logical conclusion of the book. It reemphasizes the need to be able to continuously deploy and deliver, with both the changed customer expectations and the need to act on security breaches and to improve the timeline from identification to solution. The numbers stated about the Equifax security breach in 2017 are staggering (the costs go into the billions).

The case study about the iOS App Store and how Apple changed the way updates work (in many ways for the better) hits home with the version issue: It's true that for many applications I don't even know what version I have… It's latest. At work there's an ongoing discussion about that topic, but with a database, a tool used as a foundational block for other software, it's kinda hard to completely omit that information, unless you never have breaking changes.

While all the case studies here are somewhat entertaining, they aren't meant to be, I guess. It can and will eventually happen to most of us.


I enjoyed reading this book a lot! It gives – juniors and seniors alike – an exhaustive overview of end-to-end application development with the needs of 2022 and beyond in mind. And by developing I mean of course everything that is subsumed under the DevOps term: from actual coding, testing, packaging and deploying to observing, with security as well as availability and performance metrics in mind.

If you are in a hurry, I would recommend at least reading the chapters “DevOps for (or Possibly Against) Developers”, “Securing Your Binaries” and “Continuous Deployment Patterns and Antipatterns”.

| Comments (4) »


How to become an Open Source committer?

A couple of days ago I was asked the above question: “How to become an Open Source committer?” I think the answer might be interesting to others as well, so I am sharing it here.

At some point I wrote a GitHub profile, which you can read over there: I think it gives a pretty good idea what I am up to.

While I probably would fail all of the FAANG assessment tests for getting a job, and I am not the person to look to for implementing low-level optimizations and the like, I think that I am a decently good generalist with deep knowledge of a couple of important things, such as databases and the bigger (server-side) frameworks, but also on a language level. My aim is to keep an overview of how things work together and to go rather deep into the topic that I am working on in a project.

Anyway, I created a GitHub account back in 2010; my first contribution was a pull request to a jQuery add-on I liked but that was missing a piece I wanted. I needed that for my photo project. At that time I had already been working for nearly 8 years at a small German company in Aachen, mostly doing Oracle Database based stuff… At the frontend with Oracle Forms Client/Server: Not exactly the stuff that is popular on GitHub. However, I was often blogging about work-related stuff right here, for example about bulk operations in Oracle, an article that still gets hits, or about integrating Hibernate of old with Oracle Spatial. Many of these old posts have been accompanied by issues in the trackers of the respective projects.

And yes, I absolutely think that good issue reports are valuable Open Source contributions in their own right, very much like contributions to documentation and, heck, just fixing typos. So from my perspective, a proper first step can be reporting things. If you work in a company that uses Open Source projects and you run into issues, take the time to create a reproducer. Don't just go to a project and yell “this doesn't work”, but pay back with something that is runnable and shows the error. Most maintainers I know are happy to work with that. If your employer doesn't permit this, I would rethink my time there: Your employer is saving money by using Open Source. While contributing features or fixes is not always legally possible, taking the time for proper issues is. And if your own project is that secretive, build a dummy.
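To make “runnable and shows the error” concrete: a good reproducer sets up its own data and exposes the surprise with a single check. As an illustration (my example, not from the original question), here is what a minimal reproducer for the well-known week-year pitfall of SimpleDateFormat could look like:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Locale;
import java.util.TimeZone;

// Minimal reproducer: no company code, no secrets, just the surprising behaviour.
// "YYYY" is the week-year, not the calendar year.
SimpleDateFormat weekYear = new SimpleDateFormat("YYYY-MM-dd", Locale.US);
weekYear.setTimeZone(TimeZone.getTimeZone("UTC"));

Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"), Locale.US);
cal.clear();
cal.set(2021, Calendar.DECEMBER, 31);

// 2021-12-31 falls into week 1 of week-year 2022 (US locale),
// so the formatted year is 2022, not 2021.
String formatted = weekYear.format(cal.getTime());
assert formatted.equals("2022-12-31");
```

A maintainer can copy, run and debug this directly, which is worth much more than “the date is sometimes wrong”.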

My personal journey continued with a lot of blogging about Rails. I really used it a lot and I like it to this day. For whatever reason, it just worked back then and I found everything I wanted in there. Some posts might be more angry and ranty, but that usually boiled down to gems (those are projects you can pull in as libraries) requiring native counterparts on the system, and, well, let's say that ecosystem was worse on macOS around 2010 than it is today.

Sometime around 2011 I rewrote the aforementioned photo project from Ruby on Rails to Java with the Spring Framework, which really got me deep into Java. We started to experiment with Spring in the company, replaced a couple of old applications for the customer with new ones to great success, and in 2013 Spring Boot appeared and I have been using it ever since:

The application that I went public with is this one, still maintained and used (by me). Back in the early days I provided tons and tons of feedback to Boot and eventually also small features and bug fixes. In the end, it's about trust.

At the same time, I had the great opportunity to attend iSAQB trainings and met Gernot Starke, a great personal inspiration. What an honor that he asked me to contribute to arc42byExample. Of course I took that chance.

Up until here, the important part to understand is this: I am diligent at my work. Sometimes I overwork, but not too much. I have other interests than spending all day and night coding (more about this here). I was blessed to start my professional life in a company that fostered and supported its employees: with a budget for trainings and a budget to take time for exploring and learning things. I cannot repeat this often enough: Which trajectory you take and what expectations you allow yourself to develop depend so much on the first impression you get of work life. If you are a junior and you find yourself in a place that is all about the grind and hustle: Get out. Find something sustainable. I honestly don't believe in the “I hustle until my 30s and then I stop” thing. Find a place that enables growth on a mutual level.

Anyway, I didn’t grind away with OOS contribution either. I started to experiment with giving talks. First at lokal meetups and (Java) user groups with good success. I somewhat like it, but it stresses the hell out of me. I am more of a writer. But it with some persistence it give me some kinda good name, which is of course valuable, both for contribution and growing a network.

Talks these days are a so-so topic. I personally find it really hard to justify traveling through the world and giving talks at every possible place (and not only eyeing the pandemic here). There's no reason for me to fly to Brazil or wherever to give a talk about Spring Boot or Quarkus. There are excellent developers everywhere in the world, and I would rather coach a person from anywhere in the world to talk about Neo4j than fly ten thousand miles to do it myself just for a day off.

Back to Open Source: Make yourself a bit of a name. Report issues. Reach out over appropriate channels (tickets, Gitter, Slack; no unsolicited private messages). If you want to contribute code: Look for projects that have labels for contributions.

When you’re excited about a new idea and you might already have implemented it in someway or form and you think “hey, let’s just submit it!”, think again: Imagine the people at the receiving end. Is it a new feature? How will it be maintained in the future? Is it just something for a very small use case? Does it fit the rest? Who will own it? It is often safer to open up an issue and discuss if someone wants a new feature in their project or not. This will safe everybody’s time in the end.

Twitter is a good place for some discussions as well, and from a question like this, a great learning experience across several people can come: Fix potential exponential backtracking in ReflectionUtils array parsing.

If it's possible, go out to local meetups or conferences. Don't just sit there and spend your time passively; meet people. Talk with them. Listen. Build a network and contribute. In the end it's a lot about trust and people, as I already wrote in 2017.

I personally was lucky: My close work with Spring and Spring Data people brought me into conversations with two people I would call both inspirational and friends these days: Oliver Drotbohm and later Michael Hunger. With a slight detour trying out consultancy at the good company INNOQ, I ended up at Neo4j. At Neo4j I maintain one of our Open Source projects, Spring Data Neo4j, together with Gerrit Meier. Several other modules have been spun off from there.

This month, I celebrated my 4th anniversary:

Of course, Neo4j doesn’t earn money by paying me and the teams that work on pure Open Source modules (such as connectors and drivers). We do need them however to facilitate the usage of our main products, such as Neo4j Enterprise and Neo4j AuraDB.

For me it was a once-in-a-lifetime opportunity. I get to learn from so many smart people in a world-class database company and at the same time do my work in the open. As said, don't grind yourself mindlessly away, but also don't let open doors pass you by.

Last but not least: Nobody is a worse developer if they don’t do Open Source. It is useful, educational and a lot of fun most of the time, but it’s not required at all to be good at a job.

Update: I had a short conversation with Tim about when a good time to enter a project is: with rather young projects or with mature ones:

If you follow that thread, you'll see I also spoke about whether small equals insignificant or not (hint: small does not mean insignificant to me… Heck, you might even start something small that YOU need yourself and maybe attract contributors of your own).

And while I was thinking about that topic, I remembered the initiative from iJUG last year, explicitly sponsoring new people to get into Open Source projects. Markus Karg wrote about this here (in German). While I am personally deeply into Spring and Quarkus these days, the Jakarta EE and Adoption projects are really valuable to the whole Java ecosystem.

And last but not least, sometimes things just don't work out. Have a look at this small 5-minute video:

Markus and Andres are both well known in the Java ecosystem, both avid Open Source contributors and committers. And even though they followed the recommended approach, asking before contributing a new (small) feature, they weren't able to get it in. This is super frustrating, but it happens, and it happens to experienced people as well. Don't let it get to you if it happens to you.

Title photo by Peter Herrmann on Unsplash

| Comments (1) »


Winding down 2021

It’s late December and I am winding down with 2021, which was pretty much 2020 too, while looking skeptical into actual 2022.

I will come up with a personal review after I am done with the #Rapha500 and will focus here on what I found to be great in 2021 work-wise (aka programming Java and database-related things).

Spring Data Neo4j 6

Spring Data Neo4j 6.0.0 was actually released in October 2020, superseding SDN5+OGM. The project started out as early as 2019 as SDN/RX, and we at Neo4j had big ambitions to create a worthy successor. “We” in this case are Gerrit Meier and me.

I think we did succeed in many respects: We managed to get on the reactive hype train with SDN 6. Something that would not have been possible with Neo4j-OGM, which basically tries to recreate a subgraph from the Neo4j database on the client side just before mapping. That subgraph creation did not play nicely with a reactive flow, so we needed to come up with something else and focused on mapping individual records.
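In plain Java terms, the difference between the two mapping styles looks roughly like this (with hypothetical Row and Person types standing in for driver records and mapped entities; this is not the actual SDN code):

```java
import java.util.Iterator;
import java.util.List;

record Row(long id, String name) {}       // stands in for a driver record
record Person(long id, String name) {}    // stands in for a mapped entity

List<Row> result = List.of(new Row(1, "Alice"), new Row(2, "Bob"));

// OGM style: the client first reassembles a full subgraph from all records,
// then maps it in one go - mapping cannot start before the last record arrived.
List<Row> subgraph = List.copyOf(result);
List<Person> mappedAtOnce =
    -> new Person(,;

// SDN 6 style: each record is mapped the moment it arrives; downstream
// consumers see entities one by one, which fits a reactive, demand-driven flow.
Iterator<Row> records = result.iterator();
Row firstRow =;
Person first = new Person(,;
```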

And that came with a couple of issues: We thought we knew everything that customers and users had been throwing at Neo4j-OGM over the years, but boy… You never stop learning. And to add insult to injury: While we had a really long beta period with SDN/RX, a long enough warning that SDN 6 would be a migration and not an upgrade, and betas there as well, 2021 started with… surprised users. Of course.

All in all, we did 26 releases of Spring Data Neo4j 6 this year: 7 patches for 6.0, 5 milestones and 1 RC for 6.1, 6.1 itself followed by 7 patches again, 3 milestones of 6.2, one RC and eventually 6.2 itself last month.

A big thank you to Mark Paluch who not only gave us so much invaluable feedback in that time, but also ran most of the releases.

No more Zoom talks…

I tried to give a talk with the following abstract twice this year:

In 2014, the Reactive Manifesto was written: a pledge to make systems responsive, resilient, elastic and message driven. Two years later, in 2016, reactive programming started to go mainstream, with the rise of Spring WebFlux and Project Reactor. Another two years later, many NoSQL databases provided reactive database access. Neo4j, a transactional graph database, didn't have a solution back then, but we started a big cross-team effort to provide reactive database access to Neo4j. It landed in 2019: a reactive stack, right from the query execution engine, to the transport layer, to the driver (client connection) at the other end.

But that left the team working on Neo4j-OGM and Spring Data Neo4j out: What to do with an object mapper that had been deeply inspired by Hibernate and was working on a fully hydrated subgraph client-side?

Well, we did what many developers did: Just let us rewrite the thing, take some inspiration from Spring Data JDBC and also from modern approaches to querying databases like jOOQ and be done with it.

While we did manage to make a lot of new users happy, we didn't expect so many tickets from old users. They were not complaining about changed annotations or configuration, but more about the fact that we removed things we considered drawbacks of the old system but that had been features people actually used.

If in-person conferences ever happen again, I am inclined to actually do this talk at some point. However, not remotely. I am done with Zoom talks, I just can't any more…


Right now, SDN 6.2 is in excellent shape. We have been able to iron out all outstanding big issues, made our ideas clearer and also added brand new things: like GraphQL support via the Querydsl integration, a much improved Cypher-DSL, the most feature-rich projection mechanism of all Spring Data projects (which even got back-ported into Spring Data Commons) – and all that while only being in the same room once in 2021 (albeit for BBQ).

I am really thankful to have colleagues and friends like Gerrit. It’s great when you can not only dream up things together, but also take care of issues later on.


In mid 2020, Michael Hunger gave us the neo4j-contrib/cypher-dsl repository and coordinates to be used for our rebooted Cypher-DSL. We had extracted the Cypher-DSL from SDN/RX before it became SDN 6, as we thought tooling like this is valuable beyond an object mapping framework – and we guessed right.

In 2021 we pushed out 23 releases: In its latest incarnation the Cypher-DSL supports building most statements that Neo4j 4.4 supports, both in a fluent fashion and in a more imperative way. The Cypher-DSL now provides a static type generator based on SDN 6 annotations as well as a Cypher parser. The parser module is probably the thing I am most proud of in the project: It utilizes Neo4j's own JavaCC-based parser to read Cypher strings into the Cypher-DSL AST, ready to be changed, validated or transformed.

The Cypher-DSL is used in SDN 6 as the main tooling to build queries, but it is also used in neo4j-graphql-java, a JVM-based stack for transforming GraphQL queries into Cypher. I have written about that here. In addition, I hear rumors that GraphAware is using it, too. Well, they can be happy; we just removed all experimental warnings and released 2022.0.0 already.

I appreciate the feedback from Andreas Berger and Christophe Willemsen on that topic a lot. Thank you for being with me on it in 2021.

Quarkus and Neo4j

I do wear my “I made Quarkus 1.0” shirt with pride, and I am happy that the Neo4j extension was part of Quarkus from release 1 up to 2.5.3.
It won't be in 2.6.0 directly.

What?! I hear you scream… Be not afraid, it's one of the first citizens in the Quarkiverse Hub. You'll find it in quarkiverse/quarkus-neo4j, and it learned so many new tricks this year, especially the Quarkus Dev Services support, for which I even created a video:

I fully support the decision to move extensions to a separate org while retaining the close connection to the parent project, both via the orgs and the code generator. The discussion and the arguments are nothing but stellar; have a look at Quarkus #16870, Moving extensions outside of core repository.

My life as a maintainer of that extension is much easier in that form. A big shoutout to the people in the above discussion, especially to Guillaume Smet, George Gastaldi and also to Galder Zamarreño. It’s a pleasure working with you.


My database refactoring toolkit for Neo4j, Neo4j-Migrations, completely escalated in 2021. While it started off as a small shim to integrate SDN 6 into JHipster (btw, I'm still super happy about every encounter with Frederik and Matt), it now does a ton of things:

  • Has a CLI
  • Is distributed as native packages for macOS, Linux and Windows
  • Has a fully automated release process
  • Has a Quarkus extension
  • Supports lifecycle callbacks

and more… It just got released as 1.2.2. The biggest impact on that project and my motivation has been made by Andres Almiray and JReleaser. Andres not only reached out to me to teach me about JReleaser, he picked up my project, played with it, came up with a suggested workflow, and we hacked together the missing pieces in an afternoon. Stunning.

If you find either my Neo4j-Migrations tooling or JReleaser useful, leave a star, or support Andres in a form that suits you.

More things

Similar to the way we created Neo4j support in Quarkus for a nice OOTB experience, Dmitry Alexandrov and I started writing a similar extension in Oracle's Helidon project. I really, really appreciate that companies can work together in a positive way despite the fact that they are competitors in other areas.

Speaking about Oracle: Every single interaction with their GraalVM team has been just splendid. Thanks Alina Yurenko and team!

Thanks to Kevin Wittek we have been able to participate in the beta testing of AtomicJar's Testcontainers Cloud, an absolutely lovely experience. I do see a bright future for the journey that Richard North and Sergei Egorov have started.

There are many more people whose input and feedback I appreciate a lot, not only this year, but in previous and upcoming ones as well. Here are just a couple of them: Gunnar Morling, knowledgeable in so many ways and always fun to talk with; Samuel Nitsche, for input way beyond “just the tech”; and surely Markus Eisele, for always having an open ear.

Of course, there are even more. Remember, you all are valid. And more often than not, you do influence people, in some way or the other. I’m grateful to have a lot of excellent people in my life.

And with that, I sincerely hope that my first statement in this article will turn out to be just a bad pun, that 2022 will not be 2020 too, and that we can eventually safely meet in person again. Until then, stay safe and do create cool things. I still think that not all is fucked up, actually.

(Title image by Vidar Nordli-Mathisen.)

| Comments (1) »


GraalVM and proxy bindings with embedded languages.

A while ago I had the opportunity to publish a post on the GraalVM Medium blog titled The many ways of polyglot programming with GraalVM, which is still accurate.

A year later, GraalVM has just gotten better in many dimensions: faster, supporting JDK 17, and I think its documentation is now quite stellar. Have a look at both the polyglot programming reference and the embedding languages guide. The latter is what we are referring to in this post.

The documentation has excellent examples of how to access host objects (in my example, Java objects) from the embedded language and vice versa. First, here's how to access a host object (an instance of MyClass) from embedded JavaScript:

public static class MyClass {
    public int               id    = 42;
    public String            text  = "42";
    public int[]             arr   = new int[]{1, 42, 3};
    public Callable<Integer> ret42 = () -> 42;
}

public static void main(String[] args) {
    try (Context context = Context.newBuilder()
                           .allowAllAccess(true)
                           .build()) {
        context.getBindings("js").putMember("javaObj", new MyClass());
        boolean valid = context.eval("js",
               "    == 42"          +
               " && javaObj.text       == '42'"        +
               " && javaObj.arr[1]     == 42"          +
               " && javaObj.ret42()    == 42")
               .asBoolean();
        assert valid;
    }
}

This is the essential part: context.getBindings("js").putMember("javaObj", new MyClass());. The instance is added to the bindings of JavaScript variables in the polyglot context. In the following eval block, a boolean expression is defined and returned, checking whether all the values are as expected.

Vice versa, accessing JavaScript members of the embedded language from the Java host looks like this:

try (Context context = Context.create()) {
    Value result = context.eval("js",
                    "({ "                   +
                        "id   : 42, "       +
                        "text : '42', "     +
                        "arr  : [1,42,3] "  +
                    "})");
    assert result.hasMembers();
    int id = result.getMember("id").asInt();
    assert id == 42;
    String text = result.getMember("text").asString();
    assert text.equals("42");
    Value array = result.getMember("arr");
    assert array.hasArrayElements();
    assert array.getArraySize() == 3;
    assert array.getArrayElement(1).asInt() == 42;
}

This time, a result is defined directly in the JavaScript context. The result is a JavaScript-object-like structure and its values are asserted. So far, so (actually) exciting.

There is a great API that allows controlling which members and methods can be accessed from the embedding (read more here), and we find a plethora of further options (how to scope parameters, how to allow access to iterables and more).

The documentation is, however, a bit sparse on how to use org.graalvm.polyglot.proxy.Proxy. We do find a good clue in the JavaDoc of the aforementioned class:

Proxy interfaces allow to mimic guest language objects, arrays, executables, primitives and native objects in Graal languages. Every Graal language will treat instances of proxies like an object of that particular language.

So that interface essentially allows you to stuff a host object into the guest, where it behaves like the native thing. GraalVM actually comes with a couple of specializations of it:

  • ProxyArray to mimic arrays
  • ProxyObject to mimic objects with members
  • ProxyExecutable to mimic objects that can be executed
  • ProxyNativeObject to mimic native objects
  • ProxyDate to mimic date objects
  • ProxyTime to mimic time objects
  • ProxyTimeZone to mimic timezone objects
  • ProxyDuration to mimic duration objects
  • ProxyInstant to mimic timestamp objects
  • ProxyIterable to mimic iterable objects
  • ProxyIterator to mimic iterator objects
  • ProxyHashMap to mimic map objects

Many of them provide static factory methods to get you an instance of a proxy that can be passed to the polyglot instance as in the first example above. The documentation itself has an example about array proxies. The question that reached my desk was about date-related proxies, in this case a ProxyInstant, something that mimics things representing timestamps in the guest. To not confuse Java programmers more than necessary: JavaScript has the same mess with its Date object as we Java programmers have with java.util.Date: a thing to represent it all. Modern Java is much clearer these days and calls it what it is: a java.time.Instant (an instantaneous point on the time-line).
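To make the naming point concrete, here is plain JDK code, no GraalVM required, showing how the two types relate:

```java
import java.time.Instant;
import java.util.Date;

public class DateVsInstant {

	public static void main(String... args) {
		// java.util.Date is the "thing to represent it all", Instant the unambiguous modern type
		Date legacy = new Date(0L);           // milliseconds since the epoch, just like JavaScript's Date
		Instant instant = legacy.toInstant(); // an instantaneous point on the time-line

		System.out.println(instant);                                    // 1970-01-01T00:00:00Z
		System.out.println(instant.toEpochMilli() == legacy.getTime()); // true
		System.out.println(Date.from(instant).equals(legacy));          // true, round-trips at millisecond precision
	}
}
```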

So what does ProxyInstant do? ProxyInstant.from(instant) gives you an object that, when passed to embedded JavaScript, behaves in many situations like JavaScript's Date. For example: it will compare correctly. But that's pretty much exactly how far it goes.

Methods like getTime or setTime on the proxy inside the guest (at least in JavaScript) won't work. Why is that? The proxy does not map all those methods to the JavaScript object's members, and it actually has no clue how: the proxy can be defined on a Java instant, a date, or none of those at all and just use a long internally…

So how to solve that? Proxies in the host can be combined and we add ProxyObject:

public static class DateProxy implements ProxyObject, ProxyInstant {

ProxyObject comes with getMember, putMember, hasMember and getMemberKeys. In JavaScript, both attributes and methods of an object are referred to as members, so that is exactly what we are looking for to make, for example, getTime work. One possible proxy object to make Java's instant or date work as a JavaScript date inside embedded JS on GraalVM therefore looks like this:

public static class DateProxy implements ProxyObject, ProxyInstant {

  private static final Set<String> PROTOTYPE_FUNCTIONS = Set.of(
    "getTime", "getDate", "setHours", "setDate", "toString");

  private final Date delegate;

  public DateProxy(Date delegate) {
    this.delegate = delegate;
  }

  public DateProxy(Instant delegate) {
    this(Date.from(delegate));
  }

  @Override
  public Object getMember(String key) {
    return switch (key) {
      case "getTime" -> (ProxyExecutable) arguments -> delegate.getTime();
      case "getDate" -> (ProxyExecutable) arguments -> delegate.getDate();
      // The deprecated java.util.Date mutators mirror JavaScript's Date nicely here
      case "setHours" -> (ProxyExecutable) arguments -> {
        delegate.setHours(arguments[0].asInt());
        return delegate.getTime();
      };
      case "setDate" -> (ProxyExecutable) arguments -> {
        delegate.setDate(arguments[0].asInt());
        return delegate.getTime();
      };
      case "toString" -> (ProxyExecutable) arguments -> delegate.toString();
      default -> throw new UnsupportedOperationException("This date does not support: " + key);
    };
  }

  @Override
  public Object getMemberKeys() {
    return PROTOTYPE_FUNCTIONS.toArray();
  }

  @Override
  public boolean hasMember(String key) {
    return PROTOTYPE_FUNCTIONS.contains(key);
  }

  @Override
  public void putMember(String key, Value value) {
    throw new UnsupportedOperationException("This date does not support adding new properties/functions.");
  }

  @Override
  public Instant asInstant() {
    return delegate.toInstant();
  }
}

Most of the logic is in hasMember, and the actual dispatch happens in getMember: everything that a member can represent can be returned! So either concrete values that are representable inside the embedded language, or again proxy objects. As we want to represent methods on that JavaScript object, we return a ProxyExecutable! Execution will actually be deferred until called in the guest. What happens in the call is of course up to you. I have added examples for just getting values from the delegate but also for manipulating it. Because of the latter I found it sensible to use a java.util.Date as delegate, but an immutable Instant on a mutable attribute of the proxy object would have been possible as well.

Of course there are methods left out, but I think the idea is clear. The proxy object works as expected:

public class Application {

  public static void main(String... a) {
    try (var context = Context.newBuilder("js").build()) {
      var today = LocalDate.now();
      var bindings = context.getBindings("js");
      bindings.putMember("javaInstant", new DateProxy(today.atStartOfDay().atZone(ZoneId.of("Europe/Berlin")).toInstant()));
      bindings.putMember("yesterday", new DateProxy(today.minusDays(1).atStartOfDay().atZone(ZoneId.of("Europe/Berlin")).toInstant()));
      var result = context.eval("js", """
          var nativeDate = new Date(new Date().toLocaleString("en-US", {timeZone: "Europe/Berlin"}));
          ({
            nativeDate   : nativeDate,
            nativeTimeFromNativeDate : nativeDate.getTime(),
            javaInstant: javaInstant,
            diff: nativeDate.getTime() - javaInstant.getTime(),
            isBefore: yesterday < nativeDate,
            nextWeek: new Date(javaInstant.setDate(javaInstant.getDate() + 7))
          })
          """);
    }
  }
}

As always, there are two or more sides to solutions: with the one above, you are in full control of what is possible or not. On the other hand, you are in full control of what is possible or not, meaning it is also entirely up to you to cover it. There will probably be edge cases if you pass such a proxy to an embedded program which in turn calls things on it you didn't foresee. On the other hand, it is rather straightforward and most likely performant, without too many context switches.

The other option would be pulling a JavaScript date from the embedded language into the Java host, like so: var javaScriptDate = context.eval("js", "new Date()");, and manipulating it there.
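A sketch of that alternative: the JavaScript Date stays a guest object and the host talks to it through the Value API. invokeMember and asInstant are real polyglot API; whether asInstant works on a plain JS Date depends on GraalVM's date/time interop, which is worth verifying on your version:

```java
import java.time.Instant;
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class GuestDate {

	public static void main(String... args) {
		try (Context context = Context.create()) {
			// A genuine JavaScript Date, manipulated from the host via its own members
			Value javaScriptDate = context.eval("js", "new Date(0)");
			System.out.println(javaScriptDate.invokeMember("getUTCFullYear").asInt()); // 1970
			// GraalVM's JavaScript Date maps onto java.time via the interop date/time support
			Instant instant = javaScriptDate.asInstant();
			System.out.println(instant);
		}
	}
}
```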

Either way, I found it quite interesting to dig into GraalVM polyglot again, thanks to one of our partners asking great questions, and I hope you find this insight useful as well. A full version of that program is available as a runnable JBang script:


As you might have noticed in the snippets above, I am on Java 17. The script runs best on GraalVM 21.3.0 (JDK 17) but will also be (more or less) happy on a stock JDK 17.



Testing in a modular world

I am not making a secret out of it: I am a fan of the Java module system, and I think it can provide the same benefits for library developers that it brings to the maintainers and developers of the JDK itself.

If you are interested in a great overview, have a look at this comprehensive post about Java modules by Jakob Jenkov. No fuss, just straight to the matter. Also, read what Christian publishes: It is no coincidence that my post in 2021 has the same name as his in 2018.

At the beginning of this month, I wrote about a small tool I wrote for myself, scrobbles4j. I want the client to be able to run on the module path and the module path alone. Why am I doing this? Because I am convinced that modularization of libraries will play a bigger role in Java's future, and I am responsible for Spring Data Neo4j (not yet modularized) and the Cypher-DSL (published as a multi-release JAR, with module support on JDK 11+ and the module path), and I advise on a couple of things in the Neo4j Java driver. I just want to know upfront what I have to deal with.

The Java module system starts to be a bit painful when you have to deal with open- and closed-box testing.

Goal: create a tool that runs on the module path, is unit-testable without hassle in any IDE (i.e. does not need additional plugins, config or conventions) and can be integration-tested. The tool in my case (the Scrobbles4j application linked above) is a runnable command line tool depending on various service implementations defined by modules. A Java module does not need to export or open a package to be executable, which will be important to note later on!

Christian starts his post above with the “suggestion” to add the (unit) test classes just next to the classes under test… like it was ages ago. Christian's blog post is from 2018, but honestly, that resembles my feeling all too well when I kicked this off: it seems to be the easiest solution, and I wonder if this is how the JDK team works.

I prefer not to do this, as I am happy with the convention of src/main and src/test.

As I write this, most things work out pretty well with Maven (3.8 and Surefire 3.0.0.M5) and the need for extra config vanished.

Have a look at this repository: michael-simons/modulartesting. The project’s pom.xml has everything needed to successfully compile and test Java 17 code (read: The minimum required plugin versions necessary to teach Maven about JDK 17). The project has the following structure:

├── app
│   ├── pom.xml
│   └── src
│       ├── main
│       │   └── java
│       │       ├── app
│       │       │   └── Main.java
│       │       └── module-info.java
│       └── test
│           └── java
│               └── app
│                   └── MainTest.java
├── greeter
│   ├── pom.xml
│   └── src
│       ├── main
│       │   └── java
│       │       ├── greeter
│       │       │   └── Greeter.java
│       │       └── module-info.java
│       └── test
│           └── java
├── greeter-it
│   ├── pom.xml
│   └── src
│       └── test
│           └── java
│               ├── greeter
│               │   └── it
│               │       └── GreeterIT.java
│               └── module-info.java
└── pom.xml

That example here consists of a greeter module that creates a greeting and an app module using that greeter. The greeter requires a non-null and not blank argument. The app module has some tooling to assert its arguments. I already have prepared a closed test for the greeter module.

The whole setup is compilable and runnable like this (without using any tooling apart from JDK-provided means). First, compile the greeter and app modules. The --module-source-path option can be specified multiple times; the --module argument takes a list of modules:

javac -d out --module-source-path greeter=greeter/src/main/java --module-source-path app=app/src/main/java --module greeter,app

It's runnable on the module path like this:

java --module-path out --module app/app.Main world
> Hello world.

As said before, the app module doesn't export or open anything. cat app/src/main/java/module-info.java gives you:

module app {
	requires greeter;
}

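For contrast, the greeter module does need an exports directive, otherwise app could not use the Greeter class at all. Its descriptor presumably looks like this (sketched from how app consumes it, as the listing is not shown in the post):

```java
// greeter/src/main/java/module-info.java
module greeter {
	exports greeter;
}
```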
Open testing

Why is this important? Because we want to unit-test or open-test this module (other terms are in-module testing vs. extra-module testing).
In-module testing allows us to test package-private API as before; extra-module testing uses the modular API as-is, just as other modules will, hence it maps to integration tests.

The main class is dead simple:

package app;

import greeter.Greeter;

public class Main {

	public static void main(String... var0) {
		if (!hasArgument(var0)) {
			throw new IllegalArgumentException("Missing name argument.");
		}
		System.out.println((new Greeter()).hello(var0[0]));
	}

	static boolean hasArgument(String... args) {
		return args.length > 0 && !isNullOrBlank(args[0]);
	}

	static boolean isNullOrBlank(String value) {
		return value == null || value.isBlank();
	}
}

It has some utility methods that I want to make sure work as intended, so I subject them to a unit test. I test package-private methods here, so this is an open test, and a test based on JUnit 5 might look like this:

package app;

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class MainTest {

	@Test
	void isNullOrBlankShouldDetectNullString() {
		assertTrue(Main.isNullOrBlank(null));
	}

	@ParameterizedTest
	@ValueSource(strings = { "", " ", "  \t " })
	void isNullOrBlankShouldDetectBlankStrings(String value) {
		assertTrue(Main.isNullOrBlank(value));
	}

	@ParameterizedTest
	@ValueSource(strings = { "bar", "  foo \t " })
	void isNullOrBlankShouldWorkWithNonBlankStrings(String value) {
		assertFalse(Main.isNullOrBlank(value));
	}
}

It lives in the same package (app) and in the same module but under a different source path (app/src/test). When I hit the run button in my IDE (here IDEA), it just works:

But what happens if I just run ./mvnw clean verify? Things fail:

[INFO] --- maven-surefire-plugin:3.0.0-M5:test (default-test) @ app ---
[INFO] -------------------------------------------------------
[INFO] -------------------------------------------------------
[INFO] Running app.MainTest
[ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 0.05 s <<< FAILURE! - in app.MainTest
[ERROR] app.MainTest.isNullOrBlankShouldDetectNullString  Time elapsed: 0.003 s  <<< ERROR!
java.lang.reflect.InaccessibleObjectException: Unable to make app.MainTest() accessible: module app does not "opens app" to unnamed module @7880cdf3
[ERROR] app.MainTest.isNullOrBlankShouldWorkWithNonBlankStrings(String)[1]  Time elapsed: 0 s  <<< ERROR!
java.lang.reflect.InaccessibleObjectException: Unable to make app.MainTest() accessible: module app does not "opens app" to unnamed module @7880cdf3
[ERROR] app.MainTest.isNullOrBlankShouldWorkWithNonBlankStrings(String)[2]  Time elapsed: 0 s  <<< ERROR!
java.lang.reflect.InaccessibleObjectException: Unable to make app.MainTest() accessible: module app does not "opens app" to unnamed module @7880cdf3
[ERROR] app.MainTest.isNullOrBlankShouldDetectBlankStrings(String)[1]  Time elapsed: 0.001 s  <<< ERROR!
java.lang.reflect.InaccessibleObjectException: Unable to make app.MainTest() accessible: module app does not "opens app" to unnamed module @7880cdf3
[ERROR] app.MainTest.isNullOrBlankShouldDetectBlankStrings(String)[2]  Time elapsed: 0.001 s  <<< ERROR!
java.lang.reflect.InaccessibleObjectException: Unable to make app.MainTest() accessible: module app does not "opens app" to unnamed module @7880cdf3
[ERROR] app.MainTest.isNullOrBlankShouldDetectBlankStrings(String)[3]  Time elapsed: 0 s  <<< ERROR!
java.lang.reflect.InaccessibleObjectException: Unable to make app.MainTest() accessible: module app does not "opens app" to unnamed module @7880cdf3
[INFO] Results:
[ERROR] Errors: 
[ERROR]   MainTest.isNullOrBlankShouldDetectBlankStrings(String)[1] » InaccessibleObject
[ERROR]   MainTest.isNullOrBlankShouldDetectBlankStrings(String)[2] » InaccessibleObject
[ERROR]   MainTest.isNullOrBlankShouldDetectBlankStrings(String)[3] » InaccessibleObject
[ERROR]   MainTest.isNullOrBlankShouldDetectNullString » InaccessibleObject Unable to ma...

To understand what's happening here, we have to look at the command run by the IDE. I have abbreviated the command a bit and kept the important bits:

/Library/Java/JavaVirtualMachines/jdk-17.jdk/Contents/Home/bin/java -ea \
--patch-module app=/Users/msimons/Projects/modulartesting/app/target/test-classes \
--add-reads app=ALL-UNNAMED \
--add-opens app/app=ALL-UNNAMED \
--add-modules app \
// Something something JUnit app.MainTest

Note: Why does it say app/app? The first app is the name of the module, the second the name of the package within it (which in this case are the same).

First, --patch-module: “Patching modules” teaches us that one can patch sources and resources into a module. This is what's happening here: the IDE adds my test classes to the app module, so they are subject to the one and only allowed module descriptor in that module.
Then --add-reads: this patches the module descriptor itself and basically makes it require another module (here, simplified: everything).
The most important bit for successfully testing things is --add-opens: it opens the app module to the whole world (and especially to JUnit). It is not that JUnit needs direct access to the classes under test, but to the test classes, which are, due to --patch-module, part of the module.
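The same exception can be provoked without any test framework by reflecting into a module that is not open; java.base itself works as the demonstration on JDK 16 or newer, where strong encapsulation is the default:

```java
import java.lang.reflect.InaccessibleObjectException;

public class OpensDemo {

	public static void main(String... args) throws Exception {
		var value = String.class.getDeclaredField("value");
		try {
			// Fails because java.base does not "opens java.lang" to the unnamed module,
			// exactly like the app module did not open itself to JUnit on the classpath
			value.setAccessible(true);
			System.out.println("accessible");
		} catch (InaccessibleObjectException e) {
			System.out.println("denied");
		}
	}
}
```

Adding --add-opens java.base/java.lang=ALL-UNNAMED to the java command flips the outcome, which is precisely what the IDE does for the app module.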

Let’s compare what Maven/Surefire does with ./mvnw -X clean verify:

[DEBUG] Path to args file: /Users/msimons/Projects/modulartesting/app/target/surefire/surefireargs17684515751207543064
[DEBUG] args file content:
"damn long class path"

It doesn't have the --add-opens part! Remember when I wrote that the app module has no opens or exports declaration? If it had, the --add-opens option would not have been necessary and my plain Maven execution would work. But adding it to my module is completely against what I want to achieve.
And as much as I appreciate Christian and his knowledge, I didn't get any solution from his blog above to work for me. What does work is just adding the necessary opens to Surefire like this:

			<configuration combine.self="append">
				<argLine>--add-opens app/app=ALL-UNNAMED</argLine>
			</configuration>

There is actually an open ticket in the Maven tracker about this exact topic: SUREFIRE-1909 – Support JUnit 5 reflection access by changing add-exports to add-opens (thanks, Oliver, for finding it!). I would love to see this fixed in Surefire. I mean, the likelihood that someone using Surefire also wants JUnit 5 to access their module is pretty high.

You might rightfully ask why I open the app module to “all unnamed” and not to org.junit.platform.commons, as the latter is what accesses the test classes. The tests don't run on the module path alone but on the classpath, and the classpath accessing modules is perfectly valid, as explained by Nicolai. We have a dependency from the classpath on the module path here, and we must make sure that the dependent is allowed to read the dependency.

Now on to

Closed or integration testing

From my point of view, closed testing in the modular world should be done with actual, separate modules. Oliver and I agreed that this thinking probably comes from a TCK-based working approach, such as applied in the JDK itself or in the Jakarta EE world, but it's not a bad approach, quite the contrary. What you get here is an ideal separation of really different types of tests, without fiddling with test names and everything.

An integration test for the greeter could look like this:

package greeter.it;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import greeter.Greeter;
import org.junit.jupiter.api.Test;

class GreeterIT {

	@Test
	void greeterShouldWork() {
		assertEquals("Hello, modules.", new Greeter().hello("modules"));
	}

	@Test
	void greeterShouldNotGreetNothing() {
		assertThrows(NullPointerException.class, () -> new Greeter().hello(null));
	}
}

As it lives in a different module, the package is different (greeter.it) and so is the module name. Therefore, we can happily specify a module descriptor for the test itself, living in src/test:

module greeter.it {
	requires greeter;
	requires org.junit.jupiter;
	opens greeter.it to org.junit.platform.commons;
}

The module descriptor makes it obvious: this test will run on the module path alone! I can clearly define what is required and what I need to open up for usage of reflection. Notice: I open up the integration test module (greeter.it), not the greeter module itself!


Testing in the modular world requires some rethinking. You need to learn about the various options of javac and java in regard to the module system.
I found The javac Command particularly helpful. Pain points are usually found where class path and module path meet. Sadly, this is often the case in simple unit or open tests. In a pure classpath world, they are easier to handle.

However, Maven and its plugins are getting there. I haven't checked Gradle, but I guess that ecosystem is moving along as well. For integration tests, the world actually looks neat, at least in terms of the test setup itself. Everything needed can be expressed through module descriptors. The testing effort for something like the Spring Framework itself to provide real module descriptors is of course something else, and I am curious what solution the smart people over there will come up with in time for Spring 6.

If you find this post interesting, feel free to tweet it, and don't forget to check out the accompanying project:
