Neo4j, Java and GraphQL

Recently, I realized I am an old person:

but I can add even more to it: Until 2021, I managed to steer clear of GraphQL almost everywhere, except for one tiny thing I made for our family site.

I mean, I got the idea of GraphQL, but it never clicked with me. I totally love declarative query languages, such as SQL and Cypher. GraphQL never felt like that to me, despite its claim at the time of writing: “GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API”. I perceive it more as a schema declaration that happens to be usable as a query language too, which is at least aligned with GraphQL’s own description:



Anyway, just because it didn’t click with me doesn’t mean it’s not fulfilling a need other people do have.

I happen to work at a company that creates a world-famous graph database named “Neo4j”. Now, “Graph” makes up about 71% of “GraphQL”, so the suspicion suggests itself that GraphQL is a query language for a graph database. It isn’t, at least not natively for Neo4j. A while back I needed to explain this to several people when I was approached by our friends at VMware discussing their new module spring-graphql, and I saw more than one surprised face.

Now, what does Neo4j have? It has, first and foremost, Cypher. Which is great. But it also has neo4j/graphql.

Neo4j GraphQL

Neo4j’s officially supported product is neo4j/graphql.



The Neo4j GraphQL Library, released in April this year, builds on top of Cypher. My colleagues Darrell and Daniel have blogged a lot about it (here and here) and of course there are great talks.

Technically, the Neo4j GraphQL library is a JavaScript library, usable in a Node environment together with something like the Apollo GraphQL server. Under the hood, the library and its associated object mapper translate the GraphQL schema into Cypher queries.

Architecturally, you set up a native graph database (Neo4j) together with a middleware that you can access either directly or via other applications. This is as far as I can dive into Neo4j GraphQL. I wholeheartedly recommend following Oskar Hane, Dan Starns and team for great content about it.

Neo4j GraphQL satisfies the providers of APIs in the JavaScript space. For client applications (regardless of whether they are actual clients or other server-side applications), the runtime of that API doesn’t matter. But what about old people like me, doing Java in the backend?

Neo4j, GraphQL and Java

Actually, there are a ton of options. Let me walk you through them:

neo4j-graphql-java

This is the one that comes closest to what neo4j/graphql does: It takes in a schema definition and builds the translation to Cypher for you. Your job is to provide the runtime around it. You’ll find it here: neo4j-graphql/neo4j-graphql-java.

This library parses a GraphQL schema and uses the information from the annotated schema to translate GraphQL queries and their parameters into Cypher queries and parameters.

Those Cypher queries can then be executed via the Neo4j Java driver against the graph database, and the results can be returned directly to the caller.

The library makes no assumptions about the runtime or the JVM language here. You are free to run it in Kotlin with Ktor, or in Java with… actually, whatever server is able to return a JSON-ish structure.

As of now (July 2021), the schema augmented by neo4j-graphql-java differs from the one augmented by neo4j/graphql, but according to the readme, work is underway to support the same thing.

What I like a lot about the library: It uses the Cypher-DSL for building the underlying Cypher queries. Why does that make me happy? We – Gerrit and I – always hoped that it would prove useful to someone else outside our own object mapping.

How to use it? I created a small gist, Neo4jGraphqlJava, that uses Javalin as a server and JBang to run. Assuming you have JBang installed, just run:

jbang https://gist.github.com/michael-simons/f8a8a122d1066f61b2ee8cd82b6401b8 -uneo4j -psecret -abolt://localhost:7687

and point it to your Neo4j instance of choice. It comes with a predefined GraphQL schema:

type Person {
	name: ID!
	born: Int
	actedIn: [Movie] @relation(name: "ACTED_IN")
}
type Movie {
	title: ID!
	released: Int
	tagline: String
}
type Query {
	person: [Person]
}

but in the example gist, you can use --schema to point it to another file. The whole translation happens in a couple of lines:

var graphql = new Translator(SchemaBuilder.buildSchema(getSchema()));
var yourGraphQLQuery = "// Something something query";
graphql.translate(yourGraphQLQuery).stream().findFirst().ifPresent(cypher -> {
	// Do something with the generated Cypher
});

Again, find the whole and executable program here.
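To make this more tangible, here is a condensed sketch of the full round trip: translating a concrete query against a schema like the one above and running the result with the Neo4j Java driver. Treat it as a minimal sketch – it assumes JDK 15+ text blocks and that the translator’s Kotlin result type exposes getQuery() and getParams() to Java; the gist linked above remains the authoritative version.

import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.GraphDatabase;
 
import org.neo4j.graphql.SchemaBuilder;
import org.neo4j.graphql.Translator;
 
public class TranslateAndRun {
 
	public static void main(String... args) {
 
		var schema = """
			type Person { name: ID!, born: Int }
			type Query { person: [Person] }
			""";
 
		// Translate one GraphQL query into a Cypher statement plus parameters
		var translator = new Translator(SchemaBuilder.buildSchema(schema));
		var cypher = translator.translate("{ person { name born } }").get(0);
 
		try (var driver = GraphDatabase.driver("bolt://localhost:7687", AuthTokens.basic("neo4j", "secret"));
			var session = driver.session()) {
 
			// Execute the generated Cypher as-is and print the records
			session.run(cypher.getQuery(), cypher.getParams())
				.list(r -> r.asMap())
				.forEach(System.out::println);
		}
	}
}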

The benefit: It’s all driven by the GraphQL schema. I get that there are many people out there who find a full-blown query language intimidating and prefer using GraphQL for this whole area. And I don’t mean this in any way ironic or pejorative.

Not much more is needed to put something like this into production: Of course, authentication is a pretty good idea, and so is restricting GraphQL query complexity (while nobody would write a super deeply nested query by hand, it’s easy enough to generate one).

Some ideas can be found here and here.

Using Spring Data Neo4j as a backend for GraphQL

Wait, what? Isn’t the “old” enterprise stuff made obsolete by GraphQL? Not if you ask me. On the contrary, I think in many situations these things can benefit from each other. Why else do you think VMware created this?

When I started playing around with Spring Data Neo4j as a backend for a GraphQL schema, VMware’s implementation wasn’t quite there yet, so I went with Netflix DGS. The result can be found here: michael-simons/neo4j-aura-sdn-graphql. It has “Aura” in the name as it additionally demonstrates Spring Data Neo4j’s compatibility with Aura, the hosted Neo4j offering.

Netflix DGS – as well as spring-graphql – follows a schema-first design, so the project contains a GraphQL schema too.

Schema-First vs Object-First

As the nice people at VMware wrote: “GraphQL provides a schema language that helps clients to create valid requests, enables the GraphiQL UI editor, promotes a common vocabulary across teams, and so on. It also brings up the age-old schema vs object-first development dilemma.”

I wouldn’t speak of a dilemma, but of a choice. Both are valid choices, and sometimes one fits better than the other.

Both Netflix DGS and spring-graphql will set you up with infrastructure based on graphql-java/graphql-java, and your task is to bring it to life.

My example looks like this:

import graphql.schema.DataFetchingEnvironment;
import graphql.schema.SelectedField;
 
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
 
import org.neo4j.tips.sdn.graphql.movies.MovieService;
 
import com.netflix.graphql.dgs.DgsComponent;
import com.netflix.graphql.dgs.DgsData;
import com.netflix.graphql.dgs.DgsQuery;
import com.netflix.graphql.dgs.InputArgument;
 
@DgsComponent
public class GraphQLApi {
 
	private final MovieService movieService;
 
	public GraphQLApi(final MovieService movieService) {
		this.movieService = movieService;
	}
 
	@DgsQuery
	public List<?> people(@InputArgument String nameFilter, DataFetchingEnvironment dfe) {
 
		return movieService.findPeople(nameFilter, withFieldsFrom(dfe));
	}
 
	@DgsData(parentType = "Person", field = "shortBio")
	public CompletableFuture<String> shortBio(DataFetchingEnvironment dfe) {
 
		return movieService.getShortBio(dfe.getSource());
	}
 
	private static List<String> withFieldsFrom(DataFetchingEnvironment dfe) {
		return dfe.getSelectionSet().getImmediateFields().stream().map(SelectedField::getName)
			.sorted()
			.collect(Collectors.toList());
	}
}

This is an excerpt from GraphQLApi.java. The MovieService called in the example is of course backed by a Spring Data Neo4j repository.
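For orientation, a schema matching the resolvers above could look roughly like this. I reconstructed it from the method names, so consider the exact field and argument names an assumption rather than the repository’s literal schema:

type Person {
	name: ID!
	born: Int
	shortBio: String
}
 
type Query {
	people(nameFilter: String): [Person]
}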

The whole application is running here: neo4j-aura-sdn-graphql.herokuapp.com.

I like the flexibility the federated approach brings: When someone queries your API and asks for a short biography of a person, the service goes to Wikipedia and fetches it, transparently returning it via the GraphQL response.

Of course, this requires quite a bit more knowledge of Java. However, if I wanted to do something similar in a Node.js / Apollo environment, I think it would be absolutely possible, and I would have to acquire knowledge there, too.

Using SmallRye GraphQL with Quarkus and custom Cypher queries

SmallRye GraphQL is an implementation of Eclipse MicroProfile GraphQL and GraphQL over HTTP. It’s the guided option when you want to do GraphQL with Quarkus.

My experiments on that topic are presented here: michael-simons/neo4j-aura-quarkus-graphql, with a running instance on Heroku too: neo4j-aura-quarkus-graphql.herokuapp.com. I liked that approach so much that I even did a front end for it.

Which approach exactly? Deriving the GraphQL schema from Java classes. BooksAndMovies.java shows how:

@GraphQLApi
@ApplicationScoped
public class BooksAndMovies {
 
	private final Context context;
 
	private final PeopleService peopleService;
 
	@Inject
	public BooksAndMovies(Context context, PeopleService peopleService) {
		this.context = context;
		this.peopleService = peopleService;
	}
 
	@Query("people")
	public CompletableFuture<List<Person>> getPeople(@Name("nameFilter") String nameFilter) {
 
		var env = context.unwrap(DataFetchingEnvironment.class);
		return peopleService.findPeople(nameFilter, null, env.getSelectionSet());
	}
}

and Person.java is a simple POJO, or a JDK 16+ record (I’m running the app as a native application, and there we’re still on JDK 11, so the linked class is not a record yet).

import java.util.List;
 
import org.eclipse.microprofile.graphql.Description;
import org.eclipse.microprofile.graphql.Id;
 
@Description("A person has some information about themselves and maybe played in a movie or is an author and wrote books.")
public class Person {
 
	@Id
	private String name;
 
	private Integer born;
 
	private List<Movie> actedIn;
 
	private List<Book> wrote;
}
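Just for illustration, on JDK 16+ the same type could be written as a record. Whether the MicroProfile GraphQL annotations behave identically on record components is an assumption worth verifying, hence only a sketch:

import java.util.List;
 
import org.eclipse.microprofile.graphql.Description;
import org.eclipse.microprofile.graphql.Id;
 
@Description("A person has some information about themselves and maybe played in a movie or is an author and wrote books.")
public record Person(@Id String name, Integer born, List<Movie> actedIn, List<Book> wrote) {
}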

In this example I don’t use any data mapping framework, but only the Cypher-DSL and the integration with the Java driver to query, map and return everything in an asynchronous fashion. The excerpt below assumes static imports of the Cypher-DSL’s factory methods (from its Cypher, Functions and Conditions classes):

public CompletableFuture<List<Person>> findPeople(String nameFilter, Movie movieFilter,
	DataFetchingFieldSelectionSet selectionSet) {
 
	var returnedExpressions = new ArrayList<Expression>();
	var person = node("Person").named("p");
 
	var match = match(person).with(person);
	if (movieFilter != null) {
		var movie = node("Movie").named("m");
		match = match(person.relationshipTo(movie, "ACTED_IN"))
			.where(movie.internalId().eq(anonParameter(movieFilter.getId())))
			.with(person);
	}
 
	if (selectionSet.contains("actedIn")) {
		var movie = node("Movie").named("m");
		var actedIn = name("actedIn");
 
		match = match
			.optionalMatch(person.relationshipTo(movie, "ACTED_IN"))
			.with(person, collect(movie).as(actedIn));
		returnedExpressions.add(actedIn);
	}
 
	if (selectionSet.contains("wrote")) {
		var book = node("Book").named("b");
		var wrote = name("wrote");
 
		var newVariables = new HashSet<>(returnedExpressions);
		newVariables.addAll(List.of(person.getRequiredSymbolicName(), collect(book).as("wrote")));
		match = match
			.optionalMatch(person.relationshipTo(book, "WROTE"))
			.with(newVariables.toArray(Expression[]::new));
		returnedExpressions.add(wrote);
	}
 
	Stream.concat(Stream.of("name"), selectionSet.getImmediateFields().stream().map(SelectedField::getName))
		.distinct()
		.filter(n -> !("actedIn".equals(n) || "wrote".equals(n)))
		.map(n -> person.property(n).as(n))
		.forEach(returnedExpressions::add);
 
	var statement = makeExecutable(
		match
			.where(Optional.ofNullable(nameFilter).map(String::trim).filter(Predicate.not(String::isBlank))
				.map(v -> person.property("name").contains(anonParameter(nameFilter)))
				.orElseGet(Conditions::noCondition))
			.returning(returnedExpressions.toArray(Expression[]::new))
			.build()
	);
	return executeReadStatement(statement, Person::of);
}

The Cypher-DSL really shines in building queries in an iterative way. The whole PeopleService is here.
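The helpers makeExecutable and executeReadStatement used above live in the PeopleService itself. As a rough idea of what the read side could look like, here is a minimal sketch assuming the Cypher-DSL’s default renderer, the driver’s asynchronous session API and the availability of Statement#getParameters(); the real implementation in the repository may differ:

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;
 
import org.neo4j.cypherdsl.core.Statement;
import org.neo4j.cypherdsl.core.renderer.Renderer;
import org.neo4j.driver.Driver;
import org.neo4j.driver.Record;
 
public class StatementExecutor {
 
	private final Driver driver;
 
	public StatementExecutor(Driver driver) {
		this.driver = driver;
	}
 
	<T> CompletableFuture<List<T>> executeReadStatement(Statement statement, Function<Record, T> mapper) {
 
		// Render the Cypher-DSL statement into a string and reuse its bound parameters
		var cypher = Renderer.getDefaultRenderer().render(statement);
		var session = driver.asyncSession();
		return session
			.readTransactionAsync(tx -> tx
				.runAsync(cypher, statement.getParameters())
				.thenCompose(cursor -> cursor.listAsync(mapper)))
			.toCompletableFuture()
			.whenComplete((records, error) -> session.closeAsync());
	}
}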

Restricting query complexity

In anything that generates fetchers via graphql-java, you should consider using an instance of graphql.analysis.MaxQueryComplexityInstrumentation.

In a Spring application, you would do it like this:

import graphql.analysis.MaxQueryComplexityInstrumentation;
import graphql.execution.instrumentation.SimpleInstrumentation;
 
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
 
@Configuration
public class MyConfig {
 
	@Bean
	public SimpleInstrumentation max() {
		return new MaxQueryComplexityInstrumentation(64);
	}
}

in Quarkus like this:

import graphql.GraphQL;
import graphql.analysis.MaxQueryComplexityInstrumentation;
 
import javax.enterprise.event.Observes;
 
// Also remember to configure
// quarkus.smallrye-graphql.events.enabled=true
public final class GraphQLConfig {
 
	public GraphQL.Builder configureMaxAllowedQueryComplexity(@Observes GraphQL.Builder builder) {
 
		return builder.instrumentation(new MaxQueryComplexityInstrumentation(64));
	}
}

This prevents the execution of arbitrarily deep queries.

Summary

There is a plethora of options for using Neo4j as a native graph database backing your GraphQL API. If you are happy running a Node-based middleware, you should definitely go with neo4j/graphql. I don’t have an example myself, but the repo has plenty.

A similar approach is feasible on the JVM with neo4j-graphql/neo4j-graphql-java. It is super flexible in regard to the actual runtime. Here is my example: Neo4jGraphqlJava.java.

Schema-first GraphQL with a more static approach on top of a Spring Data based repository – or, for that matter, any other JVM OGM – is absolutely possible. Find my example here: michael-simons/neo4j-aura-sdn-graphql. Old-school enterprise Spring-based data access frameworks and GraphQL are not mutually exclusive at all.

Last but not least, Quarkus and SmallRye GraphQL offer a beautiful object-first approach; find my example here: michael-simons/neo4j-aura-quarkus-graphql. While I wanted to use “my” baby, the Cypher-DSL, to handcraft my queries, I do think this approach would work just as well with Neo4j-OGM.

In the end, I am quite happy to have made up my mind a bit more about GraphQL and especially the accessibility it brings to the table. Many queries translate wonderfully to Neo4j’s native Cypher.

I hope you enjoyed reading this post as much as I enjoyed writing it and the examples that go along with it. Please make sure to visit the Medium pages of my colleagues shared earlier in this post.

Happy coding and a nice summer.

The feature image on this post has been provided by Gerrit… If you get the English/German pun in this, here’s one more in the same ballpark 😉

| Comments (2) »

13-Jul-21


Synchronizing Neo4j causal cluster bookmarks

Neo4j’s causal cluster is available in the Neo4j enterprise edition and of course in Neo4j Aura.

A cluster provides causal consistency via the concept of bookmarks:

On executing a transaction, the client can ask for a bookmark which it then presents as a parameter to subsequent transactions. Using that bookmark the cluster can ensure that only servers which have processed the client’s bookmarked transaction will run its next transaction. This provides a causal chain which ensures correct read-after-write semantics from the client’s point of view.

When you use a Session from any of the official drivers directly (for example from the Neo4j Java Driver) or at least version 6 of Spring Data Neo4j, you hardly ever see bookmarks yourself, and there’s mostly no need to.

While Spring Data Neo4j 5.x still required some manual setup, SDN 6 does not need this anymore.

So in theory, all applications running against Neo4j should be fine in terms of “reading their own writes”.

But what about multiple instances of an application running against a cluster, for example to scale out the application?

This does actually require some work. In the following example I refer to the Java driver, but the API of the other drivers and, of course, the behavior of the server are identical.

When creating a new session, the driver allows you to pass in a collection or iterable of bookmarks, not just a single object. Those bookmarks are then all taken to the cluster. As soon as the first transaction in this session starts, the routing will make sure that the requests go to a cluster member that has reached at least the latest transaction defined by the collection of bookmarks. There is no need to keep the bookmarks in order. That allows us to just collect new bookmarks and pass them on; we don’t have to do anything on the client side about sorting.
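With the Java driver (4.x) the mechanics look roughly like this. A minimal sketch; URI, credentials and queries are placeholders:

import java.util.List;
 
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Bookmark;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.SessionConfig;
 
public class BookmarksDemo {
 
	public static void main(String... args) {
 
		try (var driver = GraphDatabase.driver("neo4j://localhost:7687", AuthTokens.basic("neo4j", "secret"))) {
 
			Bookmark bookmark;
			try (var session = driver.session()) {
				session.run("CREATE (p:Person {name: 'A new node'})").consume();
				// The bookmark represents the last transaction this session has seen
				bookmark = session.lastBookmark();
			}
 
			// A session seeded with one or more bookmarks is guaranteed to be routed to a
			// cluster member that has caught up to at least those transactions
			var config = SessionConfig.builder().withBookmarks(List.of(bookmark)).build();
			try (var session = driver.session(config)) {
				var row = session.run("MATCH (p:Person {name: 'A new node'}) RETURN p.name").single();
				System.out.println(row.get(0).asString());
			}
		}
	}
}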

That work is already done internally by Spring Data’s implementations of bookmark managers.

But what can we do with that information to make sure multiple instances of the same application read the latest writes?

As soon as SDN becomes aware of a new bookmark, we must grab it and push it into an exchange. A Redis pub/sub topic, a JMS topic or even some Kafka configuration will do.

I have created two projects to demonstrate such a setup with both SDN 6 and the prior version, SDN 5 + OGM:

Common setup

Both projects require a locally running Redis instance. Please consult the Redis documentation for your OS or use a Docker container.

Conceptually, every messaging system that supports pub/sub should do.

Spring Data Neo4j 6

The example project is here: bookmark-sync-sdn6.

SDN 6 publishes bookmarks received from the driver as ApplicationEvents, so we can listen to them via an ApplicationListener. For the other way round, the Neo4jTransactionManager can be seeded with a Supplier<Set<Bookmark>>.

The heart of the setup looks as follows; please read the comments. The full configuration, including JavaDoc, is in the project itself:
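What’s shown below is a condensed sketch of the idea, assuming SDN 6.1’s Neo4jBookmarkManager and Neo4jBookmarksUpdatedEvent plus Spring Data Redis’ StringRedisTemplate; the topic name is mine, and the wiring of the Redis message listener container that calls onBookmarkReceived is omitted:

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
 
import org.neo4j.driver.Bookmark;
 
import org.springframework.context.ApplicationListener;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.neo4j.core.transaction.Neo4jBookmarkManager;
import org.springframework.data.neo4j.core.transaction.Neo4jBookmarksUpdatedEvent;
import org.springframework.data.redis.core.StringRedisTemplate;
 
@Configuration
public class BookmarkSyncConfig {
 
	// All bookmarks this instance knows about: its own and those of other instances
	private final Set<Bookmark> seenBookmarks = ConcurrentHashMap.newKeySet();
 
	// The bookmark manager seeds every new transaction with the synchronized bookmarks;
	// it is meant to be picked up by SDN 6's transaction manager (wiring omitted)
	@Bean
	public Neo4jBookmarkManager bookmarkManager() {
		return Neo4jBookmarkManager.create(() -> Set.copyOf(seenBookmarks));
	}
 
	// SDN 6 publishes new bookmarks as application events; forward their values to Redis
	@Bean
	public ApplicationListener<Neo4jBookmarksUpdatedEvent> bookmarkPublisher(StringRedisTemplate redisTemplate) {
		return event -> event.getBookmarks().forEach(bookmark ->
			bookmark.values().forEach(value -> redisTemplate.convertAndSend("neo4j-bookmarks", value)));
	}
 
	// To be called by a Redis message listener for every bookmark received on the topic
	public void onBookmarkReceived(String value) {
		seenBookmarks.add(Bookmark.from(Set.of(value)));
	}
}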

Spring Data Neo4j 5 + Neo4j-OGM

The example project is here: bookmark-sync-sdn5.

In SDN 5 + OGM we can use the BookmarkManager interface provided by SDN 5. We run a completely custom (and also better) implementation than the default one, which relies on a Caffeine cache (not necessary at all).

The principle is the same, though: When new bookmarks are received via the transaction system, they are published; new bookmarks received on the exchange become the new seed. Please note that in SDN 5, bookmark support must be enabled with @EnableBookmarkManagement.
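A corresponding sketch for SDN 5 could implement the BookmarkManager interface directly, keeping bookmarks in a thread-safe set and connecting both directions to the exchange. Treat the method signatures as my recollection of SDN 5’s org.springframework.data.neo4j.bookmark.BookmarkManager; the real, complete implementation is in the linked project:

import java.util.Collection;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
 
import org.springframework.data.neo4j.bookmark.BookmarkManager;
 
public class SynchronizingBookmarkManager implements BookmarkManager {
 
	private final Set<String> bookmarks = ConcurrentHashMap.newKeySet();
 
	@Override
	public Collection<String> getBookmarks() {
		return Set.copyOf(bookmarks);
	}
 
	@Override
	public void storeBookmark(String bookmark, Collection<String> previous) {
		// Replace superseded bookmarks with the new one and publish it
		// to the exchange (publishing omitted here)
		bookmarks.removeAll(previous);
		bookmarks.add(bookmark);
	}
 
	// To be called for every bookmark received from other instances via the exchange
	public void receiveBookmark(String bookmark) {
		bookmarks.add(bookmark);
	}
}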

Note: Don’t combine the SDN 6 and SDN 5 configurations! The configurations above belong to two separate example projects.

Photo by Chiara F on Unsplash

| Comments (1) »

12-Jul-21


What if Metallica went into Java programming?

Yesterday, Maciej shared this and I answered

after that, things escalated a bit… 🙂

While I thought that, had Metallica been Spring developers, they would have written about Disposable Beans, not Heroes, other people have been more creative:

From Gerald came songs about memory leaks or the lack thereof: Creeping Death (Memory Leak) or The Memory Remains (no Memory Leak).
Java is a managed language, so of course there’s garbage collection with Harvester of Garbage. We have (at least) 3 boolean values, which is Sad But True (Boxed Booleans).
Anyway, there seems to be a running thread… well, Until it Sleeps. If there are some weak references, they thought I Disappear.

The most important bit, however, is Coffee in the Jar.

Mario chimed in as well and pointed out the One singleton and the less friendly garbage collector named Seek and Destroy. I would have said that this one pulls The Shortest Straw for you, but that one also applies to a Comparable. In the end, Nothing Else Matters anyway, while jitting the hot path.

Christoph, being the awesome Spring developer he is, of course brought up The Master Of Beans as well as The Bean Within. Everything has a beginning and an end, and so have beans: Init & Destroy.

My favorite one, however, came from dear Eirini: The bean that should not be, which sums up the struggles of setting up some projects quite nicely 🙂

The Twitter thread made my day, some light-hearted fun for a change.

Have some fun while listening to the songs above:

Featured image on the post from Wikimedia Commons under a Creative Commons license.

| Comments (0) »

27-May-21


Releasing Maven based projects to Maven central

I published my first library on Maven central around 2013: a server-side embedding tool for webpages based on OEmbed. I remember how happy and proud I was to have published something in binary form “for all eternity”.

Maven central is the canonical, default artifact repository for the build tool of the same name, Apache Maven. Unless configured otherwise, Maven tries to resolve dependencies from there.

The company sponsoring and running Maven central is Sonatype. Back in 2013 (and also in 2018), account approval to release things to central was manual and involved resolving ownership of the reverse DNS name. All of that is centered around preventing the hijacking of coordinates (read along here), i.e. preventing people from tricking other people into using malicious software.

The whole process to be a producer starts here: The central repository: Producers. It’s explained in great detail.

Releasing to central also involves going through something called staging repositories and the process associated with them. It can be done through a UI (OSS Sonatype) or via plugins for Maven.

This week, another company, JFrog, announced that they are shutting down Bintray / JCenter. I was aware that Bintray and JCenter are around and often used within Gradle projects. Apart from that, I only used one of JFrog’s products, Artifactory, in a company as a local Maven central mirror and a local deploy and release target.

People seem to have used JCenter and Bintray because the release process seemed less strict and because they found the Maven central way of doing things too hard. Other voices have been raised saying that Maven central is often slow. I cannot confirm the latter, though.

I am writing down the following remarks to demonstrate that it is not that hard to publish your libraries on Maven central once you are past the initial setup.

First of all, read the above link “Producers” to get your coordinates registered.

I have not gone through the UI of Sonatype in quite some time. The libraries I put onto central myself are all released via the Maven release plugin.

To get this up and running, your pom.xml has to fulfill some requirements. Especially, the metadata has to be complete. This should be the first step.

Further requirements are: Javadoc and sources must be present, the artifacts must not come in a snapshot form, and they must not depend on anything that is not on Maven central. Also, the artifacts must be signed via GPG (see all of this here).

For my personal needs, I have configured a local release profile in my ~/.m2/settings.xml, containing my username on oss.sonatype.org and an encrypted password (Maven can encrypt server passwords for you via mvn --encrypt-password). This goes into the list of <servers> like this:

<server>
  <id>ossrh</id>
  <username>g.reizt</username>
  <password>XXXXX</password>
</server>

Also in settings.xml go the GPG credentials:

<profile>
    <id>gpg</id>
    <properties>
        <gpg.keyname>KEYNAME</gpg.keyname>
        <!-- <gpg.passphrase>XXXX</gpg.passphrase> -->
        <!-- Or better, via an agent -->
    </properties>
</profile>

This turns out to be one of the hardest parts to get right. I always have to look it up for CI or a new machine.

So, for the project’s pom: Make sure you follow the requirements for the metadata and configure the necessary plugins for Javadoc, sources and signatures.

The libraries I put on central basically all have this information:

<build>
	<plugins>
		<plugin>
			<groupId>org.sonatype.plugins</groupId>
			<artifactId>nexus-staging-maven-plugin</artifactId>
			<version>${nexus-staging-maven-plugin.version}</version>
			<extensions>true</extensions>
			<configuration>
				<serverId>ossrh</serverId>
				<nexusUrl>https://oss.sonatype.org/</nexusUrl>
				<autoReleaseAfterClose>true</autoReleaseAfterClose>
			</configuration>
		</plugin>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-source-plugin</artifactId>
			<version>${maven-source-plugin.version}</version>
			<executions>
				<execution>
					<id>attach-sources</id>
					<goals>
						<goal>jar-no-fork</goal>
					</goals>
				</execution>
			</executions>
		</plugin>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-javadoc-plugin</artifactId>
			<executions>
				<execution>
					<id>attach-javadocs</id>
					<goals>
						<goal>jar</goal>
					</goals>
				</execution>
			</executions>
			<configuration>
				<detectOfflineLinks>false</detectOfflineLinks>
				<detectJavaApiLink>false</detectJavaApiLink>
				<source>${java.version}</source>
				<tags>
					<tag>
						<name>soundtrack</name>
						<placement>X</placement>
					</tag>
				</tags>
			</configuration>
		</plugin>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-release-plugin</artifactId>
			<version>${maven-release-plugin.version}</version>
			<configuration>
				<autoVersionSubmodules>true</autoVersionSubmodules>
				<useReleaseProfile>false</useReleaseProfile>
				<releaseProfiles>release</releaseProfiles>
				<tagNameFormat>@{project.version}</tagNameFormat>
				<goals>deploy</goals>
			</configuration>
		</plugin>
	</plugins>
</build>
 
<profiles>
	<profile>
		<id>release</id>
		<build>
			<plugins>
				<plugin>
					<groupId>org.apache.maven.plugins</groupId>
					<artifactId>maven-gpg-plugin</artifactId>
					<version>${maven-gpg-plugin.version}</version>
					<executions>
						<execution>
							<id>sign-artifacts</id>
							<phase>verify</phase>
							<goals>
								<goal>sign</goal>
							</goals>
						</execution>
					</executions>
				</plugin>
			</plugins>
		</build>
	</profile>
</profiles>

org.sonatype.plugins:nexus-staging-maven-plugin does all the heavy lifting behind the scenes, as it hooks into the release phases. I keep the GPG plugin in a separate profile so that users of my libraries are not pestered with it when they just want to build some stuff locally.

After all this is done, you can release your things in a two-step process via `mvn release:prepare` followed by `mvn release:perform`. The tooling will guide you through setting the current version and will update your things to a new snapshot version.

I won’t go into the discussion of whether the repeated test runs are meaningful or not, or whether not using the release plugin at all makes sense. I currently maintain projects that run with CI-friendly versions and are released to central via other tooling, as well as a couple of things released as described above.

| Comments (1) »

05-Feb-21


Do some puzzles sometimes

Wait, I hear you say: “This guy writing there, didn’t he write about not having time and energy for side projects?”

Well, yes, sadly that’s the case, and I did cancel a couple of things for good (by the way, I am looking for someone kind to take over the leadership of Aachen’s Java User Group, the EuregJUG; maybe there will be a time for meetings again).

On the other hand, I try not to go completely nuts, and the last couple of weeks have not made this easy. It’s grey, cold (which I don’t even mind), but wet, wet, wet, muddy, muddy and then some: Running and especially cycling are a bit hard. Normally that would keep me on track.

Instead, I started puzzling on Advent of Code again. I have a dedicated repository for my solutions; find it at michael-simons/aoc. Why more coding?

What I really like about the puzzles is the fact that they have absolutely nothing to do with frameworks, annotations, microservices, DDD platforms, build systems or moving JSON between endpoints. Just plain, logical puzzles to tinker with. A bit like doing a crossword each day.

In my repository, I tackled every puzzle with what’s available in a language, no libraries, set up in a way that someone who wants to run a solution only needs to install one thing: the language. The repository contains Java (of course), Kotlin, Rust, Ruby, Go, SQL, Cypher, PHP, Scala, TypeScript and some other things. I guess I managed to be more idiomatic in some languages than in others, though. In most of the puzzles you’ll realize that you need various ideas, mathematical concepts and algorithms again and again. It’s helpful to compare how easy or hard it is to implement those in various languages.

Up until this week, I steered clear of languages like Clojure, or Lisp in general, or things like Haskell. I find them intimidating at first sight, and I don’t have a university background with knowledge of their theoretical concepts.

I started to read up on Clojure a bit, and while I do not yet understand everything, I was able to create the following script:

(def input (clojure.string/trim-newline (slurp "input.txt")))
 
(def freq (frequencies input))
(def starOne
    (- (get freq \() (get freq \))))
(println starOne)
 
(def starTwo
    (count
        (take-while (fn [p] (not= p -1))
            (reductions (fn [sum num] (+ sum num)) 0 (map {\( 1 \) -1} input)))))
(println starTwo)

It reads an input file into memory and uses the frequencies function to compute the frequencies of the different characters in it. It assigns the difference between the occurrences of `(` and `)` to a variable named starOne and prints it.

The second part maps all opening brackets to 1 and all closing brackets to -1, sums them up step by step (in several reductions) and counts the number of steps until the running sum hits -1.

Many important things are already in there: reading files, working with maps and lists, applying functions to every item in a list, calling functions and defining anonymous functions. I can work with that.
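For comparison, and since this is mostly a Java blog, here are the same two answers in plain, library-free Java. This is my own sketch for illustration, not taken from the aoc repository:

import java.nio.file.Files;
import java.nio.file.Path;
 
public class Day01 {
 
	public static void main(String... args) throws Exception {
 
		var input = Files.readString(Path.of("input.txt")).trim();
 
		// Star one: the difference between the number of `(` and `)`
		var starOne = input.chars().map(c -> c == '(' ? 1 : -1).sum();
		System.out.println(starOne);
 
		// Star two: the 1-based position of the character that first makes the sum hit -1
		var sum = 0;
		for (int i = 0; i < input.length(); i++) {
			sum += input.charAt(i) == '(' ? 1 : -1;
			if (sum == -1) {
				System.out.println(i + 1);
				break;
			}
		}
	}
}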

Fast forward a couple of days, having a look at I Was Told There Would Be No Math. Well, the math is actually super simple in that one. Read a file with lines like 2x3x4. They give you the dimensions of a parcel (length, width, height).

Compute, according to some rules, the paper and ribbon needed to wrap those parcels or presents. The paper area is given by: “find the surface area of the box, which is 2*l*w + 2*w*h + 2*h*l. The elves also need a little extra paper for each present: the area of the smallest side.”

Being the good object-oriented person and happy Java 15 user (with preview features) that I am, I started by creating a record to model that thing:

record Present(int l, int w, int h) {
    int surfaceArea() {
        return 2 * (l * w + w * h + h * l);
    }
 
    int slack() {
        return Math.min(l * w, Math.min(w * h, h * l));
    }
 
    int volume() {
        return l * w * h;
    }
 
}

I mean, basic math, right? The solution to the first question is just summing up surface area and slack, like var starOne = presents.stream().mapToInt(p -> p.surfaceArea() + p.slack()).sum();. Easy, right?

Always thinking in objects primes you to things, here to length, width and height. I should have realized what I was doing when I computed the smallest area (I computed 3 areas and chose the smallest one): It doesn’t matter which value is assigned to length, width and height. I can just sort them, take the two smallest and multiply them. I realized that when I computed the smallest perimeter of the parcel:

int smallestPerimeter() {
    return Stream.of(l, w, h).sorted().limit(2).mapToInt(v -> 2 * v).sum();
}

Enter the Clojure solution. It felt very clumsy to define a type just for that.

Here’s what I came up with instead:

(use '[clojure.string :only (trim-newline split-lines split)])
 
(def input 
    "I use again slurp to read the file and two library functions
     to trim the newlines and split the whole thing into a list. 
     An anonymous function is used on each line to split line by 
     the letter `x` and map the values to an int. Those ints are than
     sorted and the variable `input` will be a lazy list of int arrays."
    (map (fn [v] (sort (map bigint (split v #"x"))))
    (split-lines (trim-newline (slurp "input.txt")))))
 
(defn paper
    "As I know that the array is sorted, I can deconstruct it into the 
     3 values contained. The riddle for the paper is that the smallest area 
     is in there 3 and not 2 times. As the smallest area is defined by the first
     two elements, we multiply them 3 instead of 2 times, like the rest."
  [dimensions]
  (let [[l w h] dimensions]
    (+ (* 3 l w) (* 2 l h) (* 2 w h))))
 
(defn ribbon  
    "Same idea as above: The smallest perimeter is defined by the 2 smallest values.
     The volume is of course the product of all 3."
  [dimensions]
  (let [[l w h] dimensions]
    (+ (* 2 l) (* 2 w) (* l w h))))
 
(println (reduce + (map paper input)))
(println (reduce + (map ribbon input)))

Find my prose inside the program.

I do like the approach of not using too many data structures.

In the end: It is good to have a look outside the box sometimes and reset your brain with fresh ideas.

Thanks to Tim, Stefan and Jan for the multiple times in which you brought Clojure into my bubble.

Title picture by Bannon Morrissy on Unsplash

| Comments (1) »

03-Feb-21