Review: Sketchnotes in der IT

Lisa Maria Moritz and dpunkt.verlag were kind enough to send me a review copy of "Sketchnotes in der IT". This review was originally written in German, as Lisa Maria's book is also written in German.

Lisa Maria is a Senior Consultant at INNOQ; she not only runs the blog sketchnotes.tech, but also regularly accompanies Software Architektur im Stream with sketchnotes.

Sketchnotes have been created with growing success for some years now, especially for talks, and shared on social media. They really caught my attention back in 2017, when I, too, was working for INNOQ. My esteemed former colleague Joy used them and accompanied many talks and workshops with them. What are sketchnotes? Wikipedia says on the topic:

Sketchnotes are notes consisting of text, images, and structures. The term combines sketch ("Skizze") and note ("Notiz", from Latin notitia, "knowledge, message").

Creating sketchnotes is called "sketchnoting" or "visual note taking". Sketchnotes are often created as an alternative to conventional notes. Unlike texts, sketchnotes are rarely structured linearly.

From: de.wikipedia.org/wiki/Sketchnotes.

On roughly 170 pages of "Sketchnotes in der IT", Lisa Maria Moritz describes and visualizes her approach to using visual notes in talks, meetings, and everyday tasks. She gives a brief overview of the beginnings of sketchnoting as well as a very useful description of basic layouts. The tips on choosing tools, both analog and digital, are short and concise, but sufficient. Among others, the book is useful for people like me who want to understand the thinking behind sketchnoting, but especially for those who want to create notes in this form themselves. The extensive symbol library is particularly helpful for the latter.

The book is divided into a text-heavy "instructional" part and a symbol library. The latter takes up about half of the roughly 170 pages. The symbol library is helpful for building a "vocabulary" of your own for first attempts at sketchnoting.

After an introduction and a sketchnote about sketchnotes, the text part explains the basics, the tools, and various usage scenarios for sketchnotes. The section on tools is pleasantly short and avoids a "materials arms race". In other words: it does not start with buying a multitude of pens, paper, devices, and software. Of course, analog or digital writing and drawing materials should be available, but that seems to me to be in the nature of things.

Personally, I found the sketchnote about sketchnotes the most interesting. Why? Frankly, because I find most sketchnotes pretty to look at, but not accessible to me. Through the book I learned how people structure their sketchnotes, and how those sketchnotes in turn help them remember and deepen what they have heard, worked out, or read.

My head does not work that way: back in school in the 1990s and later at university, I took notes eagerly and extensively. My handwriting was and is terrible; I have trouble deciphering my own notes after just a few days. So I usually revisit my notes shortly after taking them, transcribe them into a (digital) clean copy, and thereby rework, learn, and internalize what I heard. As I understand the book, this process is supposed to happen during sketchnoting itself, with the finished sketchnote later serving only as an anchor for what has been internalized.

"Sketchnotes in der IT" contains many beautiful example sketchnotes, and while reading I tried to retrace what was being visualized; the symbol library supports that. I only succeed with effort, if at all. I do not mean this as a negative statement about the book, quite the opposite: it gave me a good incentive to finally engage seriously with the topic.

"Sketchnotes in der IT" is available in print for €22.90 from dpunkt.verlag and, of course, from other retailers.

Lisa Maria and Eberhard have published another book, "Sketchnote zu Software Architektur im Stream". That book is available for free as a digital edition on Leanpub, or in print on Amazon. But I'll let the two of them speak for themselves:

| Comments (0) »

08-Aug-21


Neo4j, Java and GraphQL

Recently, I realized I am an old person:

but I can even add more to it: until 2021, I managed to get around GraphQL almost everywhere, except for one tiny thing I built for our family site.

I mean, I got the idea of GraphQL, but it never clicked with me. I love declarative query languages, such as SQL and Cypher, and GraphQL never felt like one to me. I perceive it more as a schema declaration that happens to be usable as a query language too, which is at least aligned with GraphQL's own description at the time of writing: "GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API."

Anyway, just because it didn't click with me doesn't mean it's not fulfilling a need other people do have.

I happen to work at a company that creates a world-famous graph database named "Neo4j". Now, "Graph" is about 71% of "GraphQL", so it is tempting to suspect that GraphQL is a query language for a graph database. It isn't, at least not natively for Neo4j. A while back I had to explain this to several people: when our friends at VMware approached us to discuss their new module spring-graphql, I saw more than one surprised face.

Now, what does Neo4j have? First and foremost, it has Cypher, which is great. But it also has neo4j/graphql.

Neo4j GraphQL

Neo4j’s officially supported product is neo4j/graphql.



The Neo4j GraphQL Library, released in April this year, builds on top of Cypher. My colleagues Darrell and Daniel have blogged a lot about it (here and here) and of course there are great talks.

Technically, the Neo4j GraphQL library is a JavaScript library, usable in a Node environment together with something like the Apollo GraphQL server. Under the hood, the library and its associated object mapper translate the GraphQL schema into Cypher queries.

Architecturally, you set up a native graph database (Neo4j) together with a middleware that you can access either directly or via other applications. This is as far as I can dive into Neo4j GraphQL; I wholeheartedly recommend following Oskar Hane, Dan Starns and team for great content about it.

Neo4j GraphQL satisfies the providers of APIs in the JavaScript space. For client applications (actual clients as well as other server-side applications), the runtime behind that API doesn't matter. But what about old people like me, doing Java in the backend?

Neo4j, GraphQL and Java

Actually, there are a ton of options. Let me walk you through them:

neo4j-graphql-java

This is the one that comes closest to what neo4j/graphql does: it takes in a schema definition and builds the translation to Cypher for you. Your job is to provide the runtime around it. You'll find it here: neo4j-graphql/neo4j-graphql-java.

This library parses a GraphQL schema and uses the information from the annotated schema to translate GraphQL queries and parameters into Cypher queries and parameters.

Those Cypher queries can then be executed via the Neo4j Java driver against the graph database, and the results can be returned directly to the caller.

The library makes no assumptions about the runtime or the JVM language here. You are free to run it in Kotlin with Ktor or in Java with… actually, whatever server is able to return JSON-ish structures.

As of now (July 2021), the schema augmented by neo4j-graphql-java differs from the one augmented by neo4j/graphql, but according to the readme, work is underway to support the same thing.

What I like a lot about the library: it uses the Cypher-DSL for building the underlying Cypher queries. Why does that matter to me? We – Gerrit and I – always hoped that it would prove useful to someone outside our own object mapping.

How to use it? I created a small gist Neo4jGraphqlJava that uses Javalin as a server and JBang to run. Assuming you have JBang installed, just run:

jbang https://gist.github.com/michael-simons/f8a8a122d1066f61b2ee8cd82b6401b8 -uneo4j -psecret -abolt://localhost:7687

and point it to your Neo4j instance of choice. It comes with a predefined GraphQL schema:

type Person {
	name: ID!
	born: Int
	actedIn: [Movie] @relation(name: "ACTED_IN")
}
type Movie {
	title: ID!
	released: Int
	tagline: String
}
type Query {
	person: [Person]
}

but in the example gist, you can use --schema to point it to another file. The whole translation happens in a couple of lines:

import org.neo4j.graphql.SchemaBuilder;
import org.neo4j.graphql.Translator;
 
var graphql = new Translator(SchemaBuilder.buildSchema(getSchema()));
var yourGraphQLQuery = "{ person { name born } }"; // any query matching the schema above
graphql.translate(yourGraphQLQuery).stream().findFirst().ifPresent(cypher -> {
	// Do something with the generated Cypher
});

Again, find the whole, executable program here.

The benefit: it's all driven by the GraphQL schema. I get that many people out there find a full query language intimidating and prefer using GraphQL for this whole area, and I don't mean this in any way ironic or pejorative.

To put something like this into production, not much more is needed: of course, authentication is a pretty good idea, and maybe restricting GraphQL query complexity (while nobody would write a super deeply nested query by hand, it's easy enough to generate one).

Some ideas can be found here and here.

Using Spring Data Neo4j as a backend for GraphQL

Wait, what? Isn't the "old" enterprise stuff made obsolete by GraphQL? Not if you ask me. On the contrary, I think in many situations these things can benefit from each other. Why else do you think VMware created this?

When I started playing around with Spring Data Neo4j as a backend for a GraphQL schema, VMware's implementation wasn't quite there yet, so I went with Netflix DGS; the result can be found here: michael-simons/neo4j-aura-sdn-graphql. It has "Aura" in the name as it additionally demonstrates Spring Data Neo4j's compatibility with Aura, the hosted Neo4j offering.

Netflix DGS – like spring-graphql – is schema-first, so the project contains a GraphQL schema too.

Schema-First vs Object-First

As the nice people at VMWare wrote: “GraphQL provides a schema language that helps clients to create valid requests, enables the GraphiQL UI editor, promotes a common vocabulary across teams, and so on. It also brings up the age old schema vs object-first development dilemma.”

I wouldn’t speak of a dilemma, but of a choice. Both are valid choices, and sometimes one fits better than the other.

Both Netflix DGS and spring-graphql will set you up with infrastructure based on graphql-java/graphql-java, and your task is to bring it to life.

My example looks like this:

import graphql.schema.DataFetchingEnvironment;
import graphql.schema.SelectedField;
 
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
 
import org.neo4j.tips.sdn.graphql.movies.MovieService;
 
import com.netflix.graphql.dgs.DgsComponent;
import com.netflix.graphql.dgs.DgsData;
import com.netflix.graphql.dgs.DgsQuery;
import com.netflix.graphql.dgs.InputArgument;
 
@DgsComponent
public class GraphQLApi {
 
	private final MovieService movieService;
 
	public GraphQLApi(final MovieService movieService) {
		this.movieService = movieService;
	}
 
	@DgsQuery
	public List<?> people(@InputArgument String nameFilter, DataFetchingEnvironment dfe) {
 
		return movieService.findPeople(nameFilter, withFieldsFrom(dfe));
	}
 
	@DgsData(parentType = "Person", field = "shortBio")
	public CompletableFuture<String> shortBio(DataFetchingEnvironment dfe) {
 
		return movieService.getShortBio(dfe.getSource());
	}
 
	private static List<String> withFieldsFrom(DataFetchingEnvironment dfe) {
		return dfe.getSelectionSet().getImmediateFields().stream().map(SelectedField::getName)
			.sorted()
			.collect(Collectors.toList());
	}
}

This is an excerpt from GraphQLApi.java. The MovieService being called in the example is of course backed by a Spring Data Neo4j repository.

The whole application is running here: neo4j-aura-sdn-graphql.herokuapp.com.

I like the flexibility the federated approach brings: When someone queries your API and asks for a short biography of a person, the service goes to Wikipedia and fetches it, transparently returning it via the GraphQL response.
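
The actual lookup lives in the MovieService of the example project; as a rough illustration of what such a federated resolver can do, here is a minimal sketch using only the JDK's HTTP client. The Wikipedia summary endpoint is real, but the class, the naive name handling, and the missing JSON parsing are my assumptions, not the project's code:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
 
class WikipediaBioClient {
 
	private final HttpClient httpClient = HttpClient.newHttpClient();
 
	// Resolves a person's name to the summary of their Wikipedia page, asynchronously.
	CompletableFuture<String> getShortBio(String name) {
		var request = HttpRequest.newBuilder(
				URI.create("https://en.wikipedia.org/api/rest_v1/page/summary/" + name.replace(' ', '_')))
			.header("Accept", "application/json")
			.build();
		return httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofString())
			// A real implementation would parse the JSON body and return its "extract" field.
			.thenApply(HttpResponse::body);
	}
}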

Of course, this requires a ton more knowledge of Java. However, if I wanted to do something similar in a NodeJS / Apollo environment, I think that it is absolutely possible and that I would have to acquire some knowledge there, too.

Using SmallRye GraphQL with Quarkus and custom Cypher queries

SmallRye GraphQL is an implementation of Eclipse MicroProfile GraphQL and GraphQL over HTTP. It’s the guided option when you want to do GraphQL with Quarkus.

My experiments on that topic are presented here: michael-simons/neo4j-aura-quarkus-graphql with a running instance at Heroku too: neo4j-aura-quarkus-graphql.herokuapp.com. I liked that approach so much I even did a front end for it.

Which approach exactly? Deriving the GraphQL schema from Java classes. BooksAndMovies.java shows how:

import java.util.List;
import java.util.concurrent.CompletableFuture;
 
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
 
import org.eclipse.microprofile.graphql.GraphQLApi;
import org.eclipse.microprofile.graphql.Name;
import org.eclipse.microprofile.graphql.Query;
 
import graphql.schema.DataFetchingEnvironment;
import io.smallrye.graphql.api.Context;
 
@GraphQLApi
@ApplicationScoped
public class BooksAndMovies {
 
	private final Context context;
 
	private final PeopleService peopleService;
 
	@Inject
	public BooksAndMovies(Context context, PeopleService peopleService) {
		this.context = context;
		this.peopleService = peopleService;
	}
 
	@Query("people")
	public CompletableFuture<List<Person>> getPeople(@Name("nameFilter") String nameFilter) {
 
		var env = context.unwrap(DataFetchingEnvironment.class);
		return peopleService.findPeople(nameFilter, null, env.getSelectionSet());
	}
}

and Person.java is a simple POJO or a JDK 16+ record (I'm running the app as a native application, and there we're still on JDK 11, so the linked class is not a record yet).

import java.util.List;
 
import org.eclipse.microprofile.graphql.Description;
import org.eclipse.microprofile.graphql.Id;
 
@Description("A person has some information about themselves and maybe played in a movie or is an author and wrote books.")
public class Person {
 
	@Id
	private String name;
 
	private Integer born;
 
	private List<Movie> actedIn;
 
	private List<Book> wrote;
}

In this example I don't use any data mapping framework, only the Cypher-DSL and the integration with the Java driver to query, map, and return everything (in an asynchronous fashion):

// match, node, name, collect and anonParameter are statically imported
// from org.neo4j.cypherdsl.core.Cypher.
public CompletableFuture<List<Person>> findPeople(String nameFilter, Movie movieFilter,
	DataFetchingFieldSelectionSet selectionSet) {
 
	var returnedExpressions = new ArrayList<Expression>();
	var person = node("Person").named("p");
 
	var match = match(person).with(person);
	if (movieFilter != null) {
		var movie = node("Movie").named("m");
		match = match(person.relationshipTo(movie, "ACTED_IN"))
			.where(movie.internalId().eq(anonParameter(movieFilter.getId())))
			.with(person);
	}
 
	if (selectionSet.contains("actedIn")) {
		var movie = node("Movie").named("m");
		var actedIn = name("actedIn");
 
		match = match
			.optionalMatch(person.relationshipTo(movie, "ACTED_IN"))
			.with(person, collect(movie).as(actedIn));
		returnedExpressions.add(actedIn);
	}
 
	if (selectionSet.contains("wrote")) {
		var book = node("Book").named("b");
		var wrote = name("wrote");
 
		var newVariables = new HashSet<>(returnedExpressions);
		newVariables.addAll(List.of(person.getRequiredSymbolicName(), collect(book).as("wrote")));
		match = match
			.optionalMatch(person.relationshipTo(book, "WROTE"))
			.with(newVariables.toArray(Expression[]::new));
		returnedExpressions.add(wrote);
	}
 
	Stream.concat(Stream.of("name"), selectionSet.getImmediateFields().stream().map(SelectedField::getName))
		.distinct()
		.filter(n -> !("actedIn".equals(n) || "wrote".equals(n)))
		.map(n -> person.property(n).as(n))
		.forEach(returnedExpressions::add);
 
	var statement = makeExecutable(
		match
			.where(Optional.ofNullable(nameFilter).map(String::trim).filter(Predicate.not(String::isBlank))
				.map(v -> person.property("name").contains(anonParameter(nameFilter)))
				.orElseGet(Conditions::noCondition))
			.returning(returnedExpressions.toArray(Expression[]::new))
			.build()
	);
	return executeReadStatement(statement, Person::of);
}

The Cypher-DSL really shines in building queries in an iterative way. The whole PeopleService is here.
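
If you haven't seen the Cypher-DSL before, here is a tiny, self-contained sketch of the pattern used above; the rendered string in the comment is what I'd expect from the default renderer:

import static org.neo4j.cypherdsl.core.Cypher.anonParameter;
import static org.neo4j.cypherdsl.core.Cypher.match;
import static org.neo4j.cypherdsl.core.Cypher.node;
 
import org.neo4j.cypherdsl.core.renderer.Renderer;
 
public class CypherDslExample {
 
	public static void main(String... args) {
 
		var person = node("Person").named("p");
		// Each builder step returns a new immutable intermediate state, which is
		// what makes the conditional, iterative assembly shown above possible.
		var statement = match(person)
			.where(person.property("name").contains(anonParameter("Tom")))
			.returning(person)
			.build();
 
		System.out.println(Renderer.getDefaultRenderer().render(statement));
		// Something like: MATCH (p:`Person`) WHERE p.name CONTAINS $pcdsl01 RETURN p
	}
}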

Restricting query complexity

In anything that generates fetchers via GraphQL-Java you should consider using an instance of graphql.analysis.MaxQueryComplexityInstrumentation.

In a Spring application, you would do this like so:

import graphql.analysis.MaxQueryComplexityInstrumentation;
import graphql.execution.instrumentation.SimpleInstrumentation;
 
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
 
@Configuration
public class MyConfig {
 
	@Bean
	public SimpleInstrumentation max() {
		return new MaxQueryComplexityInstrumentation(64);
	}
}

in Quarkus like this:

import graphql.GraphQL;
import graphql.analysis.MaxQueryComplexityInstrumentation;
 
import javax.enterprise.event.Observes;
 
// Also remember to configure
// quarkus.smallrye-graphql.events.enabled=true
public final class GraphQLConfig {
 
	public GraphQL.Builder configureMaxAllowedQueryComplexity(@Observes GraphQL.Builder builder) {
 
		return builder.instrumentation(new MaxQueryComplexityInstrumentation(64));
	}
}

This prevents the execution of arbitrarily deep queries.
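
To illustrate what the instrumentation guards against: assuming, hypothetically, that the Movie type above also had an actors field pointing back to Person (the example schema does not), a few lines of code produce a query of any depth you like:

public class DeepQueryGenerator {
 
	public static void main(String... args) {
 
		// actedIn/actors are assumed, mutually recursive fields; depth 50 is trivial for a script.
		var query = "name";
		for (int i = 0; i < 50; ++i) {
			query = "actedIn { actors { " + query + " } }";
		}
		System.out.println("{ person { " + query + " } }");
	}
}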

Summary

There is a plethora of options for using Neo4j as a native graph database backing your GraphQL API. If you are happy running a Node-based middleware, you should definitely go with neo4j/graphql. I don't have an example myself, but the repo has a lot.

A similar approach is feasible on the JVM with neo4j-graphql/neo4j-graphql-java. It is super flexible with regard to the actual runtime. Here is my example: Neo4jGraphqlJava.java.

Schema-first GraphQL with a more static approach, backed by a Spring Data repository or, for that matter, any other JVM OGM, is absolutely possible. Find my example here: michael-simons/neo4j-aura-sdn-graphql. Old-school, enterprise-y Spring-based data access frameworks are not mutually exclusive with GraphQL at all.

Last but not least, Quarkus and SmallRye GraphQL offer a beautiful object-first approach; find my example here: michael-simons/neo4j-aura-quarkus-graphql. While I wanted to use "my" baby, the Cypher-DSL, to handcraft my queries, I do think that this approach will work just as well with Neo4j-OGM.

In the end, I am quite happy to have made up my mind a bit more about GraphQL and especially the accessibility it brings to the table. Many queries translate wonderfully to Neo4j's native Cypher.

I hope you enjoyed reading this post as much as I enjoyed writing it and the examples along with it. Please make sure you visit the Medium pages of my colleagues shared earlier in this post.

Happy coding and have a nice summer.

The feature image on this post has been provided by Gerrit… If you get the English/German pun in it, here's one more in the same ballpark 😉

| Comments (2) »

13-Jul-21


Synchronizing Neo4j causal cluster bookmarks

Neo4j clustering is available in the Neo4j Enterprise Edition and, of course, in Neo4j Aura.

A cluster provides causal consistency via the concept of bookmarks:

On executing a transaction, the client can ask for a bookmark which it then presents as a parameter to subsequent transactions. Using that bookmark the cluster can ensure that only servers which have processed the client’s bookmarked transaction will run its next transaction. This provides a causal chain which ensures correct read-after-write semantics from the client’s point of view.

When you use a Session from any of the official drivers directly (for example from the Neo4j Java Driver), or at least version 6 of Spring Data Neo4j, you hardly ever see bookmarks yourself, and there's mostly no need to.

While Spring Data Neo4j 5.x still required some manual setup, SDN 6 does not need this anymore.

So in theory, all applications running against Neo4j should be fine in terms of “reading their own writes”.

But what about multiple instances of an application running against a cluster, for example to scale the application itself?

This does actually require some work. In the following example I refer to the Java driver, but the API of the other drivers and, of course, the behavior of the server are identical.

When creating a new session, the driver allows you to pass in a collection or iterable of bookmarks, not just a single object. Those bookmarks are then all taken to the cluster. As soon as the first transaction in this session starts, the routing will make sure that the requests go to a cluster member that has reached at least the latest transaction defined by the collection of bookmarks. There is no need to keep the bookmarks in order; that allows us to just collect new bookmarks and pass them on. We don't have to do anything about sorting on the client side.
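
As a minimal, self-contained sketch with the Java driver 4.x (URL and credentials are placeholders):

import java.util.List;
 
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Bookmark;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.SessionConfig;
 
public class BookmarkExample {
 
	public static void main(String... args) {
 
		try (var driver = GraphDatabase.driver("neo4j://localhost:7687", AuthTokens.basic("neo4j", "secret"))) {
			Bookmark bookmark;
			try (var session = driver.session()) {
				session.writeTransaction(tx -> tx.run("CREATE (:Person {name: 'A'})").consume());
				bookmark = session.lastBookmark(); // the token representing the causal chain so far
			}
			// Seed a new session with any number of bookmarks, collected from anywhere, in any order.
			var config = SessionConfig.builder().withBookmarks(List.of(bookmark)).build();
			try (var session = driver.session(config)) {
				var count = session.readTransaction(
					tx -> tx.run("MATCH (p:Person) RETURN count(p)").single().get(0).asLong());
				System.out.println(count); // guaranteed to see the write above
			}
		}
	}
}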

That collecting and passing on of bookmarks is already done internally by Spring Data's implementations of bookmark managers.

But what can we do with that information to make sure multiple instances of the same application read the latest writes?

As soon as SDN becomes aware of a new bookmark, we must grab it and push it onto an exchange. A Redis pub/sub topic, a JMS queue configured for pub/sub, or even some Kafka configuration will do.

I have created two projects to demonstrate such a setup, with both SDN 6 and the prior version, SDN 5 + OGM:

Common setup

Both projects require a locally running Redis instance. Please consult the Redis documentation for your OS or use a Docker container.

Conceptually, every messaging system that supports pub/sub should do.

Spring Data Neo4j 6

The example project is here: bookmark-sync-sdn6.

SDN 6 publishes bookmarks received from the driver as ApplicationEvents, so we can listen for them via an ApplicationListener. For the other way round, the Neo4jTransactionManager can be seeded with a Supplier<Set<Bookmark>>.

The whole setup fits into a single configuration class; the version in the example project comes with extensive JavaDoc comments.
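
What follows is a minimal sketch of the idea. The Redis channel name and the naive one-value-per-message serialization are my own assumptions, and I am going from memory on the SDN 6 API (Neo4jBookmarksUpdatedEvent and Neo4jBookmarkManager.create(Supplier)); the example project is the authoritative version:

import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
 
import org.neo4j.driver.Bookmark;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.event.EventListener;
import org.springframework.data.neo4j.core.transaction.Neo4jBookmarkManager;
import org.springframework.data.neo4j.core.transaction.Neo4jBookmarksUpdatedEvent;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.listener.ChannelTopic;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;
 
@Configuration
public class BookmarkSyncConfig {
 
	private static final String TOPIC = "neo4j-bookmarks"; // made-up channel name
 
	// Bookmarks learned from the other application instances.
	private final Set<Bookmark> receivedBookmarks = ConcurrentHashMap.newKeySet();
 
	private final StringRedisTemplate redisTemplate;
 
	public BookmarkSyncConfig(StringRedisTemplate redisTemplate) {
		this.redisTemplate = redisTemplate;
	}
 
	// Seeds SDN's transaction system with everything received on the exchange.
	@Bean
	public Neo4jBookmarkManager bookmarkManager() {
		return Neo4jBookmarkManager.create(() -> new HashSet<>(receivedBookmarks));
	}
 
	// SDN publishes new bookmarks as application events; forward them to Redis.
	@EventListener
	public void onBookmarksUpdated(Neo4jBookmarksUpdatedEvent event) {
		event.getBookmarks().forEach(bookmark ->
			bookmark.values().forEach(value -> redisTemplate.convertAndSend(TOPIC, value)));
	}
 
	// Listens on the Redis channel and turns incoming values back into bookmarks.
	@Bean
	public RedisMessageListenerContainer bookmarkListenerContainer(RedisConnectionFactory connectionFactory) {
		var container = new RedisMessageListenerContainer();
		container.setConnectionFactory(connectionFactory);
		container.addMessageListener((message, pattern) ->
			receivedBookmarks.add(Bookmark.from(Set.of(new String(message.getBody())))), new ChannelTopic(TOPIC));
		return container;
	}
}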

Spring Data Neo4j 5 + Neo4j-OGM

The example project is here: bookmark-sync-sdn5.

In SDN 5 + OGM we can use the BookmarkManager interface provided by SDN 5. We run a completely custom (and also better) implementation than the default one, which relies on a Caffeine cache (not necessary at all).

The principle is the same, though: when new bookmarks are received via the transaction system, they will be published; new bookmarks received on the exchange become the new seed. Please note that in SDN 5, bookmark support must be enabled with @EnableBookmarkManagement.
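
Assuming SDN 5's BookmarkManager contract from memory (getBookmarks() plus storeBookmark(String, Collection<String>); please verify against the linked project), a Redis-backed implementation could look roughly like this. The Redis listener feeding receiveBookmark and the @EnableBookmarkManagement configuration are omitted:

import java.util.Collection;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
 
import org.springframework.data.neo4j.bookmark.BookmarkManager;
import org.springframework.data.redis.core.StringRedisTemplate;
 
public class RedisBookmarkManager implements BookmarkManager {
 
	private static final String TOPIC = "neo4j-bookmarks"; // made-up channel name
 
	private final Set<String> bookmarks = ConcurrentHashMap.newKeySet();
 
	private final StringRedisTemplate redisTemplate;
 
	public RedisBookmarkManager(StringRedisTemplate redisTemplate) {
		this.redisTemplate = redisTemplate;
	}
 
	@Override
	public Collection<String> getBookmarks() {
		return Set.copyOf(bookmarks);
	}
 
	// Called by the transaction system after a transaction; publishes the new bookmark.
	@Override
	public void storeBookmark(String bookmark, Collection<String> previous) {
		if (previous != null) {
			bookmarks.removeAll(previous);
		}
		bookmarks.add(bookmark);
		redisTemplate.convertAndSend(TOPIC, bookmark);
	}
 
	// To be called by a Redis MessageListener when another instance publishes a bookmark.
	public void receiveBookmark(String bookmark) {
		bookmarks.add(bookmark);
	}
}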

Note: Don’t combine SDN6 and SDN5 config! Those configurations above are for two separate example projects.

Photo by Chiara F on Unsplash

| Comments (1) »

12-Jul-21


What if Metallica went into Java programming?

Yesterday, Maciej shared this and I answered

after that, things escalated a bit… 🙂

While I thought that, had Metallica been Spring developers, they would have written about Disposable Beans, not Heroes, other people have been more creative:

From Gerald came songs about memory leaks or the lack thereof: Creeping Death (Memory Leak) or The Memory Remains (no Memory Leak).
Java is a managed language, so of course there's garbage collection with Harvester of Garbage. We have at least 3 boolean values, which is Sad But True (Boxed Booleans).
Anyway, there seems to be a running thread… well, Until It Sleeps. If there are some weak references, they thought I Disappear.

The most important bit, however, is Coffee in the Jar.

Mario chimed in as well and pointed to the One singleton and the less friendly garbage collector named Seek and Destroy. I would have said that this one pulls The Shortest Straw for you, but that one also applies to a Comparable. In the end, Nothing Else Matters anyway, while jitting the hot path.

Christoph, being the awesome Spring developer he is, of course brought up The Master of Beans as well as The Bean Within. Everything has a beginning and an end, and so do beans: Init & Destroy.

My favorite one, however, came from dear Eirini: The bean that should not be, which sums up the struggles of setting up some projects quite nicely 🙂

The Twitter thread made my day, some light-hearted fun for a change.

Have some fun while listening to the songs above:

Featured image on the post from Wikimedia Commons under a Creative Commons license.

| Comments (0) »

27-May-21


Releasing Maven based projects to Maven central

I published my first library on Maven central around 2013: a server-side embedding tool for webpages based on OEmbed. I remember how happy and proud I was to have published something in binary form "for all eternity".

Maven central is the canonical, default artifact repository for the build tool of the same name, Apache Maven. Unless configured otherwise, Maven tries to resolve dependencies from there.

The company sponsoring and running Maven central is Sonatype. Back in 2013 (and also in 2018), account approval for releasing things to central was manual and involved proving ownership of the reverse DNS name. All of that is centered around preventing the hijacking of coordinates (read along here), to keep people from tricking others into using malicious software.

The whole process of becoming a producer starts here: The Central Repository: Producers. It's explained in great detail.

Releasing to central also involves going through something called staging repositories and the process associated with them. It can be done through a UI (OSS Sonatype) or via plugins for Maven.

This week, another company, JFrog, announced that they are shutting down Bintray / JCenter. I was aware that Bintray and JCenter are around and often used within Gradle projects. Apart from that, I have only used one of JFrog's products, Artifactory, at a company as a local Maven central mirror and a local deploy and release target.

People seem to have used JCenter and Bintray because the release process seemed less strict and they found the Maven central way of doing things too hard. Other voices claim that Maven central is often slow. I cannot confirm the latter, though.

I am writing down the following remarks to demonstrate that it is not that hard to publish your libraries on Maven central once you get past the initial setup.

First of all, read the "Producers" link above to get your coordinates registered.

I have not gone through Sonatype's UI in quite some time. The libraries I have put on central myself are all released via the Maven release plugin.

To get this up and running, your pom.xml has to meet some requirements. In particular, the metadata has to be complete. This should be the first step.
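
For reference, this is the kind of metadata the requirements ask for; names and URLs are example values only:

<name>my-library</name>
<description>A library that does things</description>
<url>https://github.com/example/my-library</url>
 
<licenses>
	<license>
		<name>The Apache License, Version 2.0</name>
		<url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>
	</license>
</licenses>
 
<developers>
	<developer>
		<name>Jane Doe</name>
		<email>jane@example.com</email>
	</developer>
</developers>
 
<scm>
	<connection>scm:git:git://github.com/example/my-library.git</connection>
	<developerConnection>scm:git:ssh://github.com/example/my-library.git</developerConnection>
	<url>https://github.com/example/my-library</url>
</scm>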

Further requirements are: Javadoc and sources must be present, the artifacts must not come in snapshot form, and they must not depend on anything that is not on Maven central. Also, the artifacts must be signed via GPG (see all of this here).

For my personal needs, I have configured a local release profile in my ~/.m2/settings.xml containing my username on oss.sonatype.org and an encrypted password. This goes into the list of <servers> like this:

<server>
  <id>ossrh</id>
  <username>g.reizt</username>
  <password>XXXXX</password>
</server>

Also in settings.xml: the GPG credentials.

<profile>
	<id>gpg</id>
	<properties>
		<gpg.keyname>KEYNAME</gpg.keyname>
		<!-- <gpg.passphrase>XXXX</gpg.passphrase> -->
		<!-- Or better via an agent -->
	</properties>
</profile>

This turns out to be one of the hardest parts to get right. I always have to look it up for CI or a new machine.

So, for the project's pom: make sure you follow the requirements for the metadata and configure the necessary plugins for Javadoc, sources, and signing.

The libraries I put on central basically all have this information:

<build>
	<plugins>
		<plugin>
			<groupId>org.sonatype.plugins</groupId>
			<artifactId>nexus-staging-maven-plugin</artifactId>
			<version>${nexus-staging-maven-plugin.version}</version>
			<extensions>true</extensions>
			<configuration>
				<serverId>ossrh</serverId>
				<nexusUrl>https://oss.sonatype.org/</nexusUrl>
				<autoReleaseAfterClose>true</autoReleaseAfterClose>
			</configuration>
		</plugin>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-source-plugin</artifactId>
			<version>${maven-source-plugin.version}</version>
			<executions>
				<execution>
					<id>attach-sources</id>
					<goals>
						<goal>jar-no-fork</goal>
					</goals>
				</execution>
			</executions>
		</plugin>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-javadoc-plugin</artifactId>
			<executions>
				<execution>
					<id>attach-javadocs</id>
					<goals>
						<goal>jar</goal>
					</goals>
				</execution>
			</executions>
			<configuration>
				<detectOfflineLinks>false</detectOfflineLinks>
				<detectJavaApiLink>false</detectJavaApiLink>
				<source>${java.version}</source>
				<tags>
					<tag>
						<name>soundtrack</name>
						<placement>X</placement>
					</tag>
				</tags>
			</configuration>
		</plugin>
		<plugin>
			<groupId>org.apache.maven.plugins</groupId>
			<artifactId>maven-release-plugin</artifactId>
			<version>${maven-release-plugin.version}</version>
			<configuration>
				<autoVersionSubmodules>true</autoVersionSubmodules>
				<useReleaseProfile>false</useReleaseProfile>
				<releaseProfiles>release</releaseProfiles>
				<tagNameFormat>@{project.version}</tagNameFormat>
				<goals>deploy</goals>
			</configuration>
		</plugin>
	</plugins>
</build>
 
<profiles>
	<profile>
		<id>release</id>
		<build>
			<plugins>
				<plugin>
					<groupId>org.apache.maven.plugins</groupId>
					<artifactId>maven-gpg-plugin</artifactId>
					<version>${maven-gpg-plugin.version}</version>
					<executions>
						<execution>
							<id>sign-artifacts</id>
							<phase>verify</phase>
							<goals>
								<goal>sign</goal>
							</goals>
						</execution>
					</executions>
				</plugin>
			</plugins>
		</build>
	</profile>
</profiles>

org.sonatype.plugins:nexus-staging-maven-plugin does all the heavy lifting behind the scenes as it hooks into the release phases. I keep the GPG plugin in a separate profile so that users of my libraries are not pestered with it when they just want to build some stuff locally.

After all this is done, you can release your things in a two-step process via `mvn release:prepare` followed by `mvn release:perform`. The tooling will guide you through setting the current version and updating everything to a new snapshot version.

I won't go into the discussion of whether the repeated test runs are meaningful or whether not using the release plugin at all makes sense. I currently maintain projects that are run with CI-friendly versions and released to central via other tooling, and a couple of things released as described above.

| Comments (1) »

05-Feb-21