Some thoughts about using AI coding tools

Since late 2025, I have had the feeling that I am losing my shit. While I did publish “Kann KI verschlafen werden” (Can AI be slept away) in September, the people of the church of AI got louder by the day. And at some point, that leaves marks. I don’t doubt my skills and abilities; I know how I work, I know what I find useful and what not, but still.

Things I never doubted:

  • AI tooling is a tool of the few, deskilling a major part of the workforce
  • AI has tremendous costs: obviously environmental ones, but on the same scale, social ones, both when training models and later, when leaving ecosystems in shambles
  • AI as such—local models give or take—is costly as fuck and raises the barrier to entry and sustainability.

There is more, and luckily, other people wrote about it. Like “Feeding the Machine” by James Muldoon, Mark Graham and Callum Cant, or “Resisting AI: An Anti-Fascist Approach to Artificial Intelligence” by Dan McQuillan, and essentially anything tante wrote in the last months.

However, that doesn’t help me with uncertainty and self-doubt. Ignoring and hating is cheap, but at some point, I need to know and understand first-hand what makes people rave about AI coding tools that much. “Late to the party” is an understatement here. In January 2026 I tried a couple of things: I used Cursor with scrobbles4j and created three changes: 1, 2 and 3. These changes are indistinguishable from code I would have written myself; the tool even aligned well with my taste (commit 2). Fun fact: that was the change I needed to check the least, which speaks for the fact that those tools are text-rehashing machines.

Did Cursor make me faster? Hell no. Here’s the transcript of the session. Note how I gave very specific instructions, pointed several times to CONTRIBUTING.md, and how the tool struggled with my Maven and Git setup (they are working well, believe it or not, and thank you very much…).

My normal MO here would be: let’s go for a run or ride, think about stuff, and when back home and showered, just write it down. That is literally how my head works, both when writing code, texts and books.

Now I knew how a tool would work on a small, yet slightly unusual code base (I doubt that the combination of Quarkus, Cute, JDBI and very SQL-focused programs is that common). I honestly felt: relieved. If I abstract away the inherent problems, I can think about it as “yet another tool” that is easy to use and needs no special knowledge (put another way: don’t believe the evangelists of “prompt engineering”; if you are able to describe a problem to a human, then you are just fine).

The second test was using IntelliJ Junie with the gigantic codebase that is Neo4j (yes, I did use the company account for that, obviously) to help me get my feet wet with Scala. It worked to some extent, yet I did not know the specific parts I was working with as well as my own pet project, so my success at describing the problem was not as good. What helped me much more was, drum roll, the quite excellent Tour of Scala, old-fashioned reading and, later on, my new colleague who patiently described and explained my mistakes.

So that’s the “me” part: I won’t change the way I work, really. The “tool xyz one-shotted my fab and never-heard-of app idea in 3 hours” is still bullshit. Of course, something runnable will come out of it, but only if you live this as fast fashion. If you guide the tool, you invest the time you would probably otherwise use to hone your skills into the machine (and probably it will also be used as training), but you get decent output.

That brings me to the next part: AI-assisted contributions such as this. I liked the interaction with Oliver a lot; it was positive before, and positive in this issue again. He disclosed that he used Claude, and apparently, the changes are not invasive at all. I did rewrite about a third (see this commit) so that I a) know what is in there and b) am happy to own it.

I posted this. Some people starred (liked?) it, some had questions. This is an answer to the latter. I don’t live in isolation, and the world keeps on changing. I don’t like all changes (in fact, most of them are terrible), but at some point, one has only so much energy to fight all of them. And honestly, that’s not the point: I will not integrate AI as my daily driver. It is definitely not as good as the church of AI promises. But I will not condemn people who make decent use of it and even disclose it. Berating them is, in my opinion, on the same level as berating people for their travel and other lifestyle choices. I can only lead by example here. Obviously, this is not speaking of garbage PRs that consist of 20% emojis and 80% junk, changing 50% of a project because someone had fun “one-shotting” a feature.

Title picture: Ryan Song at Unsplash

| Comments (0) »

19-Jan-26


Beautiful “new” cloud world: Apple Music

Things used to work; things even had been quite nice for a while. For the better part of my life, I usually felt like the owner of and in charge of my electronic devices. There was a lot of stuff that used to just work.

One such feature was the fact that the first iPod classic I ever owned, a 3rd-generation one with a black-and-white backlit display and red-illuminated buttons, kept track of when I played which track, updated its play count and synced it back to my iTunes library. I basically still have the same library. Like, literally. It just grew and was migrated with each new Mac, Mac OS X and iTunes.

Sure, in the last years, Apple tried to push me into their Music subscription in all possible ways and places, but stubborn old fuck that I am, I still either buy physical CDs and rip them, or hit the iTunes Store or Bandcamp.

Back to the play counts: yes, I still have them. I still value them. I have smart playlists based on the number of plays, the last time I listened to stuff and so on. I put them into a database, I did talks about the domain model and turning it into a graph, and I have had a chart page up and running since 2021. Yes, this is of value to me. Not life-altering, but a nice thing. I don’t need more AI and shit on it, just those numbers.

Now, why the long intro? I am also the father of two kids and a husband. My family likes music a lot; heck, the big kid even has good taste, but their listening habits are quite different from mine. Which is absolutely fine, of course. The kids especially want to have streaming; they are just not “generation album”. Sure, I do share my local library inside our home, and that works. But again, taste. And convenience (and don’t tell me I should set up yet another server somewhere, hosting yet another piece of software to replicate “the cloud”; that just does not fly in my life anymore).

So, after long thought, we finally agreed on subscribing to Apple Music for families. I was very wary of how that could possibly affect my local library, but a test showed that Apple is indeed able to leave my collection alone. Why Apple and not Spotify? Easy: I’d rather buy every single track my kids wanna listen to than send any money over there. Why not YouTube? Convenience and quality, also price.

Now, the kids are super happy, and my wife too. Which is essentially all that counts.

And me? I did have a look at Apple Music itself, of course. It feels alien to me, as I want a more structured view on content and artists, but I did listen to some stuff. Lossless quality is excellent, the selection too. I made the mistake of listening to “Stumblin’ In” (to learn about the original of that German cover that got some traction again in the last weeks, and wow, that’s the worst catchy tune of all time and I really want my prior life back, but alas). Anyhow, the service is ok.

Fast forward two weeks: I want to sync local macOS content to my iPhone, and get the play counts and info from my phone onto my computer. The way things worked for more than 20 years.

I spent the next two hours searching for “iOS music app doesn’t sync plays and play counts and more to macOS with an active Apple Music subscription”. There are threads on discussions.apple.com several years old, and it basically boils down to this: the moment you have an Apple Music subscription, your previously working bidirectional macOS <> iOS sync turns into a one-way street, unless you bite the bullet and turn on full library sync, which I absolutely don’t want. Sync doesn’t even work when I use CS music for playing stuff… It’s that deep in the whole system.

The solution: I signed out of “Media and Purchases” on iOS (not of the Apple ID itself). Apple Music instantly disappears from the Music app, and the next sync works again as expected. I still see tracks I bought in the iTunes Store, and I am still able to play over family sharing and via Remote. My few remaining other app subscriptions still work. I can still buy new stuff, but am now prompted for credentials again.

I. Hate. This. Shit. Side. Of. “The cloud”.

If it weren’t for the happiness of my loved ones, I would have unsubscribed from the service again in an instant. This is my device. I paid money for it and the functionality it came with. I even pay money now for the subscription, and all of a sudden, it’s less than before. OKR- and KPI-driven products, go f*ck yourselves with a stick, for crying out loud, and start to listen to your users again for a change.

| Comments (0) »

01-May-25


Maven build plan and execution order

Update: The plugin is now hosted on MojoHaus, has a new artifact ID and is thus automatically available in Maven without having to supply the group ID. I updated the snippets below. The latest version also displays the versions of the plugins being used.

I recently stumbled upon the buildplan-maven-plugin and I find it quite useful.

Run it on your project with

mvn buildplan:list

to list all steps in order, and

mvn buildplan:list-phase

to list them grouped by the phase they are bound to.

On the core project of Neo4j’s JDBC driver, it looks like this:

Read the complete article »

| Comments (2) »

10-Mar-25


Stepping out of my comfort zone: No Rest for the Wicked

“Oh please no, Michael, not sports content here”… Well, sorry, it’s my blog and right now I don’t have a much better place for it. Bear with me, though; there will be some cool database querying later on.

In late 2024 I stumbled upon No Rest for the Wicked 2025, and the idea is simple, even though the page describes it in a somewhat complicated way. The challenge is as follows:

Throughout February:

  • Do 24 runs in total
  • Run once in each hour of the day
  • Run once on each weekday
  • Each run (or walk / hike) must be at least 5 km and at least 45 minutes long
  • Take a break of at least 90 minutes between activities
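Just for fun, and purely my own sketch (not part of the challenge or its tooling), the five rules above can be checked mechanically over a list of activities:

```python
from datetime import datetime, timedelta

def check_challenge(runs):
    """Check the rules above for a list of (start, duration_min, distance_km)
    tuples; returns True when all five rules hold."""
    runs = sorted(runs)
    return (
        len(runs) == 24                                      # 24 runs in total
        and {r[0].hour for r in runs} == set(range(24))      # once in every hour
        and {r[0].weekday() for r in runs} == set(range(7))  # once on every weekday
        and all(d >= 45 and km >= 5 for _, d, km in runs)    # minimum length per run
        and all(                                             # >= 90 min break between runs
            b[0] - (a[0] + timedelta(minutes=a[1])) >= timedelta(minutes=90)
            for a, b in zip(runs, runs[1:])
        )
    )

# One run per day throughout February, shifting the start hour by one each day:
runs = [(datetime(2025, 2, 1) + timedelta(days=i, hours=i), 45, 5.0) for i in range(24)]
print(check_challenge(runs))
```

Twenty-four consecutive days cover every weekday, and shifting the start by one hour per day covers every hour, so the sample schedule passes.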

That sounds like “fun”, let’s go:

There’s a race result event that tracks the progress, and we’ll come to that later. At the time of writing, I was in the top twenty with one activity to go, and I finished eleventh.

The challenge was quite an emotional ride for me, between “oh my gosh, what did I sign up for?”, “I’ll never manage” etc. My favorite hours of the day for running are actually anything between 6 and 10 am. Afternoon is ok; evenings I really dislike, especially after long working hours of sitting. I have a really odd way of putting my feet down while sitting, so in the evening my ankles are always kinda busted. Night running? I had never tried it. As a matter of fact, I try to avoid activities with a high heart rate in the evenings.

So there was a lot going on outside of my comfort zone and I am very content that I did this:

  • I felt good enough to go out in the middle of the freaking night several times for running
  • Running in the night was a lot more satisfying than walking; the latter is just too boring alone, and I didn’t want to put on headphones
  • The silence was so nice, and so were the starry nights; a quite unique, deeply satisfying experience
  • Prior to getting up and out it’s tough; doing it is uplifting then
  • One is capable of many things one never thought one was
  • I’m happy that I live in a place where I can go out at any time without having to be afraid of things

Honestly, at the end of the two weeks I felt less depressed and down than usual in February. The last three years I travelled for a week to the Canaries to get some sun and cycling in, which I couldn’t do this year for personal reasons. This challenge was a great antidote.


The only picture of myself I know of between 3 and 4am

So what does running here have to do with running database queries? Remember that race result event above? I wanted a proper overview myself, without manual work, so I created this:


My report of No Rest for the Wicked 2025

Looking at those numbers, would I do things differently? Yeah, I would probably not push back the wee small hours between midnight and 3 am to the end, but do them in the middle, in two nights, instead of spreading them over several days. I wasn’t sure if I would push through the challenge, and my reasoning was that I didn’t want to fall for sunk costs. Anyhow, how did I create the above report? I went to connect.garmin.com, pulled down my activities as CSV for the past weeks, fired up DuckDB and ran this query:

WITH
  src AS (
    SELECT DISTINCT ON (Datum, Titel)
           *, EXTRACT('hour' FROM Zeit)*60*60 + EXTRACT('minute' FROM Zeit)*60 + EXTRACT('second' FROM Zeit) AS Duration
    FROM read_csv('*.csv', union_by_name=TRUE, filename=FALSE)
    WHERE Titel LIKE 'NRFTW%'
  ),
  activities AS (
    SELECT "Day #"      : dense_rank() OVER (ORDER BY Datum::DATE),
           "Day"        : Datum::DATE,
           "Run #"      : ROW_NUMBER() OVER (),
           "Break"      : age(Datum, COALESCE(date_add(lag(Datum) OVER starts, INTERVAL (lag(Duration) OVER starts) SECOND), Datum))::VARCHAR,
           "Hour of day": HOUR(Datum),
           "Weekday"    : dayname(Datum),
           "Sport"      : CASE Aktivitätstyp
                            WHEN 'Gehen'  THEN 'Walking'
                            WHEN 'Laufen' THEN 'Running'
                            ELSE Aktivitätstyp
                          END,
           "Distance"   : REPLACE(Distanz, ',', '.')::NUMERIC,
           "Duration"   : EXTRACT('hour' FROM Zeit)*60*60 + EXTRACT('minute' FROM Zeit)*60 + EXTRACT('second' FROM Zeit),
           "Progress"   : 100/24,
    FROM src
    WINDOW starts AS (ORDER BY Datum)
  ),
  sums AS (
    SELECT * REPLACE(
               round(SUM(Distance), 2)   AS Distance,
               to_seconds(SUM(Duration)) AS Duration,
               SUM(Progress) OVER dates  AS Progress
           ),
           "p_weekdays" : COUNT(DISTINCT Weekday) OVER (ORDER BY DAY GROUPS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING),
           "Weekdays"   : COUNT(DISTINCT Weekday) OVER dates
    FROM activities
    GROUP BY GROUPING SETS (("Day #", "Run #", DAY, "Hour of day", Progress, Weekday, Break, Sport), ("Day #", DAY), ())
    WINDOW dates AS (ORDER BY DAY)
  )
SELECT * EXCLUDE(p_weekdays) REPLACE (
           Break::INTERVAL AS Break,
           CASE WHEN "Hour of day" IS NULL AND p_weekdays <> 7 AND Weekdays = 7 THEN '✅' END AS Weekdays,
           CASE
             WHEN DAY           IS NULL THEN lpad(printf('%.2f%%', Progress), 7, ' ')
             WHEN "Hour of day" IS NULL THEN lpad(printf('%.2f%%', Progress), 7, ' ') || ' ' || bar(Progress, 0, 100, 20) END AS Progress
         )
FROM sums
ORDER BY DAY, "Hour of day" NULLS LAST;

It’s so funny how much value one can get out of basically three columns alone. I am working only on the start date, the duration and the distance; everything else are derived values. Things to note: the progress is per day and not per run (the latter would always be 1/24, of course), the check mark when I hit all the weekdays, and the summary of distance and duration per day and overall. The nice formatting of the break time is a sweet bonus (using the age function). The DuckDB team just released 1.2.0 and put out a couple of blog posts that I really dig and was able to utilize in my script:

  • Vertical Stacking as the Relational Model Intended: UNION ALL BY NAME. Great blog, great feature. I used it to union all the CSV files (they had different columns, because some of them included cycling watts and some didn’t; see read_csv('*.csv', union_by_name=TRUE, filename=FALSE) in the query)
  • Catching up with Windowing. Again, great content, and I am a bit jealous because I think it’s more complete than the explanation in our book. I used the GROUPS framing to figure out when I hit all the weekdays
  • Announcing DuckDB 1.2.0. New features in 1.2.0. I am mostly using the alternative projection syntax “X: U” instead of “U AS X” in the inner projection. It just reads better with long expressions.

Apart from that, the query again demonstrates the power of window functions, computing for example the break between activities as well as the day and activity numbers. It also makes extensive use of the asterisk-expression additions EXCLUDE and REPLACE, avoiding juggling all the columns all over again. Also, take note of the GROUP BY with GROUPING SETS, which is the longer form of a GROUP BY ROLLUP that gives you control over the sets. The last bit is the sweet DISTINCT ON: I can just add more CSV files, and this deduplicates not on all columns, but only on date and title.
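To make the windowing part tangible outside DuckDB, here is a minimal sketch of mine using Python’s bundled sqlite3 module (SQLite supports window functions too); the table and values are invented for the example, and it only mirrors how the break between consecutive activities falls out of ROW_NUMBER() and lag():

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE activities (start_ts TEXT, duration_s INTEGER)")
con.executemany("INSERT INTO activities VALUES (?, ?)", [
    ("2025-02-01 06:00:00", 2700),
    ("2025-02-01 22:00:00", 3000),
    ("2025-02-02 03:00:00", 2800),
])

# Number each run and compute the break (in hours) since the previous run
# ended, i.e. previous start + previous duration, like the DuckDB query above.
rows = con.execute("""
    SELECT ROW_NUMBER() OVER w AS run_no,
           start_ts,
           ROUND((julianday(start_ts) - julianday(lag(start_ts) OVER w)) * 24
                 - lag(duration_s) OVER w / 3600.0, 2) AS break_h
    FROM activities
    WINDOW w AS (ORDER BY start_ts)
    ORDER BY start_ts
""").fetchall()
for row in rows:
    print(row)
```

The first row has no predecessor, so its break is NULL; the other two come out as 15.25 and 4.17 hours.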

This is what I like about running: challenges and algorithms, locally, on my personal data. Those small and “crazy” challenges are so much nicer than the super-commercialised side of running you find with the six-star marathons or super halves. And the same thing applies to reports like the one I created here for myself–and through FLOSS and actually sharing how things are done–for others, too. For me, being able to do this locally without any big-vendor analytics is empowering and brings back a lot of joy that is missing today in many settings.

Challenge accepted and put to bed with walking and running roughly 175km in 24 hours total.

| Comments (0) »

14-Feb-25


Let’s deadlock all the things.

Hej there, yes, it’s me. I’m still alive and doing things, such as neo4j-jdbc with automatic SQL-to-Cypher translations, or building things like this for my own enjoyment. However, with the tons of meaningless posts and stuff around “Hey ma, look how I do AI with $tool”, I felt a bit out of time these days, just enjoying artisanal coding, requirements analysis and architectural work, and didn’t have much incentive to write more prose. And giving the rest out for AI to swallow and regurgitate, well… 🤷

In the last days, however, a rookie mistake I made gave me some food for thought, and I am writing this post for anyone new to the Java ecosystem or to programming in general: regardless of how many years of experience you have acquired or how good you think you are, you will always make some stupid mistakes such as the following. Hopefully, you’re as lucky as me and find yourself surrounded by colleagues who help you roll out a fix. And yes, the blast radius of something that runs in $cloud is bigger than a bug in a client-side library, especially with the slow update cycles in many industries.

Enough talking, let’s go. I was refactoring some code to not use Java’s HttpURLConnection of old anymore. Here’s basically how it looked:

import java.net.HttpURLConnection;
import java.net.URI;
import java.nio.charset.StandardCharsets;
 
class S1 {
  public static void main(String... a) throws Exception {
    var uri = URI.create("http://randomapi.io");
    var body = "Hello";
 
    // Nice, I get an output stream I can just write to, yeah…
    var connection = (HttpURLConnection) uri.toURL().openConnection();
    connection.setDoOutput(true);
    connection.setRequestMethod("POST");
    try (var out = connection.getOutputStream()) {
      // Reality: objectMapper.writeValue(out, Map.of("some", "content"));
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }
    connection.connect();
  }
}

This works, but the URL connection is not a great API to work with. Also, it does not support HTTP/2 out of the box (or rather, not at all). The recommended replacement is the Java HttpClient. The same behaviour as above, but with the HttpClient, looks like this, using HTTP/2 by default:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.BodyPublishers;
import java.net.http.HttpResponse.BodyHandlers;
import java.util.concurrent.Executors;
 
class S2 {
  public static void main(String... a) throws Exception {
    var uri = URI.create("http://randomapi.io");
    var body = "Hello";
 
    try (var client = HttpClient.newBuilder().executor(Executors.newVirtualThreadPerTaskExecutor()).build()) {
      var response = client
        .send(
          HttpRequest.newBuilder(uri).POST(BodyPublishers.ofString(body)).build(),
          BodyHandlers.ofString())
        .body();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}

The request body is prepared by BodyPublishers.ofString(body). There are quite a few other publishers, such as ones from a byte array, files, reactive streams and more. What does not exist is a way to just get an output stream from the HTTP client that one can write to directly, like in the above example using the URL connection. So, assuming I am a Jackson user and want to write some rather large JSON as the request body, I would need to materialise it as a byte array in memory or deal with the shenanigans of ByteBuffer, which I really didn’t feel like. Instead, I thought, hey, there’s the pair of PipedInputStream and PipedOutputStream, let’s use a body publisher from an input stream supplier like the following:

var publisher = HttpRequest.BodyPublishers.ofInputStream(() -> {
  var in = new PipedInputStream();
  try (var out = new PipedOutputStream(in)) {
    outputStreamConsumer.accept(out);
  } catch (IOException e) {
    throw new UncheckedIOException(e);
  }
  return in;
});

Easy, right? While the pair is made exactly for the above purpose, the usage is 100% wrong, and the JavaDoc is quite explicit about this:

Attempting to use both objects from a single thread is not recommended, as it may deadlock the thread. The piped input stream contains a buffer, decoupling read operations from write operations, within limits.

I did actually read this, set it up like above nevertheless, tested it, and it “worked”. I thought, yeah, the HttpClient is asynchronous under the hood anyway, let’s go. Stupid me: I did use its blocking API, and of course reading and writing occurred on the same thread. Why did it not show up in tests? Rookie mistake number two: not testing proper extremes, like 0, -1, Integer.MAX_VALUE, a lizard and null. “Within limits” means: there’s a default buffer of 1024 bytes, and any payload bigger than that, coming from anywhere into the request body, would blow up.
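That 1024-byte limit is easy to observe. The following sketch of mine (not from the original refactoring) fills a pipe’s buffer from one thread and shows the very next write blocking until a reader frees space; a virtual thread stands in for the “other thread” the broken version was missing:

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.UncheckedIOException;

class PipeBufferDemo {

  // Returns {writerBlockedWhileBufferFull, writerFinishedAfterDraining}
  static boolean[] run() throws IOException, InterruptedException {
    var in = new PipedInputStream();  // default internal buffer: 1024 bytes
    var out = new PipedOutputStream(in);

    out.write(new byte[1024]);        // fills the buffer, returns immediately

    var writer = Thread.ofVirtual().start(() -> {
      try {
        out.write(42);                // buffer is full: blocks until a reader drains it
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    });
    writer.join(250);                 // give it a moment; it must still be stuck
    var blocked = writer.isAlive();

    var drained = in.read(new byte[512]);  // reading frees space and wakes the writer
    writer.join();
    return new boolean[] {blocked, drained > 0 && !writer.isAlive()};
  }

  public static void main(String... args) throws Exception {
    var result = run();
    System.out.println("writer blocked while pipe full: " + result[0]);
    System.out.println("writer finished after draining: " + result[1]);
  }
}
```

Had the second write happened on the single thread of the blocking send, nothing would ever have read from the pipe, and that thread would block forever.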

Do this often enough, and you have a procedure that happily drains all the available threads from a system by deadlocking them, congratulations.

What would have prevented that? Taking a step down from Mount Stupid to begin with. A second pair of eyes. Better testing. Would AI have helped me here? I am unsure. Maybe in generating test input. On API usage: most likely not. It’s all correct Java, and the language has no way to make me aware. Maybe an IDE with a proper base model for semantic analysis could spot it; IntelliJ does not see it.

How did I solve it in the end? I was not happy using a byte[] array, as I really wanted to have that pipe, and a working solution looks like this:

import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.UncheckedIOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.BodyPublishers;
import java.net.http.HttpResponse.BodyHandlers;
import java.nio.charset.StandardCharsets;
 
class S3 {
  public static void main(String... a) throws Exception {
    var uri = URI.create("http://randomapi.io");
    var body = "Hello";
 
    try (var client = HttpClient.newBuilder().build()) {
      var bodyPublisher = BodyPublishers.ofInputStream(() -> {
        var in = new PipedInputStream();
        var out = new PipedOutputStream();
        try {
          out.connect(in);
        } catch (IOException e) {
          throw new UncheckedIOException(e);
        }
        Thread.ofVirtual().start(() -> {
          try (out) {
            // Here the stream can be passed to Jackson or whatever you have 
            // that's generating output
            out.write(body.getBytes(StandardCharsets.UTF_8));
          } catch (IOException e) {
            throw new UncheckedIOException(e);
          }
        });
        return in;
      });
      var response = client
        .send(
          HttpRequest.newBuilder(uri).POST(bodyPublisher).build(),
          BodyHandlers.ofString()
        ).body();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}

The important pieces here are:

  • Create the input and output streams independently
  • Connect them outside both the reading and the writing thread
  • Kick off the writing thread, and since we are on modern Java, just use a virtual thread for it. In the end, they are exactly for that purpose: to be used when there are some potentially blocking things happening
  • Just return the input stream to the HTTP client; it will take care of using and later closing it
  • Close the output stream inside the writing thread (note the try-with-resources statement that “imports” the stream)

A runnable solution is below. I am using an implicitly declared class here, so that I can have a proper Java script without the boilerplate of old. It brings up an HTTP server for testing as well, which just mirrors its input. It then uses both methods described above to POST to that server, gets the response and asserts it. Use Java 24 to run it with java -ea --enable-preview Complete.java.

Fun was had the last days.

import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.UncheckedIOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.BodyPublishers;
import java.net.http.HttpResponse.BodyHandlers;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.concurrent.Executors;
 
import com.sun.net.httpserver.HttpServer;
 
void main() throws IOException {
  var port = 4711;
  var server = HttpServer.create(new InetSocketAddress(port), 0);
  try {
    server.createContext("/mirror", exchange -> {
      byte[] response;
      try (var in = new BufferedInputStream(exchange.getRequestBody())) {
        response = in.readAllBytes();
      }
 
      exchange.sendResponseHeaders(HttpURLConnection.HTTP_OK, response.length);
      var outputStream = exchange.getResponseBody();
      outputStream.write(response);
      outputStream.flush();
      outputStream.close();
    });
    server.setExecutor(null);
    server.start();
 
    var secureRandom = new SecureRandom();
    var buffer = new byte[(int) Math.pow(2, 16)];
    secureRandom.nextBytes(buffer);
    var requestBody = Base64.getUrlEncoder().withoutPadding().encodeToString(buffer);
 
    var uri = URI.create("http://localhost:%d/mirror".formatted(port));
 
    var responseBody = useUrlConnection(uri, requestBody);
    assert responseBody.equals(requestBody);
    responseBody = useHttpClient(uri, requestBody);
    assert responseBody.equals(requestBody);
  } finally {
    server.stop(0);
  }
}
 
private static String useUrlConnection(URI uri, String body) throws IOException {
  var urlConnection = (HttpURLConnection) uri.toURL().openConnection();
  urlConnection.setDoOutput(true);
  urlConnection.setRequestMethod("POST");
  try (var out = urlConnection.getOutputStream()) {
    out.write(body.getBytes(StandardCharsets.UTF_8));
  }
  urlConnection.connect();
  try (var in = new BufferedInputStream(urlConnection.getInputStream())) {
    return new String(in.readAllBytes(), StandardCharsets.UTF_8);
  }
}
 
private static String useHttpClient(URI uri, String body) throws IOException {
  try (var client = HttpClient.newBuilder().executor(Executors.newVirtualThreadPerTaskExecutor()).build()) {
    return client.send(HttpRequest.newBuilder(uri).POST(
      BodyPublishers.ofInputStream(() -> {
        var in = new PipedInputStream();
        //noinspection resource
        var out = new PipedOutputStream();
        try {
          out.connect(in);
        } catch (IOException e) {
          throw new UncheckedIOException(e);
        }
        Thread.ofVirtual().start(() -> {
          try (out) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
          } catch (IOException e) {
            throw new UncheckedIOException(e);
          }
        });
        return in;
      })).build(), BodyHandlers.ofString()).body();
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new IOException(e.getMessage());
  }
}

| Comments (3) »

05-Feb-25