Rewriting and filtering history

In my role as a Spring library developer at Neo4j, I spent the last year – together with Gerrit – creating the next version of Spring Data Neo4j. So far its name has been Spring Data Neo4j⚡️RX, but in the end it will be SDN 6.

Anyway. Part of the module is our Neo4j Cypher-DSL. After working with jOOQ, a fantastic tool for writing SQL in Java, and seeing what our friends at VMware are doing with an internal SQL DSL for Spring Data JDBC, I never wanted to create Cypher queries via string operations in our mapping code ever again.

So, we gave it a shot and started modeling a Cypher-DSL after openCypher, but with Neo4j extensions supported.

You’ll find the result these days at neo4j-contrib/cypher-dsl.

Wait, what? This repository is nearly ten years old.

Yes, that is correct. My friend Michael started it back in the day. There are only a few things where you won't find him involved. He even created jequel, a SQL DSL, as well, and was an author on this paper: On designing safe and flexible embedded DSLs with Java 5, which in turn influenced jOOQ.

Therefore, when Michael offered that Gerrit and I could extract our Cypher-DSL from SDN/RX into a new home under the coordinates org.neo4j:neo4j-cypher-dsl, I was more than happy.

Now comes the catch: it would have been easy to just delete the main branch, create a new one, dump our stuff into it and call it a day. But I actually wanted to honor history – that of the original project as well as ours. We always tried to have meaningful commits, put a lot of effort into commit messages, and I didn't want to lose that.

Adding content from one repository into an unrelated one is much easier than it sounds:

# Get yourself a fresh copy of the target
git clone git@wherever/whatever.git targetrepo
# Add the source repo as a new remote
git remote add sourceRepo git@wherever/somethingelse.git
# Fetch and merge the branch in question from the sourceRepo as unrelated history into the target
git pull sourceRepo master --allow-unrelated-histories

Done.

But then one gets everything from the source repository. Not what I wanted.

The original repository needed some preparation.

git filter-branch to the rescue. filter-branch works with the “snapshot” model of commits, where each commit is a snapshot of the tree, and rewrites these commits. This is in contrast to git rebase, which works with diffs. The command applies filters to the snapshots and creates new commits, building a new, parallel graph. It won't care about conflicts.

Manish has a great post about the whole topic: Understanding Git Filter-branch and the Git Storage Model.

For my use case above, the built-in subdirectory filter was most appropriate. It makes a given subdirectory the new repository root, keeping the history of that subdirectory. Let's see:

# Clone the source, I don't want to mess with my original copy
git clone git@wherever/somethingelse.git sourceRepo
# Remove the origin, just in case I screw up AND accidentally push things
git remote rm origin
# Execute the subdirectory filter for the openCypher DSL
git filter-branch --subdirectory-filter neo4j-opencypher-dsl -- --all

It turns out this worked well, despite the following warning:

WARNING: git-filter-branch has a glut of gotchas generating mangled history
rewrites. Hit Ctrl-C before proceeding to abort, then use an
alternative filtering tool such as ‘git filter-repo’
(https://github.com/newren/git-filter-repo/) instead. See the
filter-branch manual page for more details; to squelch this warning,
set FILTER_BRANCH_SQUELCH_WARNING=1.

I ended up with a rewritten repo containing only the subdirectory I was interested in as the new root. I could have stopped here, but I noticed that some of my history was missing: the filtering only looks at the actual snapshots of the files in question, not at the history you get when using --follow. As we had already moved those files around a bit, I lost all that valuable information.

Well, let's read the warning above again, and we find filter-repo. filter-repo can be installed on a Mac, for example, with brew install git-filter-repo, and it turns out it does exactly what I want, given that I vaguely know the original locations of the content I want to have in my new root:

# Use git filter-repo to make some content the new repository root
git filter-repo --force \
    --path neo4j-opencypher-dsl \
    --path spring-data-neo4j-rx/src/main/java/org/springframework/data/neo4j/core/cypher \
    --path spring-data-neo4j-rx/src/main/java/org/neo4j/springframework/data/core/cypher \
    --path spring-data-neo4j-rx/src/test/java/org/springframework/data/neo4j/core/cypher \
    --path spring-data-neo4j-rx/src/test/java/org/neo4j/springframework/data/core/cypher \
    --path-rename neo4j-opencypher-dsl/:

This takes a couple of paths into consideration, tracks their history and renames the one path (the empty string after the colon makes it the new root). It also turns out that git filter-repo is way faster than git filter-branch.

With the source repository prepared in that way, I cleaned up some meta and build information, added one more commit and incorporated it into the target as described in the first step.

I'm writing this down because I found it highly useful and also because we are going to decompose the repository of SDN/RX further. Gerrit described our plans in his post Goodbye SDN⚡️RX. We will do something similar with SDN/RX and Spring Data Neo4j. While we have to manually transplant our Spring Boot starter into the Spring Boot project via PRs, we want to keep the history of SDN/RX for the target repo.

Long story short: while I was skeptical at first about ripping a year's worth of work apart and distributing it over a couple of projects, I now see it more as a positive decomposition of things (thanks, Nigel, for that analogy).

Featured image courtesy of Nathan Dumlao on Unsplash.


01-Jul-20


The German “Corona-Warn-App” (CWA)

In 2020, things are happening that hardly anyone could have foreseen at the end of last year. The COVID-19 pandemic is not one of them… leading voices had been warning for a long time. The pandemic, however, is the trigger for many things – among them the German variant of the corona tracing apps currently appearing everywhere, the “Corona-Warn-App”, or CWA for short, which is being developed by SAP and will be operated by Telekom in the future.

About me: I have been a Java developer for 20 years, I am the author of the German-language Spring Boot book, and I work in open-source development professionally. As early as the end of March, I commented very critically and negatively on possible applications from Germany and their implications for data protection, anonymity, traceability and more. In general, the pace of the measures at the end of March caught me cold and at times overwhelmed me.

Many thanks to Daniel, Tim, Michael, André, Jens, Falk and Sandra for reading attentively and for the feedback on my typos.

I don't want to discuss at length the purpose and benefit of “apps” in the “fight against corona”. The primary goal of these apps is to make chains of contact traceable: which person spent an extended period of time near which other persons? If one person in such a contact chain falls ill, all persons in the chain can be informed that they have had contact with a possibly infected person. The consequences may vary; as a rule it will probably come down to a more or less emphatically recommended quarantine, possibly with ordered tests.

What follows is a look at the source code of the Corona-Warn-App – more precisely, at the backend components written in the Java programming language and implemented with the Spring Framework.

Guest lecture at FH Aachen

On June 15 I was invited to give a guest lecture about the Corona-Warn-App at FH Aachen. It was published on YouTube:

Tracking or tracing, centralized or decentralized?

Tracking here means following individual users by regularly storing their location via the GPS data of a supported smartphone. Tracking usually happens centrally: individual trails of single persons emerge. Even though many people do this every day by using social media with location services enabled, it takes on a different meaning in the hands of the state. Will we always live in a democracy?

That is why the European Union's guidelines on ensuring that mobile apps fighting the pandemic fully comply with data protection standards explicitly advise against central storage of people's location data:

Limited use of personal data: The apps should follow the principle of data minimisation, according to which only necessary personal data may be processed and the processing must be limited to what is required for the respective purpose. The Commission takes the view that location data is not necessary for contact tracing and should not be used for it.

Data security: The data should be stored, encrypted, on the device of the person concerned.

What is the alternative? Apple and Google have agreed on a procedure that implements so-called Decentralized Privacy-Preserving Proximity Tracing (DP3T) over Bluetooth for smartphones running current iOS or Android operating systems.

Heise.de already described in detail how this works back in April, under the title: Corona-Tracking: Wie Contact-Tracing-Apps funktionieren, was davon zu halten ist.

The short version goes roughly like this:

  • Every 24 hours, the phones generate a temporary key (TEK), encrypt it and store it on the device.
  • From this TEK, further keys (RPIK) are derived in a rolling fashion. These are used to generate identifiers (called Ephemeral IDs, or EphIDs), which are broadcast over Bluetooth.
  • Neither the TEK nor the device that broadcast the keys can be derived from the EphIDs.
  • Other devices pick up these EphIDs and store them locally as well.
  • If a user tests positive for COVID-19, she can enter that in the app. At that point the TEKs of the last 14 days are uploaded to a central server and are from then on referred to as “diagnosis keys”. The infected person cannot be identified from them, since the original TEKs are regenerated every 24 hours.
  • After verification, the central server broadcasts the diagnosis keys to all participating smartphones. These devices can then decrypt the locally collected EphIDs of other devices that were nearby, provided those EphIDs were originally encrypted with the TEK belonging to a diagnosis key. If that is the case, the user of the smartphone knows that she – or her device – was near a smartphone belonging to an infected person (see the sketch after this list).
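
To make the matching step a bit more tangible, here is a highly simplified sketch in Java. It is emphatically not the real Exposure Notification cryptography (which uses HKDF and AES as specified by Apple and Google); it only illustrates the data flow: published diagnosis keys let every phone re-derive ephemeral IDs locally and compare them with what it has collected.

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.util.Base64;
import java.util.List;
import java.util.Set;

// Highly simplified illustration of the DP3T / Exposure Notification data flow,
// NOT the actual key derivation. All names and the derivation itself are placeholders.
final class ExposureCheck {

    // Derive a pseudo "ephemeral ID" from a daily key and a time slot (placeholder derivation).
    static String ephemeralId(byte[] dailyKey, int slot) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(dailyKey);
        digest.update(ByteBuffer.allocate(4).putInt(slot).array());
        return Base64.getEncoder().encodeToString(digest.digest());
    }

    // A phone checks locally whether any EphID it collected via Bluetooth matches one
    // derived from a published diagnosis key; nothing personal ever leaves the device.
    static boolean wasExposed(List<byte[]> diagnosisKeys, Set<String> collectedEphIds) throws Exception {
        for (byte[] diagnosisKey : diagnosisKeys) {
            for (int slot = 0; slot < 144; slot++) { // e.g. one slot per 10 minutes of a day
                if (collectedEphIds.contains(ephemeralId(diagnosisKey, slot))) {
                    return true;
                }
            }
        }
        return false;
    }
}
```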

So the CWA needs at least the following components:

  • Interfaces to the operating systems of modern smartphones: these are provided by Apple and Google.
  • A frontend on top of that – what is generally regarded as “the app” these days.
  • A backend containing at least the components that accept TEKs, verify them and then broadcast them to the participants of the system.

A component requiring a personal registration of a user with the system is explicitly not necessary.

In Germany, the latter components are being developed by a consortium of Deutsche Telekom and SAP – as an open-source application, fully available on GitHub as Corona-Warn-App, including the project documentation. As I said, one of the things that would certainly have seemed highly unlikely in 2019.

My personal opinion: just as there are currently around 80 million virologists and epidemiologists, you can expect just as many software experts with an opinion (I should point out my book again – buy it, it is very good!). SAP and Telekom have little to gain in this environment. It deserves recognition, and it contributes a lot to my trust in such an app, that an open development is at least being attempted – starting with the really good project documentation, the software architecture description, the epics and whitepapers, the apps for iOS and Android, as well as the source code of the server the apps talk to and of the verification server.

I have no expertise in application development for iOS or Android. For that topic I refer to Roddi's review: Review der App, die Endanwender sehen (a review of the app that end users see).

Software architecture

We find the relevant software architecture in the documentation under Solution Architecture. I am using the same graphics as the CWA project, which were published under the Apache 2 license:



At the top left we see the components on the users' devices: the CWA. In the “Open Telekom Cloud” we see all the server components, which are unfolded further below:



These documents certainly won't appeal to everyone; to me they are very expressive and detailed just right. They provide quick access to the components of the system. I consider accessibility essential in this case!

In the following I will look at the components “Corona-Warn-App Server” (which accepts the TEKs of infected persons) and “Verification Server” (which verifies the infection a person enters into the app against the “Test Result Server”, which holds the health authorities' data), from the point of view of the process described above, and analyze whether more data than promised is stored and whether obvious, gross blunders were made.

The Corona-Warn-App server

Update from June 3: An exciting discussion about the actual data model itself has started on Twitter: this thread, in particular my replies and this one within it. There are basically two concerns: “Will the DB setup stay like this in production?” (if so, I agree with Alvar, that would be bad) and “Should it be modeled like this?” (before this commit I would have said: not brilliant, but OK… the changes after it are rather so-so. The hope is that the “keys of infected user” from the transmission protocol are unique enough to serve as a primary key, and if not: saveDoNothingOnConflict, YOLO. Personally, I would like to know when an app instance floods the backend with duplicate keys.)

First question: how complete is it and how easily can it be put into operation? Given the companies implementing it, my first thought would have been a complex enterprise deployment with application servers, portal servers and more. Classic Java EE, basically. I was positively surprised by several Java-based Spring Boot applications. The target platform of the application is Kubernetes, a solution for automating the deployment, scaling and management of applications in so-called containers. Kubernetes is also open source and, in the case of the CWA, runs on OpenShift. OpenShift manages compute resources in private and public clouds.

Can I build and compile the server locally, given Java and the build tool?

Short answer: yes!

> cwa-server git:(master) ./mvnw clean verify
[INFO] Reactor Summary for server 0.5.3-SNAPSHOT:
[INFO] 
[INFO] server ............................................. SUCCESS [  1.301 s]
[INFO] common ............................................. SUCCESS [  0.066 s]
[INFO] protocols .......................................... SUCCESS [  4.354 s]
[INFO] persistence ........................................ SUCCESS [ 11.746 s]
[INFO] services ........................................... SUCCESS [  0.326 s]
[INFO] distribution ....................................... SUCCESS [ 14.035 s]
[INFO] submission ......................................... SUCCESS [ 13.675 s]
[INFO] ------------------------------------------------------------------------

That's a good start. The readme contains instructions on how to get the whole thing up and running, which, as of May 31, actually work. A hearty `docker-compose up` brings up:

80/tcp                    pgadmin_container
0.0.0.0:8000->8080/tcp, 0.0.0.0:8006->8081/tcp   cwa-server_submission_1
0.0.0.0:8001->5432/tcp                           cwa-server_postgres_1
0.0.0.0:8003->8000/tcp                           cwa-server_objectstore_1
0.0.0.0:8004->8004/tcp                           cwa-server_verification-fake_1

The submission server

It is based on Spring Boot 2.3, the latest release. The essential dependencies are spring-boot-starter-web and spring-boot-starter-security, plus Spring Boot Actuator and Micrometer with the Prometheus integration. With the latter components, metrics (performance and the like) can be exposed openly.

The common module contains both the implemented protocols, based on Protocol Buffers, and the database layer.

The latter relies on a relational database: Postgres. I consider this a sensible as well as transparent choice.

What is actually stored is only what the process described at the beginning requires: the Temporary Exposure Key that is promoted to a diagnosis key.

There are quite a few things in the code itself that I personally would not regard or recommend as “best practices”. If you opt for explicit `@EnableXXX` configuration, then do it properly: manually list all Spring components, so that in case of doubt you are safe from a library registering unwanted components via auto-configuration. I find the scanning for servlet components completely puzzling: at least in the source code I cannot find any additional components that would not be found anyway. That leaves open the question of whether the finally packaged app contains additional libraries.
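
A minimal sketch of what I mean by explicit registration – with purely hypothetical component names, not the CWA code – could look like this: auto-configuration stays enabled, but the application's own beans are listed explicitly instead of being picked up by classpath scanning.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Import;

// Sketch only: no @ComponentScan, so nothing is registered by accident;
// SubmissionController and SubmissionService are hypothetical names.
@Configuration
@EnableAutoConfiguration
@Import({SubmissionController.class, SubmissionService.class})
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```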

Test coverage looks good at first glance; with appropriate mocks of the downstream services, the submission controller is testable.

Unfortunately, the documentation leaves open what the numeric, required cwa-fake header is about. I assume it is used for testing the apps. I would advise removing this code path from the controller in production.

Johannes pointed me to the part of the documentation I had overlooked: Fake Submissions:

In order to protect the privacy of the users, the mobile app needs to send fake submissions from time to time. The server accepts the incoming calls and will treat them the same way as regular submissions. The payload and behavior of fake and real requests must be similar, so that 3rd parties are unable to differentiate between those requests. If a submission request marked as fake is received by the server, the caller will be presented with a successful result.

The goal of the exercise is to create “white noise”, both with regard to the requests themselves and with regard to the response times (which can protect against timing attacks).
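
A minimal sketch of that idea – hypothetical names and endpoint, not the actual CWA controller – might look like this: a request marked as fake is not persisted, but it is answered just like a real one and after a comparable amount of time.

```java
import java.time.Duration;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

// Sketch only: SubmissionService and the endpoint path are assumptions.
@RestController
class FakeAwareSubmissionController {

    private final SubmissionService submissions; // hypothetical downstream service

    FakeAwareSubmissionController(SubmissionService submissions) {
        this.submissions = submissions;
    }

    @PostMapping("/diagnosis-keys")
    ResponseEntity<Void> submit(@RequestHeader("cwa-fake") int fake, @RequestBody byte[] payload) {
        long start = System.nanoTime();
        if (fake == 0) {
            submissions.store(payload);       // real submission: persist the keys
        }
        padTo(Duration.ofMillis(100), start); // fake or real: answer after a similar delay
        return ResponseEntity.ok().build();
    }

    // Sleep until roughly the target duration has passed since startNanos.
    private void padTo(Duration target, long startNanos) {
        long remainingNanos = target.toNanos() - (System.nanoTime() - startNanos);
        if (remainingNanos > 0) {
            try {
                Thread.sleep(remainingNanos / 1_000_000, (int) (remainingNanos % 1_000_000));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```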

The distribution service

The job of this service is to distribute confirmed diagnosis keys to so-called CDNs, content delivery networks. These are services specialized in delivering static content, which they usually do very quickly and very reliably, and which can often also be set up for specific regions.

This service also implements the requirement that stored diagnosis keys be deleted after 14 days.

The distribution service is ultimately a command-line program. It has to be orchestrated from the outside. After starting, old keys are deleted (the SQL query is easy to follow and implemented as a Spring Data repository), all new keys are assembled and then stored in an S3-compatible cloud store. In production this will live in the Telekom cloud hosted in Germany; in tests it is a “Zenko/CloudServer” that can be started locally in Docker. Optionally, debug / test data is generated. After that, the program terminates.

At first glance I found that decision odd; I would have chosen a permanently running service and controlled the processes with the means of the Spring Framework – if in doubt, with the corresponding @Scheduled annotations. Metrics would also accrue automatically and could be collected. A sketch of that alternative follows below.
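
A sketch of that alternative, with hypothetical collaborators (DiagnosisKeyRepository, S3Publisher) and assuming @EnableScheduling is active on a configuration class, could look roughly like this:

```java
import java.time.Duration;
import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical abstractions over the persistence layer and the object store.
interface DiagnosisKeyRepository {
    void deleteOlderThan(Duration retention);
    List<byte[]> findNewKeys();
}

interface S3Publisher {
    void publish(List<byte[]> keys);
}

// Sketch only: the same steps the CLI performs today, run inside a long-running service.
@Component
class DistributionRunner {

    private final DiagnosisKeyRepository keys;
    private final S3Publisher s3;

    DistributionRunner(DiagnosisKeyRepository keys, S3Publisher s3) {
        this.keys = keys;
        this.s3 = s3;
    }

    @Scheduled(fixedDelay = 3_600_000) // once an hour
    void distribute() {
        keys.deleteOlderThan(Duration.ofDays(14)); // retention rule: drop keys after 14 days
        s3.publish(keys.findNewKeys());            // assemble and push new diagnosis keys to the bucket
    }
}
```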

I cannot judge whether the other decision is better. In any case, it requires additional configuration on the Kubernetes target platform to trigger the service.

Test coverage looks good at first glance.

Interim conclusion on the Warn-App server

There is a lot I personally would do differently in the Corona-Warn-App server. But is it fundamentally “broken” or insecure? Certainly not. The architecture is well documented and can be understood in a Sunday afternoon. That is good: excessive complexity increases the likelihood of errors or creates room to hide things.

The server stores no data beyond what is necessary. It would be interesting to see the orchestration configuration of the whole thing for the production environment. Is the PostgreSQL database encrypted on disk? Is it replicated? How are the object stores protected?

The Corona-Warn-App verification server

The verification of a user who wants to report their illness and test result to the app happens via a TAN procedure. The data flow is explained in detail in the Solution Architecture. What matters here is that the verification server does not store any personal data. To confirm the test it uses another service, which I did not look at.

The list of dependencies of the verification server is considerably longer: from Lombok via Guava to OpenAPI UI and the Feign client for declarative integration of web services. Granted, the lab server could have been integrated differently… but fine.

The service stores – again in a PostgreSQL database – an “AppSession” as well as the received TANs.

Why does the verification server use Liquibase and the warn server Flyway for database migrations? Unclear. Presumably different teams.

Here, too, no data is stored that can be attributed to a person directly or easily; the principle of data minimisation is respected.

Conclusion

Anyone expecting a “domain-driven design” lighthouse project is in the wrong place. The services I looked at are certainly not outstanding projects that demonstrate best practices across the board. But they are solid, easily accessible, fully documented and, above all, free of artistic capers (of the “let's build our own key mechanism or our own persistence layer” kind).

The Decentralized Privacy-Preserving Proximity Tracing (DP3T) concept seems coherent to me; for now, the implemented backend services store only the data necessary to support the process explained at the beginning.

I have already raised some questions in the text. Reproducible builds and traceability of who deployed which build when still need to be added.

While I consider the Robert Koch Institute's “Datenspende” (data donation) app a disaster (both in its implementation and its timing in April, see here and here), I would probably use the country's Corona-Warn-App. As of today, my biggest concern would probably be power consumption and, secondly, the benefit: what happens if I was in contact with infected persons? The system is designed so that the server does not know who I am. It therefore relies on the common sense of those who are warned – that they go into quarantine if necessary and avoid contacts again. Nothing more. Perhaps it would make sense to send a warning together with a voucher for a free test.

In the end, it may well be doubted that there is a technical solution to the corona pandemic. Jürgen Geuter, aka tante, whom I hold in high esteem, already wrote in April about whether these apps are necessary at all. I still consider the article worth reading, beyond technology. Also very much worth reading, in English, is the following text: The long tail of contact tracing (on the societal impact of contact tracing).


31-May-20


More lessons learned

This is the fourth and final post in my series Running free… A developer's story of development. Under the title “More lessons learned” I collected several other things that have been important for me to realise and to change, or at least actively think about.

More lessons learned



There are a couple of other things I realized are important for me.

Stress Reduction / FOMO

Everybody knows about FOMO, the fear of missing out?

And also the red bubbles on your phone?



I got off Facebook a while ago. Removing Twitter was hard.



And I have to say: I couldn’t do without it.

What I have now: I turned off all notification popups, kept red bubbles only for DMs, AND moved Twitter to my second screen.

As simple as it may sound: stuff like this is a big source of stress. Be mindful about it.

The thing about sleep



Sleep is a mysterious thing, isn't it? It overwhelms you at times when you want to stay awake and leaves you alone at times when you need it the most.

Deeply embedded in our “programming culture” is this joke:

Sounds familiar?



(Obligatory Jochen Mader quote.)

Programming – If you’re not tired, you’re not doing it right.



Let that sink in for a second. Being tired comes in a couple of different forms:

  • Physically tired
  • Mentally tired
  • and tired of something

While I like the physical tiredness after a long run, I hate mental tiredness.

Also, we should not ignore the fact that working on a screen is physically tiring, too.

So a combination of being mentally and physically tired, close to exhaustion, is something that is required to do programming right?

The cute version.



More like it



The reality often looks different and this is one version of it. You are actually aching for some rest but willingly ignore it.

Closely related



Very closely related is the Ballmer peak: the idea that a certain level of being drunk gives you superpowers and turns you into a better coder.

It's scientifically proven that sleep deprivation has the same effect behind the wheel as driving drunk.

I honestly have no idea why a slight level of dizziness gets you into the zone much faster than being totally awake, but it works for me too.

There’s a thin line



As with many things: it's a thin line and sometimes a slippery slope. I spoke with a couple of friends about that topic and we agree on two things:

  • All-nighters rock
  • All-nighters suck

There are the ones in which you start hammering out code that just shines. They are amazing, and I like the result as much as you do.

And then there are the ones where you get stuck with a problem that a helping colleague could solve in an instant the next day. Or you yourself, with a clear mind. They suck: you go to bed cranky, you get out even crankier.

Depending on how often you find yourself burning with passion and sacrificing not only your spare time but also your sleep, you will have a hard time communicating with people not following your weird sleep pattern. Those people might be your partner or your colleagues. And at times, both.

And eventually, long-term and even short-term sleep deprivation will result in serious health issues.

Trust



Trust is a big word, but an important one.

Trust is a strong motivator. For two years now I have been working 100% remotely for Neo4j. Neo4j creates the database of the same name and a couple of other things. From day one I received a lot of trust: these people sent me a laptop, a screen and other equipment before I even officially started. They trust me not to slack off the whole day.

This is a strong motivator for good work. I would like to pay back that trust.

Trust is an enabler for remote work. Together with the fact that we work task-oriented and not so much clock-oriented, I have a lot of freedom. This improves my life and my ability to balance family, job and sports in an optimal way.

Trust needs to be mutual – between me and the company and also between colleagues. Trusting in a common goal and a common interest in reaching it together makes for a really good working relationship.

And for what it's worth: you spend – even when remote – at least one third of your day with colleagues. I want that to be a relationship on the same level, with mutual trust that allows the assumption of good intentions.

Recap



I told a personal story here, centered around running at first sight. I needed the focus shift that came with it to realise various things.
Most of them boil down to some simple facts: don't get your self-worth only from IT projects. You are not the projects you maintain.
You wear different hats at different times in your life. Enjoy that.

Get off the screen once in a while and move around. Of course you don't have to run a marathon. A walk is nice as well.
Try a walking meeting, for example.

If things just don't work in your current project or company: reflect on your power. Can you change things? Are you willing to change things?
If not, change places. Often there's a safety net in place.

Passion is important. But it needs fuel to burn. Choosing what to burn is always a trade-off. There's only so much of yourself you can burn.

Last but not least: Read. Here are some recommendations:

Some books



Matt Haig wrote the wonderful book “Reasons to Stay Alive”, in which he reflects on his depression, anxiety and the path he took in life. I can relate to a lot of things in it, and it has been worthwhile for me to read through his experience.

“Where There's A Will” by Emily Chappell is about long-distance cycling races at first sight, but much more about what you experience beforehand, on the road and especially afterwards. A strong book by a strong person.

Last but not least: “Why We Sleep” by Matthew Walker. All I am saying: it's frightening. In our industry people brag about all-nighters (me included), and apparently I had no clue what I was doing to myself.

Contact



Find me as @rotnroll666 on Twitter or via e-mail. Reach out if you want to discuss stuff. Of course I'm into programming and IT. Most of my topics these days are database related, both graph and relational (and as such Cypher and SQL), but with my Spring Boot hat on I'm deeply involved in the Spring and reactive Spring world.

In any case, Java is here to stay in my profile.

Take care.


06-Apr-20


Why do we overwork and burnout?

This is the third post in my series Running free… A developer's story of development. Here I will speak about my surroundings in German society and in a German company, why I overworked nevertheless, and what I learned from my behaviour and mistakes therein.

This piece was written before the COVID-19 crisis. Speaking about a safety net feels so much weirder now, and the need for one has become so much more obvious.

Why do we overwork?



I will describe the surroundings and conditions you'll find these days in Germany regarding work. Why? Because it is important to understand that self-inflicted overwork is not necessary all the time. Technically, it is not necessary.

The question “Why do we overwork?” has been raised and addressed by a lot of people.

I found one interesting paper by Lonnie Golden and Morris Altman: “Why Do People Overwork? Over-Supply of Hours of Labor, Labor Market Forces and Adaptive Preferences”

That paper addresses both externally imposed and intrinsic work hours and overemployment. The authors are experts in that field and you should probably trust their input more than mine.

Let's stick with the authors' definition of overwork as work beyond a person's own capacity that is self-sustainable in terms of physical or mental well-being, and overemployment as work beyond their initially preferred or agreed extent of commitment toward working hours.

Overwork refers specifically to the cumulative consequences of operating at overcapacity, additional hours spent at work eventually creates fatigue or stress so that the worker’s physical or mental health, well-being health or quality of life is not sustainable in the longer run.

–Lonnie Golden and Morris Altman, Why Do People Overwork

Keep in Mind: Wage vs Salary



A couple of things said on the current slides will not apply to everyone. Freelancers and contractors usually have agreed wages per hour and are potentially in a better position to control their number of working hours. Or at least: each working hour of a person with a wage-based income results in a higher income. For people with a fixed salary this is of course seldom true.

I'm unfamiliar with the situation outside Germany, but in my country we find various scenarios. At times overtime is included in the salary. Sometimes only a fixed number of hours is included and the others are paid at a given rate. Or the overtime can be redeemed as time off.

But the important thing to keep in mind here is: having direct, positive feedback for overtime (i.e. more money), or being in control of the number of hours, reduces the perceived feeling of being overworked.

This doesn't, however, change the adverse effects of overwork on

  • Personal well being
  • Family and social life
  • Risks of accidents

for example.

Directly imposed work hours and overemployment



Here are a couple of things that directly come to mind when thinking about overtime. Overtime can be explicitly or implicitly requested by the employer. In our industry that often happens at the end of a “final” sprint, before a release (well, shouldn't those big-bang releases be gone with microservices and all that anyway?) or during the infamous crunch time.

Implicit overtime is more subtle. The reasons vary from a project being understaffed to people being constantly distracted from work and trying to keep up. One reason might even be old and sluggish hardware – shops handing out old laptops with spinning disk drives just to cut costs.

I tend to be OK with the occasional explicitly demanded extra hour. On the one hand I get paid well, and on the other hand I have always had an interest in the success of the companies I worked for. If I didn't have that, I couldn't work to my full potential.

The last point on this slide is an issue, of course: a lot of people involuntarily work overtime and are overemployed and still don't make ends meet. That's probably not such a big issue in a well-paying industry like ours in a Western society, but it is a general problem.

At least in Germany, a couple of things mitigate the latter issue, and I'm going to briefly speak about them so that you get an idea of the safety net that helps you when making job-related decisions and plans in Germany. Why do I think this is important? Because it gives you freedom and room for decisions.

Safety net in Germany



Many people think of safety nets in cases of falling down… I also like the image here a lot: falling down starts by getting off the rails, and a good safety net prevents this.

So, what do we have in Germany that helps keep you on track, gives you time to find a new job and lets you actually take care of yourself and your family?

First of all, we have Arbeitslosengeld, which means unemployment benefit. This is paid for at least 6 months when you have been employed for 12 months. It is paid for 12 months when you have been employed for more than 24 months before becoming unemployed. There is a penalty of 2 months when you quit compared to when you were given notice.

The gross amount of Arbeitslosengeld is 67% of your previous average income over a given period (up to a maximum value). Most people won't be able to keep their living standard on that amount, but it gives you a long enough period to look for a new job without worrying all too much, at least in our industry and filter bubble.

After the period of unemployment benefit, people can claim Arbeitslosengeld II, also known as Hartz IV. The rules here are a lot more complicated and the actual amount one receives is much lower. The law was an attempt to bring together social welfare and unemployment help. Arbeitslosengeld II comes with strings attached: people are obliged to improve their job situation and take offered jobs, or they will face sanctions.

A thing that keeps a lot of worries at bay is of course public healthcare. I don't have to worry about calling an ambulance, and I don't need to pay bills upfront. Most of the time this works pretty decently.

Last but not least: most people here rely on the state retirement pension. Privately investing large amounts of income is still quite uncommon in Germany. There have been a lot of attempts to change that, but I'm sceptical and unsure how that will work out. Most people here don't plan on retiring before 40 or so, but work until they are eligible to receive a pension.

Family



One important aspect is the fact that Germany tries to protect families and encourages both parents to do both: family work and income-related work. That could be better to a large extent – at least compared to Scandinavia – but we're not doing that badly.

What do we have in place? From the day a pregnancy is announced at work until 4 months after birth, mothers cannot be dismissed by law. Then there is maternity protection, which basically prohibits making a mother work 6 weeks before birth and 8 weeks after birth.

For the longer-term aspect of work versus family, the general parental leave protection is more important. Parents are protected by law against dismissal and the like when they go on parental leave, up to 36 months per child and parent. And that's not all: for a maximum period of 14 months, there's Elterngeld, which is at most 65% of the parent's net income.

Why am I stating this here? While there are different circumstances that make things harder, there shouldn't be a reason why one parent – usually the dad – runs back to work ASAP. A project is definitely not more important than a kid, and often the money isn't either.

So again, why?



Assuming we are not continually required to do overtime by our bosses, don't need to work around the clock to compensate for other issues in the company, and can make ends meet without living in constant fear of losing our job:

Why do we overwork?

Possible reasons for self-inflicted overwork



I came up with a number of possible reasons, and most of them are my own.

There are some pretty solid reasons… working toward a promotion, doing some experiments, maybe spiking something. A less valid reason is doing it because everyone else does – because management sees busy people as more productive, or because of a hero culture where someone really knowledgeable always comes in and saves the day (at the price of burning out).

Some people forget what we had a couple of pages ago: we are not our code. And we are not our work. We don't need to take everything personally and grind ourselves to pieces fixing something. I hear my colleague laughing over there in Brunswick. I need a constant reminder not to take every bug personally.

And of course, there's always the thin line between being passionate about something and suffering for something. The origin of the word passion is exactly that: pain.

I'm deliberately not saying that these reasons are all in my past; I'm just the type of person prone to letting passion turn into pain.

A stoic, caring approach would help here.

Why do we I overwork?



Now let's jump into my personal reasons and the issues I'm constantly facing, and make an “I” out of this “we”.

I feel responsible, whether that's just perceived or real. That feeling is stronger the less I feel there's someone else jumping in or skilled enough to solve it.

That leads me to thinking there is no one skilled or willing enough to take care of an issue.

When faced with problems I'm trying to solve, I'm quite often too stubborn to ask for help, which of course is me implementing my own hero culture.

Apart from that, I just like the stuff I'm working on, and I'm somewhat of a perfectionist. Together, that's an easy way to lose myself in the zone and completely forget about time and my surroundings.

For me it was important to realize which of the items here needed to be addressed for me to feel better.

Things I can change



So of course the feeling of being responsible is not a bad one. It's quite important; it's a sign that you care.

But remember, being responsible and doing everything on your own are not the same, so learn to delegate.

The assumption that there is no one to delegate to, or no one else skilled enough for a problem or willing to solve it, is just wrong. Of course, if you're working completely in isolation, that is the case, but that seems quite rare. So make sure there are other people, trust them and their skills, and assume a common goal.

If you are not working directly together and are in some sort of customer / supplier relationship, try seeing an issue from their perspective. They want the issue solved as much as you do, but maybe they just don't have the knowledge to do so.

The third bullet point should be perfectly clear: just don't be stubborn – asking for help is NOT a failure.

Flashback



After my studies I started my career in a small company with fewer than 20 people, including 2 CEOs and a cleaner. I entered at a time when Oracle Forms client/server was the tool of choice in that company.

While the company was an Oracle shop – from Database to Forms to Designer and back – times were changing, Forms client/server was about to be deprecated, and I was really lucky that the CEO was very open to new things and always supported us with research and development without an immediate monetizable effect.

Fast forward 2 or 3 years, I found myself in a couple of Java courses, learning Java SE (Swing) as well as having a peek into web applications with J2EE (this is not a mistake, Jakarta EE was called that back then). We didn't go the Jakarta EE route, but landed in the Spring and Oracle APEX world – but that's a story in its own right.

Unrelated to the tech stack: many of those small shops ran their own infrastructure back then, but without a supportive IT department, just doing things on the fly. Remember, that was the pre-cloud era and you used to host everything yourself. Guess who felt responsible in the end?

The result: I somewhat aggregated all the Java (and later Spring) knowledge in the company and, later on, most of the administrative IT things.

That is nice for your ego, hopefully good for your paycheck and fun up to a certain point and amount of work, but eventually you'll be a single point of failure – for the company, but also for what you can actually manage to achieve.

Lessons learned



A system with a single point of failure is not resilient. The single point of failure is under too much pressure.

Don't be that single point of failure!

It is not enough to accumulate all the knowledge in order to grow. Growth needs room. There is no room when you're the single point of failure.

The same is true for everyone around you: when you walk around with an ego bigger than your head, nobody else can grow, make their own mistakes and learn as you did. Good people will either leave or end up frustrated.

When you work together, assume a common goal. Of course there's always personal interest involved, but that is inevitable. Would working together without trust in common goals be reasonable at all?

The Things I won’t change



I still like the stuff I work on, even more these days. Also, losing oneself in an immersive task is just great.

Caring about something and doing it with 100% is both a strength and, when overdone, a flaw. It allows you to be good at something, but it can also burn away all your strength.

It’s all about the dosage and avoiding the pain of too much passion:

Passion needs fuel to burn



There is a fantastic blog post titled Passion and burnout by Codecentric's Nandor Gyerman. Nandor starts with the suffering aspect as well and takes it a step further: passion is an intense feeling of fire, an act of self-immolation.

The fire needs fuel to burn. That fire can be sustainable or not, depending on what you burn on the altar of passion: time, energy, common sense, money, habits, sleep or, finally, sanity?

Enjoy stoic virtue more than uncontrollable passion



My approach here is – and by any measure, I still fail at this quite often – to enjoy the things I do with a stoic virtue rather than a burning passion that will eventually devour me.

Care about things, but don't forget to care about yourself. Otherwise passion will eventually burn you out.

Remember the key thing about life on earth is change.



“What about your previous shop?”

By the end of 2014 I was a general manager (“Prokurist”, to be precise), and it turned out it just wasn't my role. There are a lot of important takeaways:

  • Making someone a manager just by decree doesn't work; it needs preparation
  • Being a manager, on the other hand, requires effort and work and doesn't work out of the box – just like software development
  • If you are not intrinsically motivated, things don't work out

I found myself in constant overload: still trying to be a good developer and at the same time struggling to be a manager. I learned the hard way that authority given by decree is worth nothing; it doesn't work that way.

Even when a role changes significantly through an outside mechanism, your position in the company among colleagues won't change with the same momentum.

So in 2017 we parted ways, and it was good to do so.

I could try out a couple of things afterwards, and I wasn't too afraid of change anymore.

My old boss and I still speak regularly and I'm happy. They are doing as well as I am. I can see first-hand how things change for the better when there is enough room for everyone.

One thing I took with me from those years: having a mentor at a company who trusts in you and supports you is invaluable. It opens doors to opportunities, learning and so much more.

If you happen to look for something new, this would be one aspect that would be important for me.

Continue with the fourth and final part: More lessons learned.


30-Mar-20


Miles are my meditation

This is the second post in my series Running free… A developer's story of development. In this part I will focus on what actually helped me defocus my head from spiralling around work-related issues and problems.

Cognitive therapy – Miles are my meditation



In 2017 I was writing my second book, running the EuregJUG was a great success, and on the outside I was as successful as it gets in my company – but I was feeling worse every day.

I tried a couple of things that I thought would be expected of a successful mid-thirties guy (aka a dude with a full-blown midlife crisis) to define myself better:

  • Went to fancy barbershops and got myself expensive haircuts
  • And an expensive watch
  • Drank more than ever
  • Tried to be what I thought was manly

Result: I looked stupid, spent too much money, got fat and sat in front of a computer even more. I needed a big shift of focus in my life, something like cognitive therapy.

I was always an avid cyclist and still managed to ride a bike nearly every day. That was, however, also work-related, as I commuted by bike. Not much of a focus shift.

JCrete 2017



Sometimes all it needs is a good conversation and people being role models. I had a couple of those at JCrete 2017, especially with Felix and Heinz. I'm mentioning this here for two reasons:

  • The conversations people have are important and often have an effect.
  • If possible, go visit an unconference. JCrete is one of the most famous, but a couple of
    more have appeared in the last years, such as
    JAlba, JWild or JSpirit.

These unconferences offer a marketplace-like proposal and selection of topics. Many of them are pretty hardcore technology-wise, but there are also topics such as the ones presented here.

Running – Is your bike broken



I always said I would start running on the day all my bikes were broken.

I actually tried a couple of times to run more than a kilometer; I usually hit something like 3k and always ended at the point where everything hurt, and while I'm usually quite stubborn, I couldn't convince myself to go further.

The standard solution until then was trying to motivate myself by buying more gear, but that was never sustainable long term.

One doesn’t need much



The nice thing about running is: you can get pretty far with a decent pair of running shoes. It's a good idea to go into a shop and get some guidance in selecting a pair. I needed one that gave me a bit of balance but not that much cushioning.

I have a couple of running shirts, but depending on the length of your run, I don't think they matter that much. I tend to sweat a lot, so I'm more for the lightweight sports gear that transports humidity away from the body. A fitness tracker is nice, but not required. I would even say a heart rate monitor is not necessary. Usually you will notice when you are overpacing – but alas, I'm not a doctor of medicine.

Anyway, this time I stuck with my old pair of running shoes and set a couple of other goals:

SMART goals



Goals are important.

I really think that we all have a ton of intrinsic motivation in us. To access it, we need goals and an environment that allows us to pursue those goals. Goals in the context of this talk are of course not only sports goals, but also professional ones. Those can be: learning a new language (both programming and spoken languages), solving the tasks of your job in an optimal way, or making a relevant step in your career.

There are a lot of silly acronyms in project management and methodology, but here's one I really like: SMART goals. Those are goals that are:

  • Specific
  • Measurable
  • Achievable
  • Realistic
  • Timely

Let's see: I wanted to be able to run 10k by the end of 2017 and get my weight down to 80kg again. That already makes two specific and measurable goals. Are they achievable? Of course. A healthy person my age should be able to run 10k in a reasonable amount of time, and 80kg is pretty much my optimal weight in terms of body mass index. Both goals are realistic (in contrast to, let's say, being able to compete against world-class athletes). And finally, it was the summer of 2017; six months would be long enough – and short enough – to achieve those goals.

Did I reach them?

There’s a nice run at the end of each year in my place. This is where I wanted to see if I reached my goal.

That was 2017:



10k in about 50 minutes. Not bad at all. I'll spare you the view of my scale, but I reached the weight goal, too.

Fast forward to 2019:



Where to find motivation?



I already gave that answer: miles are my meditation. Doing long-distance things has a calming and relaxing effect. Runner's or cyclist's high is a thing.

It's like turning off the repetitive thoughts in your head that circle around issues, anxiety and problems.

It forces you to focus on your breathing, your body, yourself. On the next step, hill or mile in front of you. Not on some abstract thing in the future.

Competition?



I like doing races because they make me stretch. It's not that I'm trying to reach a certain placement, but running with a lot of folks and a ticking timer actually increases your pace by a whole magnitude.

Also: it's great to run in places that are otherwise reserved for cars and the like.

Fun fact: I never considered participating in a road cycling race. I find group rides with more than 5 or 6 people mentally straining enough already. In a peloton you're usually super close to each other and you really have to be aware. That kind of defeats the purpose of switching one's head off for a while.

What about Medals? 🏅



Remember what I said at the beginning about outside appreciation: medals, physical and virtual, are also that. So enjoy them, but they don't matter anyway (at least when you're not a professional racer, I guess).

In June 2018 I joined Strava…



and things escalated a bit. I never thought that a platform like Strava would change my life that much.

If it’s not on Strava, it didn’t happen

I smiled about that joke at first, but then gamification kicked in. See 2018, 2019 and now 2020:



In 2019, I went totally bonkers… Here again, a goal: doing a Strava “Gran Fondo” – i.e. a 100k cycling tour – each month and taking pictures. That was a fun thing to do… I even created a small book from it and donated the revenue.

Nice memories



I have been thinking about these now for some time.

I have been discussing Strava and gamification with my wife and also the kids. The kids love these physical medals and yes – even though I said earlier not to rely too much on outside appreciation – kids being proud of their parents is a hell of a good appreciation.

And actually, having some real tokens fits the experience of doing a half or a full marathon a lot better.

Being outside



The biggest motivation for me, however, is being outside. It doesn't matter whether on a bike or running. While I took all three pictures here near my home during pretty good weather, I ran and cycled through the last two years in

  • rain
  • more rain
  • storm
  • snow
  • and everything in between

I found it much easier to go out running in bad conditions; the effort required to clean messed-up gear is just smaller. Cycling usually means more inertia, as you need to wrap yourself in plastic and, most of the time, wash the bike afterwards.

I used to listen to music during the first couple of weeks of running, but eventually stopped. I don't listen to podcasts. I try not to think about anything.

I cannot stress this enough: the brain also needs room to wander and ruminate.

Zwift and indoor sports in general never clicked with me. They allow a lot of people to make the most of their time, and I get this. If I did indoor sports, I would miss that bit of letting my brain go – of course I would watch talks or TV shows instead.

How did the running influence me?



Feeling stronger and healthier

As plain as it is: I feel stronger and healthier.

More relaxed

Also, as simple as that: working out, feeling yourself, your body and the surroundings, and a hot shower afterwards does wonders – much more than a lonesome leisure beer does.

For me, it worked wonders in avoiding depersonalization.

More resilient

It made me more resilient: I needed to overcome an initial pain point.

The longer distances taught me that pace control is important. I can rush all I want in the beginning; if I cannot reach my goal, it's in vain.

Let me tell you this story: I ran my first marathon in April 2019, started with an achievable goal time and tried to run at a constant pace to reach it. For the second one, in October of the same year, I was like “go all in”. That worked well enough for the first half of the thing and ended with me more or less crawling across the finish line. In the end, I was only a minute faster than on the first one, but at the same time I wore myself down.

Anyway, it's the same with work: make sure you're in a place where you can find your pace. A good place will give you the time for that.

Positive feedback

The realization that I'm actually good at something I always disliked is eye-opening.

If it works with sports, it probably works with other things, too. Things that might seem out of your comfort zone, too.

Like being more open at work, accepting help, accepting challenges that involve more than superficial reflection, and so on.

Focus shift

Let’s focus on that one.

I had enough time to think while running and realized a couple of things:

  • I was more than overworked
  • I felt depersonalized, and while I was angry at a lot of things, actual success and good projects at work left me cold
  • Lots of stuff was very robotic and tiresome at the same time
  • I took the anger home with me
  • While being stuck in it, I wasn't able to articulate that

So let's raise the question: how did I let it come to this, even in surroundings where it was not technically necessary? Why do we overwork?

Continue with part 3: Why do we overwork and burnout?


23-Mar-20