Nowadays every community needs a chat of some sort. And by nowadays I mean: it has actually always been like that. Some decades ago we used IRC. But things have changed: solutions need to be more accessible.

The context is this: we have an online forum community where members register and participate. The member accounts live in the forum software. We want members to be able to join the chat with no more fuss than necessary.

Here is what we are going to use:

  • invisioncommunity - a well-known board software to host online forums.
    • its OAuth API to connect the chat server, allowing users to log in with the same account
  • matrix chat, specifically:
    • synapse as the server (it's the Matrix reference homeserver)
    • element (previously called "riot") as the web client

To set up the OAuth connection, the invisioncommunity, synapse and element services need to be accessible via https. I can recommend using letsencrypt certificates, but this guide does not explain that part; the resulting apache config is shown below.

These configuration examples assume the following domains:

  • the forum reachable at
  • the matrix server at
  • the element web chat at

The tools allow you to host your own community without depending on other services or cloud solutions - but still using modern solutions.

Step 1: add an OAuth Application in invisioncommunity

Log in to the AdminCP and add an OAuth application. Remember the client identifier and the client secret. The client secret cannot be shown again - but it can be regenerated.

  • Client Type: Custom Confidential OAuth Client
  • Available Grant Types: Authorization Code
  • Require PKCE for Authorization Code grant?: Not required
  • Redirection URIs:
  • Authorization Prompt: Never
    this will allow your invisioncommunity members to just open the element chat, get redirected a few times, and then already be connected and online in the chat.
  • Allow users to choose scopes? off
  • Show in Account Settings? on
  • Access Tokens: leave the defaults
  • Scopes: profile and email. leave the defaults
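
If you are unsure what to enter as Redirection URI: Synapse handles the OIDC callback under the path /_synapse/client/oidc/callback, so the value has the following shape (matrix.example.org is a placeholder for your matrix domain):

```
https://matrix.example.org/_synapse/client/oidc/callback
```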

Step 2: configure the OIDC provider

The official documentation contains more examples.

oidc_providers:
  - idp_id: yourhostname
    idp_name: " Login"
    discover: false
    issuer: ""
    client_id: "changeme"
    client_secret: "secret_changeme_aswell"
    scopes: ["email", "profile"]
    authorization_endpoint: ""
    token_endpoint: ""
    userinfo_endpoint: ""
    user_mapping_provider:
      config:
        subject_claim: "name"
        localpart_template: "{{ }}"
        display_name_template: "{{ }}"
        email_template: "{{ }}"

Use the client_id and client_secret from step 1. Make sure the URLs you use are all correct. The authorization_, token_ and userinfo_ endpoints are specific to the invisioncommunity software.
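
As a hypothetical illustration - forum.example.org stands in for your forum's domain, and you should verify the paths against your own Invision Community installation - the three endpoints typically look like this:

```yaml
authorization_endpoint: "https://forum.example.org/oauth/authorize/"
token_endpoint: "https://forum.example.org/oauth/token/"
userinfo_endpoint: "https://forum.example.org/api/core/me"
```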

The user_mapping_provider configures the chat so the forum username is used as name in the chat.
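
To illustrate what this mapping does - this is not Synapse's actual implementation, just a rough Python stand-in with made-up names and values:

```python
# Rough illustration of what Synapse's user_mapping_provider does:
# it renders the *_template values against the OIDC userinfo response
# to build the Matrix ID and profile. All values here are made up.

def map_userinfo(userinfo: dict, server_name: str) -> dict:
    """Build a Matrix ID and profile from an OIDC userinfo response."""
    # Stand-in for localpart_template: lowercase the name, no spaces.
    localpart = userinfo["name"].lower().replace(" ", ".")
    return {
        "user_id": f"@{localpart}:{server_name}",
        "display_name": userinfo["name"],  # stand-in for display_name_template
        "email": userinfo.get("email"),    # stand-in for email_template
    }

print(map_userinfo({"name": "Jane Doe", "email": "jane@example.org"},
                   "matrix.example.org")["user_id"])
# prints @jane.doe:matrix.example.org
```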

After you change the homeserver.yaml you need to restart the service.
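
With the docker-compose setup shown further down in this guide (assuming your synapse service is named "synapse", as it is there), that restart would be:

```
docker-compose restart synapse
docker-compose logs -f synapse
```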

Step 3: configure the Element web chat for your community

Our goal here is to offer the forum users a simple way to use the chat - not to host a chat solution for anyone interested. Users are still managed in the invisioncommunity software, so in this configuration we will disable registration and some other options.

{
    "default_server_config": {
        "m.homeserver": {
            "base_url": "",
            "server_name": "matrix.yourhostname"
        }
    },
    "sso_immediate_redirect": true,
    "disable_custom_urls": true,
    "disable_guests": true,
    "disable_login_language_selector": true,
    "disable_3pid_login": true,
    "brand": "",
    "defaultCountryCode": "DE",
    "showLabsSettings": false,
    "features": {
        "feature_new_spinner": "labs",
        "feature_pinning": "labs",
        "feature_custom_status": "labs",
        "feature_custom_tags": "labs",
        "feature_state_counters": "labs"
    },
    "default_federate": false,
    "default_theme": "light",
    "welcomeUserId": "",
    "enable_presence_by_hs_url": {
        "": false,
        "": false,
        "": true
    },
    "settingDefaults": {
        "breadcrumbs": true,
        "UIFeature.shareSocial": false,
        "UIFeature.registration": false,
        "UIFeature.passwordReset": false,
        "UIFeature.deactivate": false,
        "UIFeature.thirdPartyId": false
    },
    "jitsi": {
        "preferredDomain": ""
    }
}

The default welcome screen is disabled, so users will not need to re-login or create an account. Users can't change their password or similar settings, as all of that still happens in the invisioncommunity.

There you go! Chat is on! (thumbs up)

Configuration files

Docker configuration

Docker-compose file:

version: '3'

services:
  element:
    image: vectorim/element-web:v1.9.0
    restart: unless-stopped
    volumes:
      - ./element/config.json:/app/config.json
    depends_on:
      - synapse
    ports:
      - 28009:80

  synapse:
    image: matrixdotorg/synapse:v1.43.0
    restart: unless-stopped
    environment:
      - UID=1006
      - GID=1020
    volumes:
      - ./data:/data
    depends_on:
      - db
    ports:
      - 28008:8008

  db:
    image: postgres:14.0
    environment:
      - POSTGRES_USER=synapse
      - POSTGRES_PASSWORD=synapseDBpassword
      - POSTGRES_INITDB_ARGS=--encoding=UTF-8 --lc-collate=C --lc-ctype=C
    volumes:
      - ./pgdata:/var/lib/postgresql/data


  • UID and GID are the user and group id of your linux user and group.
  • adjust the synapse_server_name.
  • make sure the exposed ports match the ones in the apache configuration.
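
If you are setting synapse up from scratch, its Docker image can generate an initial homeserver.yaml for you (matrix.example.org is a placeholder for your matrix domain; the generate command comes from the matrixdotorg/synapse image documentation):

```
docker run -it --rm -v "$(pwd)/data:/data" \
    -e SYNAPSE_SERVER_NAME=matrix.example.org \
    -e SYNAPSE_REPORT_STATS=no \
    matrixdotorg/synapse:v1.43.0 generate

docker-compose up -d
```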

Apache2 configuration

Site configuration. Requires the SSL, Headers and Proxy modules.

<IfModule mod_ssl.c>
<VirtualHost *:443>
	SSLEngine on

	ErrorLog /var/log/apache2/matrix.hostname.com_error.log
	TransferLog /var/log/apache2/matrix.hostname.com_access.log

	RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}

	AllowEncodedSlashes NoDecode
	ProxyPreserveHost on
	ProxyPass /_matrix nocanon
	ProxyPassReverse /_matrix
	ProxyPass /_synapse/client nocanon
	ProxyPassReverse /_synapse/client

	Include /etc/letsencrypt/options-ssl-apache.conf
	SSLCertificateFile /etc/letsencrypt/live/0002/fullchain.pem
	SSLCertificateKeyFile /etc/letsencrypt/live/0002/privkey.pem
</VirtualHost>

<VirtualHost *:443>

        ErrorLog /var/log/apache2/riot.hostname.com_error.log
        TransferLog /var/log/apache2/riot.hostname.com_access.log

        RequestHeader append "X-Frame-Options" "SAMEORIGIN"
        RequestHeader append "X-Content-Type-Options" "nosniff"
        RequestHeader append "X-XSS-Protection" "1; mode=block"
        RequestHeader append "Content-Security-Policy" "frame-ancestors 'none'"

	ProxyPreserveHost On
	ProxyPass /
	ProxyPassReverse /

        Include /etc/letsencrypt/options-ssl-apache.conf
        SSLCertificateFile /etc/letsencrypt/live/0002/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/0002/privkey.pem
</VirtualHost>
</IfModule>


If you end up in a redirect loop on the matrix server, make sure the "ProxyPreserveHost on" statement is present.

In late January 2021 I noticed some error messages in OneNote on Android. Synchronization issues. Usually nothing tragic. I moved over to my notebook, clicked the "synchronize" button (dealing with errors on big screens usually works better for me) aaaaand the notebook was gone. Reset to empty. All notes gone.

That was a bit concerning. But no reason to panic just yet. Booted up the PC. Opened OneNote. Saw my notes. For a second. It then also synchronized to an empty notebook.

So all copies gone. More than 10 years of notes. Every conference, every meetup, every course I visited I documented in OneNote. The travels I was planning to do. Cooking recipes. A lot of stuff.

Time to panic, as I was unable to find any trace of my old notebooks (only in the errors on Android were these old notebooks mentioned, but unavailable).

After looking through every computer and folder and finding nothing, I decided to contact the OneNote support, as OneNote is part of my Office365 subscription. A bit hard to reach them? "Call me back" expects you to live in the United States or Germany. Okay. So chat support it is! Went through the bot questions. Poor thing could not help me. Talking to a human. Nice invention. They could not really do much either, but suggested contacting OneDrive support - as I synced all notebooks, they should be there somewhere.

OneDrive support bot it is then. Went through the bot questions. Could not really do much. Do I want to email a human? Nice invention. Wrote a lengthy email about my vanished notebooks. After two weeks of back and forth the summary is this: they are gone.

I looked at the logs for your account, specifically for the file you mentioned. Our logs indicate that this item was deleted by a person via the OneDrive website on December 13th, 2020. Once the items were deleted, they went into the recycle bin. Since the logs show that the items were not restored from the recycle bin within 30 days, the folders and their contents were permanently deleted. This means we cannot restore the folders or their contents.

It seems I deleted the notebooks on December 13th. As I only noticed they were missing in late January, the recycle bin had already been emptied - which is done automatically after 30 days. No more restore.

I can't explain how this happened. Or what happened at all. I'm willing to accept that it was me messing something up. But I never had issues with OneNote or OneDrive. So time for forensics.

I do backups. Lucky me. Step one are copies on my Synology NAS. I synchronize both OneDrive there and store the backups there too. No trace of those OneNote files. Quite weird. Level two is a copy of these files on AWS. I keep backups for around two years. 750TB of stuff. Mainly Windows crap. No OneNote files either...

I am getting mad! So, where do we stand and what happened?

OneNote does store local copies - in C:\Users\wemu\AppData\Local\Microsoft\OneNote. Not in AppData\Roaming, like most other tools I use. No, in Local. That folder is not part of the backups. Of course it's not.

OneNote itself at some point put only references to my notebooks into my OneDrive. In the Documents folder there are some .url files that represent these notebooks. I usually don't use that Documents folder. It does look like a Toolbar folder from Internet Explorer. Maybe that's what happened... me randomly deleting folders that look unimportant.

But where are all the "*.one" files that contain my stuff?

As it turns out, there are two OneNotes: OneNote and OneNote for Windows 10. One comes with Office 365, one is installed via the Microsoft Store.

In OneNote you can turn on a feature to create daily backups. It's not turned on by default - at least it was not for me. Since I prefer the navigation in OneNote for Windows 10, I wasn't using OneNote much. And it has to run every now and then to create those backup files.

Then there is OneNote fancy-ness itself: you can create a new notebook directly in OneDrive. This will create one of those .url files in your OneDrive folder. If you create a zip file of your whole OneDrive folder, your notebook will not be part of it - only that reference will be. Dear lord, this is so stupid.

If you create a notebook "on your PC" you can still put it in some OneDrive folder, but now a .one file will be created.

Management Summary

  • OneNote for Windows 10 is nice to work with, but does not give you any control over the .one files, should you want to have them backed up. Only if you are very careful when creating notebooks.
  • OneDrive as part of the Office 365 subscription empties the recycle bin after 30 days. No settings available. You may have 1TB of storage and still 900GB available. But it's really important to have that emptied at this pace...
    It may be better to create your own "Recycle Bin" folder and move stuff in there... this 30 day policy is stupid.
  • OneNote (from the Office 365 setup) gives more control over backups and files, but has to be used to do that. At least this way the notebooks will be part of your backups.
  • Creating a notebook directly on OneDrive only creates a .url file. The notebook is now owned by the cloud. It's not yours anymore.
  • Creating a notebook on your local PC (even inside the OneDrive folder) creates .one files that can be backed up.

I don't understand this mess.


Backups are good. The thing is that Microsoft is a bit messy with what needs to be part of them, and even a disk image would not have helped here! Because the cloud is your friend! Until it isn't anymore.

The only safe way seems to be to make sure you actually see those .one files after you created a new notebook. Or use OneNote and turn on the backup feature.

Is this on by default? No. Good defaults are what is needed here!

After some weeks of sadness I gave up. I tried a few "undelete" tools on all the PCs that had the notebooks. None found anything OneNote related. 

I did not use these notes every day; they were taken more for myself. Taking them might be my way of remembering things. So it's not a catastrophic disaster. I mourn more the notes that I took for future things.

Whether I will continue using OneNote is undecided. I want sane defaults that work in my favour, and those are not in place.

After almost 7 years of running around with a MacBook Pro / Late 2013 model, I decided to go back to Windows. Or at least try...


  • August: purchased a 2020 Dell XPS 15 9500
  • Touchpad-Wobble. No replacement parts. Re-ordered after 17 workdays.
  • late October: Replacement XPS arrived
  • Touchpad-Wobble. Palm Rest was replaced two times - issue resolved.
  • January: Noted a chassis-flex causing accidental mouse-clicks
  • Palm-Rest replaced. Then Palm-Rest and mainboard replaced. Issue not resolved.
  • Sent to Dell UK. Issue could not be resolved.
  • February: Dell kept the Notebook and refunded me all the money.

Story Time

The main reason to go Mac in 2013 was a friend and his recommendation. Still grateful for that. And no other real option was available for what I was looking for: weight less than 2kg, long battery life, a good looking device - let's be honest here good looking was number one. Good build quality. Performance. Nothing else came even close. And I do work a lot in terminals - made Mac a good choice.

I never regretted going MacBook Pro - the build quality is fantastic. The MagSafe charger too. Very good screen. Runs quietly. The touchpad is simply amazing. A lot of good things.

I decided to replace the MacBook for several reasons. The main one: the battery had degraded and macOS started showing a warning. The tool coconutBattery showed a remaining capacity of 78%. Everything was still working fine. But every other computer I use runs Windows, and the switching around became cumbersome.

And Windows has evolved. The very good Git Bash, the new Microsoft Terminal, WSL2 (the Windows Subsystem for Linux) - great things.

On the Apple side, some of the MacBook Pro features have been removed: no more MagSafe. And I don't like the Touch Bar.

If I am to believe my Google search history (yes, I have that turned on), my first search for "dell xps 15" was in 2016. But I was looking for a device with a similarly good touchpad, weight, screen, battery life and good looks. It got more serious in 2018, when I had a lot of struggle with MacOS upgrades: I had to reset the MacBook twice within a year because of weird root certificate issues. Since Dell now offers a good sized touchpad with Microsoft Precision drivers, I thought that's my exit strategy.

The Pain of Ordering

The reviews by Dave Lee, Hardware Canucks and LTT, as well as The Everyday Dad (more of the "Mac" perspective), all sounded very good. They all mentioned touchpad issues where there is a wobble, but claimed Dell would replace those parts, and that as of June it should not be an issue anymore.

In July 2020 I ordered a Dell XPS 15 9500. It arrived mid-august 2020. A nice box, well packaged, a very good experience.

The touchpad had that wobble issue. So I called Dell Premium Support the same day. The support employee was very nice and good to talk to (after the usual 2-5 phone redirects...); a technician would show up the next day with a replacement part (the whole palm rest needs to be replaced). Sounded good. Ticket closed (?!).

The next morning I received an email that no replacement parts were available and I would be contacted again. Ten days later I called again. No progress; Dell support told me there is a wait time of 15 business days until something else can be done. Ticket closed (?!). Waited some more. Called again seventeen days after the delivery. This time a new notebook was ordered for me, with a delivery date planned for early October 2020. Not ideal. But well... Ticket closed (?!).

The delivery date was then moved back again into November. The notebook with the defect was picked up though - I had to do nothing there, that's a plus.

In the meantime I decided to order an iFixit kit and replace the MacBook battery myself following the very good guide they provide. In 55 easy steps - ok, a bit messy, because batteries really need to be glued into a chassis, right, Apple?! - I replaced the battery, and in another 55 easy steps put it back together. Took longer than expected: but it worked! Back to 100% capacity.

The replacement XPS arrived at the end of October. Nice packaging, good experience. Same touchpad issue on the replacement device! Dell Support again. Another good call - after 3-5 redirects. Technician for tomorrow. Deja vu. BUT: he actually showed up the next morning. Cool. Replaced the palm rest. The issue was STILL there! I asked him if I'm nuts, but he confirmed the wobble, and that it's broken. Weirdly, the old part no longer had that wobble once out of the notebook; only when screwed into the laptop did it appear. He also stated he's replacing this part a lot, more in the XPS series than the Precision series. Well. Disappointing quality control - but ok, at least they replace the part and don't force you into discussions. New ticket.

New replacement part! He showed up again the next day. Cool. Another palm rest replacement. This time: SUCCESS! Finally. So the notebook I ordered at the end of July 2020 was usable at the end of October 2020. My very positive unboxing mood: gone by now. GONE! This is an expensive machine. The joy of buying one: GONE. That's one working palm rest out of four.

In January 2021 I noticed a weird effect: lifting the notebook with one hand on one side caused accidental mouse clicks due to some weird chassis flex (see the Dell Community). The palm rest was replaced twice, and the mainboard was replaced once. That did not resolve the issue. I sent the notebook to Dell UK; the issue could not be fixed, so Dell refunded me the money. After this track record of repairs that was probably the best solution.


  • Are you satisfied? No.
  • Can I recommend the XPS? No.
  • Regrets? Maybe.
    Had I tried the battery replacement earlier, I might not have ordered one and instead waited until AMD Ryzen mobile CPUs were more widely available.
    But that operation was a bit risky: you have to "rip out" the old battery, which could have left me with no notebook at all. Didn't want to risk that.
  • I miss a USB-A port.
  • I miss the HDMI output.
  • Dell Support: despite the premium support purchased with the notebook, I can only describe the process as very cumbersome.

Comparing the machines

The touchpad

MacBook: It's just perfect. I like it a lot. Never used a mouse.

Dell XPS 15: The touchpad is very nice, but it is not as good as the Mac one (from 2013!). Even after 7 years, the Windows world has not been able to catch up. I had to change some settings in the registry editor (see ). The feeling is still "weird": when scrolling faster it's not really keeping up. Might need some more messing around. And the general sensitivity is not as good - sometimes touches (to click something) are not recognized. I never ever had that on the MacBook. Never needed to tweak anything. It is just perfect the way it comes. So it's possible. Hear that? Microsoft? Dell? Both? Anyone?

Physical size

MacBook: A bit bigger, but nothing as a big plus or minus.

Dell XPS 15: The XPS is a bit smaller - mainly in width. It has a good format.

Weight

MacBook: Around 2010g.

Dell XPS 15: The XPS feels quite a bit heavier, but it's around 2055g. From lifting it up I would have said it must be more.

Screen

MacBook: Not too bright, but very nice to look at. Glass. No touch. Good scaling in MacOS. After 7 years some sort of coating on the display is gone. Some sort of stain? It's not noticeable when the screen is on, but clearly visible when off.

Dell XPS 15: The screen has some good and some ugly sides. The color and brightness: fantastic. Touch: I like it. I use it a lot for scrolling and closing windows. I think it's a handy feature. Hear that, Apple? Probably not. The automatic brightness changes in steps that I notice. Irritating. Had to turn it off (settings, not registry). And one thing I don't understand: Windows on high-res displays does not look as eye-friendly as MacOS does. Can't tell what it is. It's all sharp and color accurate. But the font scaling still shows ugly dialogs and relative sizes that have not been fully "resolved". Also, the brightness can only be adjusted down to some limit and not turned off entirely. I sometimes do that when I need the device to do something overnight.

Build quality

MacBook: Looks very good even after 7 years. The lid is a bit "tilted" from carrying it around. And from dropping it once. Nothing serious. As good as new. Robust machine. The lid closes with no gaps...

Dell XPS 15: It is very good looking, but there are "gaps". There is a rubber lip around the screen - as on the MacBook - to prevent dust from entering when it's closed. But the lid does not close well enough: there is a gap. The MacBook does not have that.

Some small things I noted as well - and one better does not "note such things":

  • There are two little holes on top of the lid. I thought I had already broken something. Maybe microphones? Was just irritated.
  • There is a white light in front that indicates charging. It is not fully illuminated.
    Update: I was able to fix this myself. Found a hint that maybe only a wire was blocking the LED - which it was. See this Dell Support Topic.
  • The lid opens with one hand and has a perfect resistance. My MacBook requires two hands.

Speaker

MacBook: No complaints.

Dell XPS 15: The Dell speakers sound better than the 7 year old MacBook speakers. The headphone output just produced static noise until the drivers (700MB) were re-installed.

Keyboard

MacBook: Got used to it. Hard to tell (smile).

Dell XPS 15: Feels better than the 7 year old MacBook's, and it has a fingerprint reader. The arrow keys on the MacBook are somehow better to use though - less accidental hitting of the shift button. Also, the power button could be a bit further away from the other keys. There is no FN + disable-microphone shortcut? But neither has the Mac.

"Instant on"

MacBook: Open the lid to log in: just a slight delay - sometimes the keyboard is not quite ready when the screen is. MacOS: just nice.

Dell XPS 15: Windows: it differs. When not asleep it's very close. After some time it's maybe 10 seconds. Not as good, but acceptable. I hope this gets a bit better with time. No expectations, just hope.

Fans and heat

MacBook: Mostly quiet. When doing something heavy for a while: well, there are no miracles. But YouTube and Chrome: no fans kick in. The MacBook is certainly on the warmer side. I will not say it's an issue, just that if I could choose I would rather not have warm fingers. The air intakes are on the side. That's perfect for my couch position.

Dell XPS 15: The XPS is a bit more noisy than the MacBook. There are battery profiles to configure it to your preferences. A "migrate from Mac" profile might be good... because... why again do I configure something I never did on MacOS? But it is also a rather quiet machine. Not that quiet - then again it has 64GB of RAM compared to the 16GB in my MacBook, and comes with double the CPU cores/threads and double the storage. The air intakes are on the side and on the bottom. That feels not so perfect for my couch position (not sure if it's an issue). Yet: when using Google Chrome the fans tend to kick in, where on the MacBook I don't have that.

Battery life

MacBook: Still impressive. And after my battery swap: even more so. I use caffeine to keep the MacBook just "on". When left alone with only the screen running, it runs for hours and hours.

Dell XPS 15: Battery life felt bad at the beginning, but it holds up quite nicely. I did not measure anything, as I don't know how to compare both. But I have no complaints. At least the charging technology has improved in the last few years. The charger that Dell includes is also compact and nice - not a random ugly one.

Running Linux

MacBook: Tried it. You can't run both the internal and the dedicated GPU. Well, you can with some messing around. But: no, not going there. This is embarrassing. The notebook runs very hot and has very bad battery life. You're obviously not supposed to do this. It's not your device alone.

Dell XPS 15: Linux and battery life: open topic. But I meet more and more people doing this.

Ports

MacBook: MagSafe, USB-A, HDMI - but obviously no USB-C.

Dell XPS 15: No USB-A (for all the Logitech presenters, USB tokens and other devices I have) and no HDMI output. USB-C is nice for charging and docking stations, but very picky with chargers. A dongle for both USB-A and HDMI is included.


Currently: back on my late 2013 MacBook. Congrats Dell. Well done.

Given the issues I found within hours or days, and given that a replacement machine had the same issues, I feel the reviews I saw on YouTube were all a bit "lazy"?

Wouldn’t it be amazing if you could deliver the right software to your customers - software they can understand and follow?

Agile software development has gained a lot of traction in recent years. Yet teams still struggle with freedom and responsibility. Behavior-driven development, also referred to as BDD, enables everyone involved in the project to easily engage with the product development cycle. It helps to stay on the path to the right decisions - to not only build the software right, but also to build the right software.

In BDD, users, testers and developers write test cases together in simple text files. These test cases are scenarios: examples of how the software is supposed to behave. The shared scenarios help team members understand what is going on. They are used over a long period of the cycle, starting with the specification, helping during implementation and design, and they can even feed feature completion reports.
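
To make this tangible, such a scenario file could look like the following sketch (Gherkin syntax, as used by tools like Cucumber; the shop domain and all values are made up for illustration):

```gherkin
Feature: Shopping cart discounts

  Scenario: Loyal customers get free shipping
    Given a customer with 5 completed orders
    And a shopping cart worth 40 EUR
    When the customer checks out
    Then the shipping costs are 0 EUR
```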

BDD is a software development approach that brings users, testers, and developers together to create test cases in a simple text language. It goes beyond specification files and just another test utility: it brings methods to improve transparency and traceability in an agile team.

Do you want to learn how to deliver better software to your customers? 

Check out the workshop for further details.

Culture as Code

Welcome to culture as code! Now: obviously, this is a lie - culture is about social behaviour, about norms found among humans. Culture is more than what happens at your workplace. It is found in music, art, religion. You can't possibly put that into code. And you're right, you can't.

But hear me out. 

Let me tell a story showing that code can indeed influence human behaviour - and help to improve your team's culture. And that adding the human to the mix is what makes this work.

Suspect a required movement

The situation in software development is often diverse. It all started with good intentions, enthusiasm, even huge ambitions. Then it all drifted into some arrangement of progress and staying alive. A drift that companies often try to counter by organizing work into departments and increasing delivery cycles. And people get used to those cycles. Rely on them, maybe even demand them.

A culture driven by processes and rules emerges. Not too bad, not great either.

So the cycle continues: That application needs to be done in a couple of months?! We will do testing when the software is more complete. Does the CI server show a broken build? We can fix that once we need it. It is more important right now to be able to work locally. Do the integration tests do whatever they want? We can take care of those once all services are integrated.

This mental model of postponing important things in favour of urgent work will only change when the culture changes! We need to change our culture!! This is what developers are born for. This is what developers are given time for!!! Culture changes!!! Psychological work, social competence. Well, maybe not every developer is up for the task. Still, this situation needs to improve. 

Starting a movement

Copper plate with laws used by Romans

But what are developers good at? We declare manifestos to follow, search for patterns to implement and re-use, define principles and write guidelines. We come up with practices to repeat the next time, since they worked. With rules to follow, to not stumble over the same thing again.

Rules - we can implement those - can't we?

Investigating the situation

Some rules are surely harder to implement than others. 

Like: Be humble to each other. Nice rule. Sounds good. Sounds important. The implementation may require some Commander Data brain. No one got time for that. 

New rules, more concrete, more useful! 

So here we go: The CI build is broken? Drop all your pencils and go fix it!

The code analyzer brought up a crime to light? Drop all your pencils and go fix it! The security scanner found a new vulnerability? Drop the pencils and update that library some insane person added! The deployment failed? Drop all your pencils and fix the deployment. The service crashed miserably? Drop all your pencils and investigate!

These are all important activities, and the quality in which we perform them affects our efficiency and how much time we can spend on other things.

We don't do all this because we would need to check the CI server, the code analyzer, the security scanner, the deployment tools, the service logs and the monitoring tools - just to know what is going on, whether there is something to do at all, and whether the problem that came up is actually ours or caused by some other poor fellow.

So back to our plan: what are developers good at? Well, I do hope at writing code! And all of the above can be expressed in code and be shown to your team in an instant. Is that a culture change? It sure is not. But we ain't there just yet. Let's keep trucking.

This overview will help us identify which task is out in the wild and should be done - or, honestly, should have always been done, but wasn't, because it was too fiddly to check everything. So we can go from dropping the pen directly to addressing the issue, taking a huge jump over all the forensics.

Once we have a collection of this information and can show a summary, we make transparent what was hidden somewhere. No more constant searching for the same information - for us and for others. The situation of our software is transparent. An obvious state in software? Isn't that beautiful.

Writing a build monitor

Well there is one already, isn't there? Sure there is. But none that shows our tools, our state, our information the way we want it. And - to anticipate a little bit here - we want to do more than just rules on some state. We target culture.

So once we wrote a small application that collects all the information we need (from the CI server, the code analyzer, the security scanner, the deployment information, and the application health state itself) we figure: we have quite some information at hand!

For example: is the production version newer than the version on pre-production? Is the application un-healthy because of some other service that is unavailable - because that may just mean to let them know about it? Does the meta-information of our application suggest any actions? Did we configure the application correctly so all monitoring tools can work? Was there a SNAPSHOT version deployed? Is the version that was created by the CI server the one that was deployed to the test environment? Or is there a version gap? Is the test-coverage anywhere near where it should be?
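
A rule of that kind is just a small function over the collected state. A sketch (all type names, fields and thresholds here are invented for illustration; this is not any specific tool's API):

```python
# Sketch of a build-monitor rule: evaluate the collected pipeline state
# against our own rules. Every name and threshold here is invented.
from dataclasses import dataclass

@dataclass
class PipelineState:
    built_version: str   # version the CI server produced
    test_version: str    # version deployed to the test environment
    prod_version: str    # version running in production
    coverage: float      # test coverage in percent

def check(state: PipelineState) -> list:
    """Return human-readable findings; an empty list means 'green'."""
    findings = []
    if "SNAPSHOT" in state.test_version:
        findings.append("a SNAPSHOT version was deployed to test")
    if state.built_version != state.test_version:
        findings.append("version gap between CI and the test environment")
    if state.coverage < 80.0:
        findings.append(f"test coverage is only {state.coverage:.0f}%")
    return findings

# All three rules fire for this (made-up) state:
print(check(PipelineState("1.4.2", "1.4.1-SNAPSHOT", "1.4.0", 72.0)))
```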

The pirate codex

All kinds of things can now be checked, beyond just a state in a single tool. The state of a delivery pipeline can be observed and checked against our rules. The rules we currently need.

Some may call this IT governance. But there is one big difference. These are our rules, they represent our current focus and priorities. Ideally these match - but let us not jump into this snakepit today.

Observe the change

Now as every team has a reddish screen, some will take on the challenge to go greenish. And they will notice that once something comes up, it is usually caused by other teams. These teams will now either be lost and frustrated (because there is no one to lead them out of the mess), or they take on the challenge themselves. Because this is our tool and our rules. And no one wants to come in last. Some claim otherwise - but I don't believe them. So application by application, pipeline by pipeline, team by team the situation improves. Because people care. Because they see the state, they see the effect of their work, they notice the improvements over time.

And when people start taking care where they previously didn't: there is your culture change.

So a simple build screen? A tool that allows implementing some custom rules? Does that work? 

The laws carved into a wall in Gortyn (Crete)

It does. But not on its own. Because tools are only tools. If a team does not find a way to help themselves, or someone to turn to with embarrassing questions, they will ignore this. Delete the transparency and keep living in the mess. You need humans willing to go through some essentials - application by application, team by team. You need some of them who care and carry this care to others. By making the things they care for transparent - for everyone.

This worked exceptionally well for me. Because you can always find other knights willing to ride along to battle. Because they are sick of living in the mud. And this way - step by step, floor by floor - you will reach the roof under the stars. I will not argue it is a fast process. But some processes are healthier if slow. And even with 40 applications, if you can only heal 2 a month, after just two years you will have healed them all. Looking at some mess today: don't you wish you had started 24 months ago? Because today: it would be the roof and the stars!

So if it is not today: let it be tomorrow.

When we think about some mobile apps and how they changed how we meet people, how we connect and get in touch: then why would code not be able to influence and change a culture? Code certainly already did this on other occasions.

Yes! Code can change a culture, together with the people that hold on to it. I've seen it.

The tool that came out of all these steps was called "Mobitor" - because it needed a name at some point. You can give it a run yourself:

The next step is to collect some guidelines to help you orient yourself on the journey.

The picture of the wall with laws is from wikipedia. The Roman bronze plate picture is from Claude. The Codex of Pirates of the Caribbean is from the fandom wiki (remember: it's just guidelines).

Kotlin all the things

So, after all, it seems JetBrains is very serious about Kotlin. And I have to admit it comes with some handy features and good IDE support. But this is not about the Kotlin language, this is about where it can be used.

As we have seen in Migrating from Gradle to Gradle, writing Gradle build scripts using the new Kotlin DSL is supported. So far we have:

  • our sources in Kotlin
  • our build configuration in Kotlin
  • but not our Continuous Integration configuration
    (depending on how far you want to push it your build chain or pipeline as well)

Since TeamCity (the CI server) is from JetBrains as well, it supports storing your build configuration not only via the UI. It supports storing the configuration in your VCS in XML format and (since around version 10 and 2017) in a Kotlin format. The current version 2019.1 comes with even more improvements and simplifications in this area. So throw away your build config yaml file! Kotlin your build config too! Although this seems a bit weird at the beginning, there are some big advantages:

  • it's real source code
  • it compiles
  • you can share common pipeline definitions via libraries
  • since Kotlin is a typed language, there is nice support in your IDE with auto-completion
  • you can compile the code prior to pushing it - compared to the trial-and-error cycle that YAML config files come with, I'll argue it is the better method

There is a very nice blog post from JetBrains on this topic that I can highly recommend reading:

And while you're at it, maybe watch the webinar on "Turbocharging TeamCity with Octopus Deploy" as well. Octopus is an additional commercial service. But distinguishing between continuous integration and deployment seems a good split in responsibilities.

Since I consider you now to be convinced, let us test-drive it! We first need a TeamCity Server, a TeamCity Agent, and an example project.

To have it quickly set up to test-drive, I created a docker-compose.yaml file (yes a yaml file, isn't it ironic):
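The actual file is in the linked repository; it is roughly shaped like the following sketch (image tags and volume handling are left out here, details may differ - the database settings match what we will enter in the installer below):

```yaml
version: "3"
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: teamcity
      POSTGRES_PASSWORD: teamcity
      POSTGRES_DB: teamcity
  teamcity:
    image: jetbrains/teamcity-server
    ports:
      - "8111:8111"
    depends_on:
      - postgres
  agent:
    image: jetbrains/teamcity-agent
    environment:
      SERVER_URL: http://teamcity:8111
    depends_on:
      - teamcity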

Clone the repository and fire it up:

$ docker-compose pull
$ docker-compose up -d

Then point a browser to http://localhost:8111/

This will bring you to the TeamCity installer. But don't panic, it will only take a few minutes!


  • Select PostgreSQL
  • download the driver
  • and use the same settings as used in the docker-compose file:
    • "postgres" as host
    • "teamcity" as username, password and database name



Scroll down and agree to the license
(well read it of course - but don't tell me you are not used to selling your soul)

Create an admin account

(for simplicity use "admin" and "password" here)

There you are

As you may notice, on top there are 0 agents. Which is not entirely true.

But the agent we started needs to be authorized first

Go to "Agents" and "Unauthorized" and enable the agent

If you now go to the start page (click on the logo on the top left) you will be able to add a project

Builds run already

If you have a close look at the project you will notice it contains a ".teamcity" directory that contains the build configuration:

The new 2019.1 format comes in a "portable" variant. So compared to earlier TeamCity versions, the number of files is reduced to only the settings.kts file and the pom.xml
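For orientation, a portable settings.kts is roughly shaped like this. A sketch only: the DSL package version must match your TeamCity version, and the Maven step and names below are illustrative, not the example project's actual content:

```kotlin
// settings.kts - portable TeamCity Kotlin DSL sketch (illustrative names)
import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.maven
import jetbrains.buildServer.configs.kotlin.v2019_2.triggers.vcs

version = "2019.1"

project {
    buildType(Build)
}

object Build : BuildType({
    name = "Build"

    vcs {
        root(DslContext.settingsRoot)  // the VCS root the settings came from
    }

    steps {
        maven {
            goals = "clean package"
        }
    }

    triggers {
        vcs { }  // trigger a build on every VCS change
    }
})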

From here on no more clicking in the UI is necessary.

And even more comfort

Since you probably use IntelliJ to develop in Kotlin anyway, and you now have a running TeamCity server from the same company, even more comfort is possible!

Just install the TeamCity plugin in IntelliJ and point it via the new menu entry to your local server.

This will show the build status of your projects, but in addition also allows you to run your local changes remotely as a personal build! This is not a new feature, but people tend to forget about it:

Install the plugin in IntelliJ

Restart and point it to the local TeamCity server
(we used "password" as password above)

And you can now remote run builds with local changes!

Even with not yet committed changes

Personal builds are marked with a nice additional icon. They are only visible to the user that created them

If you create an additional user and re-login you will not see other people's personal builds

So we have:

  • our software in Kotlin
  • our Gradle build script in Kotlin
  • our CI configuration in Kotlin - able to share and re-use our build chains 
    (read the JetBrains blog above for more details on this)
  • and free of charge with this setup: remote runs

What a Kotlin world to live in!

Gradle build scripts have been written in a Groovy based DSL for a long time. Although flexible, the IDE support was always a bit of a problem. You either knew what to type or you searched the docs or tried to find an answer on stackoverflow. IDEs always struggled to provide help on writing tasks or configuring them.

For some time now, a Kotlin based DSL is in the works and as of Gradle 5 it is available in 1.0. So is it any better compared to what you can do with the Groovy based DSL?

To get started, some reading of the documentation (later, after this blog post!) helps:

If you need to learn about Gradle in general, there are free online training courses available that I can highly recommend (from getting started with gradle to advanced topics).

The example project created for this comparison is on GitHub and contains a simple spring boot application also written in kotlin that spits out a docker image. The master branch uses the groovy DSL, the kts branch uses the new kotlin DSL but does exactly the same.

Overview of the groovy build

The groovy based build script uses the new plugin syntax:

new plugin syntax
plugins {
  id "com.palantir.docker" version "0.22.1"
}

Instead of the old syntax which would look like this:

old plugin syntax
buildscript {
  repositories {
    maven {
      url ""
    }
  }
  dependencies {
    classpath ""
  }
}

apply plugin: "com.palantir.docker"

This will simplify the kotlin script migration, as the kotlin syntax is very similar to the new one.

Note on the new plugin syntax

There have been some issues with this new syntax when a Maven Repository Proxy (like Nexus or Artifactory) is used. But the Gradle plugin repository is available as maven repository as well and as of Gradle 4.4.x plugins can be loaded via a repository proxy too (previously this would only work without any authentication or with direct internet access - which is unlikely in an enterprise environment). So Gradle 4.4.x comes to the rescue! You can add your repository proxy to an init.d script and use the new plugin syntax.

import org.gradle.util.GradleVersion

apply plugin: EnterpriseRepositoryPlugin

class EnterpriseRepositoryPlugin implements Plugin<Gradle> {

    private static String NEXUS_PUBLIC_URL = "https://<nexushostname.domain>/repository/public"

    void apply(Gradle gradle) {
        gradle.allprojects { project ->
            project.repositories {
                maven {
                    name "NexusPublic"
                    url NEXUS_PUBLIC_URL
                    credentials {
                        def env = System.getenv()
                        username "$env.NEXUS_USERNAME"
                        password "$env.NEXUS_PASSWORD"
                    }
                }
            }
            project.buildscript.repositories {
                maven {
                    name "NexusPublic"
                    url NEXUS_PUBLIC_URL
                    credentials {
                        def env = System.getenv()
                        username "$env.NEXUS_USERNAME"
                        password "$env.NEXUS_PASSWORD"
                    }
                }
            }
        }

        def referenceVersion = GradleVersion.version("4.4.1")
        def currentVersion = GradleVersion.current()
        if (currentVersion >= referenceVersion) {
            gradle.settingsEvaluated { settings ->
                settings.pluginManagement {
                    repositories {
                        maven {
                            url NEXUS_PUBLIC_URL
                            name "NexusPublic"
                            credentials {
                                def env = System.getenv()
                                username "$env.NEXUS_USERNAME"
                                password "$env.NEXUS_PASSWORD"
                            }
                        }
                    }
                }
            }
        } else {
            println "Gradle version is too low! UPGRADE REQUIRED! (below " + referenceVersion + "): " + gradle.gradleVersion
        }
    }
}

Other than that the build does not contain anything unusual. There are task configurations and a custom task. The spring boot dependencies are added, and the test task is configured to measure test coverage using jacoco. The build uses the palantir docker plugin to create the docker image. And there is a task that prints some log statements to tell a TeamCity server about the coverage results (it will not hurt on Jenkins or Bamboo).
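Those log statements are TeamCity "service messages". The idea can be sketched standalone like this (the statistic keys are TeamCity's standard line-coverage keys; the function name is made up):

```kotlin
// Build TeamCity service messages that report line-coverage numbers.
// On Jenkins or Bamboo these are just odd-looking log lines.
fun coverageMessages(coveredLines: Int, totalLines: Int): List<String> = listOf(
    "##teamcity[buildStatisticValue key='CodeCoverageAbsLCovered' value='$coveredLines']",
    "##teamcity[buildStatisticValue key='CodeCoverageAbsLTotal' value='$totalLines']"
)
```

Printing these lines during the build is all it takes for TeamCity to pick the values up.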

The docker plugin uses task rules to create some tasks, so it's configured via the extension class - there are several ways to do this; the variant used in the example also lets IntelliJ understand which task it is, so auto-completion works.

Overview of the kotlin build

The kotlin DSL build script (in build.gradle.kts - the build script has a new filename!) uses a plugin syntax very similar to the groovy one. If you compare them both, the build scripts look almost the same.

The way tasks are referenced or created changes slightly. Once you get used to it it's fairly easy to use, and the IDE supports what you are doing!
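To give an idea of the slight change, here is what referencing and creating tasks can look like in the kotlin DSL (an illustrative fragment, not taken from the example project):

```kotlin
// build.gradle.kts - referencing an existing task together with its type
tasks.named<Test>("test") {
    useJUnitPlatform()
}

// creating a new task
tasks.register("printVersion") {
    doLast { println(project.version) }
}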

Quirks and conclusion

As the IDE support is currently limited to IntelliJ, we can only look at that. But if you were used to running gradle builds on the command line anyway, the gradle wrapper will automatically recognize the kotlin build script and is by default capable of running it.

An improvement you almost immediately notice: auto-completion in build scripts suddenly makes sense! For some reason it sometimes is very slow to show up but the suggestions made are way better than what you would see using the groovy builds.

Yet IntelliJ will sometimes mark fields of plugins as not accessible - the build will work, it's just the IDE that complains. There are workarounds for some of the warnings.

The expected variant:

tasks.test {
    extensions.configure<JacocoTaskExtension> {
        destinationFile = file(jacocoExecFileName)
    }
}

But this produces an access warning. You can switch to using the setter instead:

tasks.test {
    extensions.configure<JacocoTaskExtension> {
        setDestinationFile(file(jacocoExecFileName))
    }
}

Not too nice, but not a showstopper - and unclear if Gradle is to blame or if it's some IntelliJ issue. 

In other cases it helped to hint the task type:

tasks.withType<Test> {
    // ...
}

This gives better auto-completion. The documentation on the kotlin DSL gives some hints on how to help yourself.

In every case where IntelliJ complained, a workaround could be found. But these workarounds exist just to stop IntelliJ complaining about the build script. Not ideal. Still, you can reach a state where IntelliJ does not mark any line with an error or warning! Compared to the random mess of errors and warnings in the groovy build scripts: way better.

Comparing the length of both scripts doesn't really show a clear winner. Both have about the same length and structure. The tasks are often a few lines shorter, but the type declarations add an import statement on top. Overall this simplifies the migration and keeps the readability one got used to. I wished everything was just shorter and more expressive - but that probably was just a personal wish - and actually it is a bit unfair to the groovy DSL, which is already good. The build scripts seem to initialize more slowly, but builds run at the same speed. And the way gradle optimizes task execution, or determines whether task configuration needs to be loaded at all, did improve with gradle 5 - so the speed penalty might not be there for you at all. So the way it looks today: quite good :-)

I have no concerns about using the kotlin DSL in production builds, and the IntelliJ support is in a good state, so you will not need to flip between the IDE and the command line all the time if you don't like doing that.

Is it a migration?

The title states this was a Gradle to Gradle migration. But the resulting build scripts look very similar. So is it really one? I would say yes. It took me two attempts and a couple of hours of searching the documentation and experimenting (as there are not many examples around yet). Although the result does not look like much of a change, it took some effort to get there. But effort measured in hours to days - surely not weeks (and I'm not the most experienced gradle or kotlin user). Of course this may fall apart if a lot of plugins are used or they don't properly interact with the Gradle API in this version (as you will probably upgrade to gradle 5.x from a 4.x version).

Hints for the hasty

The linked documentation on top already contains this but just in case you are a very hasty developer, here are some useful gradle tasks in this context:

$ gradle kotlinDslAccessorsReport

prints the Kotlin code necessary to access the model elements contributed by all the applied plugins. The report provides both names and types.

You can then find out the type of a given task by running

$ gradle help --task <taskName>

Another important statement is in the migration guide in the configuring plugins section:

Keeping build scripts declarative

To get the most benefits of the Gradle Kotlin DSL you should strive to keep your build scripts declarative. The main thing to remember here is that in order to get type-safe accessors, plugins must be applied before the body of build scripts.
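In practice that means the plugins block has to come first; only then does the DSL generate type-safe accessors for the applied plugins. An illustrative fragment (not from the example project):

```kotlin
// build.gradle.kts - apply plugins declaratively, up front
plugins {
    java
    jacoco
}

// the java accessor below only exists because the plugin is applied above
java {
    sourceCompatibility = JavaVersion.VERSION_1_8
}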

So if you are programming in kotlin anyway, and you also use TeamCity and its kotlin build DSL you can now also use kotlin in your builds too. Kotlin all the things!

Kotlin certainly has more momentum today than Groovy. The typed DSL solves some crucial handling issues in gradle build scripts. I would guess the new DSL may become the default at some point - not that I'm aware of any timeline, just an assumption. So don't hurry; the groovy DSL will be around for quite some time. But if you are starting with gradle, I would try the kotlin DSL from the beginning.

Give it a try! 

These are some notes that were taken when watching this video:

One pattern of the book is “be a Hands-on Modeller” (you have to have some contact to the ground level or you won’t give good advice, stay up to date, stay sharp, keep learning things you can talk about).

Every effective DDD person is a Hands-on Modeller.

A lot of things are not exactly different from the book, but there is a slightly different emphasis.

What is (really) essential in the book?

  1. Creating creative collaboration of domain experts & software experts → ubiquitous language pattern
    (you're not supposed to create that model for yourselves)
  2. Exploration and experimentation
    the first useful model you encounter is unlikely to be the best one. When there are glitches and you start working around them, you're already frozen. → "blast it wide open", explore with the domain experts
  3. Emerging models shaping and reshaping the ubiquitous language
    (say things crisply and unambiguously, no complicated explanations), explore with the domain expert
  4. explicit context boundaries (sadly it is in chapter 14, would be chapter 2 or 3 today)
    a statement in a language makes no sense when it's floating around, you could only guess the context it is in.
    Draw a context map! Should be done in every project!
  5. focus on the core domain (sadly it is in chapter 15)
    find the differentiator involved in your software: how is your software supposed to change the situation for the business you're in (we do not mean mere efficiency, but something significant)

These are the things to focus on.

Building Blocks (chapter 5)

Our modelling paradigm is too general, we have objects and relations – this is just too broad. We need something that structures this a little more, puts things into categories, helps communicate the nature of your choices.

Services - Entities - Value objects
Repositories – Factories

  • They are important but overemphasized
  • But let's add another one anyway, as an important ingredient: Domain Events (interesting for the domain expert):
    At the level of an event important to the domain expert, you want to record that something important happened in your domain. There is a consistent form:
    • They tend to happen at a certain time
    • They tend to have an associated person
    • They are typically immutable (you record that it happened and that's it)
  • Domain Events give you some architectural options, especially for distributed systems
    (record events from different locations)
  • consistent view on this entity ("runs" in a game reported from different locations) across a distributed system → event oriented view
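The consistent form of such an event can be sketched as one small immutable type, here following the "runs in a game" example (all names are made up):

```kotlin
import java.time.Instant

// Immutable domain event: once recorded, it happened - and that's it
data class RunScored(
    val gameId: String,        // which game the event belongs to
    val occurredAt: Instant,   // events tend to happen at a certain time
    val reportedBy: String,    // events tend to have an associated person
    val runs: Int
)
```

Being a data class, two events with the same content are equal, and "changing" one only yields a new event.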

More options:

  • Decoupling subsystems with event streams (Design Decoupling)
    • Have a core transactional system, send out a stream of domain events
      (you can change the definitions and only need to maintain the stream of events)
  • Such distributed systems are inconsistent but well characterized
  • Have multiple models in a project that are perfect for their purpose, say reporting and trading (of course you don't have to)

Another aspect of domain events are distributed systems:
Enabling high-performance systems (Greg Young)

Aggregates (super important)

  • People often ask how to access what's inside an aggregate. But that's not the most important question.
  • Aggregates help to enforce the real rules
  • You have something you think of as a conceptual whole which is also made up of smaller parts and you have rules that apply to the whole thing.
  • Classic example: purchase order, having a limit, an amount, line items that add up, …
    but with thousands of line items object orientation gets a little stuck
  • Beware of mimic objects (that carry data around but don't do anything useful)
    • "Where is the action taking place?”
  • Sometimes it might be useful to give aggregates more privileges so they could execute a count query themselves.
  • Aggregate: we see it as a conceptual whole and we want it to be consistent
    • Consistency boundaries
      • Transactions
      • Distribution
        (you need to define what has to be consistent when crossing the boundaries)
      • Concurrency
    • Properties
    • Invariant rules
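The classic purchase-order example can illustrate this: the aggregate as a conceptual whole enforces the rule, however its parts are manipulated. A toy sketch (names and the numeric limit are assumptions):

```kotlin
// Hypothetical aggregate root: the conceptual whole enforces the invariant
class PurchaseOrder(private val limit: Int) {
    private val lineItems = mutableListOf<Int>()

    val total: Int
        get() = lineItems.sum()

    fun addLineItem(amount: Int) {
        // invariant: the line items must add up to no more than the limit
        check(total + amount <= limit) { "order limit of $limit exceeded" }
        lineItems += amount
    }
}
```

Callers can only go through the aggregate root, so the invariant cannot be bypassed by touching a line item directly.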

Strategic design

  1. Context mapping
  2. Distillation of the core domain
  3. Large scale structure

Large scale structures do not come up that often.

Setting the stage

  • Don’t spread modelling too thin (“you need to know why modelling is done”)
    Modelling is an intensive activity, so the more people understand it the more value you gain
  • Focus on the core domain, find out what it is. Find the need for extreme clarity.
  • Clean, bounded context
  • iterative process
  • access to a domain expert

Context mapping

Context: the setting in which a word or statement appears that determines its meaning.

Bounded context: a description of the conditions under which a particular model applies.

Partners: two teams that are mutually dependent. Forced into a cooperative relationship.

“Big Ball of Mud”:  (the most successful architecture ever applied)

How to get out? Draw a context map around the big ball of mud. Build a small correct module inside the ball, until eventually the ball of mud captures it. But you had that time to do it right. So think about an anti-corruption layer.

If you transfer a model into a different context, use a translation map:
model in context <--> translation map <--> model in context
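Such a translation can be as small as one mapping function that keeps the other context's model out of ours (a toy sketch with invented names):

```kotlin
// Their model, in their context (abbreviated legacy field names)
data class LegacyCustomer(val custNo: String, val nm: String)

// Our model, in our context
data class Customer(val id: String, val name: String)

// The translation map: the only place that knows both models
fun translate(legacy: LegacyCustomer): Customer =
    Customer(id = legacy.custNo, name = legacy.nm)
```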

Explain the meaning of something. Because meaning demands context.


  • Draw a context map
  • Define core domain with business leadership
  • Design a platform that supports the core domain