Fixing ‘host must not be null’ with LocalStack and AWS S3 Client

After being inspired by Piotr Przybyl's talk about integration testing, I started off a new project with some thorough testing of a Spring Boot JAR.

Piotr Przybyl on Integration Testing

Software development is full of obscure problems, and when uploading an object to an emulated Amazon S3 service using LocalStack, I kept getting the error:

java.lang.NullPointerException: host must not be null.

Not the most helpful error message, especially since I was setting endpointOverride on my S3Client builder in this way:

this.s3Client = S3Client.builder()
        .endpointOverride(localstack.getEndpoint())
        .credentialsProvider(StaticCredentialsProvider.create(credentials))
        .build();

Some hours of painful searching suggested that the error message was pointing me in the wrong direction.

The solution? Add this to the Builder:

.forcePathStyle(Boolean.TRUE)
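Putting it together, the working builder looks something like this. It's a minimal sketch, assuming the AWS SDK for Java v2 and a Testcontainers LocalStackContainer named localstack, with the credentials and region taken from the container:

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

// ...inside the test setup, with `localstack` already started:
this.s3Client = S3Client.builder()
        .endpointOverride(localstack.getEndpoint())
        .region(Region.of(localstack.getRegion()))
        .credentialsProvider(StaticCredentialsProvider.create(
                AwsBasicCredentials.create(localstack.getAccessKey(), localstack.getSecretKey())))
        .forcePathStyle(Boolean.TRUE)   // path-style requests keep the bucket name out of the hostname
        .build();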

Microsoft Defender and macOS Ventura

macOS Ventura has been released. I use my Mac Mini far less than I do my MacBook Pro, so I decided to upgrade without a fresh install, something I very rarely do.

Surprisingly, the only casualty of the upgrade was Microsoft Defender, which came up with a warning triangle and a suggestion I click ‘Fix’, bringing up the Full Disk Access page of System Preferences to do… well, something.

The error message in Defender wasn’t helpful, and I wondered whether it was bringing up the Full Disk Access pane when it should have been pointing me at a different one. mdatp health to the rescue, which reported “Full disk access has not been granted”, so I knew I was on the right track.
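That specific field can also be queried directly, assuming the field name hasn’t changed in your version of Defender:

mdatp health --field full_disk_access_enabled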

The fix was really simple – click on Microsoft Defender in the preferences pane and click the minus button, then repeat this for Microsoft Defender Security Extension. After a second or two, an entry for Microsoft Defender will reappear, and should have full disk access toggled on if it hasn’t already.

For some reason, the second time I did this, the Security Extension didn’t appear, but if it does for you, toggle that on.

Automating your home with openHAB

In the past few weeks, I’ve been actively looking at how I can manage all my ‘smart’ (or ZigBee/WiFi connected) IoT devices in my home from a central place.

I started with Home Assistant, but quickly found the user interface a bit too clunky for my taste. Searching around, openHAB came up as a good contender, and it meets almost all of my wants – active development, an APT repository, the ability to run under Docker, and a healthy community of users.

Data structure

The thinking behind openHAB’s data structure confused me at first – a combination of things, channels and items didn’t seem logical to me until I got stuck in. It turns out it’s quite clever:

  • A thing is a physical device, such as a smart plug or bulb
  • Each thing has one or more channels, which are individually accessible data points on the device, such as an on/off switch (input), or energy usage monitor (output)
  • Links connect a channel to an item, such as energy usage to a metric (output) or on/off toggle (input). Channels can have multiple links too

Where a thing is accessed via another device, such as a Hue bulb, a special type of thing called a bridge needs to be defined. The bridge discovers other things connected to it and publishes them ready for configuration. Bridges are instances of a binding, so if you have three Hue controllers, you have three bridges defined.
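If you prefer the file-based configuration over the UI, the structure maps directly onto the syntax. The sketch below is purely illustrative – the thing type, channel name and item name are examples rather than something copied from a binding’s documentation – but it shows a bridge containing a thing, and an item linked to one of that thing’s channels:

// demo.things – a bridge, with one thing reachable through it
Bridge hue:bridge:home "Hue Bridge" [ ipAddress="192.168.1.2" ] {
    Thing 0100 porch "Porch Light" [ lightId="1" ]
}

// demo.items – an item linked to the thing's brightness channel
Dimmer PorchLight "Porch light" { channel="hue:0100:home:porch:brightness" }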

Lighting things up

Connecting my Hue lights was trivial. The Hue binding is included in the openHAB distribution, and is installed by clicking ‘Install’. Adding the bridge requires the IP address of the Hue bridge and a username, and a quick press of the hardware button on the bridge to pair things together. At this point, the bridge reports the devices connected to it, and it’s just a case of adding them as things.

Conclusion

Despite the UI feeling a bit fiddly to edit – similar to writing HTML in Notepad but having to indent it as it’s YAML – I absolutely love openHAB. In the coming weeks, I’ll write up how I connected my Glow IHD and CAB, a Tasmota switch for my porch light, my Ring doorbell and the problems I had with my TP-Link Tapo devices and how easy it was to fix them.

Monitoring your appliances’ power

I recently posted about real-time data from your smart meter and all was good, but then a thread by Robin Hawkes on Twitter about smart plugs caught my eye.

These devices connect to your WiFi network and allow you to switch a connected device on or off, on a schedule if you require, and also monitor power consumption. That last bit is the most important for me – knowing how much energy I’m consuming.

After researching Tasmota, an open firmware for simple home automation devices, and checking a bunch of reviews on the TP-Link Tapo P110's firmware, it looked like these P110s would do just what I needed – simple power monitoring for not much of an initial outlay.

LocalBytes were out of stock of both of these items when I looked, so I took the plunge and ordered one from elsewhere. Well, I ordered eight because I was feeling quite bold.

Initial setup

Trivial. Download the Android (or iOS presumably) app, plug in a device, find its wireless network and configure it to connect to your wireless network. It reboots and that’s about it.

Firmware updates can be scheduled automatically, but I don’t know whether this will switch the power off to any connected device or not. Something to check later.

Control and monitoring

The mobile app makes it quite easy to switch a device on and off, to set a schedule, and to see power consumption. But it’s a mobile app, and I want the data somewhere I can analyse it easily.

Home Assistant to the rescue! Running HA under Docker is really easy if you know Docker. I could have re-purposed one of my Raspberry Pis – I can’t get a new OpenVMS Community licence for VAX any more – but I wanted to try HA quickly to see if it fitted my needs.
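For reference, the documented way to run it is a single docker run – something like the following, where the timezone and config path are placeholders for your own:

docker run -d \
  --name homeassistant \
  --restart unless-stopped \
  --network host \
  -e TZ=Europe/London \
  -v /opt/homeassistant:/config \
  ghcr.io/home-assistant/home-assistant:stable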

Support for the P110 devices doesn’t come as standard, but there’s a community-written workaround for that. A little fiddly and not really what I’d expect, but it works.

Next steps

Oh boy, there’s a lot I want to do.

First off, I want to push the sensor data from these devices into an MQTT server such as Mosquitto and have Telegraf pull it into a time-series database so I can visualise it with Grafana.
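As a rough sketch of what I have in mind – the broker, topics and database details below are placeholders, and I’ve used InfluxDB purely as an example of a time-series database – the Telegraf side would look something like this:

# telegraf.conf (sketch)
[[inputs.mqtt_consumer]]
  servers = ["tcp://192.168.1.10:1883"]
  topics = ["home/energy/#"]
  data_format = "json"

[[outputs.influxdb_v2]]
  urls = ["http://192.168.1.10:8086"]
  token = "$INFLUX_TOKEN"
  organization = "home"
  bucket = "energy"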

Other things I want to do include automatically checking my Google Calendar and setting the heating to come on early when I’m doing a morning clinic, or a bit later when I’m at home that day. I want to get an inline power switch for my porch light and turn it on between dusk and sometime around midnight.

Slack’s verbose logging on Linux

I’m a long-time user of Slack. Many of my customers use it, and we share channels to exchange information and work better together. I’m also a member of a number of other Slack workspaces for various projects.

The Linux app is great, with one exception – it’s very heavy on logging:

slack.desktop[27635]: [09/12/22, 13:58:26:475] info: API-Q cb429c3b-1662987506.474 client.shouldReload called with reason: polling
slack.desktop[27635]: [09/12/22, 13:58:26:475] info: API-Q cb429c3b-1662987506.474 client.shouldReload is ENQUEUED
slack.desktop[27635]: [09/12/22, 13:58:26:487] info: API-Q cb429c3b-1662987506.474 client.shouldReload is ACTIVE
slack.desktop[27635]: [09/12/22, 13:58:26:631] info: API-Q cb429c3b-1662987506.474 client.shouldReload is RESOLVED
slack.desktop[27635]: [09/12/22, 13:58:26:632] info: [MIN-VERSION] No need to reload
slack.desktop[27635]: [09/12/22, 13:58:34:433] info: DND_V2 Checking for changes in DND status for the following members: XXXXXXXXX,XXXXXXXXX,XXXXXXXXX
slack.desktop[27635]: [09/12/22, 13:58:34:435] info: DND_V2 Will check for changes in DND status again in 1.43 minutes

I don’t want my syslog littered with information that isn’t useful! Other applications such as gnome-shell can be verbose, but not to Slack’s extent.

The fix, thankfully, is super easy. Create a file named /etc/rsyslog.d/20-slack.conf with the following:

# Drop info log messages from Slack
:rawmsg,contains,"slack.desktop" /dev/null
& stop

Run sudo systemctl reload rsyslog and ta-da, no more Slack logging in your syslog.

Real-time Smart Meter data

A year or two ago, I took the plunge and had a smart meter installed. I naively thought that being able to read energy usage was a simple case of connecting a ConBee-II or similar to the ZigBee HAN.

To save anyone else from going through the same range of emotions as I did, here’s how you can read your own smart meter data.

Technology primer

Your electricity meter has two parts – a metering device, and a communications device located at the top. The electricity meter periodically sends energy usage information over a communications network to your supplier. It’s easy when you have a continual supply of electricity.

If you have a gas meter, it doesn’t have its own communications device. To do so would require a power supply to the gas meter – readily available on an electricity meter. Instead, the gas meter sends energy usage to the electricity meter every 30 minutes and therefore only has a long-life battery installed.

Getting access to the data

There are two ways to get access to real-time electricity and real-ish-time gas usage data. Neither of them involves pairing your own device.

The best option is to buy a combined In-Home Display (IHD) and Customer Access Device (CAD) from Glow (Hildebrand Technology), who sell them for around £65. This arrives already paired with your smart meter; you connect it to your home wireless network and it sends data from your smart meter to an MQTT server (which can be on your local network too), ready for you to consume yourself. The device needs Internet access for firmware updates, but your data is kept locally.

The other option is to use an intermediary such as Glowmarkt, who are a DCC Other User and can request your metering data from the Data Communications Company, then make it available to you. This is a straightforward process, although you need to go through an industry-mandated security check to prove you are requesting access to your own data and not somebody else’s. They make your data available via their MQTT server, although it’s not at the same level of granularity as having your own IHD/CAD.

The data

You can use the CAD as a simple IHD – it’s a lot prettier than the one supplied by my energy supplier (which I can still use). Electricity usage arrives every few seconds, with gas usage as and when the metering equipment makes it available.

The real power comes when you work with data in real-time, or at least as close to real-time as it’ll give you. Want to work out how much the tumble dryer cost to run? Or find out whether you’re OK with having your home a little cooler to save a bunch of energy?

Data format

The MQTT messages you receive are in JSON format, and contain data for three ‘clusters’ – Metering (0x0702), Prepayment (0x0705) and Device Management (0x0708). Each of these clusters has an attribute set – the Metering cluster presents the Reading Information Set (0x00), Formatting (0x03) and Historical Consumption (0x04). Finally, each attribute set has a set of key/value pairs. From this, you can decode that cluster 0x0708 (Device Management), attribute set 0x01 (Supplier Control Attribute Set) value 0x01 is the provider name.

Since the data is sent in JSON format, it’s quite easy to parse. If you want to dive straight into the detail of what clusters and attribute sets are, the ZigBee Smart Energy Standard is available, although at 628 pages it’s a heavy read.
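If you just want to eyeball the messages before writing anything yourself, mosquitto_sub and jq are enough. The broker address and topic below are placeholders for whatever you configured on the CAD; add -u and -P if your MQTT server needs credentials:

mosquitto_sub -h 192.168.1.10 -t 'smart/meter/#' | jq .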

Problems

The entire process was quick – the CAD arrived within a day or two of placing my order and was ready to use the moment I plugged it in. Getting my data in real-time took a little longer – it’s a manual process for the staff at Glow, but once it’s set up, that’s it.

The only problem is that there’s no formal support. For the first six months, the CAD disconnected itself from my WiFi network for no reason. Despite posting about my issue, there was no progress on it – but out of the blue, a firmware update arrived which fixed the issue.

Is the lack of formal support a problem? Likely not – unless you’re having an issue. Since the firmware update, I’ve had no problems with the IHD, other than a lack of time to play with and analyse the data.

Recommendation: go buy one!

Working around ‘Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg)’

If a webpage tells you to run a command to import a GPG key when setting up an APT repository, it isn’t necessarily correct! Newer versions of Ubuntu no longer use apt-key and /etc/apt/trusted.gpg, preferring you put repository GPG keys in a file under /etc/apt/trusted.gpg.d.

Having recently reinstalled my desktop and not realising this, I had this exceedingly annoying error:

W: https://packagecloud.io/slacktechnologies/slack/debian/dists/jessie/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.

This is the kind of error that tells you what you shouldn’t do, but isn’t much help in guiding you towards what you should do.

As this is an easily forgettable problem, here’s how to fix it.

First, list the keys in /etc/apt/trusted.gpg:

gpg --keyring /etc/apt/trusted.gpg --list-keys

You will see a list of keys similar to the following:

pub   rsa4096 2021-10-27 [SC] [expires: 2023-01-20]
      F9A211976ED662F00E59361E5E3C45D7B312C643
uid           [ unknown] Spotify Public Repository Signing Key <tux@spotify.com>

pub   rsa4096 2013-11-19 [SC] [expires: 2027-11-11]
      222B85B0F90BE2D24CFEB93F47484E50656D16C7
uid           [ unknown] Keybase.io Code Signing (v1) <code@keybase.io>
sub   rsa4096 2013-11-19 [E] [expires: 2027-11-11]

pub   rsa4096 2014-01-13 [SCEA] [expired: 2019-01-12]
      418A7F2FB0E1E6E7EABF6FE8C2E73424D59097AB
uid           [ expired] packagecloud ops (production key) <ops@packagecloud.io>

pub   rsa4096 2016-02-18 [SCEA]
      DB085A08CA13B8ACB917E0F6D938EC0D038651BD
uid           [ unknown] https://packagecloud.io/slacktechnologies/slack (https://packagecloud.io/docs#gpg_signing) <support@packagecloud.io>
sub   rsa4096 2016-02-18 [SEA]

For each of the keys, find the key ID – the long hexadecimal string on the second line – and run the following command:

gpg --keyring /etc/apt/trusted.gpg --export <key-id> | sudo tee /etc/apt/trusted.gpg.d/<repository>.gpg
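For example, using the Slack key ID from the listing above (the output filename is up to you):

gpg --keyring /etc/apt/trusted.gpg --export DB085A08CA13B8ACB917E0F6D938EC0D038651BD | sudo tee /etc/apt/trusted.gpg.d/slack.gpg > /dev/null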

Finally, tidy up after yourself by deleting the key from trusted.gpg:

sudo gpg --keyring /etc/apt/trusted.gpg --delete-key <key-id>

You can even specify multiple keys on the command line.

And that’s it.

Installing a LetsEncrypt certificate on an HPE iLO 5

Once again, I’ve spent far too long trying to work out how the heck to get a LetsEncrypt X.509 certificate on to the HPE Integrated Lights-Out 5 board.

To save me some time in three months, and to save you some time since you’re already here, the instructions are really straightforward. Be aware that this method uses a DNS-based challenge which may be tricky for you to do unless you can automate DNS updates.

First, from the iLO web interface, select Security and SSL Certificate, and click Customize Certificate.

Next, click Generate CSR and come back to the page a few minutes later, once the certificate signing request (CSR) has been generated.

Copy the CSR into a file on a machine with the ‘certbot’ client installed and run the following command:

certbot certonly --csr request.csr --preferred-challenges dns
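Once certbot has finished, it’s worth a quick look at what it produced before heading back to the iLO. The filename here is a placeholder for whichever .pem file certbot wrote out:

openssl x509 -in cert.pem -noout -subject -issuer -dates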

When the certificate has been issued, select Import Certificate and paste in the entire PEM-formatted file.

And it’s really as straightforward as that.

Let’s Encrypt and Zabbix Agents

A week or two ago, some of our servers became unreachable via Zabbix. Both affected pairs (we run everything in pairs) showed the same problem, and no other host did.

Looking in the Zabbix log showed this rather cryptic error:

SSL_shutdown() with 172.31.16.32 set result code to 6

That’s not very helpful, and several hours of head-scratching went by before we finally stumbled across what was happening.

The X.509 (or SSL) certificate on the target machine is issued by Let’s Encrypt, who are in the process of signing new certificates with a new key. Within Zabbix, we check that the certificate presented by the client when we connect is issued by a specific issuer and since this had changed, the server was refusing to connect.

How did we fix it? Really easily – by going in to the host configuration in Zabbix, and setting the issuer to:

CN=R3,O=Let's Encrypt,C=US

Ridiculously straightforward, and if you hover over the red ‘ZBX’ status box, you’ll see an error saying that the wrong issuer was found on the certificate.
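If you’d rather check from the command line exactly what issuer the agent’s certificate carries, openssl will print it in the same one-line format Zabbix expects. The certificate path here is only an example – use whichever file your agent is configured with:

openssl x509 -in /etc/zabbix/agent.crt -noout -issuer -nameopt RFC2253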

That’s a few hours we’ll never get back, but it’s great to have solved the problem. And as it happens, there was a correlation between the servers affected – two pairs were built at the same time, and the remaining pair was built just about 90 days before the others. Let’s Encrypt certificates have a lifetime of 90 days.

We’re expecting further servers to drop off Zabbix, but at least we know why and we can fix it.

Reading a VAXstation 3100 ROM

I’ve had a VAXstation 3100 M38 sitting in my flat for years, but it doesn’t quite work. I’ll write up the repair process elsewhere, but as part of troubleshooting I decided to see what’s on some of the ROMs.

There are two M27C1024 chips installed on the main board. These are STMicroelectronics 1Mbit EPROMs. Reading them with minipro produced an interesting warning:

pwh@angel:~/src/minipro$ ./minipro -p "M27C1024@DIP40" -r rom2.bin -y
WARNING: Chip ID mismatch: expected 0x20008C00, got 0x20FF8CFF (unknown)

Maybe this is a DEC variant. Throwing caution to the wind, I re-ran minipro with the -y switch to ignore the mismatch. This produced a 131,072-byte file for each chip, which is correct for a 1Mbit IC: 131,072 bytes × 8 is 1,048,576 bits, or 1 megabit.

The contents of the files were all over the place – nothing stood out as text except for this:

) ns Fn
s uie mae) 1Desc 9)taan
Dts (hwz) 10Nerlds 3Enis 1)or
) glh rishri) 1 Ptu
s 5Es
o 3)uo
) anis 1 Sns
) anisCadi) 1 Vam

If you’ve used a VAX before, you might recognise that this looks a little like the language selection menu:

0) Dansk                      8) Français (Suisse Romande)
1) Deutsch                    9) Italiano
2) Deutsch (Schweiz)         10) Nederlands
3) English                   11) Norsk
4) English (British/Irish)   12) Português
5) Español                   13) Suomi
6) Français                  14) Svenska
7) Français (Canadien)       15) Vlaams

Since this is the second ROM of a pair, let’s look at what the first contains:

0Dak 8)raai(SssRond
) uth Ilio
2)euchScei ) dean
) glh 1 Nsk 4Enis(Bti/Ish 2)orgu
) pal 1 Smi 6Fr
a 4)veka 7Fr a
(naen 5)las

If we take two bytes from one ROM, and two bytes from the second ROM, we can take a guess as to how the image is laid out. The characters “En” are from the second ROM, “gl” from the first ROM, “is” from the second ROM, “h ” from the first ROM, and so on.

It sounds like madness, but the datasheet makes it clear why:

It is ideally suited for microprocessor systems requiring large data or program storage and is organized as 65,536 words of 16 bits.

This handy bit of Python code will reassemble the data by reading two bytes from one file, two bytes from the second and so on:

#!/usr/bin/env python
#
#  Read the contents of two ROM images and write out a single interleaved
#  file, taking one 16-bit word (two bytes) from each image in turn.
#

import sys
from pathlib import Path

WORD_SIZE = 2          # each EPROM supplies one 16-bit word at a time
FILE_A = './rom1.bin'
FILE_B = './rom2.bin'
OUTPUT = './out.bin'

size_a = Path(FILE_A).stat().st_size
size_b = Path(FILE_B).stat().st_size

if size_a != size_b:
    sys.exit('Files differ in size')

data = b""

with open(FILE_A, 'rb') as file_a, open(FILE_B, 'rb') as file_b:
    # Alternate words: two bytes from ROM 1, then two bytes from ROM 2
    for _ in range(size_a // WORD_SIZE):
        data += file_a.read(WORD_SIZE)
        data += file_b.read(WORD_SIZE)

with open(OUTPUT, 'w+b') as f:
    f.write(data)

How do we know it’s worked? Simple – the content of the binary file is exactly the same as the original KA42B firmware!