Hue-ge pain in the butt

I have a Philips Hue bridge which lets me control the lights in my flat in a variety of useful ways. It’s a good bit of kit, but with one major problem – it assumes you’re running PAT (Port Address Translation), and that your Hue bridge and the device you use to visit https://account.meethue.com/bridge both reach the Internet from the same source IP address. If not, even though the devices may be in the same broadcast domain and the same IPv4 subnet, you won’t be able to link your Hue to your account.

Despite tweeting for assistance, I ended up crying shibboleet and reverse-engineering the method of linking they’re using. Here’s what I found out, in the hope it’ll save somebody else a lot of time.

My Internet services are through the excellent A&A, and I can’t recommend them highly enough. I have a public IPv4 subnet, and each of my devices accesses the Internet without any address translation. Inbound connectivity is restricted – there are only a few things I need accessible from the Internet. (As an aside, I have two DSL lines, with my IPv4 subnet routed down each, for load balancing and resilience.)

My Hue bridge connects to https://discovery.meethue.com/, and that service keeps an inventory of the bridges that register with it, keyed by the source IP address they register from. Here’s where the problem is – visiting discovery.meethue.com only returns the devices that registered from the IP address you’re connecting from. That’s fine if all your devices go through address translation and appear to come from a single external IP address, but useless for me – my mobile device uses an entirely different IPv4 address, as do my desktop and laptop. The Hue app reports that no devices were found.
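
You can see the behaviour by querying the discovery endpoint yourself – from an address that no bridge has registered from, it simply returns an empty list (output illustrative):

$ curl https://discovery.meethue.com/
[]

From the address the bridge itself registers from, you’d expect a JSON list containing the bridge’s ID and internal IP address instead.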

After some frustrating interactions on Twitter, I solved the problem myself. I set up IP Masquerade – essentially port address translation behind the router’s external IPv4 address – for my Hue bridge and my mobile device, so they’d appear to be coming from the router’s external IP address. Rebooting the Hue, disabling one of the PPP connections on my router (necessary since they both have an IP address assigned, and my outbound traffic is load-balanced per TCP connection) and linking the device from my mobile phone then worked. Rolling it all back and rebooting the Hue again leaves the device linked to my account.
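
For anyone whose router is Linux-based, the equivalent temporary masquerade rules would look something like this – a sketch rather than what I ran on my own router, with placeholder interface and addresses:

# NAT the Hue bridge and the phone behind the router's external address on the active PPP link
iptables -t nat -A POSTROUTING -s 192.0.2.10/32 -o ppp0 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.0.2.11/32 -o ppp0 -j MASQUERADE
# once the bridge is linked to the account, remove the rules again
iptables -t nat -D POSTROUTING -s 192.0.2.10/32 -o ppp0 -j MASQUERADE
iptables -t nat -D POSTROUTING -s 192.0.2.11/32 -o ppp0 -j MASQUERADE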

What a mess. Adding an “Enter the IP address of your Hue, then press the button when prompted” option to the device linking page would have been a whole load easier. Not everyone’s Internet connection is the same, nor is everyone as experienced in network engineering as I am… yet still it took me three days to work out a fix.

In summary: buy Hue devices – they’re good, but beware if you’re doing anything that possibly deviates from the common case.

Integration and Unit Tests with IntelliJ IDEA and JUnit 5

When working on a project in Java, I like to name my integration and unit tests separately. Integration test class names end in ‘IT’, and unit tests don’t.

This makes it really easy to run just unit or integration tests in IntelliJ by using one of these two patterns:

  • Integration tests – ^(.*IT.*).*$
  • Unit tests – ^(?!.*IT.*).*$

Make sure the Run configuration has the Test Kind set to Pattern, and searches for tests in the whole project.

OpenLDAP with TLS and LetsEncrypt on Ubuntu 16.04

A project I’m working on requires a Kerberos and LDAP infrastructure. As with most tech, it’s easy to do something quickly, but much harder to do it properly and get it documented.
One of the biggest problems I encountered was when setting up replication between LDAP servers. We use SaltStack to build and maintain our server estate, so deployment and configuration needs to be automated.
With Let’s Encrypt issuing a certificate to each server, OpenLDAP can use that certificate to encrypt and authenticate connections from the other LDAP servers. It’s meant to be simple – apply this LDIF to the server’s configuration:
dn: cn=config
changetype: modify
add: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/letsencrypt/live/ldap1.example.com/fullchain.pem
-
add: olcTLSCertificateFile
olcTLSCertificateFile: /etc/letsencrypt/live/ldap1.example.com/cert.pem
-
add: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/letsencrypt/live/ldap1.example.com/privkey.pem

This error kept cropping up:
$ sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f ./ssl.ldif
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
modifying entry "cn=config"
ldap_modify: Other (e.g., implementation specific) error (80)

Restarting slapd with all debugging switched on revealed nothing useful, and strace-ing the daemon while it ran was similarly fruitless.
Cutting to the chase, the problem was twofold: the openldap user had access to neither the certificate symlinks nor the certificates themselves, and AppArmor was blocking access to the files under /etc/letsencrypt.
So how do we solve it?
First, using setfacl, give the openldap user rx permissions on /etc/letsencrypt/live and /etc/letsencrypt/archive. This allows slapd to read and follow the symbolic links through to the actual files in the archive directory. Next, add the following to /etc/apparmor.d/local/usr.sbin.slapd (a command sketch for both steps follows the snippet):
/etc/letsencrypt/live/{{ grains.id }} r,
/etc/letsencrypt/archive/{{ grains.id }} r,
/etc/letsencrypt/archive/{{ grains.id }}/** r,

The trailing comma on the last line isn’t an error – every AppArmor rule ends with one. These rules stop AppArmor blocking slapd’s access to the certificate symlinks and the actual certificates described above. Remember to reload AppArmor afterwards.
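Pulling the two steps together, the commands look roughly like this on ldap1.example.com (the {{ grains.id }} above is a SaltStack template variable – the minion ID, which is the host’s name in our setup – so substitute your own hostname if you’re not templating the profile):
sudo setfacl -m u:openldap:rx /etc/letsencrypt/live /etc/letsencrypt/archive
# if slapd still can't read the key files, extend the ACL to the files under archive too
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.slapd
sudo systemctl restart slapd
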
And that’s it.

Turning off SSID broadcast for HP LaserJet printers

I took delivery of a shiny HP LaserJet wireless printer recently. Setting it up on the office network was fairly easy once I’d connected it via a cable (and then disconnected it afterwards) – except it kept broadcasting an open wireless network (SSID) which anyone could use.
That’s not awesome, but on HP’s Support Forum, I found these instructions from a user who worked out how to disable the network:

  1. Make sure you do not have a USB cable attached to your printer
  2. Go to the web configuration portal for the printer
  3. Go to the Networking tab > Wi-Fi Direct Setup
  4. Set WiFi Direct to "On"
  5. Set Connection Method to "Advanced"
  6. Check the box "Do not broadcast the Wi-Fi Direct name"
  7. Click "Apply"
  8. Restart the printer
  9. Verify you no longer see the SSID
  10. Go back to the Wi-Fi Direct Setup and set the WiFi Direct setting to "off"

Converting from assert() to assertEquals() in Java

When I was inexperienced in Java, I wrote a lot of tests using assert(), rather than using assertEquals().
Revisiting code today, I wanted to update many test suites to use assertEquals(), which requires flipping the expected and actual values around. That was too tedious to do quickly by hand, so I used the following find-and-replace regular expression:
Find: assert \((.+)\)\.equals\((.+)\)\);
Replace: assertEquals($2, $1));
It worked like a treat.
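If you’d rather convert a whole tree from the shell instead of the IDE, the same expression should translate to a sed one-liner along these lines (an untested sketch, assuming a Maven-style src/test/java layout):
find src/test/java -name '*.java' -exec sed -E -i 's/assert \((.+)\)\.equals\((.+)\)\);/assertEquals(\2, \1));/' {} +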

Securing an HP LaserJet printer with LetsEncrypt

The fantastic Let’s Encrypt service lets you issue SSL/TLS certificates to devices without charge. It’s not everything you may want at the enterprise level, but for the professional in their home environment, it’s great.
I wanted to replace the self-signed certificate on an HP printer I had, but it wasn’t an easy process. I’ve documented it here so it can be useful to others too.
First, use certbot to generate your certificate. Run the command as follows:
certbot -d host.example.com --manual --preferred-challenges dns certonly
This will instruct you to add a TXT record under the host’s DNS name for authentication, after which you’ll receive your certificate.
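certbot prints the exact record name and value to publish (under _acme-challenge.host.example.com); before letting it continue, it’s worth checking the record has propagated:
dig +short TXT _acme-challenge.host.example.com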
To convert this into a PKCS#12 file, suitable for loading onto the printer, use the following command:
openssl pkcs12 -export -out certificate.pfx -inkey config/live/host.example.com/privkey.pem -in config/live/host.example.com/cert.pem
The .pfx file can then be uploaded to the printer and it’ll use it immediately.
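If you want to sanity-check the bundle before uploading it, openssl will read it back and prompt for the export password you set:
openssl pkcs12 -info -in certificate.pfx -nokeys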

Ubuntu 16.04LTS

I will freely admit that I’ve been putting off upgrading my Ubuntu 14.04LTS boxes to 16.04LTS. In a previous post, I wrote about my battle with getting the Ubuntu desktop to be usable in the way I wanted it. Having tried this out on 16.04LTS, I realised that I’d have to change the way I work.
I am two weeks into running the newly upgraded system and I wish I’d gone through the pain earlier. Making the Unity Launcher smaller, getting used to the menu bar in the top row of windows, and the close, minimise and maximise buttons landing in the top left of the screen when maximised – none of those took particularly long to get past.
On previous installs I’ve wanted shortcuts to the common applications at the top of the screen by the clock, but now I’ve locked these into the Unity Launcher instead – and there’s more space for them there.
The only irritation that’s still there is resizing terminal windows. It takes a while to re-learn that I don’t have to be precise with the cursor positioning to change the window size. And that’s it.
Ubuntu 16.04LTS, you are forgiven – I thought you were going to be a nightmare, but you’re lovely. And when I unplug one of my monitors from the graphics card, you put everything back on the screen that’s still plugged in. That’s awesome!

Citrix Receiver and CA support

On installing an Ubuntu 16.04 desktop from scratch and putting Citrix Receiver on it, I found I couldn’t connect to one of the servers that I need to for work. The strange error suggested that the Citrix client doesn’t trust the VeriSign Class 3 Public Primary Certification Authority – G5.
This seemed confusing at first – I have the CA certificate in /usr/share/ca-certificates/mozilla. A quick look at where Citrix Receiver installed itself showed what was going on.
Citrix Receiver only has a handful of CA certificates installed in /opt/Citrix/ICAClient/keystore/cacerts, so if you’re connecting to a server whose certificate was issued by a CA other than the dozen or so in there, simply copy the CA’s .crt or .pem file into that directory and you’re sorted.
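In my case that meant copying the VeriSign root from the system bundle into Receiver’s keystore – something along these lines (check the exact filename on your system):
sudo cp /usr/share/ca-certificates/mozilla/VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.crt /opt/Citrix/ICAClient/keystore/cacerts/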

On mainframes and freedom

Last week, CIO Magazine reported on the shortfall in mainframe skills.
It reminded me of a situation I faced a couple of years ago. My team was preparing to write software which integrated with an IBM z/OS system, and I knew literally nothing about it. I didn’t need to know all the details, but I wanted to be able to talk to the system programmers coherently and be taken seriously. I didn’t know much about CICS, and I’d not even heard of RACF. I wanted to learn.
I am a self-starter, and I want to get my hands on the technology I’m going to use in my work life. From learning perl, PHP and SQL in the late 1990s so I could capture ISDN call log data over RADIUS, to buying a bunch of older Cisco equipment to bring me up to speed on VoIP, that approach has worked stunningly well. I wanted to do the same with z/OS – not to become an overnight expert, but to be able to bridge the gap between Java developers and the people who ran the mainframe we were going to be using. It would make the whole project happen with much less friction.
After no more than an hour or so’s research, I found IBM wanted $900-or-so a year, on top of buying dedicated PC hardware just so I could run z/OS and spend a bit of time becoming familiar with it. But why? I can download any number of GNU/Linux distributions without charge, or evaluate most Microsoft products for several months without any hassle. Why can’t I do that with z/OS? There were several sites offering TSO and CICS access, but that was only from a user’s perspective. I wanted to peer inside and see what made it tick.
Luckily, the project succeeded and I learnt a fair amount about z/OS in the process, but it wasn’t as quick or easy as if I’d been able to spend some of my own time learning. In fact, when I was given access to a non-production CICS region with the software on the other side of our interface, I spent several hours of after-work time getting to know it, and came away being able to help our developers write an even better product than if I’d stayed in my pure consultancy role.
If IBM want to change the perception that mainframes are an enigma understood only by the balding and bearded stalwarts at large companies, they need to get people hooked. Make the latest release of z/OS and a suitable emulator available for download, and let the world see how great your software is.

Enterprise IT Onboarding

I visited a customer site last week to collect a building access card and get set up on their email system. I managed to get both done within 30 minutes.
The impressive thing was that my Inbox already contained instructions on how to set up remote access to Webmail and my desktop via Citrix. All I needed was Google Authenticator or similar on my phone and that was it.
Why can’t all Enterprise IT departments be this efficient?