
  • Migrating your email

    Background

    Back when dial-up was 33.6k and HTML email was frowned upon, I ran my own mail server. This was a luxury that not many people had, but it gave me huge amounts of experience running a live service on the Internet.

    There are several problems with hosting your own email. The most significant is that you’re responsible for everything. Whilst that means you have ultimate control, there are more exciting things to do in life than continually review DNS blocklists and anti-malware technology.

    Around 2013, I decided that it was time to migrate to Gmail and let Google handle my email. Google are huge, and huge is frequently (but not always) a good choice when it comes to service providers.

    In 2021, I decided to split my company email away from my personal email, so I set up a Microsoft 365 tenant and got used to the fact that I’d have two mail clients. That separation alone makes it much easier to think about the bigger picture: what you’ll do in five years’ time when your business grows and you have more employees.

    Keeping decisions under periodic review is good, as something that was right for you a decade ago may not be right for you any more – and that’s fine.

    Given the direction the world could be going in, I decided it was right to migrate my personal email away from Google and on to Proton, an organisation with a different, more positive and more private outlook.

    The process was entirely seamless, and I am happy with the Proton Mail client, the free VPN software (even though I use my existing VPN) and the cloud storage feature with Proton Drive.

    Advice

    For those of you who’ve never migrated email between providers, it can be a scary experience. Here’s my list of things to think about when you’re migrating:

    1. Do you have control of DNS records for your domain? You need to be able to change these by yourself, commonly achieved by logging in to a provider’s web interface and submitting changes there.
    2. Have you reduced the TTL (time-to-live) on the MX records in your domain to a much smaller value, e.g. 15 minutes? Doing this means that if you mess up some of your changes, you can roll back without the Internet caching invalid records for a long time.
    3. Have you verified that your new provider can receive email for your domain name? It’s difficult to do without changing your MX records, but possible if you get friendly with telnet and submit a test email by hand.
    4. Will your new provider deliver all email to one mailbox, or do you need to set up aliases? For example, you might have everything addressed to @example.com delivered to a single mailbox, or you might have separate mailboxes for postmaster@ and hostmaster@.
    5. Do you need to migrate your previous email to your new provider, or are you going to start from scratch? It might be that you just back up or download your mailbox and keep it locally if you don’t often refer back to old email. Or, you might want to migrate your sent and personal items, but ignore other folders such as bulk emails.
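    For point 3, the hand-driven test looks roughly like this – the hostnames and addresses are placeholders, and the exact replies will vary by provider:

```
$ telnet mail.newprovider.example 25
220 mail.newprovider.example ESMTP
HELO test.example.com
250 mail.newprovider.example
MAIL FROM:<test@example.com>
250 2.1.0 Ok
RCPT TO:<you@example.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: Pre-migration test

Testing delivery before changing the MX records.
.
250 2.0.0 Ok: queued
QUIT
221 2.0.0 Bye
```

    If the message lands in your new mailbox, the provider is accepting mail for your domain before you’ve touched DNS.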

    I can’t emphasise enough that lowering the TTL on your MX records is really important. Having once caused a day-long outage with a typo in an MX record, I can confirm that the simple act of saying “Don’t cache this for more than 15 minutes” can be an absolute lifesaver.
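    In zone-file terms, the idea is sketched below – example.com and the mail host are placeholders, and the second field is the TTL in seconds:

```
; Before the migration window: cache for no more than 15 minutes
example.com.    900    IN  MX  10 mail.newprovider.example.

; Once the move is verified, raise the TTL back up
example.com.    86400  IN  MX  10 mail.newprovider.example.
```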

    Appraising Proton

    A month or so in, I’m very happy with Proton. The migration was seamless, and the tools are very good. I can’t vouch for the level of support, nor for any outages, but those vary wildly between providers.

    One weak point is that it wasn’t simple to migrate certain folders from Google to Proton. There’s Proton Bridge which you can run locally and use to copy over the email you want, but it seems very, very slow. That could be Thunderbird’s fault, or it could be the transactional nature of “download an email”, “upload an email”. Copying email overnight was the easiest workaround.

    Another point is that I’ve not found out how to share my calendar with the Apple Calendar app on either my laptop or iPhone. Whether you consider this a negative or not will depend on your personal comfort levels, but it isn’t too much of a hassle here.

    Proton VPN, which isn’t something I think I’ll use very often, is ridiculously straightforward: it’s a WireGuard configuration that takes seconds to set up. I’ve tried it out, and it might prove useful when I’m on an insecure, untrusted WiFi network and want another layer of security. However, I have an existing VPN to my own equipment which will probably get used more often.

    Conclusions

    I’m happy with my decision. It was in no way as painful as migrations have been in the past, which I put down to the fact that this isn’t my first rodeo. If Proton doesn’t live up to my expectations, it’s my domain name so I can move it elsewhere.

  • Mikrotik to Ubiquiti

    After leaving the networking industry in 2013, I decided to replace the Cisco network equipment at home with Mikrotik for reasons of cost and power. Twelve years later, I’ve ditched Mikrotik and moved to Ubiquiti for some solid reasons.

    What attracted me to Mikrotik was the price/performance ratio. I was using a Cisco 1801 with dual VRFs, terminating a PPPoE connection in one VRF, routing through a Cisco ASA5505 and dropping back into a second VRF. When I upgraded from a 24Mbps ADSL2+ line to an 80Mbps VDSL line, I quickly found my setup wasn’t anywhere near powerful enough to handle 160Mbps of traffic.

    Over the course of a decade, I went from a single RB750GL to a pair of RB4011iGS+RM routers, a pair of CSS326-24G-2S+RM switches and an assortment of CAPsMAN-managed access points. It worked well, but there were always some awkward parts – VLAN support was confusing, CAPsMAN seemed like magic that either worked or didn’t, and there was no firewall state synchronisation in RouterOS. But it was inexpensive, and I stuck with it.

    The final straw came when one of my routers – the one handling WiFi and acting as a backup for the other, which handled PPPoE termination and routing – crashed without warning at 3am. It was stuck in a reboot loop, loading its kernel but then rebooting without any helpful error message.

    I went through the manual process of moving CAPsMAN to the other router, but couldn’t get it working after four hours of trying. In the end, I reset one of my access points and set up a single SSID to get my IoT devices back online. It wasn’t optimal, but it bought me time. After a further eight hours of troubleshooting and little joy, I gave up.

    Enter Ubiquiti. I’d heard of them but never looked at their kit. I started off with a U7 Lite access point. Management of all UniFi kit is via a centralised web interface, and setup was blissful despite my historical dislike of web interfaces for network management.

    Within a week, I’d bought the strangely named Dream Machine Special Edition which runs as a controller, switch and router. It took under an hour to get it up and running with two PPPoE connections, and I was happy. Unfortunately, it doesn’t have VRF support nor can it handle more than two Internet connections, so with two primary PPPoE and one backup domestic-grade fibre connection, I was a little stuck.

    A quick explanation for those of you who are wondering why I use a pair of copper connections as a primary Internet connection, and fibre as backup. I don’t live in an area where Openreach have FTTH available, and the only choice is VDSL or fibre from another ISP. The VDSL connections have a public routable address range – which is perfect for remote access – plus a very strong support team at the ISP. The fibre connection is a heck of a lot faster and a lot cheaper, but has a single static IPv4 address and is liable to multi-hour outages and latency spikes, so it’s relegated to backup connectivity only.

    I decided to run one of the PPPoE connections alongside the fibre connection for a day or two, to see if I could route my home devices via the faster fibre, and work devices via the other connection. To my complete amazement, this sort of policy routing was available without any complication: it’s simply two routes, sending traffic from the office network via one ISP and everything else via the other. Heck, I can even route specific traffic over a specific connection – low-latency ssh via VDSL and Facebook over fibre.

    Within a few more days, I’d decided to replace the Mikrotik gear completely. A second U7 Lite handles the upper levels of my flat, and three scarily inexpensive Flex Mini 2.5G switches replace the desktop and lab switches. A Pro Max 24 switch, of which eight ports are 2.5GbE capable, has replaced the Mikrotik switches, and it’s game on.

    The real test came when I needed to set up a VPN for access to some internal servers whilst I was travelling. Having struggled to set up a WireGuard VPN with RouterOS, I found it a thirty-second job on the Ubiquiti kit: download a configuration, connect my laptop to a hotspot on my phone, set up the VPN and I’m in. Zero hassle.
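    For the curious, the downloaded file is a standard WireGuard client configuration along these lines – the keys, addresses and endpoint below are placeholders, not what the controller actually generates:

```ini
[Interface]
# Placeholder key and tunnel addresses
PrivateKey = <client-private-key>
Address = 192.0.2.2/32
DNS = 192.0.2.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Route all traffic through the tunnel
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```

    Import it into any WireGuard client and you’re done.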

    I used the VPN consistently over the next week or so without any problem. I couldn’t even tell all my traffic was going through a tunnel and back out again, apart from when using on-train WiFi, but we’ll discount that as it’s universally dodgy at the best of times.

    So now, less than a month after a decision to ditch a platform that had served me well for over a decade, I’ve migrated everything over to Ubiquiti and I couldn’t be happier.

  • Fixing ‘host must not be null’ with LocalStack and AWS S3 Client

    After being inspired by Piotr Przybyl’s talk about integration testing, I started off a new project with some thorough testing of a Spring Boot JAR.

    Piotr Przybyl on Integration Testing

    Software development is full of obscure problems, and when uploading an object to an emulated Amazon S3 service using LocalStack, I kept getting the error:

    java.lang.NullPointerException: host must not be null.

    Not the most helpful error message, especially since I was setting endpointOverride on my S3Client builder in this way:

    this.s3Client = S3Client.builder()
            .endpointOverride(localstack.getEndpoint())
            .credentialsProvider(StaticCredentialsProvider.create(credentials))
            .build();

    Some hours of painful searching suggested that the error message was pointing me in the wrong direction.

    The solution? Add this to the Builder:

    .forcePathStyle(Boolean.TRUE)
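    My understanding of why this works – an assumption from digging around, not something the SDK documents – is that without path-style addressing, the SDK builds a virtual-hosted-style URL by prefixing the bucket name onto the endpoint host. With LocalStack’s IP-based endpoint, that produces a hostname java.net.URI refuses to parse, so the host component comes back null. A small demonstration (class and bucket names are mine, for illustration):

```java
import java.net.URI;

public class HostStyleDemo {
    public static void main(String[] args) {
        // Virtual-hosted style: the bucket is prefixed onto the endpoint host.
        URI virtualHosted = URI.create("http://mybucket.127.0.0.1:4566/key.txt");
        // "mybucket.127.0.0.1" is neither a valid hostname nor an IPv4
        // address, so java.net.URI leaves the host component null.
        System.out.println(virtualHosted.getHost());

        // Path style: the bucket moves into the path and the host stays valid.
        URI pathStyle = URI.create("http://127.0.0.1:4566/mybucket/key.txt");
        System.out.println(pathStyle.getHost());
    }
}
```

    Hence forcePathStyle: keep the bucket out of the hostname, and the NullPointerException disappears.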