On mainframes and freedom

Last week, CIO Magazine reported on the shortfall in mainframe skills.

It reminded me of a situation I faced a couple of years ago. My team was preparing to write software which integrated with an IBM z/OS system, and we knew literally nothing about it. I didn’t need to know all the details, but I wanted to be able to talk to the system programmers coherently and be taken seriously. I didn’t know much about CICS, and I’d never even heard of RACF. I wanted to learn.

I am a self-starter, and I want to get my hands on the technology I’m going to use in my work life. From learning Perl, PHP and SQL in the late 1990s so I could capture ISDN call log data over RADIUS, to buying a bunch of older Cisco equipment to bring me up to speed on VoIP, it’s worked stunningly well. I wanted to do the same with z/OS – not to become an overnight expert, but to be able to bridge the gap between Java developers and the people who ran the mainframe we were going to be using. It would make the whole project happen with much less friction.

After no more than an hour or so’s research, I found IBM wanted around $900 a year, on top of dedicated PC hardware, just so I could run z/OS and spend a bit of time becoming familiar with it. But why? I can download any number of GNU/Linux distributions without charge, or evaluate most Microsoft products for several months without any hassle. Why can’t I do that with z/OS? There were several sites offering TSO and CICS access, but that was only from a user’s perspective. I wanted to peer inside and see what made it tick.

Luckily, the project succeeded and I learnt a fair amount about z/OS in the process, but it wasn’t as quick or easy as if I’d been able to spend some of my own time learning. In fact, when I was given access to a non-production CICS region with the software on the other side of our interface, I spent several hours of after-work time getting to know it, and came away being able to help our developers write an even better product than if I’d stayed in my pure consultancy role.

If IBM want to change the perception that mainframes are an enigma understood only by the balding and bearded stalwarts at large companies, they need to get people hooked. Make the latest release of z/OS and a suitable emulator available for download, and let the world see how great your software is.

Enterprise IT Onboarding

I’ve worked at many organisations, from educational establishments up to multinational banks and corporations. My gripe is that hardly any of them make the Enterprise IT on-boarding process slick.

[Image: The Enterprise IT adoption cycle visualised. Graphic ©2012 Simon Wardley, CC BY-SA 3.0]

Some of the smaller companies I’ve worked for are a different story. I had to wait a week to get a building pass once because the only badge printer was faulty, and at another company, they’d actually run out of access cards and simply stuck a white sticker over somebody else’s card and told me to use that. Security let me into the building with a quick flash of a piece of plastic bearing the company’s logo and nothing more.

That’s forgivable at small companies – but at large companies, people join all the time, and on-boarding really should be a business-as-usual (BAU) process.

Last week, I went to a new customer site to collect a building access card and get set up on their email system. I was in and out of the building within 30 minutes. The security team sorted my ID card and building access immediately, which let me go up to the floor I’d be working on and get email access.

My manager gave me a piece of paper with my username and temporary password, and on logging on to the nearest thin client, I had access to everything I needed. Remote access was a breeze – the first email in my Inbox contained instructions on how to get access to my virtual desktop remotely using Citrix and Google Authenticator.

Why can’t everyone’s on-boarding process be this slick?

Novell RPL Boot under VirtualBox

One of my recent retrocomputing projects was to set up a Novell NetWare 4.11 server and boot clients from it. Remote boot, or Remote Initial Program Load, was a common method for booting network clients over the LAN before IP became commonplace.
RPL requires a boot ROM on the network card which finds a nearby server, connects to it and downloads a disk image which it then executes. By today’s standards, it’s trivial – but by late 1990s standards, it was anything but.

I spent a few hours trying to get VirtualBox to do RPL boot. Etherboot doesn’t appear to support RPL, so I tracked down a ROM image on Intel’s website. There isn’t much demand for RPL and Intel deprecated it in 2005. Luckily, they’ve kept an old version of their drivers available which contains a boot ROM image supporting RPL.

The executable, PRORPL.EXE, can be unpacked with 7z and produces two interesting-looking files with the extension FLB. One is 63,488 bytes, and the other is 139,264 bytes.

Installing these in a VirtualBox machine is straightforward but unfortunately undocumented:

vboxmanage setextradata "vmName" VBoxInternal/Devices/pcbios/0/Config/LanBootRom romLocation

After booting the virtual machine from cold, VirtualBox didn’t complain, but also didn’t use the ROM. Looking in the Log Viewer showed the vague message rc=VERR_TOO_MUCH_DATA.
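If you prefer the command line, the same message appears in the VM’s log file – something like this will find it, assuming the default VM directory:
grep VERR_TOO_MUCH_DATA "$HOME/VirtualBox VMs/vmName/Logs/VBox.log"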

The vital piece of information I forgot is that boot ROMs must be smaller than 64 kilobytes. The Intel image is very close to that size. Back to the drawing board!

With some further searching, I found a Generic BootRom Utility on AMD’s website which contains a 16 KB file. This file, RBOOT.ROM, is a working RPL boot ROM for AMD PCnet network cards. Coincidentally, the VirtualBox machine I’m using has an AMD PCnet-FAST III card. Result!

Re-running the vboxmanage command above with the path to the newly discovered boot ROM works a treat. I can boot a virtual machine straight off a virtual Novell NetWare server. By today’s standards, the process is quite cumbersome but I’ll leave a description of that for another time.
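For reference, that command looks something like this – the path is just an example, so point it at wherever you extracted RBOOT.ROM:
vboxmanage setextradata "vmName" VBoxInternal/Devices/pcbios/0/Config/LanBootRom /path/to/RBOOT.ROM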

The curious case of the IP Alias

Trying to log on to Skype earlier in the week on my MacBook Pro didn’t work. For some reason it simply wouldn’t connect – it just timed out. Everything else worked absolutely fine, no issues.
Figuring it was an IPv6 issue, I unbound IPv6 from en0 and tried again. Nothing. It wasn’t my Cisco ASA firewall playing games either, although logging on to it showed a vast number of packets dropped from 192.168.1.x on its inside interface (reverse path check failures – I don’t use 192.168.1.x internally). How could this be?
It turns out that I had a 192.168.1.x address bound to en0 from when I was testing some locally connected kit. Skype saw this as the first IP address it could use and bound to it – whereas everything else worked fine letting the OS choose. Unbinding this address made Skype leap into action.
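If you ever find yourself in the same boat, this is roughly how to spot and remove a stray alias on OS X – the 192.168.1.10 address is just an example:
ifconfig en0 inet
sudo ifconfig en0 -alias 192.168.1.10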

Cleanweb, February 2015

Last night, I gave a talk on Open Rail Data at Cleanweb.

I wanted to stay longer – there were plenty of discussions to be had, but after a busy Open Data Day on Saturday, bed won over the pub.

Missed the presentation? I’ve uploaded the slides, and they’re available in PDF format.

If you want to continue the discussion, join the openraildata-talk mailing list and come chat to like-minded people!

The end is near…

It doesn’t seem like five years, but it is. Five years since I wanted the API to National Rail Enquiries’ Live Departure Board web service to be available for everyone so they can innovate and do great things.
We’ve come a heck of a long way in those five years – as from this week, you can sign up for the Open Live Departure Boards Web Services. A round of applause, please!
So, is that the end? Unfortunately not – there’s even more data to unlock, even more value to be created and stories to be told – but I think it’s been demonstrated that open and permissive trumps closed and expensive.
I get the feeling it’s going to be a smoother ride from here on.

Configuring a WebSphere MQ server

In a previous post, I documented the steps to install IBM WebSphere MQ on Ubuntu. Now, more generically (and mostly for my own reference), here’s how to set up a queue manager and queues.
You’ll need the WebSphere MQ installation packages – if you’re only evaluating WMQ at the moment, try the WebSphere MQ 90-day trial. Also, you’ll need to read the previous blog post and set your sysctl settings appropriately.
First off, install the MQSeriesRuntime and MQSeriesServer packages – they’re the only ones you’ll need. After installation, run the following command:
/opt/mqm/bin/setmqinst -i -p /opt/mqm
This will set your default MQ installation path.
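To confirm the installation is registered correctly, dspmqver will report the version and installation path:
/opt/mqm/bin/dspmqver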
Next, ‘su’ to ‘mqm’, then create a queue manager with one really simple command:
crtmqm MQSVR1
Before doing anything else, you’ll need to start the queue manager:
strmqm MQSVR1
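A quick check with dspmq at this point should show MQSVR1 with a status of ‘Running’:
dspmq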
You will also need to start a listener to be able to connect, so use the ‘runmqsc’ command to submit commands to create a listener on TCP port 1414, and also create a server connection channel called ‘SYSTEM.ADMIN.SVRCONN’:
runmqsc MQSVR1
DEFINE LISTENER(MQSVR1.LISTENER) TRPTYPE(TCP) PORT(1414) CONTROL(QMGR)
START LISTENER(MQSVR1.LISTENER)
DEFINE CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN)
^D
The statement ‘CONTROL(QMGR)’ is important here – this will start and stop the listener with the queue manager. If you don’t include this, you’ll need to start the listener every time you bring up the queue manager.
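A queue manager isn’t much use without a queue, so you may also want to define a local queue in another ‘runmqsc’ session – TEST.QUEUE is just an example name:
runmqsc MQSVR1
DEFINE QLOCAL(TEST.QUEUE)
^D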
At this point, you have the barest of bare WebSphere MQ server setups. I’ll cover authentication for WMQ 7.5 and higher in another blog post.

Importing Ordnance Survey Open Data into PostgreSQL with PostGIS

Some time ago, I looked at some uses for Ordnance Survey Open Data, coming to the conclusion that a sensible way to work with it would be to import it into a geospatially-enabled database.
Each set of data is provided in ESRI Shapefile format, and has four files:

  • shp – shape format
  • shx – shape index format
  • dbf – attribute format in dBase IV format
  • prj – projection format

The shp2pgsql command converts SHP files into a set of SQL statements which import the data into PostgreSQL. Here’s a ridiculously simple guide to importing a file:
createdb os_opendata
psql -d os_opendata -c "CREATE EXTENSION POSTGIS"
shp2pgsql <filename>.shp <table_name> | psql -d os_opendata
Within a few seconds (depending on the speed of your machine), you’ll find a new table in your database with all the data included.
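One refinement: by default, shp2pgsql doesn’t tag the geometry with a spatial reference system. Ordnance Survey data uses the British National Grid projection, so passing -s 27700 sets the appropriate SRID:
shp2pgsql -s 27700 <filename>.shp <table_name> | psql -d os_opendata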
And finally, what if you just want to import all of the data at once? Try this:
find Data/ -name "*.shp" | xargs -I % shp2pgsql % | psql -d os_opendata
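As a quick sanity check once the import has finished, list the tables and make sure everything you expect is there:
psql -d os_opendata -c "\dt"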

Ubuntu 14.04 for Productive People

Way back in 2011, I blogged about Ubuntu 11.10 for Productive People, which took the form of a mini tutorial on how to wrestle some of Ubuntu’s UI candy away and replace it with something better suited to being productive.
I’m still standing by my assertion that Ubuntu is too ‘pretty’ on the desktop now, and lacks a ‘power user’ mode, but I won’t argue with anyone who says it’s great. It’s not a false dichotomy – you can have both a power mode and a pretty mode in a desktop operating system.
Updated for the current beta of Ubuntu 14.04 LTS, here are the instructions for getting the latest release of Ubuntu into shape:

  • Install Ubuntu 14.04
  • Install Gnome using apt-get install gnome – use lightdm as the display manager
  • Remove the slightly obstructive overlay scrollbar with apt-get remove overlay-scrollbar
  • Log out, then log back in again but click the Ubuntu logo by your username and select ‘GNOME Flashback (Metacity)’
  • Run gnome-tweak-tool, select Fonts and set the text scaling factor to 0.9, then under Appearance, set the Icon theme to Gnome and Cursor theme to Adwaita. Under Top Bar, check ‘Show date’ and ‘Show seconds’

Refreshingly easy, isn’t it? I’m going to be updating to 14.04 LTS when it’s released!