Planet HantsLUG

June 14, 2021

Alan Pope


Over the weekend I participated in FOSS Talk Live. Before The Event this would have been an in-person shindig at a pub in London. A bunch of (mostly) UK-based podcasters get together and record live versions of their shows in front of a “studio audience”. It’s mostly an opportunity for a bunch of us middle-aged farts who speak into microphones to get together, have a few beers and chat.

Due to The Event, this year it was a virtual affair, done online via YouTube. Joe Ressington typically organised the in-person events, but as video streaming isn’t his speciality, Martin Wimpress and Marius Quabeck stepped in to run the show behind the scenes.

There was representation from Linux Lads (Shane, Mike and Conor), Late Night Linux (Joe, Graham, Felim and Will), Ubuntu Podcast Voltage (Myself, Martin, Mark and Stuart (borrowed from Bad Voltage - hence the rebrand (yes, we considered calling it “Bad Podcast”))), and The New Show (Me again with Joe and Dan).

During our section (which you can find 2 hours and 20 mins into the live stream), we presented four creations based on the following premise:

“Make a thing (program, game, hardware device, whatever) that is controlled by one button. Once your thing is running it can only accept a single button with a pressed and not pressed state (this could be a physical button, a keyboard key, or a software UI button) as input until it stops running”

We then provided an opportunity for the audience to vote for the best one on Twitter.

Spoiler: I won!

I made a game which I called Adrift. The idea is you’re in a failing space ship. If your hull integrity gets to zero, the ship explodes and you die; if life support fails, you also die. Keep the ship in working order by re-charging the systems: hold down the space key, and release it in one of the four quadrants of the controls. Here’s what it looks like :)

Adrift screenshot

In addition if the shield gauge gets low, the hull will fail faster. If engines get low, everything fails faster. The weapons are used to kill enemies who come on screen from the left. They currently don’t shoot you, so there’s no jeopardy from them, because it’s unfinished.

You can install the one I showed off in the live stream from the Snap Store. I only published a 64-bit x86 build for Linux, and haven’t built binaries for Windows or macOS yet. This is a Linux-exclusive “game”! ;)

It is very much a prototype / work in progress / game jam affair. That is to say, there’s a ton of stuff missing from it. I managed to pull together an MVP (Minimum Viable Product) that I could video and demo on the show, but that’s about it.

This is actually the first game I have ever made and released publicly, so you can understand why it’s a bit awful. It was also done under time pressure (I believe they call this “crunch” in the games industry ;) ). It’s also the first thing I’ve made in Blitz for a very, very long time.

There was some amusing “controversy” from the viewers in the live chat (you can see it if you watch the video linked above) about my entry. This revolved around how many buttons were being used. We were only supposed to use one button for the game. In Adrift, only the spacebar does any game-related activities. However I’d added a few options including P to pause, and some function keys to enable me to debug the code.

I only added the pause button to enable me to record the video you see in the stream, and pause the gameplay so I could talk over it. Sure, I could have used a video editor, but there was a bit of time pressure, and it was actually quicker for me to add a pause function. My video was made at 16:05 on Saturday, with our deadline being 17:00, and the whole event starting at 19:00. Time was of the essence!

I removed all the additional keys from the build that’s published online, however. :)

During the live stream, we only really had a few minutes to discuss our projects. On the Ubuntu Podcast we’ll have an in-depth segment soon, where all four of us will go into more detail. We can also answer any questions from the audience, should they come in.

I don’t want to go into full detail about the development in this post, but use this as an opportunity to gloat over my co-presenters, and give out the links so people can download and try out my pretty terrible attempt at a “game”. :D

I kept a bit of a diary / devblog which I’ll use to follow up with another blog post, in a few weeks. But for now, that’s it. Feel free to grab the game from the following places.

Adrift by Alan Pope

June 14, 2021 11:00 AM

June 08, 2021

Debian Bits

Registration for DebConf21 Online is Open

DebConf21 banner

The DebConf team is glad to announce that registration for DebConf21 Online is now open.

The 21st Debian Conference is being held online, due to COVID-19, from August 22 to August 29, 2021. It will also sport a DebCamp from August 15 to August 21, 2021 (preceding the DebConf).

To register for DebConf21, please visit the DebConf website at

Reminder: Creating an account on the site does not register you for the conference, there's a conference registration form to complete after signing in.

Participation in DebConf21 is conditional on your respect of our Code of Conduct. We require you to read, understand and abide by this code.

A few notes about the registration process:

  • We need to know attendees' locations to better plan the schedule around timezones. Please make sure you fill in the "Country I call home" field in the registration form accordingly. It's especially important to have this data for people who submitted talks, but also for other attendees.

  • We are offering limited amounts of financial support for those who require it in order to attend. Please refer to the corresponding page on the website for more information.

Any questions about registration should be addressed to

See you online!

DebConf would not be possible without the generous support of all our sponsors, especially our Platinum Sponsors Lenovo and Infomaniak, and our Gold Sponsor Matanel Foundation.

by Stefano Rivera at June 08, 2021 08:00 AM

May 28, 2021

Debian Bits

Donation from to the Debian Project and benefits for Debian members

We are pleased to announce that offsite backup and cloud storage company has generously donated several Terabytes of storage space to the Debian Project! This new storage medium will be used to backup our Debian Peertube instance.

In addition to this bountiful offer, is also providing a free-forever 500 GB account to every Debian Developer. is a dedicated offsite backup company. Since 2001, they have provided customers with a secure UNIX filesystem accessible with most SSH/SFTP applications.’s infrastructure is spread across multiple continents with a core IPv6 network and a ZFS redundant file-system assuring customer data is kept securely with integrity.

The Debian Project thanks for their generosity and support.

by Donald Norwood and Laura Arjona Reina at May 28, 2021 11:45 AM

May 26, 2021

Alan Pope

Disabling snap Autorefresh


Until recently, I worked for Canonical on the Snap Advocacy Team. Some of the things in this blog post may have changed or been fixed since I left. It’s quite a long post, but I feel it’s necessary to explain the status quo fully. This isn’t intended to be a “hit piece” on my previous employer, but merely information sharing for those looking to control their own systems.

In my previous role as Snap Advocate, I provided plenty of feedback aimed at helping users better control updates. However, a lot of this feedback was spread over forum threads and other online conversations, so I thought I’d put together the mitigations and the steps I take in one place.

At the end of this post I detail what I do on my systems to be in more control of snap updates. Skip to that if you know what you’re doing.


Snaps are software packages for Linux. They’ve been developed by Canonical over the last ~six years. There are thousands of snap packages in the Snap Store, in use by millions of users across 50+ Linux distributions.

One of the fundamental design goals of snaps is that they’re kept up to date. At a high level, the snapd daemon running on end-user systems will check in with the Snap Store periodically, looking for new revisions of installed snaps. By default snapd does this every day, four times a day, at semi-random times. This seeks to ensure end-user systems have relatively up-to-date software.

If a developer publishes a new revision of a snap package in the Snap Store, they can be relatively confident that users will get the update soon after publication. Indeed as a snap publisher myself, I can see from the Snap Store metrics page a pretty graph showing the uptake of new versions.

ncspot metrics

In the above diagram you can see from the coloured bars that each new revision of the application almost completely replaces the previous versions in the field within a week. Indeed in the above example, I had released a new revision just a few days ago and already 80% of existing users have received it.

This helps to reduce the number of old versions of applications in the wild, so as a publisher I can be more confident that there aren’t insecure (as in “known vulnerabilities I’ve already fixed”) or unsupported (as in “old versions I no longer want to provide support for”) revisions of my application out there.

So this is an attractive feature for some developers and some users. But as always in the software world, you can’t please everyone, all the time.


Over the years since the introduction of snap as a packaging format, there have been numerous impassioned requests to change this default behaviour. There are a few very valid reasons why, including, but not limited to:

Metered connection

Only having access to the Internet via a metered or very low-speed connection means a user may not want their expensive bits consumed in the background by large files being downloaded.

A user who doesn’t connect often may find that when they do, snapd wakes up and eats their data allowance. Even in highly-networked Western nations, on a slow connection, such as in a coffee shop or on a train, snapd can slow down the ability to work while it beavers away in the background.


Some users & administrators would prefer to have visibility of software updates before they land on end-user systems. Organisations using Ubuntu, for example, can ‘gate’ updates which come from the apt repository by hosting an internal repo. Software which has been fully tested can then be allowed through to their user community. Out of the box, this is not straightforward to implement with snapd because the daemon talks directly to the Snap Store.


With snapd an application may refresh while a user is actively using it. This leads to some application instability as files disappear or move from underneath the program in memory. This can present as the fonts in the application suddenly becoming squares, or the application flat-out crashing. Users tend to prefer applications don’t crash or become unusable while they’re trying to use them. The bug about this was filed in 2016 and has numerous duplicates.


Many believe that their computer should be under their control. Having a daemon hard-wired to force updates on the user is undesirable for those who prefer to control package installation. A user may just want to keep a specific version of an application around because it has the features they prefer, or because they have a workflow built around it.


The snapd and Snap Store teams have indeed listened to this feedback over the years. Sometimes it resulted in new developments which can mitigate the issues outlined; sometimes it resulted in lengthy discussion with no user-discernible outcome. Below are some mitigations you can use until those issues are addressed.

Bypassing Store Refresh

Snaps installed via snap install foo directly from the store will automatically get updates per the schedule, when the publisher releases a new revision. Alternatively, it’s possible to snap download foo && snap install foo --dangerous. This completely opts out of automatic updates for that specific snap only.

Note: Do not snap download foo then snap ack foo as this will acknowledge the assertion from the store, and will enable auto update.

Deferring updates

The refresh of applications can be deferred with the refresh.hold option. This can, for example, enable a user to defer updates to the weekend, overnight, or on a specific date/time.
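As a quick sketch (assuming GNU date; the snap set line needs a system running snapd, so it’s shown commented out), holding refreshes for a fortnight looks like this:

```shell
# Compute an ISO-8601 timestamp 14 days from now (GNU date)
HOLD_UNTIL="$(date --iso-8601=seconds -d '+14 days')"
echo "$HOLD_UNTIL"

# Hand the timestamp to snapd (uncomment on a real snapd system):
# sudo snap set system refresh.hold="$HOLD_UNTIL"
```

You can confirm it took effect with snap refresh --time, which reports the hold.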

Detecting metered connections

If NetworkManager detects that a connection is metered, snapd can be configured to suppress updates via refresh.metered.
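For reference, it’s a one-line setting like any other snapd system option (this needs a system running snapd with NetworkManager, so treat it as a configuration sketch):

```shell
sudo snap set system refresh.metered=hold
```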

Delta updates

For users on low-speed or metered connections, delta updates may help. The Snap Store decides whether to deliver deltas of packages when snapd refreshes the installed snaps. The user doesn’t need to opt-in as the Snap Store has algorithms which determine the delivery of a delta or full package each time.

Preventing updates to running applications

snapd 2.39 (released in May 2019) added refresh-app-awareness, an experimental work-in-progress option which suppresses updates for applications which are currently running. This seeks to prevent application instability when a snap is updated while in use. The option was blogged about in February 2020 to raise awareness of it.

Remaining gaps

While the snapd and Snap Store teams have worked hard to address some of the issues, there are still a few outstanding problems here.

Refresh Awareness still experimental

The refresh-app-awareness option is marked “experimental”, which means it’s not on by default, but requires the user to know about it and use an arcane command line to enable it. This desperately needs attention from the team.

Worth noting though: if a snap needs refreshing and the refresh has been blocked for more than fourteen days, it’ll get refreshed anyway, even if it’s running.

Sixty day hard-limit

Even if a user chooses to defer updates via refresh.hold, they’ll still happen “eventually”. When is “eventually”? 60 days. As delivered, users cannot defer a snap refresh beyond the hard-wired 60-day limit.

My Systems

There are three things I do on my systems. Two are pretty simple, one is a bit batty. Feel free to use these steps, or remix them to your own requirements.

Enable (experimental) app refresh awareness

Prevent running applications from being refreshed.

Run this command on every system, as per the February 2020 blog post.

sudo snap set core experimental.refresh-app-awareness=true

In the event a running snap needs to be refreshed, a notification appears to let the user know the refresh isn’t going to happen:

Chromium notification

Further, running snap refresh for an application while it’s running will result in a message:

alan@robot:~$ snap refresh chromium
error: cannot refresh "chromium": snap "chromium" has running apps (chromium)

Configure a time for updates

This reduces the chance of change while I’m working.

I set my systems to update at 12:00 on Sunday. So there’s no unexpected refreshes during the working week. You can of course pick a time suitable to yourself.

alan@robot:~$ snap refresh --time | grep ^timer
timer: sun,12:00

However, I don’t actually even want the updates to happen every Sunday, so that’s where the next option comes in.

Constantly defer updates

In theory, if I use the above option to defer updates to a date six months hence, they’ll still happen after the sixty-day hard-wired limit, and then resume on the regular schedule.

So I have a root cron job (other scheduling systems are available, apparently) which repeatedly defers updates by thirty days. I configured it for 12:30 as my system tends to be on at that time. This runs every day, constantly pushing the update window back thirty days.

alan@robot:~$ sudo crontab -l
30 12 * * * /usr/bin/snap set system refresh.hold="$(/usr/bin/date --iso-8601=seconds -d '+30 days')"

The result, when checked with snap refresh --time, shows updates are held for a month.

alan@robot:~$ snap refresh --time | grep ^hold
hold: in 30 days, at 11:28 BST

You may be thinking: “But Alan, why bother with the ‘Sunday at 12:00’ thing if you’re also going to punt updates a month into the future!?” Good question. It’s simply “belt and braces”. If my cron job fails, I’m still not going to get week-day updates.

Use a patched snapd

This is the slightly batty part.

The sixty-day limit on holding/deferring refreshes, and the fourteen-day limit on deferring running-apps updates are hard-wired in snapd, not options which can be configured.

Well, snapd is Free Software so we can recompile and install it without those limits, or with different limits.

The snapd daemon is written in Golang. I’m (currently) not a Golang programmer, so the changes I’ve made might not make sense. But it works for me.

The script does the following:

  • Clone the source for snapd from GitHub
  • Check out the same version of snapd as can be found in the latest/candidate channel in the store (configurable as SNAPD_CHANNEL)
  • Patch the sixty-day maximum postponement to an arbitrarily large number
  • Build amd64, armhf and arm64 snaps of snapd on Launchpad via snapcraft remote-build

Once built, I snap install snapd_2.50.1-dirty_amd64.snap --dangerous (filename will vary of course). This will install the patched snapd, and won’t itself get updated, due to being side-loaded locally.

The snapd package doesn’t get updated super often, so I don’t run the script all the time.

The script is called build-snapd and there’s a copy here, and I’ve pasted it at the bottom of this blog post.

If you want to do this, you’ll need snapcraft, git and a launchpad account. You could build locally, in which case maybe use lxd or multipass. That’s all in the snapcraft docs. The goal of this script is to build snapd quickly and efficiently.

Hold the snapd deb

Even with the patched snapd snap, it’s possible a newer build of the snapd debian package from the Ubuntu archives might “sneak” in via apt upgrade or unattended-upgrades and undo the patches I’ve made above. So we can pin the deb to prevent that updating.

$ sudo apt-mark hold snapd
snapd set on hold.

In case you weren’t aware, if you have the deb and the snap installed, whichever has the higher version number will be used. I build the snapd snap from source rather than the deb because there’s an easy path to remotely build it. Alternatively I could build the deb, and remove the snap. But I suspect in the future I may be required to use the snapd snap as some other application may need it, and if that happens it may undo whatever I do with the patched deb.


There are downsides to all of this, of course. I won’t get security updates to snapd, core or any other application, until I manually choose to update them. I also have to manage my snap updates. That’s pretty easy though, just like I’ve been updating with apt forever.

It’s a bit manual to set up, but only takes a few minutes to run the various commands. If inclined, I expect one could use a GitHub Action or similar cloud-based job to automate the snapd build script.

Pedantically one could argue this blog post is mistitled as “Disable snap Autorefresh” and should rather say “Massively defer snap Autorefresh”. Potato, potato.

I hope that’s helpful to someone.

#!/bin/bash

# Build snapd with longer time between forced refreshes, effectively
# allowing systems to prevent any refreshes at all, "easily".

# While it's possible to defer updates to a later date, like this:
# $ sudo snap set system refresh.hold="$(/usr/bin/date --iso-8601=seconds -d '+30 days')"
# after 60 days snapd will eventually force a refresh, even if you run
# the above command every day to push the refresh time back continuously.

# All this script does is build snapd with a way longer interval between
# 'forced' refreshes.

# To do that, we patch snapd, rebuild and install it.
# Allow us to push updates long into the future (1825 days, 5 years)
# Set maxPostponement = 1825 * 24 * time.Hour
# Set maxInhibition = 1825 * 24 * time.Hour

# Patch the default schedule to refresh only on Tuesday
# Set defaultRefreshSchedule = "tue,12:00"

# Where we start from, and a temp dir to do the work in
WORKING_DIR="$(pwd)"
SNAPD_BUILDDIR="$(mktemp -d)"

# What snap store channel should we yoink the snapd version from
SNAPD_CHANNEL="latest/candidate"

# Push updates back a ludicrous amount of time. Five years should do.
MAXPOSTPONEMENT="1825"
MAXINHIBITION="1825"

# When should refreshes happen, if they do
# Default is every day, four times a day
REFRESHTIME="tue,12:00"

# Get version in snap store from candidate, we build that
# That way we stay a little ahead of the stable channel, sometimes
CANDIDATE="$(snap info snapd | grep "$SNAPD_CHANNEL" | awk -F ' ' '{print $2}')"

# snapd source is in github
SNAPD_SOURCE=""

# Clone the upstream source
cd "$SNAPD_BUILDDIR" || exit 8
if git clone -q "$SNAPD_SOURCE"; then
	echo "*** Cloned"
else
	echo "*** Failed to clone"
	exit 1
fi
cd snapd || exit 7
if git checkout -q "$CANDIDATE"; then
	echo "*** Checked out $CANDIDATE"
else
	echo "*** Failed to check out $CANDIDATE"
	exit 2
fi

# Patch things
if sed -i "s|const maxPostponement = 60|const maxPostponement = $MAXPOSTPONEMENT|" overlord/snapstate/autorefresh.go; then
	echo "*** Patched maxPostponement"
else
	echo "*** Failed to patch maxPostponement"
	exit 3
fi
if sed -i "s|const maxInhibition = 7|const maxInhibition = $MAXINHIBITION|" overlord/snapstate/autorefresh.go; then
	echo "*** Patched maxInhibition"
else
	echo "*** Failed to patch maxInhibition"
	exit 4
fi
if sed -i "s|00:00~24:00/4|$REFRESHTIME|" overlord/snapstate/autorefresh.go; then
	echo "*** Patched autorefresh default time"
else
	echo "*** Failed to patch autorefresh default time"
	exit 5
fi

# Build snapd remotely in the cloud!
# This means it'll build regardless of the architecture you run this
# script on, and will not consume resources on your computer.
# In my experience when the builders aren't all busy, it takes
# ~30 minutes to build snapd
# Check launchpad to see the builder 'queue'
if snapcraft remote-build --launchpad-accept-public-upload --build-on amd64,armhf,arm64; then
	mv snapd_*.snap "$WORKING_DIR"
	mv snapd_*.txt "$WORKING_DIR"
	# Back from where we came
	cd "$WORKING_DIR" || exit 9
	# Remove the build temporary folder
	rm -rf "$SNAPD_BUILDDIR"
	ls -l snapd_*
else
	echo "Failed to build"
	exit 6
fi

May 26, 2021 11:00 AM

May 20, 2021

Adam Trickett

Bog Roll: Debian GNU/Linux 11.0 "Bullseye"

Yesterday I test-upgraded an old/spare laptop from Debian 10 to Debian 11. The upgrade process has changed for this release cycle: it now uses apt instead of apt-get. It seemed to go well, other than a few minor cases where I needed to press Y for it to continue.

I'll probably put some new systems I'm building directly onto Bullseye, but I won't upgrade the rest of my systems until the formal release rolls round later this year. It looks pretty good already, and I've noticed fewer changes on a virtual system that has been shadowing Bullseye for a while.

May 20, 2021 03:40 PM

May 17, 2021

Alan Pope

New Pastures

I tweeted back at the start of April that I’m moving on from Canonical/Ubuntu.

Well, I left on April 30th, have had two weeks of ‘funemployment’, and today I start my new gig.

I’m now Developer Advocate for Telegraf at InfluxData, and I couldn’t be more excited! 🎉


Telegraf is an Open Source “agent for collecting, processing, aggregating, and writing metrics”. I’ll be working with the Telegraf team and wider community of contributors. You’ll likely find me in the Telegraf GitHub and on the InfluxData Community Slack!

In a bit of excellent timing, this week we’re running Influx Days - a virtual event focused on the impact of time series data. I’ll be learning along with everyone else who’s attending!

I’m really thrilled to be starting a new chapter in my career. Expect my blog posts to change direction a little from here on in.

However, some things don’t change. I already nuked Windows 10 from the company-supplied ThinkPad X1, and installed Kubuntu 21.04 instead. 🤓

May 17, 2021 07:00 AM

May 13, 2021

Debian Bits

New Debian Developers and Maintainers (March and April 2021)

The following contributors got their Debian Developer accounts in the last two months:

  • Jeroen Ploemen (jcfp)
  • Mark Hindley (leepen)
  • Scarlett Moore (sgmoore)
  • Baptiste Beauplat (lyknode)

The following contributors were added as Debian Maintainers in the last two months:

  • Gunnar Ingemar Hjalmarsson
  • Stephan Lachnit


by Jean-Pierre Giraud at May 13, 2021 02:00 PM

May 04, 2021

Steve Kemp

Password store plugin: env

Like many I use pass for storing usernames and passwords. This gives me easy access to credentials in a secure manner.

I don't like the way that the metadata (i.e. filenames) are public, but that aside it is a robust tool I've been using for several years.

The last time I talked about pass was when I talked about showing the age of my credentials, via the integrated git support.

That then became a pass-plugin:

  frodo ~ $ pass age
  6 years ago GPG/root@localhost.gpg
  6 years ago GPG/
  4 years, 8 months ago Domains/
  4 years, 7 months ago Mobile/
  1 year, 3 months ago Websites/
  1 year ago Financial/
  1 year ago Mobile/KiK.gpg
  4 days ago Enfuce/sre.tst.gpg

Anyway today's work involved writing another plugin, named env. I store my data in pass in a consistent form, each entry looks like this:

   username: steve
   password: secrit
   # Extra data

The keys vary, sometimes I use "login", sometimes "username", other times "email", but I always label the fields in some way.

Recently I was working with some CLI tooling that wants to have a username/password specified and I patched it to read from the environment instead. Now I can run this:

     $ pass env internal/cli/tool-name
     export username="steve"
     export password="secrit"

That's ideal, because now I can source that from within a shell:

   $ source <(pass env internal/cli/tool-name)
   $ echo $username

Or I could directly execute the tool I want:

   $ pass env --exec=$HOME/ldap/ internal/cli/tool-name
   you are steve

TLDR: If you store your password entries in "key: value" form you can process them to export $KEY=$value, and that allows them to be used without copying and pasting into command-line arguments (e.g. "~/ldap/ --username=steve --password=secrit")
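The transformation itself is simple enough to sketch in a few lines of shell. This is my own illustration of the idea, not Steve's actual plugin code:

```shell
# A decrypted pass entry in "key: value" form (sample data)
entry='username: steve
password: secrit
# Extra data'

# Turn each labelled field into an export statement, skipping comments
echo "$entry" | awk -F': ' '/^[A-Za-z]+: / { printf "export %s=\"%s\"\n", $1, $2 }'
# prints:
#   export username="steve"
#   export password="secrit"
```

The real plugin also has to decrypt the entry first (pass show does that) before piping it through something like the awk above.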

May 04, 2021 06:00 PM

April 26, 2021

Steve Kemp

Writing a text-based adventure game for CP/M

In my previous post I wrote about how I'd been running CP/M on a Z80-based single-board computer.

I've been slowly working my way through a bunch of text-based adventure games:

  • The Hitchhiker's Guide To The Galaxy
  • Zork 1
  • Zork 2
  • Zork 3

Along the way I remembered how much fun I used to have doing this in my early teens, and decided to write my own text-based adventure.

Since I'm not a masochist I figured I'd write something with only three or four locations, and solicited Facebook for ideas. Shortly afterwards a "plot" was created and I started work.

I figured that the very last thing I wanted to be doing was to be parsing text-input with Z80 assembly language, so I hacked up a simple adventure game in C. I figured if I could get the design right that would ease the eventual port to assembly.

I had the realization pretty early that using a table-driven approach would be the best way - using structures to contain the name, description, and function-pointers appropriate to each object for example. In my C implementation I have things that look like this:

{name: "generator",
 desc: "A small generator.",
 use: use_generator,
 use_carried: use_generator_carried,
 get_fn: get_generator,
 drop_fn: drop_generator},

A bit noisy, but simple enough. If an object cannot be picked up, or dropped, the corresponding entries are blank:

{name: "desk",
 desc: "",
 edesc: "The desk looks solid, but old."},

Here we see something special: there's no description, so the item isn't displayed when you enter a room, or LOOK. Instead the edesc (extended description) is available when you type EXAMINE DESK.

Anyway over a couple of days I hacked up the C-game, then I started work porting it to Z80 assembly. The implementation changed, the easter-eggs were different, but on the whole the two things are the same.

Certainly 99% of the text was recycled across the two implementations.

Anyway in the unlikely event you've got a craving for a text-based adventure game I present to you:

April 26, 2021 06:00 PM

April 17, 2021

Steve Kemp

Having fun with CP/M on a Z80 single-board computer.

In the past, I've talked about building a Z80-based computer. I made some progress towards that goal, in the sense that I took the initial (trivial) steps towards making something:

  • I built a clock-circuit.
  • I wired up a Z80 processor to the clock.
  • I got the thing running an endless stream of NOP instructions.
    • No RAM/ROM connected, tying all the bus-lines low, meaning every attempted memory-read returned 0x00 which is the Z80 NOP instruction.

But then I stalled, repeatedly, at designing an interface to RAM and ROM, so that it could actually do something useful. Over the lockdown I've been in two minds about getting sucked back down the rabbit-hole, so I compromised. I did a bit of searching on tindie, and similar places, and figured I'd buy a Z80-based single board computer. My requirements were minimal:

  • It must run CP/M.
  • The source-code to "everything" must be available.
  • I want it to run standalone, and connect to a host via a serial-port.

With those goals there were a bunch of boards to choose from, rc2014 is the standard choice - a well engineered system which uses a common backplane and lets you build mini-boards to add functionality. So first you build the CPU-card, then the RAM card, then the flash-disk card, etc. Over-engineered in one sense, extensible in another. (There are some single-board variants to cut down on soldering overhead, at a cost of less flexibility.)

After a while I came across, which describes a simple board called the Z80 playground.

The advantage of this design is that it loads code from a USB stick, making it easy to transfer files to/from it, without the need for a compact flash card, or similar. The downside is that the system has only 64K RAM, meaning it cannot run CP/M 3, only 2.2. (CP/M 3.x requires more RAM, and a banking/paging system setup to swap between pages.)

When the system boots it loads code from an EEPROM, which then fetches the CP/M files from the USB-stick, copies them into RAM and executes them. The memory map can be split so you either have ROM & RAM, or you have just RAM (after the boot the ROM will be switched off). To change the initial stuff you need to reprogram the EEPROM, after that it's just a matter of adding binaries to the stick or transferring them over the serial port.

In only a couple of hours I got the basic stuff working as well as I needed:

  • A z80-assembler on my Linux desktop to build simple binaries.
  • An installation of Turbo Pascal 3.00A on the system itself.
  • An installation of FORTH on the system itself.
    • Which is nice.
  • A couple of simple games compiled from Pascal
    • Snake, Tetris, etc.
  • The Zork trilogy installed, along with Hitchhikers guide.

I had some fun with a CP/M emulator to get my hand back in things before the board arrived, and using that I tested my first "real" assembly language program (cls to clear the screen), as well as got the hang of using the wordstar keyboard shortcuts as used within the turbo pascal environment.

I have some plans for development:

  • Add command-line history (page-up/page-down) for the CP/M command-processor.
  • Add paging to TYPE, and allow terminating with Q.

Nothing major, but fun changes that won't be too difficult to implement.

Since CP/M 2.x has no concept of sub-directories, you end up using drives for everything. I implemented a "search-path" so that when you type "FOO" it will attempt to run "A:FOO.COM" if there is no matching file on the current drive. That's a nicer user experience all round.

I also wrote some Z80 assembly code to search all drives for an executable when it isn't found on the current drive and isn't already drive-qualified. (Remember, CP/M doesn't have a concept of sub-directories.) That's actually pretty useful.
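As a toy illustration of that lookup order (the real implementation is Z80 assembly walking CP/M's drive letters, not shell), the idea is:

```shell
# Toy re-creation of the CP/M search-path idea (illustrative only):
# try the "current drive" first, then fall back to drive A:.
find_com() {
    name=$1
    for drive in current a; do
        if [ -e "$drive/$name.com" ]; then
            echo "$drive/$name.com"
            return 0
        fi
    done
    return 1
}

mkdir -p current a
touch a/foo.com
find_com foo    # prints "a/foo.com" -- found via the fallback drive
```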


I've also written some other trivial assembly language tools, which was surprisingly relaxing. Especially once I got back into the zen mode of optimizing for size.

I forked the upstream repository, mostly to tidy up the contents rather than because I want to go off in my own direction. I'll keep the contents in sync, because there's no point splitting a community even further - I guess there are fewer than 100 of these boards in the wild, probably far, far fewer!

April 17, 2021 02:00 PM

April 10, 2021

Andy Smith

rsync and sudo without X forwarding

Five years ago I wrote about how to do rsync as root on both sides. That solution required using ssh-askpass which in turn requires X forwarding.

The main complication here is that sudo on the remote side is going to ask for a password, which either requires an interactive terminal or a forwarded X session.

I thought I would mention that if you’ve disabled tty_tickets in the sudo configuration then you can “prime” the sudo authentication with some harmless command and then do the real rsync without it asking for a sudo password:

local$ ssh -t sudo whoami
[sudo] password for you: 
local$ sudo rsync --rsync-path="sudo rsync" -av --delete \
           /etc/secret/

This suggestion was already supplied as a comment on the earlier post five years ago, but I keep forgetting it.
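For reference, disabling tty_tickets is a one-line sudoers change on the remote side; illustratively (the drop-in file name is arbitrary, and you should edit it with visudo):

```
# /etc/sudoers.d/notty -- share one sudo auth timestamp across all of a
# user's sessions instead of per-terminal (a slight security trade-off)
Defaults !tty_tickets
```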

I suggest this is only for ad hoc commands and not for automation. For automation you need to find a way to make sudo not ever ask for a password, and some would say to add configuration to sudo with a NOPASSWD directive to accomplish that.

I would instead suggest allowing a root login by ssh using a public key that is only for the specific purpose, as you can lock it down to only ever be able to execute that one script/program.
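That lock-down is usually done with a forced command in root's authorized_keys on the destination host; roughly like this, where the script path and key are placeholders (the restrict option needs OpenSSH 7.2 or newer):

```
# /root/.ssh/authorized_keys on "host B" (illustrative)
command="/usr/local/bin/backup-rsync",restrict ssh-ed25519 AAAA... backup@hostA
```

rsync also ships a support script, rrsync, designed to be used as exactly this kind of forced command restricted to one directory.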

Also bear in mind that if you permanently allow “host A” to run rsync as root with unrestricted parameters on “host B”, then a compromise of “host A” is also a compromise of “host B”, as full write access to the filesystem is granted. Whereas if you only allow “host A” to run a specific script/program on “host B”, you’ve a better chance of things being contained.

by Andy at April 10, 2021 10:32 AM

April 07, 2021

Adam Trickett

Bog Roll: NFSv4 over a VPN

Over the Easter weekend, we were visiting (fully vaccinated) family, so we were away from the house. Using my WireGuard VPN I was easily able to read email from my home server on my laptop without having to do much to make it all work. I still need to tweak the dynamically generated /etc/resolv.conf file, but I can live with that.
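One way to avoid hand-tweaking resolv.conf is to let wg-quick manage it: a DNS line in the [Interface] section of the WireGuard config is applied via resolvconf when the tunnel comes up. A sketch, where the addresses are assumptions:

```
[Interface]
Address =        # laptop's tunnel address (illustrative)
PrivateKey = <laptop-private-key>
DNS =            # home resolver reachable over the tunnel (assumed)
```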

For a laugh I tried to see if NFS would work over WireGuard. Other than adding my machine's VPN name (already in BIND) to the exports file, nothing actually needed to be changed: autofs started working as if the laptop was at home, and I was able to stream FLAC files over NFSv4 from home to my laptop away from home...!
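The whole change amounts to adding the laptop's VPN hostname to the relevant export; something like this, where the path, hostnames and options are all illustrative:

```
# /etc/exports on the home server -- names and options are illustrative
/srv/media  desktop.home.example(rw,sync)  laptop.vpn.example(rw,sync)
```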

I think that's a result!

April 07, 2021 06:31 PM

March 07, 2021

Adam Trickett

Bog Roll: Le chat mange un croissant

I have been with a French person for quite a while. While I did make some effort to learn French, there wasn't much point really while we lived in England, worked in English and all our friends were English. Once we decided to move to France, learning French became more critical, so I started lessons and used various resources, physical and on-line.

For nearly three years, I have been using Duolingo every day. I've pretty much got to the point that some stupid design flaws in French don't bother me anymore and they come naturally. English has its own design flaws too, but you don't notice them in your first language until you try to learn another one...!

Every day before bed I plug away at the site, learning something new and repeating something I've done before. To be fair I've got faster, and lots of things come naturally without thinking, and I can now watch French TV (with French subtitles) and follow most of what's going on.

One thing that does bug me, though, is that to create some level of variety within its limited form, Duolingo varies the sentences a bit. That means you get a fair share of the silly ones, my favourite being Le chat mange un croissant ("the cat eats a croissant").

March 07, 2021 03:21 PM

March 06, 2021

Andy Smith

grub-install: error: embedding is not possible, but this is required for RAID and LVM install

The Initial Problem

The recent security update of the GRUB bootloader did not want to install on my fileserver at home:

$ sudo apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  grub-common grub-pc grub-pc-bin grub2-common
4 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 4,067 kB of archives.
After this operation, 72.7 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Setting up grub-pc (2.02+dfsg1-20+deb10u4) ...
Installing for i386-pc platform.
grub-install: warning: your core.img is unusually large.  It won't fit in the embedding area.
grub-install: error: embedding is not possible, but this is required for RAID and LVM install.
Installing for i386-pc platform.
grub-install: warning: your core.img is unusually large.  It won't fit in the embedding area.
grub-install: error: embedding is not possible, but this is required for RAID and LVM install.
Installing for i386-pc platform.
grub-install: warning: your core.img is unusually large.  It won't fit in the embedding area.
grub-install: error: embedding is not possible, but this is required for RAID and LVM install.
Installing for i386-pc platform.
grub-install: warning: your core.img is unusually large.  It won't fit in the embedding area.
grub-install: error: embedding is not possible, but this is required for RAID and LVM install.

Four identical error messages, because this server has four drives upon which the operating system is installed, and I’d decided to do a four way RAID-1 of a small first partition to make up /boot. This error is coming from grub-install.

Ancient History

This system came to life in 2006, so it’s 15 years old. It’s always been Debian stable, so right now it runs Debian buster and during those 15 years it’s been transplanted into several different iterations of hardware.

Choices were made in 2006 that were reasonable for 2006, but it’s not 2006 now. Some of these choices are now causing problems.

Aside: four way RAID-1 might seem excessive, but we’re only talking about the small /boot partition. Back in 2006 I chose a ~256M one so if I did the minimal thing of only having a RAID-1 pair I’d have 2x 256M spare on the two other drives, which isn’t very useful. I’d honestly rather have all four system drives with the same partition table and there’s hardly ever writes to /boot anyway.

Here’s what the identical partition tables of the drives /dev/sd[abcd] look like:

$ sudo fdisk -u -l /dev/sda
Disk /dev/sda: 298.1 GiB, 320069031424 bytes, 625134827 sectors
Disk model: ST3320620AS     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot   Start       End   Sectors  Size Id Type
/dev/sda1  *         63    514079    514017  251M fd Linux raid autodetect
/dev/sda2        514080   6393869   5879790  2.8G fd Linux raid autodetect
/dev/sda3       6393870 625121279 618727410  295G fd Linux raid autodetect

Note that the first partition starts at sector 63, 32,256 bytes into the disk. Modern partition tools tend to start partitions at sector 2,048 (1,024KiB in), but this was acceptable in 2006 for me and worked up until a few days ago.

Those four partitions /dev/sd[abcd]1 make up an mdadm RAID-1 with metadata version 0.90. This was purposefully chosen because at the time of install GRUB did not have RAID support. This metadata version lives at the end of the member device so anything that just reads the device can pretend it’s an ext2 filesystem. That’s what people did many years ago to boot off of software RAID.

What’s Gone Wrong?

The last successful update of grub-pc seems to have been done on 7 February 2021:

$ ls -la /boot/grub/i386-pc/core.img
-rw-r--r-- 1 root root 31082 Feb  7 17:19 /boot/grub/i386-pc/core.img

I’ve got 62 sectors available for the core.img so that’s 31,744 bytes – just 662 bytes more than what is required.
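That figure is easy to sanity-check: sector 0 holds the MBR, so a partition starting at sector 63 leaves sectors 1–62 for embedding.

```shell
first_sector=63                              # from the fdisk output above
embed_bytes=$(( (first_sector - 1) * 512 ))  # sectors 1..62
echo "$embed_bytes"                          # 31744
echo $(( embed_bytes - 31082 ))              # 662 bytes of headroom
```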

The update of grub-pc appears to be detecting that my /boot partition is on a software RAID and is now including MD RAID support even though I don’t strictly require it. This makes the core.img larger than the space I have available for it.

I don’t think it is great that such a major change has been introduced as a security update, and it doesn’t seem like there is any easy way to tell it not to include the MD RAID support, but I’m sure everyone is doing their best here and it’s more important to get the security update out.

Possible Fixes

So, how to fix? It seems to me the choices are:

  1. Ignore the problem and stay on the older grub-pc
  2. Create a core.img with only the modules I need
  3. Rebuild my /boot partition

Option #1 is okay short term, especially if you don’t use Secure Boot as that’s what the security update was about.

Option #2 doesn’t seem that feasible as I can’t find a way to influence how Debian’s upgrade process calls grub-install. I don’t want that to become a manual process.

Option #3 seems like the easiest thing to do, as shaving ~1MiB off the size of my /boot isn’t going to cause me any issues.

Rebuilding My /boot

Take a backup

/boot is only relatively small so it seemed easiest just to tar it up ready to put it back later.

$ sudo tar -C /boot -cvf ~/boot.tar .

I then sent that tar file off to another machine as well, just in case the worst should happen.

Unmount /boot and stop the RAID array that it’s on

I’ve already checked in /etc/fstab that /boot is on /dev/md0.

$ sudo umount /boot
$ sudo mdadm --stop md0         
mdadm: stopped md0

At this point I would also recommend doing a wipefs -a on each of the partitions in order to remove the MD superblocks. I didn’t and it caused me a slight problem later as we shall see.

Delete and recreate first partition on each drive

I chose to use parted, but should be doable with fdisk or sfdisk or whatever you prefer.

I know from the fdisk output way above that the new partition needs to start at sector 2048 and end at sector 514,079.

$ sudo parted /dev/sda                                                             
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) rm 1
(parted) mkpart primary ext4 2048 514079s
(parted) set 1 raid on
(parted) set 1 boot on
(parted) p
Model: ATA ST3320620AS (scsi)
Disk /dev/sda: 625134827s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start     End         Size        Type     File system  Flags
 1      2048s     514079s     512032s     primary  ext4         boot, raid, lba
 2      514080s   6393869s    5879790s    primary               raid
 3      6393870s  625121279s  618727410s  primary               raid

(parted) q
Information: You may need to update /etc/fstab.

Do that for each drive in turn. When I got to /dev/sdd, this happened:

Error: Partition(s) 1 on /dev/sdd have been written, but we have been unable to
inform the kernel of the change, probably because it/they are in use.  As a result,
the old partition(s) will remain in use.  You should reboot now before making further changes.

The reason for this seems to be that something has decided that there is still a RAID signature on /dev/sdd1 and so it will try to incrementally assemble the RAID-1 automatically in the background. This is why I recommend a wipefs of each member device.

To get out of this situation without rebooting I needed to repeat my mdadm --stop /dev/md0 command and then do a wipefs -a /dev/sdd1. I was then able to partition it with parted.

Create md0 array again

I’m going to stick with metadata format 0.90 for this one even though it may not be strictly necessary.

$ sudo mdadm --create /dev/md0 \
             --metadata 0.9 \
             --level=1 \
             --raid-devices=4 \
             /dev/sd[abcd]1
mdadm: array /dev/md0 started.

Again, if you did not do a wipefs earlier then mdadm will complain that these devices already have a RAID array on them and ask for confirmation.

Get the Array UUID

$ sudo mdadm --detail /dev/md0
           Version : 0.90
     Creation Time : Sat Mar  6 03:20:10 2021
        Raid Level : raid1
        Array Size : 255936 (249.94 MiB 262.08 MB)
     Used Dev Size : 255936 (249.94 MiB 262.08 MB)
      Raid Devices : 4
     Total Devices : 4
   Preferred Minor : 0
       Persistence : Superblock is persistent

       Update Time : Sat Mar  6 03:20:16 2021
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              UUID : e05aa2fc:91023169:da7eb873:22131b12 (local to host specialbrew.localnet)
            Events : 0.18
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1

Change your /etc/mdadm/mdadm.conf for the updated UUID of md0:

$ grep md0 /etc/mdadm/mdadm.conf
ARRAY /dev/md0 level=raid1 num-devices=4 UUID=e05aa2fc:91023169:da7eb873:22131b12

Make a new filesystem on /dev/md0

$ sudo mkfs.ext4 -m0 -L boot /dev/md0
mke2fs 1.44.5 (15-Dec-2018)
Creating filesystem with 255936 1k blocks and 64000 inodes
Filesystem UUID: fdc611f2-e82a-4877-91d3-0f5f8a5dd31d
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

My /etc/fstab didn’t need a change because it mounted by device name, i.e. /dev/md0, but if yours uses UUID or label then you’ll need to update that now, too.
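For anyone who does mount by label: the mkfs.ext4 command above set the label boot, so the corresponding entry would look something like this (mount options illustrative):

```
# /etc/fstab -- label-based entry; options are illustrative
LABEL=boot  /boot  ext4  defaults  0  2
```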

Mount it and put your files back

$ sudo mount /boot
$ sudo tar -C /boot -xvf ~/boot.tar

Reinstall grub-pc

$ sudo apt reinstall grub-pc
Setting up grub-pc (2.02+dfsg1-20+deb10u4) ...
Installing for i386-pc platform.
Installation finished. No error reported.
Installing for i386-pc platform.
Installation finished. No error reported.
Installing for i386-pc platform.
Installation finished. No error reported.
Installing for i386-pc platform.
Installation finished. No error reported.


You should probably reboot now, while you have time to fix any problems, rather than risking issues when you least expect them.

$ uprecords 
     #               Uptime | System                                     Boot up
     1   392 days, 16:45:55 | Linux 4.7.0               Thu Jun 14 16:13:52 2018
     2   325 days, 03:20:18 | Linux 3.16.0-0.bpo.4-amd  Wed Apr  1 14:43:32 2015
->   3   287 days, 16:03:12 | Linux 4.19.0-9-amd64      Fri May 22 12:33:27 2020
     4   257 days, 07:31:42 | Linux 4.19.0-6-amd64      Sun Sep  8 05:00:38 2019
     5   246 days, 14:45:10 | Linux 4.7.0               Sat Aug  6 06:27:52 2016
     6   165 days, 01:24:22 | Linux 4.5.0-rc4-specialb  Sat Feb 20 18:18:47 2016
     7   131 days, 18:27:51 | Linux 3.16.0              Tue Sep 16 08:01:05 2014
     8    89 days, 16:01:40 | Linux 4.7.0               Fri May 26 18:28:40 2017
     9    85 days, 17:33:51 | Linux 4.7.0               Mon Feb 19 17:17:39 2018
    10    63 days, 18:57:12 | Linux 3.16.0-0.bpo.4-amd  Mon Jan 26 02:33:47 2015
1up in    37 days, 11:17:07 | at                        Mon Apr 12 15:53:46 2021
no1 in   105 days, 00:42:44 | at                        Sat Jun 19 05:19:23 2021
    up  2362 days, 06:33:25 | since                     Tue Sep 16 08:01:05 2014
  down     0 days, 14:02:09 | since                     Tue Sep 16 08:01:05 2014
   %up               99.975 | since                     Tue Sep 16 08:01:05 2014

My Kingdom For 7 Bytes

My new core.img is 7 bytes too big to fit before my original /boot:

$ ls -la /boot/grub/i386-pc/core.img
-rw-r--r-- 1 root root 31751 Mar  6 03:24 /boot/grub/i386-pc/core.img

by Andy at March 06, 2021 05:10 AM

February 25, 2021

Andy Smith

Just had my COVID-19 first vaccination (Pfizer/BioNTech)

Just got back from having my first COVID-19 vaccination. Started queueing at 10:40, pre-screening questions at 10:50, all done by 10:53 then I poked at my phone for 15 minutes while waiting to check I wouldn’t keel over from anaphylactic shock (I didn’t).

I was first notified that I should book an appointment in the form of a text message from sender “GPSurgery” on Monday 22nd February 2021:


You have been invited to book your COVID-19 vaccinations.

Please click on the link to book:…
[Name of My GP Surgery]

The web site presented me with a wide variety of dates and times, the earliest being today, 3 days later, so I chose that. My booking was then confirmed by another text message, and another reminder message was sent yesterday. I assume these text messages were sent by some central service on behalf of my GP whose role was probably just submitting my details.

A very smooth process, a 15-minute walk from my home, and I’m hearing the same about the rest of the country too.

Watching social media mentions from others saying they’ve had their vaccination and also looking at the demographics in the queue and waiting room with me, I’ve been struck by how many people have—like me—been called up for their vaccinations quite early unrelated to their age. I was probably in the bottom third age group in the queue and waiting area: I’m 45 and although most seemed older than me, there were plenty of people around my age and younger there.

It just goes to show how many people in the UK are relying on the NHS for the management of chronic health conditions that may not be obviously apparent to those around them. Which is why we must not let this thing that so many of us rely upon be taken away. I suspect that almost everyone reading either is in a position of relying upon the NHS or has nearest and dearest who do.

The NHS gets a lot of criticism for being a bottomless pit of expenditure that is inefficient and slow to embrace change. Yes, healthcare costs a lot of money especially with our ageing population, but per head we spend a lot less than many other countries: half what the US spends per capita or as a proportion of GDP; our care is universal and our life expectancy is slightly longer. In 2017 the Commonwealth Fund rated the NHS #1 in a comparison of 11 countries.

So the narrative that the NHS is poor value for money is not correct. We are getting a good financial deal. We don’t necessarily need to make it perform better, financially, although there will always be room for improvement. The NHS has a funding crisis because the government wants it to have a funding crisis. It is being deliberately starved of funding so that it fails.

The consequences of selling off the NHS will be that many people are excluded from care they need to stay alive or to maintain a tolerable standard of living. As we see with almost every private sector takeover of what were formerly public services, they strip the assets, run below-par services that just about scrape along, and then when there is any kind of downturn or unexpected event they fold and either beg for bailout or just leave the mess in the hands of the government. Either way, taxpayers pay more for less and make a small group of wealthy people even more wealthy.

We are such mugs here in UK that even other countries have realised that they can bid to take over our public services, provide a low standard of service at a low cost to run, charge a lot to the customer and make a hefty profit. Most of our train operating companies are owned by foreign governments.

The NHS as it is only runs as well as it does because the staff are driven to breaking point with an obscene amount of unpaid overtime and workplace stress.

If you’d like to learn some more about the state of the NHS in the form of an engaging read then I recommend Adam Kay’s book This is Going to Hurt: Secret Diaries of a Junior Doctor. It will make you laugh, it will make you cry and if you’ve a soul it will make you angry. Also it may indelibly sear the phrase “penis degloving injury” into your mind.

Do not accept the premise that the NHS is too expensive.

If the NHS does a poor job (and it sometimes does), understand that underfunding plays a big part.

Privatising any of it will not improve matters in any way, except for a very small number of already wealthy people.

Please think about this when you vote.

by Andy at February 25, 2021 12:55 PM

July 10, 2020

Martin A. Brooks

Getting started with a UniFi Dream Machine Pro

It’s not an exaggeration to say that I’m an Ubiquiti fanboy. I like their kit a lot and my home network has been 100% UniFi for quite a few years now.

I’ve just moved in to a new home which I’m getting rewired, and this will include putting in structured network cabling, terminating at a patch panel in a rack in the loft. I have a small amount of “always on” kit and I wanted as much of it as reasonably possible to be in standard 19″ rack format. This is when I started looking at the Ubiquiti Dream Machine Pro to replace a combination of a UniFi CloudKey and Security Gateway, both excellent products in their own right.

My expectation was that I would connect the UDMP to some power, move the WAN RJ45 connection from the USG to the UDMP, fill in some credentials and (mostly) done! As I’m writing this down, you can probably guess it didn’t quite work out like that.

The UDMP completely failed to get an internet connection via all the supported methods applicable. PPPoE didn’t work, using a surrogate router via DHCP didn’t work, static configuration didn’t work. I reached out to the community forum and, in fairness, got very prompt assistance from a Ubiquiti employee.

I needed to upgrade the UDMP’s firmware before it would be able to run its “first setup” process, but updating the firmware via the GUI requires a working internet connection. It’s all a little bit chicken and egg. Instead, this is what you need to do:

  • Download the current UDMP firmware onto a laptop.
  • Reconfigure the laptop’s IP to be and plug it in to any of the main 8 ethernet ports on the UDMP.
  • Use scp to copy the firmware to the UDMP using the default username of “root” with the password “ubnt”:
    scp /path/to/fw.bin root@
  • SSH in to the UDMP and install the new firmware:
    ubnt-upgrade /mnt/data/fw.bin

The UDMP should reboot onto the new firmware automatically. Perhaps because I’d been attempting so many variations of the setup procedure, after rebooting my UDMP was left in an errored state with messages like “This is taking a little longer..” and “UDM Pro is having an issue booting. Try to reboot or enter Recovery Mode”. To get round this I updated the firmware again, this time doing a factory reset:

ubnt-upgrade -c /mnt/data/fw.bin

The UDMP then rebooted again without error and I was able to complete the setup process normally.

It’s a bit unfortunate that UDMPs are shipping with essentially non-functional firmware, and it’s also unfortunate that the process for dealing with this is completely undocumented.

by Martin A. Brooks at July 10, 2020 06:07 PM

May 29, 2020

Martin A. Brooks

Letter from my MP regarding Dominic Cummings

I wrote to my MP, Julia Lopez (CON), asking for her view on whether Dominic Cummings had broken the law or not and if he should be removed from his position. Here is her response:

Thank you for your email about the Prime Minister’s adviser, Dominic Cummings, and his movements during the lockdown period. I apologise for taking a few days to get back to you, however I am in the last weeks of my maternity leave and am working through a number of tasks in preparation for my return.

I have read through all the emails sent to me about Mr Cummings and completely understand the anger some correspondents feel. It has been a very testing time for so many of us as we have strived to adhere to new restrictions that have separated us from loved ones, led us to make very difficult decisions about our living and working arrangements or seen us miss important family occasions – both happy and sad. Those sacrifices have often been painful but were made in good faith in order to protect ourselves, our families and the most vulnerable in the broader community.

Given the strength of feeling among constituents, I wrote to the Prime Minister this week to advise him of the number of emails I had received and the sentiments expressed within them, highlighting in particular the concern over public health messaging. Mr Cummings has sought to explain his actions in a press conference in Downing Street and has taken questions from journalists. While his explanation has satisfied some constituents, I know others believe it was inadequate and feel that this episode requires an independent inquiry. I have made that request to the Prime Minister on behalf of that group of constituents.

Mr Cummings asserts that he acted within lockdown rules which permitted travel in exceptional circumstances to find the right kind of childcare. In the time period in question, he advises that he was dealing with a sick wife, a child who required hospitalisation, a boss who was gravely ill, security concerns at his home, and the management of a deeply challenging public health crisis. It has been asserted that Mr Cummings believes he is subject to a different set of rules to everyone else, but he explained in this period that he did not seek privileged access to covid testing and did not go to the funeral of a very close family member.

I am not going to be among those MPs calling for Mr Cummings’ head to roll. Ultimately it is for the Prime Minister to decide whether he wishes Mr Cummings to remain in post – and to be accountable for and accept the consequences of the decision he makes – and for the relevant authorities to determine whether he has broken the law. Whatever one thinks of this episode, I think the hounding of Mr Cummings’ family has been disturbing to watch and I hope that in future the press can find a way of seeking truth without so aggressively intruding into the lives of those who have done nothing to justify their attention.

Thank you again for taking the trouble to share with me your concerns. I regret that we cannot address everyone individually but the team continues to receive a high number of complex cases involving those navigating healthcare, financial and other challenges and these constituents are being prioritised. I shall send you any response I receive from the Prime Minister.

Best wishes


by Martin A. Brooks at May 29, 2020 01:33 PM

May 14, 2018

Martin A. Brooks

My affiliate links

It occurred to me that collecting all these in one place might mean I remember to tell people about them and therefore they might get used!

I’ve been a customer of Zen Internet for a very long time.   They’re an award winning ISP and have the best customer support I’ve ever experienced, not that I’ve need to use it very often.  Using my link gets us both some free stuff.

Huel is a meal replacement product.  If you’re like me and can only rarely be bothered cooking for one then Huel gives you a quick, easy, nutritionally complete drink to chug down with very little time and effort involved.  I like the vanilla flavour and some of the flavour packs are nice.  Using my link gets you and me £10 off an order.

Top Cashback is one of the UK’s most popular cashback sites.  I’ve probably got several hundred pounds from it over the years.  It requires some discipline to use and may require you to use less draconian ad and cookie blocking software.  Using my link gets us both £7.50.

by Martin A. Brooks at May 14, 2018 05:26 PM

January 24, 2017

Martin Wimpress

DIY SNES Classic

Inspired by the recent NES Classic I made a DIY SNES Classic just in time for the Christmas holidays and it's very portable!

To make one yourself you'll need:

Both controllers use Bluetooth, so two player wire-free gaming is possible. The USB cables are just for charging, but if you've got no charge they can be used as wired controllers too. Retropie can be controlled via the controllers, no keyboard/mouse required.

by Martin Wimpress at January 24, 2017 12:00 PM

December 13, 2016

Martin Wimpress

Raspberry Pi 3 Nextcloud Box running on Ubuntu Core

I recently bought the Nextcloud Box. When it came to setting it up I ran into a problem: I only had Raspberry Pi 3 computers available, and at the time of writing the microSDHC card provided with the Nextcloud Box only supported the Raspberry Pi 2. Bummer!


This guide outlines how to use Ubuntu Core on the Raspberry Pi 3 to run Nextcloud provided as a snap from the Ubuntu store.

If you're not familiar with Ubuntu Core, here's a quote:

Ubuntu Core is a tiny, transactional version of Ubuntu for IoT devices and large container deployments. It runs a new breed of super-secure, remotely upgradeable Linux app packages known as snaps

After following this guide Ubuntu Core and any installed snaps (and their data) will reside on the SD card and the 1TB hard disk in the Nextcloud box will be available for file storage. This guide explains how to:

  • Install and configure Ubuntu Core 16 for the Raspberry Pi 3
  • Format the 1TB hard disk in the Nextcloud Box and auto-mount it
  • Install the Nextcloud snap and connect the removable-media interface to allow access to the hard disk
  • Activate and configure the Nextcloud External Storage app so the hard disk can be used to store files
  • Optional configuration of Email and HTTPS for Nextcloud

Prepare a microSDHC card

I explained the main steps in this post but you really should read and follow the Get started with a Raspberry Pi 2 or 3 page as it fully explains how to use a desktop computer to download an Ubuntu Core image for your Raspberry Pi 2 or 3 and copy it to an SD card ready to boot.

Here's how to create an Ubuntu Core microSDHC card for the Raspberry Pi 3 using an Ubuntu desktop:

  • Download Ubuntu Core 16 image for Raspberry Pi 3
  • Insert the microSDHC card into your PC
    • Use GNOME Disks and its Restore Disk Image... option, which natively supports XZ compressed images.
    • Select your SD card from the panel on the left
    • Click the "burger menu" on the right and Select Restore Disk Image...
    • Making sure the SD card is still selected, click the Power icon on the right.
  • Eject the SD card physically from your PC.
GNOME Disks - Restore Disk Image

Ubuntu Core first boot

An Ubuntu SSO account is required to setup the first user on Ubuntu Core:

Insert the Ubuntu Core microSDHC into the Raspberry Pi, which should be in the assembled Nextcloud Box with a keyboard and monitor connected. Plug in the power.

  • The system will boot then become ready to configure
  • The device will display the prompt "Press enter to configure"
  • Press enter then select "Start" to begin configuring your network and an administrator account. Follow the instructions on the screen, you will be asked to configure your network and enter your Ubuntu SSO credentials
  • At the end of the process, you will see your credentials to access your Ubuntu Core machine:
This device is registered to <Ubuntu SSO email address>.
Remote access was enabled via authentication with the SSO user <Ubuntu SSO user name>
Public SSH keys were added to the device for remote access.


Once setup is done, you can log in to Ubuntu Core using ssh, from a computer on the same network, using the following command:

ssh <Ubuntu SSO user name>@<device IP address>

The user name is your Ubuntu SSO user name.

Reconfiguring network

Should you need to reconfigure the network at a later stage you can do so with:

sudo console-conf

Prepare 1TB hard disk

Log in to your Raspberry Pi 3 running Ubuntu Core via ssh.

ssh <Ubuntu SSO user name>@<device IP address>

Partition and format the Nextcloud Box hard disk

This will create a single partition formatted with the ext4 filesystem.

sudo fdisk /dev/sda

Do the following to create the partition:

Command (m for help): o
Created a new DOS disklabel with disk identifier 0x253fea38.

Command (m for help): n
Partition type
    p   primary (0 primary, 0 extended, 4 free)
    e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-1953458175, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-1953458175, default 1953458175):

Created a new partition 1 of type 'Linux' and of size 931.5 GiB.

Command (m for help): w
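As a sanity check, the sector numbers fdisk prints line up with the 931.5 GiB figure it reports (sectors are 512 bytes):

```shell
# (last sector - first sector + 1) * 512 bytes, expressed in whole GiB
echo $(( (1953458175 - 2048 + 1) * 512 / 1073741824 ))   # prints 931
```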

Now format the partition and give it the label data. This label will be used to reference it for mounting later:

sudo mkfs.ext4 -L data /dev/sda1

Automatically mount the partition

Most of the Ubuntu Core root file system is read-only, so it is not possible to edit /etc/fstab. Instead, we'll use a systemd mount unit.

Be aware of one of the systemd.mount pitfalls:

Mount units must be named after the mount point directories they control. Example: the mount point /home/lennart must be configured in a unit file home-lennart.mount.

Yes that's right! The unit filename must match the mount point path.

Create the media-data.mount unit:

sudo vi /writable/system-data/etc/systemd/system/media-data.mount

Add the following content. The What= line references the "data" filesystem label created with mkfs.ext4 above, and Where= must match the unit's filename:

[Unit]
Description=Mount unit for data

[Mount]
What=/dev/disk/by-label/data
Where=/media/data
Type=ext4

[Install]
WantedBy=multi-user.target

Reload systemd, scanning for new or changed units:

sudo systemctl daemon-reload

Start the media-data.mount unit, which will mount the volume, and also enable it so it will be automatically mounted on boot.

sudo systemctl start media-data.mount
sudo systemctl enable media-data.mount

And just like any other unit, you can view its status using systemctl status:

sudo systemctl status media-data.mount
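Once started, it is worth confirming the disk actually landed where the Nextcloud snap will look for it (the mount point is fixed by the unit's filename):

```shell
systemctl is-active media-data.mount   # should print "active"
df -h /media/data                      # should show the ext4 data partition
```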

Update Ubuntu Core

Make sure Ubuntu Core is up-to-date and reboot.

sudo snap refresh
sudo reboot

After the reboot, make sure /media/data is mounted. If not, double-check the steps above.

Install Nextcloud

The Nextcloud snap uses the removable-media interface, which grants access to /media/*, and requires manual connection:

sudo snap install nextcloud
sudo snap connect nextcloud:removable-media core:removable-media
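Before configuring External Storage, it's worth confirming the connection took; snap interfaces lists each slot and the plugs connected to it:

```shell
# The removable-media slot should show nextcloud as a connected plug.
snap interfaces nextcloud
```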

Browse to the Nextcloud IP address and create the admin user account, for example:

  • http://nextcloud.local/

Nextcloud configuration

In the examples below, replace nextcloud.local with the IP address, hostname or domain name of your Nextcloud Box.

External Storage

Enable the External Storage app via:

  • http://nextcloud.local/index.php/settings/apps?category=disabled#

Configure External Storage app via:

  • http://nextcloud.local/index.php/settings/admin/externalstorages

Use these settings:

  • Folder name: data
  • External storage: Local
  • Authentication: None
  • Configuration: /media/data
  • Available for: All


Email

Configure your outgoing email settings via:

  • http://nextcloud.local/index.php/settings/admin/additional

I use Sendgrid for sending email alerts from my servers and devices. These are the settings that work for me:

  • Send mode: SMTP
  • Encryption: STARTTLS
  • From address:
  • Authentication method: Plain
  • Authentication required: Yes
  • Server address:
  • Username: apikey
  • Password: theactualapikey

Enabling HTTPS

It is strongly recommended that you use HTTPS if you intend to expose your Nextcloud to the Internet.

First do a test to see if you can install a Let's Encrypt certificate:

sudo nextcloud.enable-https -d

Answer the questions:

Have you met these requirements? (y/n) y
Please enter an email address (for urgent notices or key recovery):
Please enter your domain name(s) (space-separated):
Attempting to obtain certificates... done
Looks like you're ready for HTTPS!

If everything went well, then install the certificate:

sudo nextcloud.enable-https

Answer the questions again:

Have you met these requirements? (y/n) y
Please enter an email address (for urgent notices or key recovery):
Please enter your domain name(s) (space-separated):
Attempting to obtain certificates... done
Restarting apache... done

If Let's Encrypt didn't work for you, you can always use Nextcloud with a self-signed certificate.

sudo nextcloud.enable-https -s

Manual configuration changes

If you need to make any tweaks to the Nextcloud configuration file you can edit it like so:

sudo vi /var/snap/nextcloud/current/nextcloud/config/config.php

If you have manually edited the Nextcloud configuration, you may need to restart Nextcloud:

sudo snap disable nextcloud
sudo snap enable nextcloud


So there it is, Nextcloud running on Ubuntu Core powered by a Raspberry Pi 3. The performance is reasonable, obviously not stellar, but certainly good enough to move some cloud services for a small family away from the likes of Google and Dropbox. Now go and install some Nextcloud clients for your desktops and devices :-)

by Martin Wimpress at December 13, 2016 05:17 PM

August 22, 2016

Anton Piatek

Now with added SSL from letsencrypt

I’ve had SSL available on my site for some time using startssl, but as the certificate was expiring and requires manual renewal, I thought it was time to try out letsencrypt. I’m a huge fan of the idea of letsencrypt, which is trying to bring free SSL encryption to the whole of the internet, in particular to all the smaller sites whose owners might not have the expertise to roll out SSL, or where the cost might be restrictive.

There are a lot of scripts for powering letsencrypt, but getssl looked the best fit for my use case as I just wanted a simple script to generate certificates, not manage apache configs or anything else. It seems to do a pretty good job so far. I swapped over the certificates to the newly generated ones and it seems pretty smooth sailing.

by Anton Piatek at August 22, 2016 06:51 PM

December 16, 2015

Martin Wimpress

HP Microserver N54L power saving and performance tuning using Debian.

I've installed Open Media Vault on a HP ProLiant MicroServer G7 N54L and use it as a media server for the house. OpenMediaVault (OMV) is a network attached storage (NAS) solution based on Debian.

I want to minimise power consumption but maximise performance. Here are some tweaks to reduce power consumption and improve network performance.

Power Saving

Install the following.

apt-get install amd64-microcode firmware-linux firmware-linux-free \
firmware-linux-nonfree pciutils powertop radeontool

And for ACPI.

apt-get install acpi acpid acpi-support acpi-support-base


First I enabled PCIe ASPM in the BIOS, then forced the kernel to use it (and ACPI) via GRUB by changing GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, so it looks like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi=force pcie_aspm=force nmi_watchdog=0"

Then update grub and reboot.
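On Debian that boils down to:

```shell
sudo update-grub    # regenerate /boot/grub/grub.cfg from /etc/default/grub
sudo reboot

# After the reboot, confirm the new parameters are in effect:
cat /proc/cmdline
```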


Enable Power Saving via udev

The following rules file, /etc/udev/rules.d/90-local-n54l.rules, enables power saving modes for all PCI, SCSI and USB devices, and ASPM. Further, the internal Radeon card's power profile is set to low, as there is rarely a monitor connected. The file contains the following:

SUBSYSTEM=="module", KERNEL=="pcie_aspm", ACTION=="add", TEST=="parameters/policy", ATTR{parameters/policy}="powersave"
SUBSYSTEM=="i2c", ACTION=="add", TEST=="power/control", ATTR{power/control}="auto"
SUBSYSTEM=="pci", ACTION=="add", TEST=="power/control", ATTR{power/control}="auto"
SUBSYSTEM=="usb", ACTION=="add", TEST=="power/control", ATTR{power/control}="auto"
SUBSYSTEM=="usb", ACTION=="add", TEST=="power/autosuspend", ATTR{power/autosuspend}="2"
SUBSYSTEM=="scsi", ACTION=="add", TEST=="power/control", ATTR{power/control}="auto"
SUBSYSTEM=="spi", ACTION=="add", TEST=="power/control", ATTR{power/control}="auto"
SUBSYSTEM=="drm", KERNEL=="card*", ACTION=="add", DRIVERS=="radeon", TEST=="power/control", TEST=="device/power_method", ATTR{device/power_method}="profile", ATTR{device/power_profile}="low"
SUBSYSTEM=="scsi_host", KERNEL=="host*", ACTION=="add", TEST=="link_power_management_policy", ATTR{link_power_management_policy}="min_power"
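The rules above only fire on "add" events, so to apply them without a reboot you can reload udev and replay those events:

```shell
sudo udevadm control --reload-rules   # re-read the rules files
sudo udevadm trigger                  # replay "add" events for existing devices
```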

Add this to /etc/rc.local to flush dirty pages every 15 seconds instead of the default 5, letting the disks idle for longer between writes.

echo '1500' > '/proc/sys/vm/dirty_writeback_centisecs'

Hard disk spindown

Using the Open Media Vault web interface, go to Storage -> Physical Disks, select each disk in turn, click Edit, then set:

  • Advanced Power Management: Intermediate power usage with standby
  • Automatic Acoustic Management: Minimum performance, Minimum acoustic output
  • Spindown time: 20 minutes

Performance Tuning


The following tweaks improve network performance, but note that I have an HP NC360T PCI Express Dual Port Gigabit Server Adapter in my N54L, so these settings may not be applicable to the onboard NIC.

Add this to /etc/rc.local.

ethtool -G eth0 rx 4096
ethtool -G eth1 rx 4096
ethtool -G eth0 tx 4096
ethtool -G eth1 tx 4096
ifconfig eth0 txqueuelen 1000
ifconfig eth1 txqueuelen 1000

Add the following to /etc/sysctl.d/local.conf.

fs.file-max = 100000
net.core.netdev_max_backlog = 50000
net.core.optmem_max = 40960
net.core.rmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_default = 16777216
net.core.wmem_max = 16777216
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_max_syn_backlog = 30000
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192
vm.swappiness = 10
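Files in /etc/sysctl.d/ are only read at boot, so load the new values immediately and spot-check one of them:

```shell
sudo sysctl -p /etc/sysctl.d/local.conf   # apply every setting in the file
sysctl net.core.rmem_max                  # should report 16777216
```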


With these settings applied powertop reports everything that can be in a power saving mode is and the room temperature is measurably cooler. More importantly, with four 4TB drives in a RAID-5 configuration formatted with XFS and dual bonded gigabit ethernet, I am able to backup data to the server at a sustained rate of 105MB/s, which is 0.85 Gbit.

Not too shabby for an AMD Turion II Neo N54L (2.2GHz) :-D


by Martin Wimpress at December 16, 2015 12:00 PM

October 05, 2015

Philip Stubbs

Gear profile generator

Having been inspired by Matthias Wandel's gear generator, I decided to have a go at doing this myself.

Some time ago, I had tried to do this in Java as a learning exercise. I only got so far and gave up before I managed to generate any involute curves required for the tooth profile. Trying to learn Java and the math required at the same time was probably too much and it got put aside.

Recently I had a look at the Go programming language. Then Matthias Wandel produced the page mentioned above, and I decided to have another crack at drawing gears.

The results so far can be seen on Github, and an example is shown here.

Gear Profile Example Image

What I have learnt

  • Math makes my head hurt.
  • The Go programming language fits the way my brain works better than most other languages. I much prefer it to Java, and will try and see if I can tackle other problems with it, just for fun.

by stuphi at October 05, 2015 08:32 AM

June 22, 2015

Anton Piatek

Hello Pace

After leaving IBM I’ve joined Pace at their Belfast office. It is quite a change of IT sectors, though still the same sort of job. Software development seems to have a lot in common no matter which industry it is for.

There’s going to be some interesting learning: things like DVB are pretty much completely new to me, but at the same time it’s lots of Java and C++ with similar technology stacks involved. Sadly less Perl, but more Python, so maybe I’ll learn that properly. I’m likely to work with some more interesting Javascript frameworks, in particular Angular.js, which should be fun.

The job is still Software Development, and there should be some fun challenges, like allowing a TV set top box to offer on demand video content when all you have is a one-way data stream from a satellite, which makes for some interesting solutions. I’m working in the Cobalt team, which deals with delivering data from the TV provider onto set top boxes: things like settings, software updates, programme guides, on demand content and even apps. Other teams in the office work with the actual video content encryption and playback, and the UI the set top box shows.

The local office seems to be all running Fedora, so I’m saying goodbye to Ubuntu at work. I already miss it, but hopefully will find Fedora enjoyable in the long term.

The office is on the other side of Belfast so is a marginally longer commute, but it’s still reasonable to get to. Stranmillis seems a nice area of Belfast, and it’s a 10 minute walk to the Botanical gardens so I intend to make some time to see it over lunch, which will be nice as I really miss getting out as I could in Hursley and its surrounding fields.

by Anton Piatek at June 22, 2015 02:53 PM

June 04, 2015

Anton Piatek

Bye bye big blue

After nearly 10 years with IBM, I am moving on… Today is my last day with IBM.

I suppose my career with IBM really started as a pre-university placement at IBM, which makes my time in IBM closer to 11 years.  I worked with some of the WebSphere technical sales and pre-sales teams in Basingstoke, doing desktop support and Lotus Domino administration and application design, though I don’t like to remind people that I hold qualifications on Domino :p

I then joined as a graduate in 2005, and spent most of my time working on Integration Bus (aka Message Broker, and several more names) and enjoyed working with some great people over the years. The last 8 months or so have been with the QRadar team in Belfast, and I really enjoyed my time working with such a great team.

I have done test roles, development roles, performance work, some time in level 3 support, and enjoyed all of it. Even the late nights the day before release were usually good fun (the huge pizzas helped!).

I got very involved with IBM Hursley’s Blue Fusion events, which were incredible fun and a rather unique opportunity to interact with secondary school children.

Creating an Ubuntu-based linux desktop for IBM, with over 6500 installs, has been very rewarding and something I will remember fondly.

I’ve enjoyed my time in IBM and made some great friends. Thanks to everyone that helped make my time so much fun.


by Anton Piatek at June 04, 2015 10:00 AM

April 11, 2015

Philip Stubbs


USB OTG cable

Suddenly decided that I needed a USB OTG cable. Rather than wait for one in the post, I decided to make one from spare cables found in my box of bits.
Initially I thought that it would be a simple case of just cutting the cables and reconnecting a USB connector from a phone lead to a female USB socket. Unfortunately that is not the case.
The USB cable has four wires, but the micro USB plug has five contacts. The unused contact (the ID pin) needs to be connected to ground to make the OTG cable. The plug on the cable I used does not have a connection for the extra pin, so I needed to rip it apart and blob a lump of solder on two pins. The body of the plug has a wall between each pin, so I rammed a small screwdriver in there to allow the soldered pins to fit.

I then reassembled the plug, and continued with connecting the wires together. This was an easy case of red to red, black to black, green to green and white to white. A piece of heat shrink covers the mess.
Now to use it. It allows me to plug a keyboard into my Nexus tablet. If I plug a mouse in, a pointer pops up. All of a sudden using the tablet feels like using a real computer. I am typing this with a keyboard on my lap down the garden with my tablet.
The real motivation for the cable was to allow me to use my phone to adjust the settings on my MultiWii based control board of my Quadcopter. For that, it seems even better than MultiWiiConf, and certainly a lot more convenient when out flying.

by stuphi at April 11, 2015 04:31 PM

January 29, 2015

Philip Stubbs

Arduino and NRF24L01 for Quad-copter build

As part of my Quadcopter build, I am using a couple of Arduinos along with some cheap NRF24L01 modules from Banggood for the radio transmitter and receiver. The idea came from watching the YouTube channel iforce2d.

When I started developing (copying) the code for the NRF modules, I did a quick search for the required library. For no good reason, I opted for the RadioHead version. Part of my thinking was that by using a different library from iforce2d, I would have to poke around in the code a bit more and learn something.

All went well with the initial trials. I managed to get the two modules talking to each other, and even had a simple processing script show the stick outputs by reading from the serial port of the receiver.

Things did not look so good when I plugged the flight controller in. For that I am using an Afro Mini32. With that connected to the computer and Baseflight running, the receiver tab showed a lot of fluctuations on the control signals.

After lots of poking, thinking, and even taking it into work to connect it to an oscilloscope, it looked like the radio was mucking up the timing of the PWM signal for the flight controller. Finally, I decided to give an alternative NRF library a try, and from the Arduino playground site I selected this one. As per iforce2d, I think.

Well, that fixed it. Although, at the same time I cleaned up my code, pulled lots of debugging stuff out and changed one if statement to a while loop, so there is a chance that changing the library was not the answer. Anyhow, it works well now. Just need some more bits to turn up and I can start on the actual copter!

by stuphi at January 29, 2015 04:28 PM

June 23, 2014

Tony Whitmore

Tom Baker at 80

Back in March I photographed the legendary Tom Baker at the Big Finish studios in Kent. The occasion was the recording of a special extended interview with Tom, to mark his 80th birthday. The interview was conducted by Nicholas Briggs, and the recording is being released on CD and download by Big Finish.

I got to listen in to the end of the recording session and it was full of Tom’s own unique form of inventive story-telling, as well as moments of reflection. I got to photograph Tom on his own using a portable studio set up, as well as with Nick and some other special guests. All in about 7 minutes! The cover has been released now and it looks pretty good I think.

Tom Baker at 80

The CD is available for pre-order from the Big Finish website now. Pre-orders will be signed by Tom, so buy now!

The post Tom Baker at 80 first appeared on Words and pictures.

by Tony at June 23, 2014 05:31 PM

May 19, 2014

Tony Whitmore

Mad Malawi Mountain Mission

This autumn I’m going to Malawi to climb Mount Mulanje. You might not have heard of it, but it’s 3,000m high and the tallest mountain in southern Africa. I will be walking 15 miles a day uphill, carrying a heavy backpack. I will be bitten by mosquitoes and other flying buzzy things. It’ll be hard work, is what I’m saying.

I’m doing this to raise money for AMECA. They’ve built a hospital in Malawi that is completely sustainable and not reliant on charity to keep operating. Adults pay for their treatments and children are treated for free. But AMECA also support nurses from the UK to go and work in the hospital. The people of Malawi get better healthcare and the nurses get valuable experience to bring back to the UK.

And that’s what the money I’m raising will go towards. There are just 15 surgeons in Malawi for 15 million people so the extra support is so valuable.

There have been lots of amazing, generous donors already. My family, friends, colleagues, members of the Ubuntu and FLOSS community, Doctor Who fans, random people off the Internet have all donated. Thank you, everyone. I have been touched by the response. But there’s still a way to go. I have just one month to raise £190. So much has been raised already, but I would love it if you could help push me over my target. Or, if you don’t like me and want to see me suffer, help me reach my target and I’ll be sure to post lots of photos of the injuries I sustain. Either way…

Please donate here.

The post Mad Malawi Mountain Mission first appeared on Words and pictures.

by Tony at May 19, 2014 04:59 PM

May 12, 2014

Tony Whitmore

Paul Spragg

I was very sorry to hear on Friday that Paul Spragg had passed away suddenly. Paul was an essential part of Big Finish, working tirelessly behind the scenes to make everything keep ticking over. I had the pleasure of meeting him on a number of occasions. I first met him at the recording for Dark Eyes 2. It was my first engagement for Big Finish and I was unsure of what to expect and generally feeling a little nervous. Paul was friendly right from the start and helped me get set up and ready. He even acted as my test subject as I was setting up my dramatic side lights, which is where the photo below comes from. It’s just a snap really, but it’s Paul.

He was always friendly and approachable, and we had a few chats when I was in the studio at other recording sessions. We played tag on the spare room at the studios, which is where interviews are done as well as being a makeshift photography studio. It was always great to bump into him at other events too.

Thanks to his presence on the Big Finish podcast Paul’s voice will be familiar to thousands. His west country accent and catchphrases like “fo-ward” made him popular with podcast listeners, to the extent that there were demands that he travel to conventions to meet them!

My thoughts and condolences go to his family, friends and everyone at Big Finish.

Paul Spragg from Big Finish

The post Paul Spragg first appeared on Words and pictures.

by Tony at May 12, 2014 05:30 PM