Planet HantsLUG

June 21, 2022

Steve Kemp

Writing a simple TCL interpreter in golang

Recently I was reading Antirez's piece TCL the Misunderstood again, which is a nice defense of the utility and value of the TCL language.

TCL is one of those scripting languages which used to be used a hell of a lot, for scripting routers, creating GUIs, and more. These days it quietly lives on, but doesn't get much love. That said, it's a remarkably simple language to learn and experiment with.

Using TCL always reminds me of FORTH, in the sense that the syntax consists of "words" with "arguments", and everything is a string (well, not really, but almost. Some things are lists too of course).

A simple overview of TCL would probably begin by saying that everything is a command, and that the syntax is very free. There are just a couple of clever rules which are applied consistently to give you a remarkably flexible environment.

To get started we'll set a string value to a variable:

  set name "Steve Kemp"
  => "Steve Kemp"

Now you can output that variable:

  puts "Hello, my name is $name"
  => "Hello, my name is Steve Kemp"

OK, it looks a little verbose due to the use of set, and puts is less pleasant than print or echo, but it works. It is readable.

Next up? Interpolation. We saw how $name expanded to "Steve Kemp" within the string. That's true more generally, so we can do this:

 set print pu
 set me    ts

 $print$me "Hello, World"
 => "Hello, World"

There "$print" and "$me" expanded to "pu" and "ts" respectively. Resulting in:

 puts "Hello, World"

That expansion happened before the input was executed, and works as you'd expect. There's another form of expansion too, which involves the [ and ] characters. Anything within the square-brackets is replaced with the result of evaluating that body. So we can do this:

 puts "1 + 1 = [expr 1 + 1]"
 => "1 + 1 = 2"

Perhaps enough detail there, except to say that we can use { and } to enclose things that are NOT expanded, or executed, at parse time. This facility lets us evaluate those blocks later, so you can write a while-loop like so:

 set cur 1
 set max 10

 while { expr $cur <= $max } {
       puts "Loop $cur of $max"
       incr cur
 }

Anyway that's enough detail. Much like writing a FORTH interpreter the key to implementing something like this is to provide the bare minimum of primitives, then write the rest of the language in itself.

You can get a usable scripting language with only a small number of primitives, and then evolve the rest yourself. Antirez did just this: he put together a small TCL interpreter in C named picol.

Other people have done similar things; recently I saw a writeup which follows the same approach.

So of course I had to do the same thing, in golang.

My interpreter runs the original code from Antirez with only minor changes, and was a fair bit of fun to put together.

Because the syntax is so fluid there's no complicated parsing involved, and the core interpreter was written in only a few hours then improved step by step.
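
To give a flavour of the approach, here's a minimal sketch in Go of that kind of core: a variable store, a table of primitive commands, and naive $-interpolation. To be clear, this is an illustrative toy and not the code from my repository - it ignores quoting, [..] command substitution, and {..} blocks entirely:

    package main

    import (
        "fmt"
        "strings"
    )

    // Each primitive takes its arguments as strings and returns a string,
    // mirroring TCL's "everything is a string" model.
    type command func(args []string) string

    func main() {
        vars := map[string]string{}

        // The bare minimum of primitives; a real interpreter builds the
        // rest of the language on top of these.
        cmds := map[string]command{
            "set": func(args []string) string {
                vars[args[0]] = args[1]
                return args[1]
            },
            "puts": func(args []string) string {
                fmt.Println(strings.Join(args, " "))
                return ""
            },
        }

        script := "set name World\nputs Hello, $name"

        for _, line := range strings.Split(script, "\n") {
            words := strings.Fields(line)
            if len(words) == 0 {
                continue
            }
            // Expand $variables before executing, just as TCL does.
            for i, w := range words {
                if strings.HasPrefix(w, "$") {
                    words[i] = vars[w[1:]]
                }
            }
            if cmd, ok := cmds[words[0]]; ok {
                cmd(words[1:])
            } else {
                fmt.Println("unknown command:", words[0])
            }
        }
    }

The real thing needs a proper lexer for quotes, brackets, and braces, but the dispatch loop at the heart of it really is this small.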

Of course to make a language more useful you need I/O beyond just writing to the console - and being able to run the list-operations would make it much more useful to TCL users. That said, I had fun writing it, it seems to work, and once again I added fuzz-testers to the lexer and parser to satisfy myself it was at least somewhat robust.

Feedback welcome, but even in quiet isolation it's fun to look back at these "legacy" languages and recognize that their simplicity led to a lot of flexibility.

June 21, 2022 01:00 PM

May 30, 2022

Debian Bits

Debian welcomes its new Outreachy interns

Outreachy logo

Debian continues participating in Outreachy, and we're excited to announce that Debian has selected two interns for the Outreachy May 2022 - August 2022 round.

Israel Galadima and Michael Ikwuegbu will work on "Improve yarn package manager integration with Debian", mentored by Akshay S Dinesh and Pirate Praveen.


Congratulations and welcome to Israel Galadima and Michael Ikwuegbu!

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentor students and outreach tasks, and the Software Freedom Conservancy's administrative support, as well as the continued support of Debian's donors, who provide funding for the internships.

Join us and help extend Debian! You can follow the work of the Outreachy interns by reading their blogs (they are syndicated in Planet Debian), and chat with us on the #debian-outreach IRC channel and mailing list.

by Abhijith Pa at May 30, 2022 10:00 AM

May 29, 2022

Adam Trickett

Bog Roll: Python

I've used many programming languages since starting with Commodore BASIC V2 on the Commodore 64. BASIC Lightning was a more structured version and quite nice. I also dabbled with 6510 assembly, which is very RISC like and rather tedious.

On PCs I first used Microsoft BASIC, which was okay, but I much preferred Borland's TurboBASIC, which was really quite powerful. At university I taught myself Pascal from books in the library and used Borland Turbo Pascal quite a bit. I then tried Borland's Delphi, which was all visual, but basically Turbo Pascal at heart.

I then started to do web things, and at the time Perl was the language to use, so I started to use that. I took a quick look at PHP and came back to Perl, which I used for several years.

When working with SAP I initially worked with Perl outside of SAP, processing data from within it. I later migrated to programming SAP in ABAP - another static language like Pascal and quite different from Perl. I realise that I've not done much in Perl for about 12 years - I tinker now and then, but I've not written anything serious.

Years ago I thought about learning Python but never did, but for the last few weeks I've been following the OpenSAP course Python for Beginners, which has been quite interesting. I don't know if I'll ever use Python again, but other than missing the first week (no credit possible) I've consistently scored 100% on the tests and exercises.

May 29, 2022 04:27 PM

May 24, 2022

Debian Bits

Debian welcomes the 2022 GSoC interns

GSoC logo

We are very excited to announce that Debian has selected three interns to work under mentorship on a variety of projects with us during the Google Summer of Code.

Here is the list of projects, interns, and details of the tasks to be performed.


Project: Android SDK Tools in Debian

  • Interns: Nkwuda Sunday Cletus and Raman Sarda

The deliverables of this project will mostly be finished packages submitted to Debian sid, both for new packages and updated packages. Whenever possible, we should also try to get patches submitted and merged upstream in the Android sources.


Project: Quality Assurance for Biological and Medical Applications inside Debian

  • Intern: Mohammed Bilal

Deliverables of the project: Continuous integration tests for all Debian Med applications (life sciences, medical imaging, others), Quality Assurance review and bug fixing.


Congratulations and welcome to all the interns!

The Google Summer of Code program is possible in Debian thanks to the efforts of Debian Developers and Debian Contributors that dedicate part of their free time to mentor interns and outreach tasks.

Join us and help extend Debian! You can follow the interns' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or reach out to the individual projects' team mailing lists.

by Abhijith Pa at May 24, 2022 11:15 AM

May 13, 2022

Debian Bits

New Debian Developers and Maintainers (March and April 2022)

The following contributors got their Debian Developer accounts in the last two months:

  • Henry-Nicolas Tourneur (hntourne)
  • Nick Black (dank)

The following contributors were added as Debian Maintainers in the last two months:

  • Jan Mojžíš
  • Philip Wyett
  • Thomas Ward
  • Fabio Fantoni
  • Mohammed Bilal
  • Guilherme de Paula Xavier Segundo

Congratulations!

by Jean-Pierre Giraud at May 13, 2022 03:00 PM

May 03, 2022

Steve Kemp

A plea for books ..

Recently I've been getting much more interested in the "retro" computers of my youth, partly because I've been writing crazy code in Z80 assembly-language, and partly because I've been preparing to introduce our child to his first computer:

  • An actual 1982 ZX Spectrum, cassette deck and all.
    • No internet
    • No hi-rez graphics
    • Easily available BASIC
    • And as a nice bonus the keyboard is wipe-clean!

I've got a few books, books I've hoarded for 30+ years, but I'd love to collect some more. So here's my request:

  • If you have any books covering either the Z80 processor, or the ZX Spectrum, please consider dropping me an email.

I'd be happy to pay €5-10 each for any book I don't yet own, and I'd also be more than happy to cover the cost of postage to Finland.

I'd be particularly pleased to see anything from Melbourne House, and while low-level is best, the coding-books from Usborne (The Mystery Of Silver Mountain, etc, etc) wouldn't go amiss either.

I suspect most people who have collected and kept these wouldn't want to part with them, but just in case ..

May 03, 2022 08:00 PM

April 26, 2022

Steve Kemp

Porting a game from CP/M to the ZX Spectrum 48k

Back in April 2021 I introduced a simple text-based adventure game, The Lighthouse of Doom, which I'd written in Z80 assembly language for CP/M systems.

As it was recently the 40th Anniversary of the ZX Spectrum 48k, the first computer I had, and the reason I got into programming in the first place, it crossed my mind that it might be possible to port my game from CP/M to the ZX Spectrum.

To recap, my game is a simple text-based adventure which you can complete in fifteen minutes, or less, with a bunch of Paw Patrol easter-eggs.

  • You enter simple commands such as "up", "down", "take rug", etc etc.
  • You receive text-based replies such as "You can't see a telephone to use here!".

My code is largely table-based, having structures that cover objects, locations, and similar state-things. Most of the code involves working with those objects, with only a few small platform-specific routines being necessary:

  • Clearing the screen.
  • Pausing for "a short while".
  • Reading a line of input from the user.
  • Sending a $-terminated string to the console.
  • etc.

My feeling was that I could replace the use of those CP/M functions with something custom, and I'd have done 99% of the work. Of course the devil is always in the details.
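
Table-driven here just means the engine is a generic loop over data; the Z80 structures boil down to something like this shape, sketched in Go for brevity (the names and layout are invented for illustration, not lifted from the game):

    package main

    import "fmt"

    // A location in the game world: purely data, walked by generic code.
    type location struct {
        description string
        exits       map[string]int // command word -> index of next location
    }

    func main() {
        world := []location{
            {"You are at the base of the lighthouse.", map[string]int{"up": 1}},
            {"You are at the top of the lighthouse.", map[string]int{"down": 0}},
        }

        cur := 0
        for _, cmd := range []string{"up", "down"} {
            if next, ok := world[cur].exits[cmd]; ok {
                cur = next
                fmt.Println(world[cur].description)
            } else {
                fmt.Println("You can't go that way!")
            }
        }
    }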

Let's start. To begin with I'm lucky in that I'm using the pasmo assembler which is capable of outputting .TAP files, which can be loaded into ZX Spectrum emulators.

I'm not going to walk through all the code here, because that is available within the project repository, but here's a very brief getting-started guide which demonstrates writing some code on a Linux host, and generating a TAP file which can be loaded into your favourite emulator. As I needed similar routines I started working out how to read keyboard input, clear the screen, and output messages, which is what the following sample demonstrates.

First of all you'll need to install the dependencies, specifically the assembler and an emulator to run the thing:

# apt install pasmo spectemu-x11

Now we'll create a simple assembly-language file, to test things out - save the following as hello.z80:

    ; Code starts here
    org 32768

    ; clear the screen
    call cls

    ; output some text
    ld   de, instructions                  ; DE points to the text string
    ld   bc, instructions_end-instructions ; BC contains the length
    call 8252                              ; PR-STRING (0x203C): print BC bytes at DE

    ; wait for a key
    ld hl,0x5c08        ; LASTK system variable (last key pressed)
    ld a,255
    ld (hl),a
wkey:
    cp (hl)             ; wait for the value to change
    jr z, wkey

    ; get the key and save it
    ld a,(HL)
    push af

    ; clear the screen
    call cls

    ; show a second message
    ld de, you_pressed
    ld bc, you_pressed_end-you_pressed
    call 8252           ; PR-STRING again

    ;; Output the saved ASCII character
    ld a,2
    call 0x1601         ; ROM_OPEN_CHANNEL (2 = main screen)
    pop af              ; restore the key we read earlier
    call 0x0010         ; RST 0x10: print the character in A

    ; loop forever.  simple demo is simple
endless:
    jr endless

cls:
    ld a,2
    call 0x1601  ; ROM_OPEN_CHANNEL
    call 0x0DAF  ; ROM_CLS
    ret

instructions:
    defb 'Please press a key to continue!'
instructions_end:

you_pressed:
    defb 'You pressed:'
you_pressed_end:

end 32768

Now you can assemble that into a TAP file like so:

$ pasmo --tapbas hello.z80 hello.tap

The final step is to load it in the emulator:

$ xspect -quick-load -load-immed -tap hello.tap

The reason I specifically chose that emulator was because it allows easy loading of a TAP file, without waiting for the tape to play, and without the use of any menus. (If you can tell me how to make FUSE auto-start like that, I'd love to hear!)

I wrote a small number of "CP/M emulation functions" allowing me to clear the screen, pause, prompt for input, and output text, which will work via the primitives available within the standard ZX Spectrum ROM. Then I reworked the game a little to cope with the different screen resolution (though only minimally, some of the text still breaks lines in unfortunate spots).

The end result is reasonably playable, even if it isn't quite as nice as the CP/M version (largely because of the unfortunate word-wrapping, and smaller console-area). So now my repository contains a .TAP file which can be loaded into your emulator of choice, available from the releases list.

Outstanding bugs? Well the line-input is a bit horrid, and unfortunately this was written for CP/M accessed over a terminal - so I'd assumed a "standard" 80x25 resolution, which means that line/word-wrapping is broken in places.

That said it didn't take me too long to make the port, and it was kinda fun.

April 26, 2022 08:00 PM

April 15, 2022

Adam Trickett

Bog Roll: Upgrade cycle

Last May I started the process of upgrading all my Debian systems from version 10 to 11. I completed all the desktop and laptop systems pretty quickly and by the summer only my home server and cloud server remained on the older version.

In the autumn we moved into my mother-in-law's while the major work on the house was done and it wasn't habitable. As a result the upgrade to the server was delayed as we were somewhat at sixes and sevens, and a server upgrade wasn't the best thing to do.

The house rebuild isn't complete, but it's getting closer, and I really should look at upgrading my servers from Debian 10 to 11. It's mostly that this upgrade introduces a few changes, and I don't want to rock the boat too much if I need to make changes to accommodate them.

April 15, 2022 04:45 PM

March 14, 2022

Andy Smith

Using Duplicity to back up to Amazon S3 over IPv6 (only)

Scenario

I have a server that I use for making backups. I also send backups from that server into Amazon S3 at the “Infrequent Access” storage class. That class is cheaper to store but expensive to access. It’s intended for backups of last resort that you only access in an emergency. I use Duplicity to handle the S3 part.

(I could save a bit more by using one of the “Glacier” classes but at the moment the cost is minimal and I’m not that brave.)

I recently decided to change which server I use for the backups. I noticed that renting a server with only IPv6 connectivity was cheaper, and as all the hosts I back up have IPv6 connectivity I decided to give that a go.

This mostly worked fine. The only thing I really noticed was when I tried to install some software from GitHub. GitHub doesn't support IPv6, so I had to piggyback that download through another host.

Then I came to set up Duplicity again and found that I needed to make some non-obvious changes to make it work with S3 over IPv6-only.

S3 endpoint

The main issue is that the default S3 endpoint URL is https://s3.<region>.amazonaws.com, and this host only has an A (IPv4) record! For example:

$ host s3.us-east-1.amazonaws.com
s3.us-east-1.amazonaws.com has address 52.216.89.254

If you run Duplicity with a target like s3://yourbucketname/path/to/backup then it will try that endpoint, get only an IPv4 address, and fail with “Network unreachable”.

S3 does actually support IPv6, but for that to work you need to use a dual stack endpoint! They look like this:

$ host s3.dualstack.us-east-1.amazonaws.com
s3.dualstack.us-east-1.amazonaws.com has address 54.231.129.0
s3.dualstack.us-east-1.amazonaws.com has IPv6 address 2600:1fa0:80dc:5101:34d9:451e::

So we need to specify the S3 endpoint to use.
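
You can confirm which address families an endpoint offers from code too. Here's a quick sketch in Go using the standard library resolver - nothing Duplicity-specific, just the same lookups host is doing above:

    package main

    import (
        "fmt"
        "net"
    )

    // Resolve both endpoints and report which address families come back.
    // Only the "dualstack" endpoint should yield an IPv6 (AAAA) result.
    func main() {
        for _, host := range []string{
            "s3.us-east-1.amazonaws.com",
            "s3.dualstack.us-east-1.amazonaws.com",
        } {
            ips, err := net.LookupIP(host)
            if err != nil {
                fmt.Println(host, err)
                continue
            }
            for _, ip := range ips {
                family := "IPv4"
                if ip.To4() == nil {
                    family = "IPv6"
                }
                fmt.Printf("%-40s %s (%s)\n", host, ip, family)
            }
        }
    }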

Specifying the S3 endpoint

In order to do this you need to switch Duplicity to the “boto3” backend. Assuming you’ve installed the correct package (python3-boto3 on Debian), this is as simple as changing the target from s3://… to boto3+s3://….

That then allows you to use the command line arguments --s3-region-name and --s3-endpoint-url so you can tell it which host to talk to. That ends up giving you both an IPv4 and an IPv6 address and your system correctly chooses the IPv6 one.

The full script

The new, working script now looks something like this:

export PASSPHRASE="highlysecret"
export AWS_ACCESS_KEY_ID="notquiteassecret"
export AWS_SECRET_ACCESS_KEY="extremelysecret"
# Somewhere with plenty of free space.
export TMPDIR=/var/tmp

duplicity --encrypt-key ABCDEF0123456789 \
          --asynchronous-upload \
          -v 4 \
          --archive-dir=/path/to/your/duplicity/archives \
          --s3-use-ia \
          --s3-use-multiprocessing \
          --s3-use-new-style \
          --s3-region-name "us-east-1" \
          --s3-endpoint-url "https://s3.dualstack.us-east-1.amazonaws.com" \
          incr \
          --full-if-older-than 30D \
          /stuff/you/want/backed/up \
          "boto3+s3://yourbucketname/path/to/backups"

The previous version of the script looked a bit like:

# All the exports stayed the same
duplicity --encrypt-key ABCDEF0123456789 \
          --asynchronous-upload \
          -v 4 \
          --archive-dir=/path/to/your/duplicity/archives \
          --s3-use-ia \
          --s3-use-multiprocessing \
          incr \
          --full-if-older-than 30D \
          /stuff/you/want/backed/up \
          "s3+http://yourbucketname/path/to/backups"

by Andy at March 14, 2022 11:32 PM

January 28, 2022

Andy Smith

Building BitFolk’s Rescue VM

Overview

BitFolk's Rescue VM is a live system based on the Debian Live project. You boot it, it finds its root filesystem over read-only NFS, and then it mounts a unionfs RAM disk over that so that you can make changes (e.g. install packages) that don't persist. People generally use it to repair broken operating systems, reset root passwords etc.

Every few years I have to rebuild it, because it’s important that it’s new enough to be able to effectively poke around in guest filesystems. Each time I have to try to remember how I did it. It’s not that difficult but it’s well past time that I document how it’s done.

Basic concept of Debian Live

The idea is that everything under the config/ directory of your build area is either

  • a set of configuration options for the process itself,
  • some files to put in the image,
  • some scripts to run while building the image, or
  • some scripts to run while booting the image.

Install packages

Pick a host running at least the latest Debian stable. It might be possible to build a live image for a newer version of Debian, but the live-build system and its dependencies like debootstrap might end up being too old.

$ sudo apt install live-build live-boot live-config

Prepare the work directory

$ sudo mkdir -vp /srv/lb/auto
$ cd /srv/lb

Main configuration

All of these config options are described in the lb_config man page.

$ sudo tee auto/config >/dev/null <<'_EOF_'
#!/bin/sh

set -e

cacher_prefix="apt-cacher.lon.bitfolk.com/debian"
mirror_host="deb.debian.org"
main_mirror="http://${cacher_prefix}/${mirror_host}/debian/"
sec_mirror="http://${cacher_prefix}/${mirror_host}/debian-security/"

lb config noauto \
    --architectures                     amd64 \
    --distribution                      bullseye \
    --binary-images                     netboot \
    --archive-areas                     main \
    --apt-source-archives               false \
    --apt-indices                       false \
    --backports                         true \
    --mirror-bootstrap                  "$main_mirror" \
    --mirror-chroot-security            "$sec_mirror" \
    --mirror-binary                     "$main_mirror" \
    --mirror-binary-security            "$sec_mirror" \
    --memtest                           none \
    --net-tarball                       true \
    "${@}"
_EOF_

The variables at the top just save me having to repeat myself for all the mirrors. They make both the build process and the resulting image use BitFolk's apt-cacher to proxy the deb.debian.org mirror.

I'm not going to describe every config option as you can just look them up in the man page. The most important one is --binary-images netboot to make sure it builds an image that can be booted by network.

Extra packages

There's some extra packages I want available in the rescue image. Here's how to get them installed.

$ sudo tee config/package-lists/bitfolk_rescue.list.chroot > /dev/null <<_EOF_
pwgen
less
binutils
build-essential
bzip2
gnupg
openssh-client
openssh-server
perl
perl-modules
telnet
screen
tmux
rpm
_EOF_

Installing a backports kernel

I want the rescue system to be Debian 11 (bullseye), but with a bullseye-backports kernel.

We already used --backports true to make sure that we have access to the backports package mirrors but we need to run a script hook to actually install the backports kernel in the image while it's being built.

$ sudo tee config/hooks/live/9000-install-backports-kernel.hook.chroot >/dev/null <<'_EOF_'
#!/bin/sh

set -e

apt -y install -t bullseye-backports linux-image-amd64
apt -y purge -t bullseye linux-image-amd64
apt -y purge -t bullseye 'linux-image-5.10.*'
_EOF_

Set a static /etc/resolv.conf

This image will only be booted on one network where I know what the nameservers are, so I may as well statically override them. If you were building an image to use on different networks you'd probably instead want to use one of the public resolvers or accept what DHCP gives you.

$ sudo tee config/includes.chroot/etc/resolv.conf >/dev/null <<_EOF_
nameserver 85.119.80.232
nameserver 85.119.80.233
_EOF_

Set an explanatory footer text in /etc/issue.footer

The people using this rescue image don't necessarily know what it is and how to use it. I take the opportunity to put some basic info in the file /etc/issue.footer in the image, which will later end up in the real /etc/issue.

$ sudo tee config/includes.chroot/etc/issue.footer >/dev/null <<_EOF_
BitFolk Rescue Environment - https://tools.bitfolk.com/wiki/Rescue

Blah blah about what this is and how to use it
_EOF_

Set a random password at boot

By default a Debian Live image has a user name of "user" and a password of "live". This isn't suitable for a networked service that will have sshd active from the start, so we will install a hook script that sets a random password. This will be run near the end of the image's boot process.

$ sudo tee config/includes.chroot/lib/live/config/2000-passwd >/dev/null <<'_EOF_'
#!/bin/sh

set -e

echo -n " random-password "

NEWPASS=$(/usr/bin/pwgen -c -N 1)
printf "user:%s\n" "$NEWPASS" | chpasswd

RED='\033[0;31m'
NORMAL='\033[0m'

{
    printf "****************************************\n";
    printf "Resetting user password to random value:\n";
    printf "\t${RED}New user password:${NORMAL} %s\n" "$NEWPASS";
    printf "****************************************\n";
    cat /etc/issue.footer
} >> /etc/issue
_EOF_

This script puts the random password and the footer text into the /etc/issue file which is displayed above the console login prompt, so the user can see what the password is.

Fix initial networking setup

This one's a bit unfortunate and is a huge hack, but I'm not sure enough of the details to report a bug yet.

The live image when booted is supposed to be able to set up its network by a number of different ways. DHCP would be the most sensible for an image you take with you to different networks.

The BitFolk Rescue VM is only ever booted in one network though, and we don't use DHCP. I want to set static networking through the ip= syntax of the kernel command line.

Unfortunately it doesn't seem to work properly with live-boot as shipped. I had to hack the /lib/live/boot/9990-networking.sh file to make it parse the values out of the kernel command line.

Here's a diff. Copy /lib/live/boot/9990-networking.sh to config/includes.chroot/usr/lib/live/boot/9990-networking.sh and then apply that patch to it.

It's simple enough that you could probably edit it by hand. All it does is comment out one section and replace it with some bits that parse IP setup out of the $STATICIP variable.

Fix the shutdown process

Again this is a horrible hack and I'm sure there is a better way to handle it, but I couldn't work out anything better and this works.

This image will be running with its root filesystem on NFS. When a shutdown or halt command is issued however, systemd seems extremely keen to shut off the network as soon as possible. That leaves the shutdown process unable to continue because it can't read or write its root filesystem any more. The shutdown process stalls forever.

As this is a read-only system with no persistent state I don't care how brutal the shutdown process is. I care more that it does actually shut down. So, I have added a systemd service that issues systemctl --force --force poweroff any time that it's about to shut down by any means.

$ sudo tee config/includes.chroot/etc/systemd/system/always-brutally-poweroff.service >/dev/null <<_EOF_
[Unit]
Description=Every kind of shutdown will be brutal poweroff
DefaultDependencies=no
After=final.target

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl --force --force poweroff

[Install]
WantedBy=final.target
_EOF_

And to force it to be enabled at boot time:

$ sudo tee config/includes.chroot/etc/rc.local >/dev/null <<_EOF_
#!/bin/sh

set -e

systemctl enable always-brutally-poweroff
_EOF_

Build it

At last we're ready to build the image.

$ sudo lb clean && sudo lb config && sudo lb build

The "lb clean" is there because you probably won't get this right first time and will want to iterate on it.

Once complete you'll find the files to put on your NFS server in binary/ and the kernel and initramfs to boot on your client machine in tftpboot/live/.

$ sudo rsync -av binary/ my.nfs.server:/srv/rescue/

Booting it

The details of exactly how I boot the client side (which in BitFolk's case is a customer VM) are out of scope here, but this is sort of what the kernel command line looks like on the client (normally all on one line):

root=/dev/nfs
ip=192.168.0.225:192.168.0.243:192.168.0.1:255.255.248.0:rescue
hostname=rescue
nfsroot=192.168.0.243:/srv/rescue
nfsopts=tcp
boot=live
persistent

Explained:

root=/dev/nfs
    Get root filesystem from NFS.

ip=192.168.0.225:192.168.0.243:192.168.0.1:255.255.248.0:rescue
    Static IP configuration on the kernel command line. Separated by colons:

      • Client's IP
      • NFS server's IP
      • Default gateway
      • Netmask
      • Host name

hostname=rescue
    Host name.

nfsroot=192.168.0.243:/srv/rescue
    Where to mount root from on the NFS server.

nfsopts=tcp
    NFS client options to use.

boot=live
    Tell live-boot that this is a live image.

persistent
    Look for persistent data.
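
Since the ip= fields are purely positional, a toy parser makes the layout obvious. This Go sketch handles only the five-field subset shown above; the real kernel syntax allows further optional fields (device, autoconf, and so on):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        arg := "192.168.0.225:192.168.0.243:192.168.0.1:255.255.248.0:rescue"
        labels := []string{"client IP", "NFS server IP", "default gateway", "netmask", "host name"}

        fields := strings.Split(arg, ":")
        if len(fields) != len(labels) {
            fmt.Println("expected", len(labels), "colon-separated fields")
            return
        }
        for i, v := range fields {
            fmt.Printf("%-16s %s\n", labels[i]+":", v)
        }
    }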

In action

Here's an Asciinema of this image in action.

Improvements

There's a few things in here which are hacks. What I have works but no doubt I am doing some things wrong. If you know better please do let me know in comments or whatever. Ideally I'd like to stick with Debian Live though because it's got a lot of problems solved already.

by Andy at January 28, 2022 11:20 PM

December 21, 2021

Adam Trickett

Bog Roll: Network

After months of feeble and insane excuses Orange.fr have now connected my mother-in-law's house up to a fibre network. They were so useless that we actually had ADSL activated over the old copper line, at which point they complained that ADSL isn't available in areas with fibre - to which we said: okay then, install fibre...

Yesterday two blokes from the subcontracting firm arrived and ran a new fibre from the pole to the bracket on the house for the old phone line, and then installed a new fibre run, tested it and hooked it up to a new Orange router box. All this while the old ADSL was still working, so no loss of service.

December 21, 2021 09:58 AM

November 30, 2021

Andy Smith

btrfs compression wins

Some quite good btrfs compression results from my backup hosts (which back up customer data).

Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       64%       68G         105G         1.2T
none       100%       24G          24G         434G
zlib        54%       43G          80G         797G
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       74%       91G         123G         992G
none       100%       59G          59G         599G
lzo         50%       32G          63G         393G
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       73%       16G          22G         459G
none       100%       12G          12G         269G
lzo         40%      4.1G          10G         190G
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       71%      105G         148G         1.9T
none       100%       70G          70G         910G
zlib        40%       24G          60G         1.0T
lzo         58%       10G          17G          17G

So that’s 398G that takes up 280G, a 29.6% reduction.

The “none” type is incompressible files such as media that’s already compressed. I started off with lzo compression but I’m switching to zlib now as it compresses more and this data is rarely accessed so I’m not too concerned about performance. I need newer kernels on these before I can try zstd.

I’ve had serious concerns about btrfs before based on issues I’ve had using it at home, but these were mostly around multiple device usage. Here they get a single block device that has redundancy underneath so the only remotely interesting thing that btrfs is doing here is the compression.

Might try some offline deduplication next.

by Andy at November 30, 2021 01:45 AM

July 16, 2021

Alan Pope

Team Building via Chess

One of the things I really love about working at Influx Data is the strong social focus for employees. We're all remote workers, and the teams have strategies to enable us to connect better. One of those strategies is Chess Tournaments!

I haven’t played chess for 20 years or more, and was never really any good at it. I know the basic moves, and can sustain a game, but I’m not a chess strategy guru, and don’t know all the best plays.

So I was somewhat cautious about putting my name in the hat for the (entirely optional) Influx Data chess tournament this month. However, I did, and I’m so glad I decided to.

First of all I was invited to join a #Chess slack channel - we have channels for everything including pets (pictures of cats and dogs feature heavily here), pits (pictures of ribs, smokers and recipes are strong here) and #mad_props where people call out the great work of their team mates.

I then signed up to chess.com and played a couple of practice games. I was super rusty though, and didn’t do well.

The “Chexperts” (sorry) in the company ran a few training sessions over Zoom where they talked us through some opening moves, strategies and pitfalls. I learned more about chess in those sessions than I expected. I’m still terrible at it though.

In the tournament, I was matched up with someone - Pat - who works in another part of the company. I used to work with Pat at Canonical, conveniently. We’re apparently at a similar level - both terribad at Chess. We played two “10 min” games, one playing as black, and the other as white.

I joined a Zoom call at the pre-arranged time, and played my two games against Pat. The best part about this: the Chexpert, Steven, live-streams it to the rest of the company on Twitch, with commentary and analysis! This I found super hilarious. While we were playing, I was seeing words of encouragement from the spectators on my team in the Slack channel. It was so much fun!

Want to see how I did? Well, the video has been kindly archived along with the other matches, complete with Steven's commentary, over on YouTube!

Yes, I suck at this game. But it doesn’t matter, because I really enjoyed it, and found it a really great way to increase my Chess skills, while engaging with my new friends at Influx.

July 16, 2021 11:00 AM

July 08, 2021

Alan Pope

LXD - Container Manager

Preamble

I recently started working for InfluxData as a Developer Advocate on Telegraf, an open source server agent for collecting metrics. Telegraf builds from source to ship as a single Go binary; the latest, 1.19.1, was released just yesterday.

Part of my job involves helping users by reproducing reported issues, and assisting developers by testing their pull requests. It's fun stuff; I love it. Telegraf has an extensive set of plugins which support gathering, aggregating & processing metrics, and sending the results to other systems.

Because our users deploy Telegraf in such diverse ways, I sometimes have to stand up one-off environments to reproduce reported issues. So I thought I'd write up the basics of what I do, partly for me, and partly for my co-workers who also sometimes need to do this.

My personal and work computers both run Kubuntu 21.04. Sometimes issues are reported against Telegraf on other Linux distributions, or LTS releases of Ubuntu. In the past I’d use either VirtualBox or QEMU to create entire Virtual Machines for each Linux distribution or product I’m working with. Both can be slow to stand up clean machines, and take a fair chunk of disk space.

These days I prefer to use LXD. LXD is a system container manager, whose development is funded and led by Canonical, my previous employer. It's super lightweight, easy to use and fast to set up. So it's the tool I reach for most for these use cases.

Note that LXD can also launch Virtual Machines but I tend not to use that feature, preferring lightweight containers.

Setup

Install snapd

LXD is shipped as a snap for Linux, so snap support is required. On my Kubuntu system it’s already installed. If not, the Installing snapd documentation should get you going. Typically that just means finding the snapd package in your distro repository and installing it.

It’s also possible to install LXD from source, but that’s too much like hard work for me. The LXD Getting Started docs cover all of the above in detail.

On Windows there’s a LXD client package in Chocolatey and on MacOS it’s is available in Homebrew - but I’ve never tested those.

Install LXD

The LXD Snap Store page has all the details about the snap and how to install, but here’s the basics.

LXD has multiple supported releases. I just use whatever the default, latest stable release is, using this command:

$ sudo snap install lxd

Snaps by default will automatically update, so if the LXD publisher pushes a brand new major version to the latest/stable track/channel then you’ll get that update next time your system refreshes. However, it’s possible to request a specific ‘track’, to keep your machine on one major release. That can be done by specifying the track/channel on install. For example:

$ sudo snap install lxd --channel=4.15/stable

Use snap info lxd to get the full list of channels. As mentioned, personally I just use the default latest/stable track.

Initial configuration

LXD has a bunch of options to twiddle on install, but I tend to use the defaults. It sets up some space for storing the containers, and configures a network bridge interface between the host and the containers. To configure LXD, run the following and follow the prompt:

$ sudo lxd init

Personally, as I accept the defaults these days, I use the --auto switch.

$ sudo lxd init --auto

Next up, log out and back in, and make sure your user is in the lxd group.

$ groups 
alan adm cdrom sudo dip plugdev lpadmin lxd sambashare

That’s basically it. You can test that it’s setup correctly by launching a container. The last parameter is the friendly name you give the container.

$ lxc launch ubuntu:18.04 testcontainer
Creating testcontainer
Starting testcontainer       
$ lxc list testcontainer
+---------------+---------+----------------------+----------------------------------------------+-----------+-----------+
|     NAME      |  STATE  |         IPV4         |                     IPV6                     |   TYPE    | SNAPSHOTS |
+---------------+---------+----------------------+----------------------------------------------+-----------+-----------+
| testcontainer | RUNNING | 10.55.242.139 (eth0) | fd42:dd92:7d3c:7de7:216:3eff:fe0c:a98 (eth0) | CONTAINER | 0         |
+---------------+---------+----------------------+----------------------------------------------+-----------+-----------+

In use

Launch

In my case I often launch LTS Ubuntu containers. However, there are base images for many other Linux distributions.

So for example to spin up a Fedora 34 image I’d use:

$ lxc launch images:fedora/34 fedora34
Creating fedora34
Starting fedora34

You can search for images from the command line too, and optionally filter based on distro and architecture.

$ lxc image list images: fedora amd64
+--------------------------+--------------+--------+----------------------------------+--------------+-----------------+----------+------------------------------+
|          ALIAS           | FINGERPRINT  | PUBLIC |           DESCRIPTION            | ARCHITECTURE |      TYPE       |   SIZE   |         UPLOAD DATE          |
+--------------------------+--------------+--------+----------------------------------+--------------+-----------------+----------+------------------------------+
| fedora/33 (3 more)       | a013a473bef8 | yes    | Fedora 33 amd64 (20210707_20:33) | x86_64       | CONTAINER       | 96.12MB  | Jul 7, 2021 at 12:00am (UTC) |
+--------------------------+--------------+--------+----------------------------------+--------------+-----------------+----------+------------------------------+
| fedora/33 (3 more)       | da42bb5122b9 | yes    | Fedora 33 amd64 (20210707_20:33) | x86_64       | VIRTUAL-MACHINE | 628.38MB | Jul 7, 2021 at 12:00am (UTC) |
+--------------------------+--------------+--------+----------------------------------+--------------+-----------------+----------+------------------------------+
| fedora/33/cloud (1 more) | 9f1fb6c3286f | yes    | Fedora 33 amd64 (20210707_20:33) | x86_64       | CONTAINER       | 113.02MB | Jul 7, 2021 at 12:00am (UTC) |
+--------------------------+--------------+--------+----------------------------------+--------------+-----------------+----------+------------------------------+
| fedora/33/cloud (1 more) | b04e9066a00e | yes    | Fedora 33 amd64 (20210707_20:33) | x86_64       | VIRTUAL-MACHINE | 626.81MB | Jul 7, 2021 at 12:00am (UTC) |
+--------------------------+--------------+--------+----------------------------------+--------------+-----------------+----------+------------------------------+
| fedora/34 (3 more)       | 7856e578e7a1 | yes    | Fedora 34 amd64 (20210707_20:33) | x86_64       | CONTAINER       | 97.20MB  | Jul 7, 2021 at 12:00am (UTC) |
+--------------------------+--------------+--------+----------------------------------+--------------+-----------------+----------+------------------------------+
| fedora/34 (3 more)       | dd03395d4eca | yes    | Fedora 34 amd64 (20210707_20:33) | x86_64       | VIRTUAL-MACHINE | 564.31MB | Jul 7, 2021 at 12:00am (UTC) |
+--------------------------+--------------+--------+----------------------------------+--------------+-----------------+----------+------------------------------+
| fedora/34/cloud (1 more) | 43b84c5fa2e8 | yes    | Fedora 34 amd64 (20210707_20:33) | x86_64       | VIRTUAL-MACHINE | 570.00MB | Jul 7, 2021 at 12:00am (UTC) |
+--------------------------+--------------+--------+----------------------------------+--------------+-----------------+----------+------------------------------+
| fedora/34/cloud (1 more) | 90951791da81 | yes    | Fedora 34 amd64 (20210707_20:33) | x86_64       | CONTAINER       | 115.11MB | Jul 7, 2021 at 12:00am (UTC) |
+--------------------------+--------------+--------+----------------------------------+--------------+-----------------+----------+------------------------------+

The first time you launch a new container it will take a little while to download the base image for whatever version you specify. You can see which images you've already downloaded with lxc image list.

$ lxc image list
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
| ALIAS | FINGERPRINT  | PUBLIC |                 DESCRIPTION                 | ARCHITECTURE |   TYPE    |   SIZE   |         UPLOAD DATE          |
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
|       | 66f2b020b296 | no     | Debian sid amd64 (20210708_05:24)           | x86_64       | CONTAINER | 79.94MB  | Jul 8, 2021 at 7:31am (UTC)  |
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
|       | 682b2f9adae4 | no     | ubuntu 18.04 LTS amd64 (release) (20210604) | x86_64       | CONTAINER | 192.13MB | Jul 8, 2021 at 12:12pm (UTC) |
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
|       | 7856e578e7a1 | no     | Fedora 34 amd64 (20210707_20:33)            | x86_64       | CONTAINER | 97.20MB  | Jul 8, 2021 at 1:31am (UTC)  |
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+
|       | 68077359fd53 | no     | Debian bullseye amd64 (20210707_05:24)      | x86_64       | CONTAINER | 98.82MB  | Jul 7, 2021 at 1:31pm (UTC)  |
+-------+--------------+--------+---------------------------------------------+--------------+-----------+----------+------------------------------+

Shell access

To jump inside the container, I tend to just use lxc shell

$ lxc shell testcontainer
root@testcontainer:~# cat /etc/os-release 
NAME="Ubuntu"
VERSION="18.04.5 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.5 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
$ lxc shell fedora34  
[root@fedora34 ~]# cat /etc/redhat-release 
Fedora release 34 (Thirty Four)

Once inside the container if I want to do things as a ‘user’ and not ‘root’ then I simply switch to the ‘ubuntu’ user:

root@testcontainer:~# su - ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@testcontainer:~$ 

The container is configured to use sudo, so as the ubuntu user I can do all the usual things:

ubuntu@testcontainer:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]                            
Get:3 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]                                    
Get:4 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages [8570 kB]    
Get:5 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]        
Get:6 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [1784 kB]
Get:7 http://archive.ubuntu.com/ubuntu bionic/universe Translation-en [4941 kB]
Get:8 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [329 kB]           
Get:9 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [365 kB]            
Get:10 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages [151 kB]               
Get:11 http://security.ubuntu.com/ubuntu bionic-security/restricted Translation-en [48.9 kB]
Get:12 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [1130 kB]       
Get:13 http://archive.ubuntu.com/ubuntu bionic/multiverse Translation-en [108 kB]           
Get:14 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [2131 kB]            
Get:15 http://security.ubuntu.com/ubuntu bionic-security/universe Translation-en [256 kB]  
Get:16 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [19.2 kB]     
Get:17 http://security.ubuntu.com/ubuntu bionic-security/multiverse Translation-en [4412 B]  
Get:18 http://archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [422 kB]
Get:19 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [389 kB]
Get:20 http://archive.ubuntu.com/ubuntu bionic-updates/restricted Translation-en [52.8 kB]
Get:21 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1739 kB]
Get:22 http://archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [371 kB]
Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [26.6 kB]
Get:24 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse Translation-en [6792 B]
Get:25 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [10.0 kB]
Get:26 http://archive.ubuntu.com/ubuntu bionic-backports/main Translation-en [4764 B]
Get:27 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [10.3 kB]
Get:28 http://archive.ubuntu.com/ubuntu bionic-backports/universe Translation-en [4588 B]
Fetched 23.1 MB in 4s (6506 kB/s)                                
Reading package lists... Done
Building dependency tree       
Reading state information... Done
20 packages can be upgraded. Run 'apt list --upgradable' to see them.

At this point I’ll use the container for whatever issue-reproduction or pull-request-testing is needed.

Stopping

The containers will continue running even once you are no longer in the guest shell. They’re easy to stop though.

$ lxc stop testcontainer

Removal

Getting rid of containers is super easy too.

$ lxc delete testcontainer

I tend to leave them lying around until I’m sure I no longer need them. But as they’re so fast to stand up and shut down, it’s also pretty quick to just nuke them and launch fresh every time I need one.
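
Because the whole lifecycle is just those few lxc commands, the launch/test/nuke cycle is easy to script. Here's a small hypothetical Go helper that shells out to the same CLI used above - the container name and image are arbitrary examples:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run shells out to the same lxc CLI used throughout this post,
    // printing whatever the command outputs.
    func run(args ...string) error {
        out, err := exec.Command("lxc", args...).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        name := "throwaway"
        if err := run("launch", "images:fedora/34", name); err != nil {
            panic(err)
        }
        // Always clean up, even if the command under test fails.
        defer run("delete", "--force", name)

        // Do whatever issue-reproduction or PR-testing is needed.
        if err := run("exec", name, "--", "cat", "/etc/os-release"); err != nil {
            panic(err)
        }
    }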

Summary

This post just scratches the surface. The LXD documentation is comprehensive and easy to consume. There's an active LXD community, and it helps that their lead developer, Stéphane Graber, is a really engaged, excellent engineer.

I’ve been a big fan of LXD for some years now. I’ve found it a super fast, reliable way for me to spin up lightweight machines running random Linux distributions, and throw them away when done. It helps keep all those random and unstable pieces of software I’m testing nicely compartmentalised, and easy to nuke.

I ❤ LXD.

July 08, 2021 11:00 AM

July 05, 2021

Alan Pope

My Least Used Favourite App

I have so many applications on my Android phone that I've lost count. Too many chat apps, multiple web browsers, tons of games, and other garbage.

However, there’s one app, which is one of my favourites while probably being the least used application. It doesn’t technically benefit me at all, but is useful to others, when I use it.

The app in question is “Be My Eyes”. It’s available for Android and iOS, and is very easy to setup. The service is aimed at blind and partially sighted people, a group of people I am (currently) not in.

Be My Eyes

As a sighted person, I install the app, sign up, set my language and then just leave the application installed. Someone with a vision issue, who needs help from a sighted person, can launch the app and put out the call for assistance.

Within moments, a bunch of phones in the hands of sighted people will get a notification, letting them know someone needs help. Either tap the notification, or ignore/dismiss it, if now is not a good time. There’s plenty of other people who will also get the notification.

That redundancy means the first to answer gets connected, and everyone else gets told they're no longer needed and can stand down. If you get to the notification first, a video call starts with the other party.

At that point you just need to help the other person do whatever it is they need. There's a quick video explaining how it works on their website.

On one occasion I got to the phone quickly and was connected to a person who needed to check their heating / water controls were set correctly.

When the call launched I was presented with a familiar old-school mechanical domestic heating control unit. The caller greeted me and immediately asked if the water was set to come on at the right time. I confirmed the settings were right, they said “Thanks” and the call was over.

Many times since then I’ve seen the notification, but not got to the phone in time, so someone else helped. In fact that’s what happens for me every time I’ve seen the notification.

I’ve been helpful to one guy, on one occasion, via the “Be My Eyes” application, and that’s fine. With more than 10x more volunteers than blind users, the system is working as designed. In the future, maybe I’ll have my phone in-hand when the next call comes in. Maybe not.

I’ve had the app installed for just over three years now, and have actively used it precisely once. So that’s why Be My Eyes is both one of my favourite apps I have installed, and least used, and I love that.

July 05, 2021 11:00 AM

July 10, 2020

Martin A. Brooks

Getting started with a UniFi Dream Machine Pro

It’s not an exaggeration to say that I’m an Ubiquiti fanboy. I like their kit a lot and my home network has been 100% UniFi for quite a few years now.

I’ve just moved in to a new home which I’m getting rewired and this will include putting structured network cabling in, terminating back to a patch panel in a rack in the loft. I have a small amount of “always on” kit and I wanted as much as it as reasonably possible to be in standard 19″ rack format. This is when I started looking at the Ubiquiti Dream Machine Pro to replace a combination of a UniFi CloudKey and Security Gateway, both excellent products in their own right.

My expectation was that I would connect the UDMP to some power, move the WAN RJ45 connection from the USG to the UDMP, fill in some credentials and (mostly) done! As I’m writing this down, you can probably guess it didn’t quite work out like that.

The UDMP completely failed to get an internet connection via any of the supported methods. PPPoE didn't work, using a surrogate router via DHCP didn't work, static configuration didn't work. I reached out to the community forum and, in fairness, got very prompt assistance from a Ubiquiti employee.

I needed to upgrade the UDMP’s firmware before it would be able to run its “first setup” process, but updating the firmware via the GUI requires a working internet connection. It’s all a little bit chicken and egg. Instead, this is what you need to do:

  • Download the current UDMP firmware onto a laptop.
  • Reconfigure the laptop’s IP to be 192.168.1.2/24 and plug it in to any of the main 8 ethernet ports on the UDMP.
  • Use scp to copy the firmware to the UDMP using the default username of “root” with the password “ubnt”:
    scp /path/to/fw.bin root@192.168.1.1:/mnt/data/fw.bin
  • SSH in to the UDMP and install the new firmware:
    ubnt-upgrade /mnt/data/fw.bin

The UDMP should reboot onto the new firmware automatically. Perhaps because I'd been attempting so many variations of the setup procedure, after rebooting my UDMP was left in an errored state with messages like “This is taking a little longer..” and “UDM Pro is having an issue booting. Try to reboot or enter Recovery Mode”. To get round this I updated the firmware again, this time doing a factory reset:

ubnt-upgrade -c /mnt/data/fw.bin

The UDMP then rebooted again without error and I was able to complete the setup process normally.

It’s a bit unfortunate that UDMPs are shipping with essentially non-functional firmware, and it’s also unfortunate that the process for dealing with this is completely undocumented.

by Martin A. Brooks at July 10, 2020 06:07 PM

May 29, 2020

Martin A. Brooks

Letter from my MP regarding Dominic Cummings

I wrote to my MP, Julia Lopez (CON), asking for her view on whether Dominic Cummings had broken the law or not and if he should be removed from his position. Here is her response:

Thank you for your email about the Prime Minister’s adviser, Dominic Cummings, and his movements during the lockdown period. I apologise for taking a few days to get back to you, however I am in the last weeks of my maternity leave and am working through a number of tasks in preparation for my return.

I have read through all the emails sent to me about Mr Cummings and completely understand the anger some correspondents feel. It has been a very testing time for so many of us as we have strived to adhere to new restrictions that have separated us from loved ones, led us to make very difficult decisions about our living and working arrangements or seen us miss important family occasions – both happy and sad. Those sacrifices have often been painful but were made in good faith in order to protect ourselves, our families and the most vulnerable in the broader community.

Given the strength of feeling among constituents, I wrote to the Prime Minister this week to advise him of the number of emails I had received and the sentiments expressed within them, highlighting in particular the concern over public health messaging. Mr Cummings has sought to explain his actions in a press conference in Downing Street and has taken questions from journalists. While his explanation has satisfied some constituents, I know others believe it was inadequate and feel that this episode requires an independent inquiry. I have made that request to the Prime Minister on behalf of that group of constituents.

Mr Cummings asserts that he acted within lockdown rules which permitted travel in exceptional circumstances to find the right kind of childcare. In the time period in question, he advises that he was dealing with a sick wife, a child who required hospitalisation, a boss who was gravely ill, security concerns at his home, and the management of a deeply challenging public health crisis. It has been asserted that Mr Cummings believes he is subject to a different set of rules to everyone else, but he explained in this period that he did not seek privileged access to covid testing and did not go to the funeral of a very close family member.

I am not going to be among those MPs calling for Mr Cummings’ head to roll. Ultimately it is for the Prime Minister to decide whether he wishes Mr Cummings to remain in post – and to be accountable for and accept the consequences of the decision he makes – and for the relevant authorities to determine whether he has broken the law. Whatever one thinks of this episode, I think the hounding of Mr Cummings’ family has been disturbing to watch and I hope that in future the press can find a way of seeking truth without so aggressively intruding into the lives of those who have done nothing to justify their attention.

Thank you again for taking the trouble to share with me your concerns. I regret that we cannot address everyone individually but the team continues to receive a high number of complex cases involving those navigating healthcare, financial and other challenges and these constituents are being prioritised. I shall send you any response I receive from the Prime Minister.

Best wishes

Julia

by Martin A. Brooks at May 29, 2020 01:33 PM

May 14, 2018

Martin A. Brooks

My affiliate links

It occurred to me that collecting all these in one place might mean I remember to tell people about them and therefore they might get used!

I’ve been a customer of Zen Internet for a very long time. They’re an award-winning ISP and have the best customer support I’ve ever experienced, not that I’ve needed to use it very often. Using my link gets us both some free stuff.

Huel is a meal replacement product.  If you’re like me and can only rarely be bothered cooking for one then Huel gives you a quick, easy, nutritionally complete drink to chug down with very little time and effort involved.  I like the vanilla flavour and some of the flavour packs are nice.  Using my link gets you and me £10 off an order.

Top Cashback is one of the UK’s most popular cashback sites. I’ve probably got several hundred pounds from it over the years. It requires some discipline to use and may require you to use less draconian ad and cookie blocking software. Using my link gets us both £7.50.

by Martin A. Brooks at May 14, 2018 05:26 PM

January 24, 2017

Martin Wimpress

DIY SNES Classic

Inspired by the recent NES Classic, I made a DIY SNES Classic just in time for the Christmas holidays, and it's very portable!

To make one yourself you'll need:

Both controllers use Bluetooth, so two player wire-free gaming is possible. The USB cables are just for charging, but if you've got no charge they can be used as wired controllers too. Retropie can be controlled via the controllers, no keyboard/mouse required.

by Martin Wimpress at January 24, 2017 12:00 PM

December 13, 2016

Martin Wimpress

Raspberry Pi 3 Nextcloud Box running on Ubuntu Core

I recently bought the Nextcloud Box. When it came to setting it up I ran into a problem: I only had Raspberry Pi 3 computers available and, at the time of writing, the microSDHC card provided with the Nextcloud Box only supported the Raspberry Pi 2. Bummer!

Overview

This guide outlines how to use Ubuntu Core on the Raspberry Pi 3 to run Nextcloud provided as a snap from the Ubuntu store.

If you're not familiar with Ubuntu Core, here's a quote:

Ubuntu Core is a tiny, transactional version of Ubuntu for IoT devices and large container deployments. It runs a new breed of super-secure, remotely upgradeable Linux app packages known as snaps.

After following this guide, Ubuntu Core and any installed snaps (and their data) will reside on the SD card, and the 1TB hard disk in the Nextcloud box will be available for file storage. This guide explains how to:

  • Install and configure Ubuntu Core 16 for the Raspberry Pi 3
  • Format the 1TB hard disk in the Nextcloud Box and auto-mount it
  • Install the Nextcloud snap and connect the removable-media interface to allow access to the hard disk
  • Activate and configure the Nextcloud External Storage app so the hard disk can be used to store files
  • Optional configuration of Email and HTTPS for Nextcloud

Prepare a microSDHC card

I explain the main steps in this post, but you really should read and follow the Get started with a Raspberry Pi 2 or 3 page, as it fully explains how to use a desktop computer to download an Ubuntu Core image for your Raspberry Pi 2 or 3 and copy it to an SD card ready to boot.

Here's how to create an Ubuntu Core microSDHC card for the Raspberry Pi 3 using an Ubuntu desktop:

  • Download Ubuntu Core 16 image for Raspberry Pi 3
  • Insert the microSDHC card into your PC
    • Use GNOME Disks and its Restore Disk Image... option, which natively supports XZ compressed images.
    • Select your SD card from the panel on the left
    • Click the "burger menu" on the right and select Restore Disk Image...
    • Making sure the SD card is still selected, click the Power icon on the right.
  • Eject the SD card physically from your PC.
GNOME Disks - Restore Disk Image
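
If you'd rather use the command line, a rough equivalent is sketched below. This is my addition, not from the original guide: the image filename is illustrative, and /dev/sdX must be replaced with your actual SD card device, so double-check with lsblk first.

xz -dc ubuntu-core-16-pi3.img.xz | sudo dd of=/dev/sdX bs=32M status=progress
sync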

Ubuntu Core first boot

An Ubuntu SSO account is required to set up the first user on Ubuntu Core:

Insert the Ubuntu Core microSDHC card into the Raspberry Pi, which should be in the assembled Nextcloud Box with a keyboard and monitor connected. Plug in the power.

  • The system will boot then become ready to configure
  • The device will display the prompt "Press enter to configure"
  • Press enter, then select "Start" to begin configuring your network and an administrator account. Follow the instructions on the screen; you will be asked to configure your network and enter your Ubuntu SSO credentials
  • At the end of the process, you will see your credentials to access your Ubuntu Core machine:
This device is registered to <Ubuntu SSO email address>.
Remote access was enabled via authentication with the SSO user <Ubuntu SSO user name>
Public SSH keys were added to the device for remote access.

Login

Once setup is done, you can log in to Ubuntu Core over ssh, from a computer on the same network, with the following command:

ssh <Ubuntu SSO user name>@<device IP address>

The user name is your Ubuntu SSO user name.

Reconfiguring network

Should you need to reconfigure the network at a later stage you can do so with:

sudo console-conf

Prepare 1TB hard disk

Log in to your Raspberry Pi 3 running Ubuntu Core via ssh.

ssh <Ubuntu SSO user name>@<device IP address>

Partition and format the Nextcloud Box hard disk

This will create a single partition formatted with the ext4 filesystem.

sudo fdisk /dev/sda

Do the following to create the partition:

Command (m for help): o
Created a new DOS disklabel with disk identifier 0x253fea38.

Command (m for help): n
Partition type
    p   primary (0 primary, 0 extended, 4 free)
    e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-1953458175, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-1953458175, default 1953458175):

Created a new partition 1 of type 'Linux' and of size 931.5 GiB.

Command (m for help): w

Now format the partition and give it the label data. This label will be used to reference it for mounting later:

sudo mkfs.ext4 -L data /dev/sda1
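
As a quick sanity check (my addition, not part of the original steps), blkid should now report the new label:

sudo blkid /dev/sda1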

Automatically mount the partition

Most of the Ubuntu Core root file system is read-only, so it is not possible to edit /etc/fstab. Instead, we'll use a systemd mount unit.

Be aware of one of the systemd.mount pitfalls:

Mount units must be named after the mount point directories they control. Example: the mount point /home/lennart must be configured in a unit file home-lennart.mount.

Yes that's right! The unit filename must match the mount point path.

Create the media-data.mount unit:

sudo vi /writable/system-data/etc/systemd/system/media-data.mount

Add the following content:

[Unit]
Description=Mount unit for data

[Mount]
What=/dev/disk/by-label/data
Where=/media/data
Type=ext4

[Install]
WantedBy=multi-user.target

Reload systemd, scanning for new or changed units:

sudo systemctl daemon-reload

Start the media-data.mount unit, which will mount the volume, and also enable it so it will be automatically mounted on boot.

sudo systemctl start media-data.mount
sudo systemctl enable media-data.mount

And just like any other unit, you can view its status using systemctl status:

sudo systemctl status media-data.mount

Update Ubuntu Core

Make sure Ubuntu Core is up-to-date and reboot.

sudo snap refresh
sudo reboot

After the reboot, make sure /media/data is mounted. If not, double-check the steps above.

Install Nextcloud

The Nextcloud snap uses the removable-media interface, which grants access to /media/*, and requires manual connection:

sudo snap install nextcloud
sudo snap connect nextcloud:removable-media core:removable-media
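
If you want to verify the connection (my addition; snap interfaces was the listing command on Ubuntu Core 16), the nextcloud:removable-media plug should show as connected:

snap interfaces nextcloud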

Browse to the Nextcloud IP address and create the admin user account, for example:

  • http://nextcloud.local/

Nextcloud configuration

In the examples below replace nextcloud.local with the IP address or hostname of your Nextcloud Box and replace example.org with your domain.

External Storage

Enable the External Storage app via:

  • http://nextcloud.local/index.php/settings/apps?category=disabled#

Configure External Storage app via:

  • http://nextcloud.local/index.php/settings/admin/externalstorages

Use these settings:

  • Folder name: data
  • External storage: Local
  • Authentication: None
  • Configuration: /media/data
  • Available for: All

Email

Configure your outgoing email settings via:

  • http://nextcloud.local/index.php/settings/admin/additional

I use Sendgrid for sending email alerts from my servers and devices. These are the settings that work for me:

  • Send mode: SMTP
  • Encryption: STARTTLS
  • From address: nextcloud@example.org
  • Authentication method: Plain
  • Authentication required: Yes
  • Server address: smtp.sendgrid.net:587
  • Username: apikey
  • Password: theactualapikey

Enabling HTTPS

It is strongly recommended that you use HTTPS if you intend to expose your Nextcloud to the Internet.

First do a test to see if you can install a Let's Encrypt certificate:

sudo nextcloud.enable-https -d

Answer the questions:

Have you met these requirements? (y/n) y
Please enter an email address (for urgent notices or key recovery): name@example.org
Please enter your domain name(s) (space-separated): nextcloud.example.org
Attempting to obtain certificates... done
Looks like you're ready for HTTPS!

If everything went well, install the certificate:

sudo nextcloud.enable-https

Answer the questions again:

Have you met these requirements? (y/n) y
Please enter an email address (for urgent notices or key recovery): name@example.org
Please enter your domain name(s) (space-separated): nextcloud.example.org
Attempting to obtain certificates... done
Restarting apache... done

If Let's Encrypt didn't work for you, you can always use Nextcloud with a self-signed certificate:

sudo nextcloud.enable-https -s

Manual configuration changes

If you need to make any tweaks to the Nextcloud configuration file you can edit it like so:

sudo vi /var/snap/nextcloud/current/nextcloud/config/config.php

If you have manually edited the Nextcloud configuration you may need to restart Nextcloud:

sudo snap disable nextcloud
sudo snap enable nextcloud

Conclusion

So there it is, Nextcloud running on Ubuntu Core powered by a Raspberry Pi 3. The performance is reasonable, obviously not stellar, but certainly good enough to move some cloud services for a small family away from the likes of Google and Dropbox. Now go and install some Nextcloud clients for your desktops and devices :-)

by Martin Wimpress at December 13, 2016 05:17 PM

August 22, 2016

Anton Piatek

Now with added SSL from letsencrypt

I’ve had SSL available on my site for some time using startssl, but as the certificate was expiring and required manual renewal, I thought it was time to try out letsencrypt. I’m a huge fan of the idea of letsencrypt, which is trying to bring free SSL encryption to the whole of the internet, in particular to all the smaller sites who might not have the expertise to roll out SSL or for whom the cost might be restrictive.

There are a lot of scripts for driving letsencrypt, but getssl looked the best fit for my use case, as I just wanted a simple script to generate certificates, not manage apache configs or anything else. It seems to do a pretty good job so far. I swapped the certificates over to the newly generated ones and it has been pretty smooth sailing.
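
For the curious, getssl's basic flow looks roughly like this, sketched from memory of its README with example.com as a placeholder domain; check the project documentation before relying on it:

./getssl -c example.com               # create the default config templates
vi ~/.getssl/example.com/getssl.cfg   # point the ACME challenge at your web root
./getssl example.com                  # obtain the certificate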

by Anton Piatek at August 22, 2016 06:51 PM

December 16, 2015

Martin Wimpress

HP Microserver N54L power saving and performance tuning using Debian.

I've installed Open Media Vault on an HP ProLiant MicroServer G7 N54L and use it as a media server for the house. OpenMediaVault (OMV) is a network attached storage (NAS) solution based on Debian.

I want to minimise power consumption but maximise performance. Here are some tweaks to reduce power consumption and improve network performance.

Power Saving

Install the following.

apt-get install amd64-microcode firmware-linux firmware-linux-free \
firmware-linux-nonfree pciutils powertop radeontool

And for ACPI.

apt-get install acpi acpid acpi-support acpi-support-base

ASPM and ACPI

First I enabled PCIe ASPM in the BIOS, then forced the kernel to use it (along with ACPI) via grub by changing GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub so it looks like this:

GRUB_CMDLINE_LINUX_DEFAULT="quiet acpi=force pcie_aspm=force nmi_watchdog=0"

Then update grub and reboot.

update-grub
reboot

Enable Power Saving via udev

The following rules file, /etc/udev/rules.d/90-local-n54l.rules, enables power saving modes for all PCI, SCSI and USB devices, plus ASPM. Further, the internal Radeon card's power profile is set to low, as there is rarely a monitor connected. The file contains the following:

SUBSYSTEM=="module", KERNEL=="pcie_aspm", ACTION=="add", TEST=="parameters/policy", ATTR{parameters/policy}="powersave"
SUBSYSTEM=="i2c", ACTION=="add", TEST=="power/control", ATTR{power/control}="auto"
SUBSYSTEM=="pci", ACTION=="add", TEST=="power/control", ATTR{power/control}="auto"
SUBSYSTEM=="usb", ACTION=="add", TEST=="power/control", ATTR{power/control}="auto"
SUBSYSTEM=="usb", ACTION=="add", TEST=="power/autosuspend", ATTR{power/autosuspend}="2"
SUBSYSTEM=="scsi", ACTION=="add", TEST=="power/control", ATTR{power/control}="auto"
SUBSYSTEM=="spi", ACTION=="add", TEST=="power/control", ATTR{power/control}="auto"
SUBSYSTEM=="drm", KERNEL=="card*", ACTION=="add", DRIVERS=="radeon", TEST=="power/control", TEST=="device/power_method", ATTR{device/power_method}="profile", ATTR{device/power_profile}="low"
SUBSYSTEM=="scsi_host", KERNEL=="host*", ACTION=="add", TEST=="link_power_management_policy", ATTR{link_power_management_policy}="min_power"

Add this to /etc/rc.local.

echo '1500' > '/proc/sys/vm/dirty_writeback_centisecs'

Hard disk spindown

Using the Open Media Vault web interface, go to Storage -> Physical Disks, select each disk in turn, click Edit, then set:

  • Advanced Power Management: Intermediate power usage with standby
  • Automatic Acoustic Management: Minimum performance, Minimum acoustic output
  • Spindown time: 20 minutes

Performance Tuning

Network

The following tweaks improve network performance, but I have an HP NC360T PCI Express Dual Port Gigabit Server Adapter in my N54L, so these settings may not be applicable to the onboard NIC.

Add this to /etc/rc.local.

ethtool -G eth0 rx 4096
ethtool -G eth1 rx 4096
ethtool -G eth0 tx 4096
ethtool -G eth1 tx 4096
ifconfig eth0 txqueuelen 1000
ifconfig eth1 txqueuelen 1000
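
To confirm the ring buffer changes stuck, ethtool's lower-case -g flag queries what -G sets (my addition, not in the original post):

ethtool -g eth0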

Add the following to /etc/sysctl.d/local.conf.

fs.file-max = 100000
net.core.netdev_max_backlog = 50000
net.core.optmem_max = 40960
net.core.rmem_default = 16777216
net.core.rmem_max = 16777216
net.core.wmem_default = 16777216
net.core.wmem_max = 16777216
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.ip_local_port_range = 10000 65000
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_max_syn_backlog = 30000
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_sack=0
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.udp_rmem_min = 8192
net.ipv4.udp_wmem_min = 8192
vm.swappiness = 10
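
Files in /etc/sysctl.d are applied at boot; to load this one immediately (my addition, standard procps usage) point sysctl at the file:

sysctl -p /etc/sysctl.d/local.conf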

Conclusion

With these settings applied, powertop reports that everything that can be in a power saving mode is, and the room temperature is measurably cooler. More importantly, with four 4TB drives in a RAID-5 configuration formatted with XFS and dual bonded gigabit ethernet, I am able to back up data to the server at a sustained rate of 105MB/s, which is roughly 0.85 Gbit/s.

Not too shabby for an AMD Turion II Neo N54L (2.2GHz) :-D


by Martin Wimpress at December 16, 2015 12:00 PM

October 05, 2015

Philip Stubbs

Gear profile generator

Having been inspired by the gear generator found at woodgears.ca, I decided to have a go at doing this myself.

Some time ago, I had tried to do this in Java as a learning exercise. I only got so far, and gave up before I managed to generate the involute curves required for the tooth profile. Trying to learn Java and the math required at the same time was probably too much, and it got put aside.

Recently I had a look at the Go programming language. Then Matthias Wandel produced the page mentioned above, and I decided to have another crack at drawing gears.
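
The curve at the heart of the problem is the involute of a circle: x = r(cos t + t sin t), y = r(sin t - t cos t). Here is a minimal Go sketch of that parametric curve; the function and variable names are mine, and this is an illustration of the underlying math rather than code from the repository:

package main

import (
    "fmt"
    "math"
)

// involutePoint returns the (x, y) point on the involute of a circle
// of radius r at parameter t (radians, t >= 0):
//   x = r(cos t + t sin t)
//   y = r(sin t - t cos t)
func involutePoint(r, t float64) (float64, float64) {
    return r * (math.Cos(t) + t*math.Sin(t)),
        r * (math.Sin(t) - t*math.Cos(t))
}

func main() {
    const base = 30.0 // base circle radius, arbitrary units
    // Sample a few points along one flank of a tooth.
    for i := 0; i <= 10; i++ {
        t := float64(i) * 0.1
        x, y := involutePoint(base, t)
        fmt.Printf("t=%.1f  x=%.3f  y=%.3f\n", t, x, y)
    }
}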

The results so far can be seen on Github, and an example is shown here.

Gear Profile Example Image

What I have learnt

  • Math makes my head hurt.
  • The Go programming language fits the way my brain works better than most other languages. I much prefer it to Java, and will try and see if I can tackle other problems with it, just for fun.

by stuphi (noreply@blogger.com) at October 05, 2015 08:32 AM

June 22, 2015

Anton Piatek

Hello Pace

After leaving IBM I’ve joined Pace at their Belfast office. It is quite a change of IT sectors, though still the same sort of job. Software development seems to have a lot in common no matter which industry it is for.

There’s going to be some interesting learning: things like DVB are pretty much completely new to me, but at the same time it’s lots of Java and C++ with similar technology stacks involved. Sadly less Perl, but more Python, so maybe I’ll learn that properly. I’m likely to work with some more interesting JavaScript frameworks, in particular Angular.js, which should be fun.

The job is still software development, and there should be some fun challenges, such as allowing a TV set top box to offer on-demand video content when all you have is a one-way data stream from a satellite, which makes for some interesting solutions. I’m working in the Cobalt team, which deals with delivering data from the TV provider onto set top boxes: things like settings, software updates, programme guides, on-demand content and even apps. Other teams in the office work on the actual video content encryption and playback, and the UI the set top box shows.

The local office seems to be all running Fedora, so I’m saying goodbye to Ubuntu at work. I already miss it, but hopefully will find Fedora enjoyable in the long term.

The office is on the other side of Belfast so is a marginally longer commute, but it’s still reasonable to get to. Stranmillis seems a nice area of Belfast, and it’s a 10 minute walk to the Botanical gardens so I intend to make some time to see it over lunch, which will be nice as I really miss getting out as I could in Hursley and its surrounding fields.

by Anton Piatek at June 22, 2015 02:53 PM

June 04, 2015

Anton Piatek

Bye bye big blue

After nearly 10 years with IBM, I am moving on… Today is my last day with IBM.

I suppose my career with IBM really started as a pre-university placement at IBM, which makes my time in IBM closer to 11 years.  I worked with some of the WebSphere technical sales and pre-sales teams in Basingstoke, doing desktop support and Lotus Domino administration and application design, though I don’t like to remind people that I hold qualifications on Domino :p

I then joined as a graduate in 2005, and spent most of my time working on Integration Bus (aka Message Broker, and several more names) and enjoyed working with some great people over the years. The last 8 months or so have been with the QRadar team in Belfast, and I really enjoyed my time working with such a great team.

I have done test roles, development roles, performance work, some time in level 3 support, and enjoyed all of it. Even the late nights the day before release were usually good fun (the huge pizzas helped!).

I got very involved with IBM Hursley’s Blue Fusion events, which were incredible fun and a rather unique opportunity to interact with secondary school children.

Creating an Ubuntu-based Linux desktop for IBM, with over 6500 installs, has been very rewarding and something I will remember fondly.

I’ve enjoyed my time in IBM and made some great friends. Thanks to everyone that helped make my time so much fun.

 

by Anton Piatek at June 04, 2015 10:00 AM

April 11, 2015

Philip Stubbs

DIY USB OTG Cable

I suddenly decided that I needed a USB OTG cable. Rather than wait for one in the post, I decided to make one from spare cables found in my box of bits.
Initially I thought that it would be a simple case of just cutting the cables and reconnecting a USB connector from a phone lead to a female USB socket. Unfortunately that is not the case.
The USB cable has four wires, but the micro USB plug has five contacts. The unused contact needs to be connected to ground to make an OTG cable. The plug on the cable I used does not have a connection for the extra pin, so I needed to rip it apart and blob a lump of solder across two pins. The body of the plug has a wall between each pin, so I rammed a small screwdriver in there to allow the soldered pins to fit.
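
For reference, here is the standard micro-USB pinout and the usual USB wire colours (general USB knowledge rather than anything from the original post):

  • Pin 1: VBUS (+5V), the red wire
  • Pin 2: D-, the white wire
  • Pin 3: D+, the green wire
  • Pin 4: ID, unconnected in a normal device cable and tied to ground to signal OTG host mode
  • Pin 5: GND, the black wire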

I then reassembled the plug and continued with connecting the wires together: red to red, black to black, green to green and white to white. A piece of heat shrink covers the mess.
Now to use it. It allows me to plug a keyboard into my Nexus tablet. If I plug a mouse in, a pointer pops up. All of a sudden, using the tablet feels like using a real computer. I am typing this with a keyboard on my lap, down the garden, with my tablet.
The real motivation for the cable was to allow me to use my phone to adjust the settings on the MultiWii-based control board of my quadcopter. For that, it seems even better than MultiWiiConf, and it is certainly a lot more convenient when out flying.

by stuphi (noreply@blogger.com) at April 11, 2015 04:31 PM

January 29, 2015

Philip Stubbs

Arduino and NRF24L01 for Quad-copter build

As part of my quadcopter build, I am using a couple of Arduinos along with some cheap NRF24L01 modules from Banggood for the radio transmitter and receiver. The idea came from watching the YouTube channel iforce2d.

When I started developing (copying) the code for the NRF modules, I did a quick search for the required library. For no good reason, I opted for the RadioHead version. Part of my thinking was that by using a different library from iforce2d, I would have to poke around in the code a bit more and learn something.

All went well with the initial trials. I managed to get the two modules talking to each other, and even had a simple Processing script show the stick outputs by reading from the serial port of the receiver.

Things did not look so good when I plugged the flight controller in. For that I am using an Afro Mini32. With that connected to the computer and Baseflight running, the receiver tab showed a lot of fluctuations on the control signals.

After lots of poking, thinking, and even taking it into work to connect it to an oscilloscope, it looked like the radio was mucking up the timing of the PWM signal for the flight controller. Finally, I decided to give an alternative NRF library a try, and from the Arduino playground site I selected this one. As per iforce2d, I think.

Well, that fixed it. Although, at the same time, I cleaned up my code, pulled lots of debugging stuff out, and changed one if statement to a while loop, so there is a chance that changing the library was not the answer. Anyhow, it works well now. Just need some more bits to turn up and I can start on the actual copter!

by stuphi (noreply@blogger.com) at January 29, 2015 04:28 PM

June 23, 2014

Tony Whitmore

Tom Baker at 80

Back in March I photographed the legendary Tom Baker at the Big Finish studios in Kent. The occasion was the recording of a special extended interview with Tom, to mark his 80th birthday. The interview was conducted by Nicholas Briggs, and the recording is being released on CD and download by Big Finish.

I got to listen in to the end of the recording session and it was full of Tom’s own unique form of inventive story-telling, as well as moments of reflection. I got to photograph Tom on his own using a portable studio set up, as well as with Nick and some other special guests. All in about 7 minutes! The cover has been released now and it looks pretty good I think.

Tom Baker at 80

The CD is available for pre-order from the Big Finish website now. Pre-orders will be signed by Tom, so buy now!

The post Tom Baker at 80 first appeared on Words and pictures.

by Tony at June 23, 2014 05:31 PM

May 19, 2014

Tony Whitmore

Mad Malawi Mountain Mission

This autumn I’m going to Malawi to climb Mount Mulanje. You might not have heard of it, but it’s 3,000m high and the tallest mountain in southern Africa. I will be walking 15 miles a day uphill, carrying a heavy backpack. I will be bitten by mosquitoes and other flying buzzy things. It’ll be hard work, is what I’m saying.

I’m doing this to raise money for AMECA. They’ve built a hospital in Malawi that is completely sustainable and not reliant on charity to keep operating. Adults pay for their treatments and children are treated for free. But AMECA also support nurses from the UK to go and work in the hospital. The people of Malawi get better healthcare and the nurses get valuable experience to bring back to the UK.

And that’s what the money I’m raising will go towards. There are just 15 surgeons in Malawi for 15 million people so the extra support is so valuable.

There have been lots of amazing, generous donors already. My family, friends, colleagues, members of the Ubuntu and FLOSS community, Doctor Who fans, random people off the Internet have all donated. Thank you, everyone. I have been touched by the response. But there’s still a way to go. I have just one month to raise £190. So much has been raised already, but I would love it if you could help push me over my target. Or, if you don’t like me and want to see me suffer, help me reach my target and I’ll be sure to post lots of photos of the injuries I sustain. Either way…

Please donate here.

The post Mad Malawi Mountain Mission first appeared on Words and pictures.

by Tony at May 19, 2014 04:59 PM

May 12, 2014

Tony Whitmore

Paul Spragg

I was very sorry to hear on Friday that Paul Spragg had passed away suddenly. Paul was an essential part of Big Finish, working tirelessly behind the scenes to make everything keep ticking over. I had the pleasure of meeting him on a number of occasions. I first met him at the recording for Dark Eyes 2. It was my first engagement for Big Finish and I was unsure of what to expect and generally feeling a little nervous. Paul was friendly right from the start and helped me get set up and ready. He even acted as my test subject as I was setting up my dramatic side lights, which is where the photo below comes from. It’s just a snap really, but it’s Paul.

He was always friendly and approachable, and we had a few chats when I was in the studio at other recording sessions. We played tag in the spare room at the studios, which is where interviews are done as well as being a makeshift photography studio. It was always great to bump into him at other events too.

Thanks to his presence on the Big Finish podcast, Paul’s voice will be familiar to thousands. His West Country accent and catchphrases like “fo-ward” made him popular with podcast listeners, to the extent that there were demands that he travel to conventions to meet them!

My thoughts and condolences go to his family, friends and everyone at Big Finish.

Paul Spragg from Big Finish

The post Paul Spragg first appeared on Words and pictures.

by Tony at May 12, 2014 05:30 PM