Planet HantsLUG

November 30, 2023

Debian Bits

New Debian Developers and Maintainers (September and October 2023)

The following contributors got their Debian Developer accounts in the last two months:

  • François Mazen (mzf)
  • Andrew Ruthven (puck)
  • Christopher Obbard (obbardc)
  • Salvo Tomaselli (ltworf)

The following contributors were added as Debian Maintainers in the last two months:

  • Bo YU
  • Athos Coimbra Ribeiro
  • Marc Leeman
  • Filip Strömbäck

Congratulations!

by Jean-Pierre Giraud at November 30, 2023 03:00 PM

November 23, 2023

Debian Bits

archive.debian.org rsync address change

The proposed and previously announced changes to the rsync service have now taken effect: the rsync://archive.debian.org address has been discontinued.

The worldwide Debian mirrors network has served archive.debian.org via both HTTP and rsync. As part of improving the reliability of the service for users, the Debian mirrors team is separating the access methods to different host names:

  • http://archive.debian.org/ will remain the entry point for HTTP clients such as APT

  • rsync://rsync.archive.debian.org/debian-archive/ is now available for those who wish to mirror all or parts of the archives.

The rsync service on archive.debian.org has stopped, and we encourage anyone using the service to migrate to the new host name as soon as possible.
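For example, a full pull of the archive against the new address might look something like this (the local destination path is purely illustrative):

  rsync -av --delete rsync://rsync.archive.debian.org/debian-archive/ /srv/mirror/debian-archive/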

If you are currently using rsync to fetch the debian-archive from a debian.org server that forms part of the archive.debian.org rotation, we also encourage administrators to move to the new service name. This will allow us to better manage which back-end servers offer the rsync service in future.

Note that due to its nature the content of archive.debian.org does not change frequently - generally there will be several months, possibly more than a year, between updates - so checking for updates more than once a day is unnecessary.

For additional information please reach out to the Debian Mirrors Team mailing list.

by Donald Norwood, Adam D. Barratt at November 23, 2023 07:00 AM

November 18, 2023

Debian Bits

Debian Events: MiniDebConfCambridge-2023


Next week the #MiniDebConfCambridge takes place in Cambridge, UK. This event will run from Thursday 23 to Sunday 26 November 2023.

The four days of the MiniDebConf include a Mini-DebCamp and, of course, the main conference talks, BoFs, meet-ups, and sprints.

We thank our partners and sponsors for this event:

Arm - Building the Future of Computing

Codethink - Open Source System Software Experts

pexip - Powering video everywhere

Please see the MiniDebConfCambridge page for more information regarding travel documentation, accommodation, meal planning, the full conference schedule, and yes, even parking.

We hope to see you there!

by The Debian Publicity Team at November 18, 2023 10:00 AM

November 07, 2023

Alan Pope

Ubuntu Summit 2023 was a success

Last week, I wrote about my somewhat last-minute plans to attend the 2023 Ubuntu Summit in Riga, Latvia. The event is now over, and I’m back home collating my thoughts about the weekend.

The tl;dr: it was a great, well-organised and well-run event with interesting speakers.

Here’s my “trip report”.

Logistics

The event was held at the Radisson Blu Latvija. Many of the Canonical staff stayed at the Radisson, while most (perhaps all) of the non-Canonical attendees were a short walk away at the Tallink Hotel.

Everything kicked off with a “Welcome” session at 14:00 on Friday. That may seem like a weird time to start an event, but it’s squashed into the weekend between an internal Canonical Product Sprint and an Engineering Sprint.

The conference rooms were spread across a couple of floors, with decent signage, and plenty of displays showing the schedule. It wasn’t hard to plan your day, and make sure you were in the right place for each talk.

The talks were live streamed, and as I understand it, also recorded. So remote participants could watch the sessions, and for anyone who missed them, they should be online soon.

Coffee, cold drinks, snacks, cakes and fruit were refreshed through the day to keep everyone topped up. A buffet lunch was provided on Saturday and Sunday.

A “Gaming” night was organised for the Saturday evening. There was also a party after the event finished, on the Sunday.

A bridged Telegram/Matrix chat was used during the event to enable everyone to co-ordinate meeting up, alert people to important things, or just invite friends for beer. Post-event it was also used for people to post travel updates, and let everyone know when they got home safely.

An email was sent out at the start of each day to give everyone a heads-up on the main things happening that day, and to provide information about social events.

There were two styles of lanyard from which to hang your name badge. One was coloured differently to indicate the individual did not wish to be photographed. I saw something similar at Akademy back in 2018, and appreciate this option.

Sessions

There was one main room with a large stage used for plenary and keynote style talks, two smaller rooms for talks and two further workshop rooms. It was sometimes a squeeze in the smaller rooms when a talk was popular, but it was rarely ‘standing room only’.

The presentation equipment that was provided worked well, for the most part. A few minor display issues and microphone glitches occurred, but overall I could see and hear everything I needed to.

There was also a large open area with standing tables, where people could hang out between sessions, and noodle around with things - more on that later. A few sessions which left an impression on me are detailed below, with a conclusion at the end.

Ubuntu Asahi

Tobias Heider (Canonical) was on stage, with Hector Martin (Asahi Linux) joining remotely via video link. They presented some technical slides about the macOS boot process, and how Asahi is able to be installed on Apple Silicon devices. I personally found this interesting, understandable, and accessible. Hector speaks fast, but clearly, and covered plenty of ground in the time they had.

Tobias then took over to talk about some of the specifics of the Ubuntu Asahi build, how to install it, and some of the future plans. I was so interested and inspired that I immediately installed Ubuntu Asahi on my M1 Apple MacBook Air. More on that experience in a future blog post.

MoonRay

This was a great talk about the process of open sourcing a component of the video production pipeline. While that sounds potentially dull, it wasn’t. Partly helped by plenty of cute rendered DreamWorks characters in the presentation, along with short video clips. We got a quick primer on rendering scenes, then moved into the production pipeline and finally to MoonRay. Hearing how and why a large movie production house like DreamWorks would open source a core part of the pipeline was fascinating. We even got to see Bilby at the end.

Ubuntu Core Desktop

Oliver Smith and Ken VanDine presented Ubuntu Core Desktop Preview, from a laptop running the Core Desktop. I talked a little about this in Ubuntu Core Snapdeck.

It’s essentially the Ubuntu desktop packaged up as a bunch of snap packages. Very much like Fedora Silverblue, or SteamOS on the SteamDeck, Ubuntu Core Desktop is an “immutable” system.

It was interesting to see the current blockers to release. It’s already quite usable, but they’re not quite ready to share images of Ubuntu Core Desktop. Not that they’re hard to find if you’re keen!

Framework

This was one of my favourite talks. Daniel Schaefer talked about the Framework laptops, their design, and decisions made during their development. The talk straddled the intersection of hardware, firmware and software, which tickles me. I was also pleased to see Daniel fiddle with parts of the laptop while giving a talk from it. Demonstrating the replaceable magnetically attached display bezel and replacing the keyboard while using the laptop is a great demo and a fun sight.

Security

Mark Esler, from the Ubuntu Security Team, gave a great overview of security best practices. They had specific, and in some cases simple, actionable things developers can do to improve their application security. We had a brief discussion afterwards about snap application security, which I’ll cover in a future post.

Discord

Some of the team behind the Ubuntu Discord presented stats about the sizable community that use Discord. They also went through their process for ensuring a friendly environment for support.

Hallway track

At all these kinds of events the so-called ‘Hallway track’ is just as important as the scheduled sessions. There were opportunities to catch up with old friends, meet new people I’d only seen online before, and play with technology.

Some highlights for me on the hallway track include:

Kind words

Quite a few people approached and introduced themselves to me over the weekend. It was a great opportunity to meet people I’d not seen before, only met online, or not seen since an Ubuntu Developer Summit long ago.

A few introduced themselves and then thanked me, as I’d inspired them to get involved in Linux or Ubuntu as part of their career. It was very humbling to think those years had a positive impact on people’s lives. So I greatly appreciated their comments.

UBports

Previously known as Ubuntu Touch, the UBports project had a stand to exhibit the status of the project to bring the converged desktop to devices. I have a great fondness for the UBports project, having worked on the core apps for Ubuntu Touch. It always puts a smile on my face to see the Music, Terminal, Clock, Calendar and other apps I worked on, still in use on UBports today.

I dug out my OnePlus 5 when I got home, and might give UBports another play in a spare moment.

Raspberry Pi 5

Dave Jones from Canonical had a Raspberry Pi 5 which he’d hooked up to a TV, keyboard and mouse, and was running Ubuntu Desktop. I’d not seen a Pi running the Ubuntu desktop so fluidly before, so I had a play with it. We installed a bunch of snaps from the store, to put the device through its paces, and see if any had problems on the new Pi. The collective brains of myself, Dave, Ogra and Martin solved a bug or two and sent the results over the network to my laptop to be pushed to Launchpad.

Gaming Night

A large space was set aside for gaming night on the Saturday evening. Most people left the event, found food, then came back to ‘game’. There were board games, cards, computers and consoles. A fair number of people were not actually gaming, but coding and just chatting. It was quite nice to have a bit of space to just chill out and get on with whatever you like.

One part which amused me greatly was Ken VanDine and Dave Jones attempting to get the aforementioned Ubuntu Core Desktop Preview working on the new Raspberry Pi 5. They had the Pi, cables, keyboard and mouse, but no display. There were, however, projectors around the room. Unfortunately the HDMI sockets were nowhere near the actual projection screen. So we witnessed Dave, Ken and others burning calories walking back and forth to see terminal output, then calling out commands across the loud room to the Pi operator.

This went on for some time until I pointed out to Ken that Martin had a portable display in his bag. I probably should have thought of that beforehand. Then someone else saved the day by walking in with a TV they’d acquired from somewhere. I’ve never seen so many nerds sat around a Raspberry Pi, reading logs from a TV screen. It’s perfectly normal at events like this, of course.

After party

Once the event was over, we all decamped to Digital Art House to relax over a beer or five. There were displays and projectors all around the venue, showing Ubuntu wallpapers, and the artworks of Sylvia Ritter.

Conclusion

I think the organising committee nailed it with this event. The number of rooms and tracks was about right. There was a good mix of talks. Some were technical, and Ubuntu related, others were just generally interesting. The infrastructure worked and felt professionally run.

I had an opportunity to meet a ton of people I’ve never met, but have spoken to online for years. I also got to meet and talk with some of the new people at Canonical, of which, there are many.

I’d certainly go again if I had the opportunity. Perhaps I’ll come up with something to talk about, I’ve got a year to prepare!

November 07, 2023 11:00 AM

November 03, 2023

Alan Pope

Ubuntu Core Snapdeck

At the Ubuntu Summit in Latvia, Canonical have just announced their plans for the Ubuntu Core Desktop. I recently played with a preview of it, for fun. Here’s a nearby computer running it right now.

Ubuntu Core Desktop Development Preview on a SteamDeck

Ubuntu Core is “a secure, application-centric IoT OS for embedded devices”. It’s been around a while now, powering IoT devices, kiosks, routers, set-top boxes and other appliances.

Ubuntu Core Desktop is an immutable, secure and modular desktop operating system. It’s (apparently) coming to a desktop near you next year.

In case you weren’t aware, the SteamDeck is a portable desktop PC running a Linux distribution from Valve called “SteamOS”.

As a tinkerer, I thought “I wonder what Ubuntu Core on the SteamDeck looks like”. So I went grubbing around in GitHub projects to find something to play with.

I’m not about to fully replace SteamOS on my SteamDeck, of course, at least, not yet. This was just a bit of fun, to see if it worked. I’m told by the team that I’m likely the only person who has tried this so far.

Nobody at Canonical asked me to do this, and I didn’t get special access to the image. I just stumbled around until I found it, and started playing. You know, for fun.

Also, obviously I don’t speak for Canonical, these are my own thoughts. This also isn’t a how-to guide, or a recommendation that you should use this. It isn’t ready for prime time yet.

Snaps all the way down

Let’s get this out of the way up front. Ubuntu Core images are all about snaps. The kernel, the applications, and even the desktop itself are snaps. Everything is a snap. Snap, snappity, snap snap! 🐊

So it has the features that come with being snap-based. Applications can be automatically updated, reverted, or have multiple parallel versions installed. Snaps are strictly confined using container primitives, seccomp and AppArmor.
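As a rough sketch of what that looks like day to day (the package name here is just an example):

  snap refresh firefox     # update to the latest revision in the tracked channel
  snap revert firefox      # roll back to the previously installed revision
  snap list --all firefox  # show the revisions still kept on disk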

This is not too dissimilar to the way many SteamDeck users add applications to the immutable SteamOS install. On SteamOS they use Flatpak, whereas on Ubuntu Core, Snap is used.

They achieve much the same goal though. A secure, easily updated and managed desktop OS.

Not ready yet

The image is currently described as “Ubuntu Core Desktop Development Preview”.

Indeed the wallpaper makes this very clear. Here be dragons. 🐉

Ubuntu Core Desktop Development Preview wallpaper

This is not ready for daily production use as a primary OS, but I’m sure some nerds like me will be running it soon enough. It’s fun to play with stuff like this, and get a glimpse of what the future of Ubuntu desktop might be like.

I was pleasantly surprised that the developer preview exceeded my expectations. Here’s what I discovered.

Installation

I didn’t want to destroy the SteamOS install on my SteamDeck - I quite like playing games on the device. So I put the Ubuntu Core image on a USB stick, and ran it from that. The current image doesn’t have an ‘installer’ as such.
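Writing the image to a stick is the usual dd affair; the image filename below is made up and /dev/sdX stands in for the USB stick, but the shape of the command is:

  xz -dc ubuntu-core-desktop-preview.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync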

On first boot, you’re greeted with an Ubuntu Core logo while the various snaps are set up and configured. Once that completes, a first-run wizard pops up to walk through the initial setup.

Initial setup

These are the usual configuration steps to set up the keyboard, locale, first user and so on.

Pre-installed applications

Once installed, everything was pretty familiar.

There’s a browser - Firefox, and a small set of default GNOME applications such as Eye of GNOME, Evince, GNOME Calculator, Characters, Clocks, Logs, Weather, Font Viewer and Text Editor. There’s also a graphical Ubuntu App Centre (more on that in a moment).

There are also three terminal applications:

  • GNOME Terminal - which is a little bit useless because it’s strictly confined.

  • Console - also GNOME Terminal, but is unconfined, so can be used for system administration tasks like installing software.

  • Workshops - which provides a Toolbox / Distrobox like experience for launching LXD containers running Ubuntu or another Linux distribution. The neat part about this is there’s full GPU passthrough to the containers.

So on a suitably equipped desktop with an NVIDIA GPU, it’s possible to run CUDA workloads inside a container on top of Ubuntu Core.
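Workshops drives all of this through a GUI, but underneath it is LXD, so a hand-rolled equivalent would presumably look roughly like the following (the container name, image, and NVIDIA runtime setting are assumptions on my part, not anything specific to Workshops):

  lxc init ubuntu:22.04 cuda-box           # create an Ubuntu container
  lxc config device add cuda-box gpu gpu   # pass the host GPU through
  lxc config set cuda-box nvidia.runtime true
  lxc start cuda-box
  lxc exec cuda-box -- nvidia-smi          # check the GPU is visible inside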

Automatic updates

When I initially played with this a week or two back, I noticed that the core image shipped with a build of GNOME 42.

GNOME 42

One major feature of snaps is their ability to do automatic updates in the background. At some point between October 19th and today, an update brought me GNOME 45!

GNOME 45

I doubt that a final product will jump users unceremoniously from one major desktop release to another, but this is a preview remember, so interesting, exciting and frightening things happen.

Installing apps

The “traditional” (read: deb-based) Ubuntu Desktop recently shipped with a new software store front. This application, built using Flutter, makes it easy to find and install snaps on the desktop.

I tested this process by installing Steam, given this is a SteamDeck!

Installing Steam

This process was uneventful and smooth. Installing additional apps on the Ubuntu core desktop preview works as expected. However, so-called “classic” (unconfined) snaps are not yet installable. So applications like VSCode, Sublime Text and Blender can’t currently be easily installed.
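For what it’s worth, strictly confined snaps like Steam can presumably also be installed from the unconfined Console rather than the App Centre; something like:

  sudo snap install steam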

Kernel switcheroo

Did I mention everything is a snap? This includes the Linux kernel. That means it’s possible to switch to a completely different kernel, trivially easily, with one snap refresh command.

Switching kernel

It’s just as simple to snap revert back to the previous kernel, or try kernels specifically optimised for the hardware or use cases, such as gaming, or resource constrained computers.
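On an amd64 Ubuntu Core system the kernel snap is typically called pc-kernel, so the swap-and-revert dance is roughly this (the channel name is purely illustrative):

  snap refresh pc-kernel --channel=22/edge   # move to a different kernel track
  snap revert pc-kernel                      # back to the previous kernel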

Steam snap

The snap of Steam has been around for a while now, to install on the traditional Linux desktop. As a snap, it’s installable on this core desktop preview too.

The Steam snap also bundles some additional tools you might find on the SteamOS shipped on the SteamDeck, like MangoHUD.

Launching Steam on Ubuntu Core on the SteamDeck works just like it does on a traditional desktop. The SteamDeck is a desktop PC at its heart, after all.

Here are a few screenshots; they’re not super remarkable, but neat nonetheless. The controller works, and the games I tested run fine. I didn’t install anything huge like GTA5, because this was all running off a USB stick. Ain’t nobody got time for that.

Steam

I didn’t try using the new Steam UI as seen on the SteamOS official builds. But I imagine it’s possible to get that working.

Steam

Audio doesn’t work in the Ubuntu Core image on the SteamDeck for me, so the whole game playing experience is a little impacted by that.

Steam

As you can see, this doesn’t really look any different to running a traditional desktop Linux distribution.

Steam

Unworking things

Not everything is smooth - this is a developer preview, remember! I fed these things back to the team - over beer, last night. I’m happy to help them debug these issues.

On my SteamDeck, I had no audio at all. I suspect this is due to something missing in the Ubuntu kernel. As shown above, I did try a different, newer kernel, to no avail.

Bluetooth also didn’t work. In GNOME Settings, pressing the Bluetooth enable button just toggled it back off again. I didn’t investigate this deeply, but I will certainly file a bug and provide logs to the team.

Running snap refresh in the Console doesn’t finish when there’s an update to the desktop itself. I suspect this is a byproduct of Ubuntu Core usually being an unattended IoT device, where it would normally do an automatic reboot when these packages are updated. You clearly don’t want a desktop to do random reboots after updates, so that behaviour seems to be suppressed.

I’ve not commented at all on performance, because it’s a little unfair given this is a preview. That’s not to say it’s slow, but I am running it from a USB stick, not the internal NVMe drive. It’s certainly more than usable, but I haven’t run any benchmarks yet.

The future

While the SteamDeck is a desktop “PC”, it’s a little quirky. There’s no keyboard, it has only one USB port and a weird audio chipset, and the display initially boots rotated by 90 degrees. It’s not really the target for this image.

I would expect this Ubuntu Core Developer Preview to be more usable on a traditional laptop or desktop computer. I haven’t tried that, but I know others have. Over time, more people will need to play with this platform, to find the sharp edges, and resolve the critical bugs before this ships for general use.

I can envisage a future where laptops from well-known vendors ship with Ubuntu Core Desktop by default. These might target developers initially, but I suspect eventually ‘normie’ users will use Ubuntu Core Desktop.

It’s pretty far along already though. For some desktop use cases this is perfectly usable today, just probably not on your primary or only computer. In five months, when the next Ubuntu release comes out, I think it could be a very compelling daily driver.

Worth keeping an eye on this!

November 03, 2023 11:00 AM

November 01, 2023

Alan Pope

Heading to Ubuntu Summit 2023

Ubuntu Summit

This weekend the Ubuntu Summit begins in Riga, Latvia. I originally had no plans to attend until a recent change in circumstance, and a late space became available.

The Ubuntu Summit is “an event focused on the Linux and Open Source ecosystem, beyond Ubuntu itself. Representatives of outstanding projects will demonstrate how their work is changing the future of technology as we know it.”

Essentially it’s a conference-style event with multiple tracks hosting speakers talking about Ubuntu and Linux-adjacent topics.

UDS

Back in the day, Canonical would host an “Ubuntu Developer Summit” (UDS) every six months. From 2007 through 2012, I attended at least ten of these, ranging from Sevilla (Spain) and Boston (USA) to Brussels (Belgium) and Orlando (USA), and finally the last UDS in Copenhagen (Denmark).

In those days almost everyone employed at Canonical would attend, along with sponsored community members, partners, and others from the wider ecosystem. A UDS would be a week-long stay in an often decent, sometimes quirky hotel and conference facility. On many occasions, the UDS would be preceded or followed (or both) by an internal Canonical-only product or engineering sprint.

That meant Canonical employees would be away from home for two to three weeks, at least twice a year. Both the sprint and UDS schedules were packed with 45-60 minute sessions, spread across many rooms and floors of the facility. Employees would rush to a room, start a session, moderate conversations, try to keep it on track, take notes, assign actions, then wrap up and rush to the next session.

There were also presentations to senior management at the sprints, which served as knowledge sharing, project planning, and checkpoints to ensure everything was on track for the next release of Ubuntu - or other products and projects. It was pretty intense, and exhausting.

Ubu-flu

Employees (and the community) would frequently return with the so-called “Ubu-flu”, wiped out for days after returning from the event. With two weeks away, then “swap days” to take as a vacation in lieu of the middle weekend, plus ubu-flu, these events could put a dent in engineer productivity time.

This was especially true of the six-month cycle between the .10 release in October, and the .04 release in April the following year. Productivity often took a nose-dive, what with the release in October, a two-week sprint then other major holidays like US Thanksgiving and the whole Christmas period. I’m amazed we ever delivered anything in an LTS, it’s no wonder they’re rarely packed with features.

UDS was always pretty exciting and exhausting. But for a remote-first company, it was seen as a necessary way to keep everything on track, sync up with teams, and do some intense work. It was also an opportunity to meet (often for the first time) your colleagues, socialise and bond. The social side of UDS was greatly missed when Canonical stopped holding them after 2012. The community felt it hard, too.

Virtual Ubuntu Summits

Long before The Event made everyone an expert on remote working and virtual events, we tried Virtual UDS as an alternative to the in-person event. We used contemporary online tools like the Ubuntu Wiki, Etherpad, Google Hangouts Meet (or whatever it was called back then), and IRC (RIP in Peace) to collaborate online. We were pretty good at it, but it wasn’t the same.

When someone attends an event in meatspace, they’re “present” for the whole time. Back then everyone was pretty focused on the topic in-hand during a session. People also talk about how important the hallway track is at an in-person event, where people can have ad-hoc conversations.

With the virtual event spread over multiple days, it’s hard to get contributors engaged. They may have their laptop open, watching the stream, but they’ll also be reading email, or get interrupted by colleagues (if in an office) or family (if at home).

Many of the streams had little participation and pitiful post-event viewer numbers. It might work better now, post The Event, but back then, people weren’t into it.

Ubuntu Summits return

In 2022, Canonical announced a new summit, taking place in Prague from November 7th to 9th (Monday to Wednesday). I didn’t attend, but I’m told by friends who did, that it was a great event, which rekindled some of the spirit of UDS from the past.

Ubuntu Summit group photo

It must have been good because they’re doing it again!

Ubuntu Summit 2023

That brings us to this week. Canonical announced the upcoming Ubuntu Summit 2023 in Riga, Latvia. Interestingly the event schedule is running from Friday (afternoon) to Sunday (evening). It looks like this fits in the weekend between two internal Canonical events, rather than being tacked on after or before.

As I booked late and had no expectation of attending, I hadn’t submitted any talks, which makes a change from most events I go to. So my preparation essentially consists of checking the Ubuntu Summit Schedule and clicking the fancy little 🌟 next to the talks I like the look of.

I will be taking a laptop in case there are opportunities for hacking, and to update my blog. But other than that, I have no significant responsibilities, which will make a nice change.

I am looking forward to meeting old friends from Canonical, and new people from the community I’ve not seen in person before. I’m also quite intrigued by this “Special Demonstration” on Saturday evening during Game Night.

Game Night

Maybe see you there. 🛫

November 01, 2023 11:00 AM

October 09, 2023

Steve Kemp

Please to meet you, hope you guessed my name?

"Hello, my name is Steve" - those are words I've said a million times in my life, however they are not true words.

If you want to get all technical about things, my name has always been SteveN.

Mostly this hasn't mattered to me, or anybody else: I introduce myself as Steve, people call me Steve, and Steve is the name that makes me turn my head when shouted across a bar. However, things changed when I moved to Finland.

In Finland I had to open new bank accounts, sign mortgages, hand over IDs, and there were many many pieces of paper I signed, or forms I filled out. Unfortunately I screwed up:

  • If I were thinking clearly I'd think "Oh, this is something official, I'd best write SteveN".
  • If I were distracted, or not being careful I'd write my name as "Steve", and then sign it as Steve.

The end result? I've been in Finland for approximately eight years, and I have some official documentation calling me Steve, and other official documentation calling me Steven. (For example my "Permanent Residency Permit" calls me Steve, but my Social Security ID knows me as Steven.)

Every now and again somebody queries the mismatch, and there are daily moments of pain where I have to interact with different agencies, so I made the obvious decision: I'm gonna change my name.

A fee of €60 and a simple online form was sufficient to initiate the process. The processing time was given as "one to five months" on the official forename changing page, but happily the process was complete in a month.

I will now need to do a little housekeeping by getting updated bank-cards, etc, and then complete the process by changing my UK passport to match. Hopefully this won't take too long - but I guess if Finland knows me as Steve and the UK knows me as Steven I'll still be in a bit of a screwed up state, albeit one that is consistent in each country!

Not a big change really, but also it feels weird to suddenly say "Hello, my name is Steve" and mean it.

People are weird.

Names are interesting.

The end.

Fin.

October 09, 2023 09:00 AM

September 24, 2023

Steve Kemp

Old-School CGI Scripts!

I'm not sure if I've talked about my job here, but I recently celebrated my one year anniversary - whilst on a company offsite trip to Sweden. When I joined the company there were approximately 100 people employed by it. Nowadays the numbers are much higher.

Having more people around is pretty awesome, but I realized that there were a lot of people wandering around the office who I didn't recognize, so it occurred to me to make a game of it.

I had the idea I could write a Slack bot to quiz me on my colleagues:

  • Show a random face, using the Slack profile picture.
  • Give a list of 5 names.
  • Ask me which was correct.

I spent an hour messing around with various Slack APIs, and decided the whole thing was too much of a hassle. Instead I wrote a simple script to download the details of all members of the workspace:

  • Name.
  • Email address.
  • Profile picture URL.

Then, using that data (users.json), I hacked up a simple web application in Python, using the Flask framework. There only needed to be two pages:

  • A page ("/") to show five random images, each with five random names beneath them.
  • A page ("/quiz") to receive the HTTP POST, and score.

All in all this took only two hours or so. Old-school CGI is pretty awesome like that - hidden form values meant the whole thing could be stateless:

 <input type="hidden" name="1answer" value="Bob Smith" ..
 <input type="hidden" name="1profile" value="Sales" ..
 <input type="hidden" name="1url" value="https://.." ..

 <input type="hidden" name="2answer" value="Sally Smith" ..
 <input type="hidden" name="2profile" value="Sales" ..
 <input type="hidden" name="2url" value="https://.." ..

The only downside is that I don't have any authentication, so there is no ability to have a leaderboard. I've looked at the Okta samples and I think it would be easy to add, but I guess that would make it more complex and less portable. That said I'm not sharing the code this time, so who cares if it is tied to the company?

Anyway, sometimes I forget how fast and easy it is to spin up a random virtual machine and present an HTTP(S) service for interactive use. This is one of those times when I remembered.

September 24, 2023 07:00 PM

September 12, 2023

Andy Smith

Mutt wins again – subject munging on display

TL;DR:

You can munge subjects for display only using the subjectrx Mutt configuration directive.

The Setup

I use the terminal-based email reader Mutt.

Many projects that I follow are switching away from email discussion lists in favour of web-first interfaces (“forums”, I think the youngsters are calling them now) like Discourse. That is fine—there’s lots of problems with trying to run a busy community over email—but Discourse offers a “mailing list mode” and I still find my Mutt email client to be a comfortable way to follow discussions. So all my accounts on the various Discourse instances are set to mailing list mode.

The Problem

One of the slight issues I have with this is the subject lines that Discourse uses. On an instance with a lot of categories and sub-categories, these will all be prepended to the subject line of each email, using up quite a lot of screen space.

The same is true for legacy mailing list subject tags, but in that environment the admins were generally conscious that whatever text they chose would be prepended to every subject, so they tend to choose terse tags like “[users]” for example.

There was a time when subject line tags were controversial amongst experienced email users, because experienced email users know how to sort and filter their mails based on headers and don’t need a tag in the subject line to let them know what an email is. It doesn’t seem to be very controversial any more; I hypothesise that’s because new Internet users don’t use email as much and so don’t value spending much time working out how to get their filtering just right, and so on. So, most legacy mailing lists that I’m part of now do use terse subject tags and not many people complain about that.

Since the posts on Discourse are primarily intended for a web browser, the verbosity of the categories is not an issue. It’s not uncommon to see a category called, say, “Help & Support” and then within that a sub-category for a particular project, e.g. “Footronic 5.x”. When Discourse sends out an email for a post to such a category, it’ll look like this:

Subject: [Help & Support] [Footronic 5.x] Need some help getting my Foo into alignment after passing through a bar-quux transform

Lots of space used by that prefix, on every message, and pointlessly so for me since these mails will have been filtered into a folder so I always know which folder I’m looking at anyway: all of the messages in that folder would be for help and support on Footronic 5.x. Like most email clients, Mutt has an index view that shows an overview of all the emails with a single line for each. Long subjects are truncated at the edge of my terminal.

I’ve put up with this for years now but the last straw was the newly-launched Ansible forum. Their category names are large and there’s lots of sub-categories. Here’s an example of what that looks like in my 95 character wide terminal.

The index view of a Mutt email client

This is quite irritating! I wondered if it could be fixed.

The Fix

Of course the Mutt community thought of this years ago: subjectrx! You put it in your config, specifying a regular expression to match and what it should be replaced with. For example:

subjectrx '\[Ansible\] \[[^]]*\] *' '%L%R'

That matches any sequence of “[Ansible] ” followed by another set of “[]” that have anything inside them, and replaces all of that with the left side of the match (%L) and the right side of the match (%R). So that effectively discards the tags.
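The same trick applies to the more verbose Discourse prefixes from earlier; for the hypothetical Footronic forum it would be something along the lines of:

subjectrx '\[Help & Support\] \[[^]]*\] *' '%L%R'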

This happens only on display; it doesn’t modify the actual emails on storage.

Here’s what that looks like afterwards:

The index view of a Mutt email client, with tags removed from subject lines

Much better, right?

And that’s one of the reasons why I continue to use Mutt.

Other Solutions

Off the top of my head, there are some other ways this could have been done.

Alter emails upon delivery

It would have been pretty simple to strip these tags out of emails as they were being delivered, but I really like to keep emails on storage the same as they were when they arrived. At the very least doing this will cause a DKIM failure as I would have modified the message after it was signed. That wouldn’t be an issue for my delivery since my DKIM check would happen before any such editing, but I’d still rather not.

Run the subject lines through an external filter program

The format of many things in Mutt is highly configurable and one such format is index_format, which controls how the lines on the index view are displayed.

Sadly there is not a builtin format specifier to search and replace in the subject tag (or any other tag), but you can run the whole thing through an external program, which could do anything you liked to it. That would involve fork()ing and exec()ing a process for every single mail in a mailbox though. Yuck.

On Discourse

This is not a gripe about Discourse. I think Discourse is a better way to run a busy community than email lists. At this point I’d be happy for most mailing lists I’m part of to switch to Discourse instances, especially the very busy ones. I’m impressed with the amount of work and features that Discourse now has.

The only exception to that I think is that purely question-answer support mailing lists might be better off with a StackOverflow-style approach like AskUbuntu. But failing that, I think Discourse is still many times better than a mailing list for that use case.

Not that you asked, but I think the primary problem with email as a community platform is that only old people use email. In the 21st century it’s an unacceptable barrier to entry.

The next most serious problem with email for running a community is that any decently-sized community will have a certain percentage of utter numpties; these utter numpties won’t be self-aware enough to know they are utter numpties, and they will post a lot of nonsense. The only way to counter a numpty posting nonsense is to reply to it and call them out. That is exhausting, unrewarding work, which frequently goes wrong, adding to the noise and ill-feeling. Problem posters do not get dealt with until they reach a level bad enough to warrant their posting rights being removed. Forums like Discourse scale their moderation tasks much better, with a lot of it being amenable to wide community input.

I could go on to list a lot more serious problems but those two are the worst in my opinion.

by Andy at September 12, 2023 10:50 PM

September 02, 2023

Andy Smith

Happy birthday, /dev/sdd?

One of my hard drives reaches 120,000 hours of operation in about a month:

$ ~/src/blkleaderboard/blkleaderboard.sh
     sdd 119195 hours (13.59 years) 0.29TiB ST3320620AS
     sdb 114560 hours (13.06 years) 0.29TiB ST3320620AS
     sda 113030 hours (12.89 years) 0.29TiB ST3320620AS
     sdk  76904 hours ( 8.77 years) 2.73TiB WDC WD30EZRX-00D
     sdh  66018 hours ( 7.53 years) 0.91TiB Hitachi HUA72201
     sde  45746 hours ( 5.21 years) 0.91TiB SanDisk SDSSDH31
     sdc  39179 hours ( 4.46 years) 0.29TiB ST3320418AS
     sdf  28758 hours ( 3.28 years) 1.82TiB Samsung SSD 860
     sdj  28637 hours ( 3.26 years) 1.75TiB KINGSTON SUV5001
     sdg  23067 hours ( 2.63 years) 1.75TiB KINGSTON SUV5001
     sdi   9596 hours ( 1.09 years) 0.45TiB ST500DM002-1BD14

It’s a 320GB Seagate Barracuda 7200.10.

The machine these are in is a fileserver at my home. The four 320GB HDDs are what the operating system is installed on, whereas the hodge podge assortment of differently-sized HDDs and SSDs are the main storage for files.

That is not the most performant way to do things, but it’s only at home and doesn’t need best performance. It mostly just uses up discarded storage from other machines as they get replaced.

sdd has seen every release of Debian since 4.0 (etch) and several iterations of hardware, but this can’t go on much longer. The machine that the four 320GB HDDs are in now is very underpowered but any replacement I can think of won’t be needing four 3.5″ SATA devices inside it. More like 2x 2.5″ NVMe or M.2.

Then again, I’ve been saying that it must be replaced for about 5 years now, so who knows how long it will take me. sdd will definitely reach 120,000 hours in the next month, barring hardware failure.

blkleaderboard.sh is on GitHub, by the way.
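Power-on hours come from SMART attribute 9 (Power_On_Hours); if you just want the raw figure for a single drive, something like this works:

  sudo smartctl -A /dev/sdd | awk '$2 == "Power_On_Hours" {print $10 " hours"}'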

by Andy at September 02, 2023 08:59 PM

August 17, 2023

Martin Wimpress

Install ZeroTier on Steam Deck

How to persist software installation across SteamOS updates on the Steam Deck.

by Martin Wimpress (martin@wimpress.com) at August 17, 2023 11:15 AM

June 24, 2023

Steve Kemp

Simple REPL for CP/M, in Z80 assembly

So my previous post documented a couple of simple "scripting languages" for small computers, allowing basic operations in a compact/terse fashion.

I mentioned that I might be tempted to write something similar for CP/M, in Z80 assembly, and the result is here:

To sum up it allows running programs like this:

0m 16k{rP _ _}
C3 03 EA 00 00 C3 06 DC 00 00 00 00 00 00 00 00

Numbers automatically get saved to the A-register, the accumulator. In addition to that there are three dedicated registers:

  • M-register is used to specify which RAM address to read/write from.
    • The instruction m copies the value of accumulator to the M-register.
    • The instruction M copies the value of the M-register to the accumulator.
  • K-register is used to execute loops.
    • The instruction k copies the value of accumulator to the K-register.
    • The instruction K copies the value of the K-register to the accumulator.
  • U-register is used to specify which port to run I/O input and output from.
    • The instruction u copies the value of accumulator to the U-register.
    • The instruction U copies the value of the U-register to the accumulator.

So the program above:

  • 0m
    • 0 is stored in the accumulator.
    • m copies the value of the accumulator to the M-register.
  • 16k
    • 16 is stored in the accumulator.
    • k copies the value of the accumulator (16) to the K-register, which is used for looping.
  • { - Starts a loop.
    • The K-register is decremented by one.
    • If the K-register is greater than zero the body is executed, up to the closing brace.
  • Loop body:
    • r Read a byte to the accumulator from the address stored in the M-register, incrementing that register in the process.
    • P: Print the contents of the accumulator.
    • _ _ Print a space.
  • } End of the loop, and end of the program.

TLDR: Dump the first sixteen bytes of RAM, at address 0x0000, to the console.

Though this program allows delays, RAM read/write, I/O port input and output, as well as loops, it's both kinda fun and kinda pointless. I guess you could spend hours flashing lights and having other similar fun. But only if you're like me!

All told the code compiles down to about 800 bytes and uses less than ten bytes of RAM to store register-state. It could be smaller with some effort, but it was written a bit ad hoc and I think I'm probably done now.

June 24, 2023 01:00 PM

June 06, 2023

Martin A. Brooks

When contract hunting goes wrong: TEKsystems & Allegis Group

I was approached by a recruiter from TEKsystems who were looking for a Linux systems administration and automation type person for a project with one of their clients.  I took a look at the job description, and it seemed like a pretty good match for my skills, so I was happy to apply and for TEKsystems to represent me.

I was interviewed three times by members of the team I would be working in over the course of about two weeks.  The people were based in Sweden and Norway and, having previously lived in Norway, I felt brave enough to try out bits of my very very rusty Norwegian.  The interviews all seemed to go well and, a few days later, I was offered the role which I accepted.  A start date of May 15th 2023 was agreed.

I consider it a sincere and meaningful compliment when I am offered work, so it’s important to know that, in accepting this role, I had turned down three other opportunities, two permanent roles and one other contract.

As this role was deemed inside IR35, I would have to work through an umbrella company.  It’s usually less friction to just go with the agency’s recommended option which was to use their parent company, Allegis Group.  I duly went through their onboarding process, proving my address, identity, right to work and so on and so forth.  All pretty standard stuff.

As May 15th approached, I was conscious that I had not, as yet, received any initial onboarding instructions, either directly from the client or via the agency. Whom was I to contact on the 15th, and when and how?  As this was a remote work contract, I was also expecting delivery of a corporate laptop.  This had not yet turned up.

Late in the week before the 15th, I had a call from the agency saying that there had been some kind of incident that the team I would be working with had to deal with.  They had no-one available to do any kind of onboarding with me, so would I mind deferring the start of the contract by a week?

It turned out it was very convenient for me.  A friend of the family had died a few weeks earlier from breast cancer and the funeral was on the Friday beforehand and, as it happened, my wife and daughter also got stranded in France due to the strikes.  A couple of extra days free to deal with all of that were helpful, so I agreed and everyone was happy.

Towards the end of that week, there had still been radio silence from the client. The agency was trying to obtain a Scope Of Work from them which would lead to an actual contract being drawn up for signing.

The next Monday was a bank holiday and, on the Tuesday morning, I got this message from the agency.

Hello Martin

We would like to update you to confirm we are unable to continue with your onboarding journey, and as such your onboarding journey has now ceased.

We wish you all the best for your future assignments.

Many thanks,

OnboardingTeam@TEKsystems

Needless to say, this was rather surprising and resulted in me attempting to get in touch with someone there to discover what was going on.  No immediate answer was forthcoming other than vague mentions of difficulty with a Swedish business entity not being able to take on a UK-based resource.  I was told that efforts would be made to clarify the situation.  To the day of writing this, that’s still not happened.  Well, not for me at least.

At the end of that week, it became obvious that whatever problem had happened was terminal for my contract, so I started back contact hunting and reactivating my CV on the various job boards.

I asked TEKsystems if they would offer any kind of compensation.  I’d acted entirely in good faith: I’d turned down three other offers of work, told other agencies I was no longer available and deactivated my CV on the various job boards.  It seemed fair they should offer me some kind of compensation for the lost earnings, wasted time and lost opportunities.  They have declined this request leaving me entirely out of pocket for the 3 weeks I should have been working for them and, of course, unexpectedly out of work.

I’m obviously back looking for my next opportunity and I’m sure something will be along in due course.  This is a cautionary tale of what can go wrong in the world of contracting and, if your next contract involves TEKsystems or Allegis Group, you might wish to be extra careful, making sure they are actually able to offer you the work they say they are, and that you get paid.

by Martin A. Brooks at June 06, 2023 08:27 PM

May 01, 2023

Martin Wimpress

Steam Box vs Steam Deck

I declined my Steam Deck pre-order and I’m now playing more games on Linux

by Martin Wimpress (martin@wimpress.com) at May 01, 2023 05:38 PM

April 28, 2023

Martin Wimpress

November 18, 2022

Andy Smith

PowerDNS Truncated SOA Response Problem

I recently upgraded bind9 on my primary nameserver and soon after I noticed that one particular zone would no longer transfer to my secondary nameservers, which run PowerDNS. All the PowerDNS servers were saying:

Nov 18 00:25:26 daiquiri pdns_server[32452]: While checking domain freshness: Query to '2001:ba8:1f1:f085::53' for SOA of 'example.com' did not return a SOA

The confusing thing was that manually using dig to query for this did work fine:

daiquiri$ dig +short -t soa example.com @2001:ba8:1f1:f085::53
ns0.example.com. bind.example.com. 1668670704 28800 14400 3600000 86400

After scratching my head for several hours over this yesterday, I eventually broke out tcpdump and was surprised to see that the response to PowerDNS’s SOA query was indeed empty. And it was also truncated!

Back to dig, I could see that this zone was DNSSEC-signed and the SOA query with DNSSEC info was 2293 bytes in size:

daiquiri$ dig +dnssec -t soa example.com @2001:ba8:1f1:f085::53 | grep MSG
;; MSG SIZE  rcvd: 2293

That’s bigger than a DNS response can be in UDP, so it truncates and the client is supposed to retry over TCP. dig has no problem doing that, but PowerDNS can’t (yet).

Specifically what has changed in bind9 is the EDNS buffer size, down from its previous default of 4096 bytes to 1232 bytes.

I can stop PowerDNS from doing the SOA check at all by upgrading all PowerDNS servers to v4.7.x and using the secondary-check-signature-freshness=no option.

I could put bind9’s EDNS buffer size back up to 4096, but it doesn’t seem advisable to go over about 1400 bytes and so that won’t help.

For now I have enabled the minimal-responses option in bind9, which drops extra records from the Authority and Additional sections of responses unless they are absolutely required. This reduces the response size of that SOA query to 685 bytes, so it no longer truncates and PowerDNS is happy.
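For reference, that is a one-line change in the named.conf options block (sketched from memory, so check the BIND documentation for your version):

  options {
      // drop Authority/Additional records unless they are required
      minimal-responses yes;
  };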

I’m not sure if an SOA response can ever go above 1232 bytes now. Maybe as DNSSEC signatures get bigger. So this might not be a permanent solution, and hopefully PowerDNS will gain the ability to retry those SOA queries over TCP.

by Andy at November 18, 2022 02:42 PM

July 10, 2020

Martin A. Brooks

Getting started with a UniFi Dream Machine Pro

It’s not an exaggeration to say that I’m an Ubiquiti fanboy. I like their kit a lot and my home network has been 100% UniFi for quite a few years now.

I’ve just moved in to a new home which I’m getting rewired, and this will include putting in structured network cabling, terminating back to a patch panel in a rack in the loft. I have a small amount of “always on” kit and I wanted as much of it as reasonably possible to be in standard 19″ rack format. This is when I started looking at the Ubiquiti Dream Machine Pro to replace a combination of a UniFi CloudKey and Security Gateway, both excellent products in their own right.

My expectation was that I would connect the UDMP to some power, move the WAN RJ45 connection from the USG to the UDMP, fill in some credentials and (mostly) done! As I’m writing this down, you can probably guess it didn’t quite work out like that.

The UDMP completely failed to get an internet connection via any of the applicable supported methods. PPPoE didn’t work, using a surrogate router via DHCP didn’t work, static configuration didn’t work. I reached out to the community forum and, in fairness, got very prompt assistance from a Ubiquiti employee.

I needed to upgrade the UDMP’s firmware before it would be able to run its “first setup” process, but updating the firmware via the GUI requires a working internet connection. It’s all a little bit chicken and egg. Instead, this is what you need to do:

  • Download the current UDMP firmware onto a laptop.
  • Reconfigure the laptop’s IP to be 192.168.1.2/24 and plug it in to any of the main 8 ethernet ports on the UDMP.
  • Use scp to copy the firmware to the UDMP using the default username of “root” with the password “ubnt”:
    scp /path/to/fw.bin root@192.168.1.1:/mnt/data/fw.bin
  • SSH in to the UDMP and install the new firmware:
    ubnt-upgrade /mnt/data/fw.bin

The UDMP should reboot onto the new firmware automatically. Perhaps because I’d been attempting so many variations of the setup procedure, after rebooting my UDMP was left in an errored state, with messages like “This is taking a little longer..” and “UDM Pro is having an issue booting. Try to reboot or enter Recovery Mode”. To get round this I updated the firmware again, this time doing a factory reset:

ubnt-upgrade -c /mnt/data/fw.bin

The UDMP then rebooted again without error and I was able to complete the setup process normally.

It’s a bit unfortunate that UDMPs are shipping with essentially non-functional firmware, and it’s also unfortunate that the process for dealing with this is completely undocumented.

by Martin A. Brooks at July 10, 2020 06:07 PM

May 29, 2020

Martin A. Brooks

Letter from my MP regarding Dominic Cummings

I wrote to my MP, Julia Lopez (CON), asking for her view on whether Dominic Cummings had broken the law or not and if he should be removed from his position. Here is her response:

Thank you for your email about the Prime Minister’s adviser, Dominic Cummings, and his movements during the lockdown period. I apologise for taking a few days to get back to you, however I am in the last weeks of my maternity leave and am working through a number of tasks in preparation for my return.

I have read through all the emails sent to me about Mr Cummings and completely understand the anger some correspondents feel. It has been a very testing time for so many of us as we have strived to adhere to new restrictions that have separated us from loved ones, led us to make very difficult decisions about our living and working arrangements or seen us miss important family occasions – both happy and sad. Those sacrifices have often been painful but were made in good faith in order to protect ourselves, our families and the most vulnerable in the broader community.

Given the strength of feeling among constituents, I wrote to the Prime Minister this week to advise him of the number of emails I had received and the sentiments expressed within them, highlighting in particular the concern over public health messaging. Mr Cummings has sought to explain his actions in a press conference in Downing Street and has taken questions from journalists. While his explanation has satisfied some constituents, I know others believe it was inadequate and feel that this episode requires an independent inquiry. I have made that request to the Prime Minister on behalf of that group of constituents.

Mr Cummings asserts that he acted within lockdown rules which permitted travel in exceptional circumstances to find the right kind of childcare. In the time period in question, he advises that he was dealing with a sick wife, a child who required hospitalisation, a boss who was gravely ill, security concerns at his home, and the management of a deeply challenging public health crisis. It has been asserted that Mr Cummings believes he is subject to a different set of rules to everyone else, but he explained in this period that he did not seek privileged access to covid testing and did not go to the funeral of a very close family member.

I am not going to be among those MPs calling for Mr Cummings’ head to roll. Ultimately it is for the Prime Minister to decide whether he wishes Mr Cummings to remain in post – and to be accountable for and accept the consequences of the decision he makes – and for the relevant authorities to determine whether he has broken the law. Whatever one thinks of this episode, I think the hounding of Mr Cummings’ family has been disturbing to watch and I hope that in future the press can find a way of seeking truth without so aggressively intruding into the lives of those who have done nothing to justify their attention.

Thank you again for taking the trouble to share with me your concerns. I regret that we cannot address everyone individually but the team continues to receive a high number of complex cases involving those navigating healthcare, financial and other challenges and these constituents are being prioritised. I shall send you any response I receive from the Prime Minister.

Best wishes

Julia

by Martin A. Brooks at May 29, 2020 01:33 PM

August 22, 2016

Anton Piatek

Now with added SSL from letsencrypt

I’ve had SSL available on my site for some time using startssl but, as the certificate was expiring and required manual renewal, I thought it was time to try out letsencrypt. I’m a huge fan of the idea of letsencrypt, which is trying to bring free SSL encryption to the whole of the internet, in particular all the smaller sites who might not have the expertise to roll out SSL or where a cost might be restrictive.

There are a lot of scripts for driving letsencrypt, but getssl looked like the best fit for my use case, as I just wanted a simple script to generate certificates, not manage apache configs or anything else. It seems to do a pretty good job so far. I swapped the certificates over to the newly generated ones and it was pretty smooth sailing.
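My getssl workflow is roughly the following - quoted from memory, and with blog.example.com standing in for the real domain, so treat the exact flags as an assumption and check its README:

  getssl -c blog.example.com   # create a default config skeleton for the domain
  getssl blog.example.com      # request (or renew) the certificate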

by Anton Piatek at August 22, 2016 06:51 PM

October 05, 2015

Philip Stubbs

Gear profile generator

Having been inspired by the gear generator found at woodgears.ca, I decided to have a go at doing this myself.

Some time ago, I had tried to do this in Java as a learning exercise. I only got so far, and gave up before I managed to generate the involute curves required for the tooth profile. Trying to learn Java and the math required at the same time was probably too much, and the project got put aside.

Recently I had a look at the Go programming language. Then Matthias Wandel produced the page mentioned above, and I decided to have another crack at drawing gears.

The results so far can be seen on Github, and an example is shown here.

Gear Profile Example Image
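
For reference, the curve that forms each tooth flank is the involute of the base circle: a point at roll angle t lies at x = rb·(cos t + t·sin t), y = rb·(sin t - t·cos t), where rb is the base circle radius. Below is a minimal Go sketch of that calculation, not the actual code in my Github repository, and with made-up gear parameters:

    package main

    import (
        "fmt"
        "math"
    )

    // involutePoint returns a point on the involute of a circle of base
    // radius rb, at roll angle t (in radians).
    func involutePoint(rb, t float64) (x, y float64) {
        x = rb * (math.Cos(t) + t*math.Sin(t))
        y = rb * (math.Sin(t) - t*math.Cos(t))
        return
    }

    func main() {
        const (
            module        = 2.0                   // tooth size: pitch diameter / number of teeth
            teeth         = 20.0
            pressureAngle = 20.0 * math.Pi / 180.0 // radians
        )
        pitchRadius := module * teeth / 2
        baseRadius := pitchRadius * math.Cos(pressureAngle)

        // Sample one tooth flank from the base circle outwards.
        for i := 0; i <= 10; i++ {
            t := float64(i) / 10.0
            x, y := involutePoint(baseRadius, t)
            fmt.Printf("t=%.2f  x=%.3f  y=%.3f\n", t, x, y)
        }
    }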

What I have learnt

  • Math makes my head hurt.
  • The Go programming language fits the way my brain works better than most other languages. I much prefer it to Java, and will try and see if I can tackle other problems with it, just for fun.

by stuphi (noreply@blogger.com) at October 05, 2015 08:32 AM

June 22, 2015

Anton Piatek

Hello Pace

After leaving IBM I’ve joined Pace at their Belfast office. It is quite a change of IT sectors, though still the same sort of job. Software development seems to have a lot in common no matter which industry it is for.

There’s going to be some interesting learning: things like DVB are pretty much completely new to me, but at the same time it’s lots of Java and C++ with similar technology stacks involved. Sadly there will be less Perl, but more Python, so maybe I’ll learn that properly. I’m likely to work with some more interesting JavaScript frameworks, in particular Angular.js, which should be fun.

The job is still software development, and there should be some fun challenges, such as allowing a TV set top box to offer on-demand video content when all you have is a one-way data stream from a satellite, which makes for some interesting solutions. I’m working in the Cobalt team, which deals with delivering data from the TV provider onto set top boxes: things like settings, software updates, programme guides, on-demand content and even apps. Other teams in the office work with the actual video content encryption and playback and the UI the set top box shows.

The local office seems to be all running Fedora, so I’m saying goodbye to Ubuntu at work. I already miss it, but hopefully will find Fedora enjoyable in the long term.

The office is on the other side of Belfast, so the commute is marginally longer, but it’s still reasonable to get to. Stranmillis seems like a nice area of Belfast, and it’s a 10-minute walk to the Botanical Gardens, so I intend to make time to see them over lunch. That will be nice, as I really miss getting out the way I could in Hursley and its surrounding fields.

by Anton Piatek at June 22, 2015 02:53 PM

June 04, 2015

Anton Piatek

Bye bye big blue

After nearly 10 years with IBM, I am moving on… Today is my last day with IBM.

I suppose my career with IBM really started with a pre-university placement, which makes my time there closer to 11 years. I worked with some of the WebSphere technical sales and pre-sales teams in Basingstoke, doing desktop support and Lotus Domino administration and application design, though I don’t like to remind people that I hold qualifications on Domino :p

I then joined as a graduate in 2005, and spent most of my time working on Integration Bus (aka Message Broker, and several more names) and enjoyed working with some great people over the years. The last 8 months or so have been with the QRadar team in Belfast, and I really enjoyed my time working with such a great team.

I have done test roles, development roles, performance work, some time in level 3 support, and enjoyed all of it. Even the late nights the day before release were usually good fun (the huge pizzas helped!).

I got very involved with IBM Hursley’s Blue Fusion events, which were incredible fun and a rather unique opportunity to interact with secondary school children.

Creating an Ubuntu-based Linux desktop for IBM, with over 6500 installs, has been very rewarding and is something I will remember fondly.

I’ve enjoyed my time in IBM and made some great friends. Thanks to everyone that helped make my time so much fun.

 

by Anton Piatek at June 04, 2015 10:00 AM

April 11, 2015

Philip Stubbs

DIY USB OTG Cable

I suddenly decided that I needed a USB OTG cable. Rather than wait for one in the post, I decided to make one from spare cables found in my box of bits.
Initially I thought it would be a simple case of just cutting the cables and reconnecting a USB connector from a phone lead to a female USB socket. Unfortunately, that is not the case.
The USB cable has four wires, but the micro USB plug has five contacts. The unused contact needs to be connected to ground to make the OTG cable. The plug on the cable I used does not have a connection for the extra pin, so I needed to rip it apart and blob a lump of solder on two pins. The body of the plug has a wall between each pin, so I rammed a small screwdriver in there to allow the soldered pins to fit.
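
For reference, this is the standard micro USB pinout and the usual USB wire colours (worth double checking with a meter, as cheap cables do not always follow the colour convention):

  • Pin 1: VBUS (red wire)
  • Pin 2: D- (white wire)
  • Pin 3: D+ (green wire)
  • Pin 4: ID (no wire in a normal cable; tie this pin to ground to make an OTG/host cable)
  • Pin 5: GND (black wire)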





I then reassembled the plug and continued with connecting the wires together. This was an easy case of red to red, black to black, green to green and white to white. A piece of heat shrink covers the mess.
Now to use it. It allows me to plug a keyboard into my Nexus tablet. If I plug a mouse in, a pointer pops up. All of a sudden, using the tablet feels like using a real computer. I am typing this on a keyboard on my lap, down the garden, with my tablet.
The real motivation for the cable was to allow me to use my phone to adjust the settings on the MultiWii-based control board of my quadcopter. For that, it seems even better than MultiWiiConf, and certainly a lot more convenient when out flying.

by stuphi (noreply@blogger.com) at April 11, 2015 04:31 PM

January 29, 2015

Philip Stubbs

Arduino and NRF24L01 for Quad-copter build

As part of my quadcopter build, I am using a couple of Arduinos along with some cheap NRF24L01 modules from Banggood for the radio transmitter and receiver. The idea came from watching the YouTube channel iforce2d.

When I started developing (copying) the code for the NRF modules, I did a quick search for the required library. For no good reason, I opted for the RadioHead version. Part of my thinking was that by using a different library from the one iforce2d used, I would have to poke around in the code a bit more and learn something.

All went well with the initial trials. I managed to get the two modules talking to each other, and even had a simple Processing script show the stick outputs by reading from the serial port of the receiver.

Things did not look so good when I plugged the flight controller in. For that, I am using an Afro Mini32. With it connected to the computer and Baseflight running, the receiver tab showed a lot of fluctuation in the control signals.

After lots of poking, thinking, and even taking it into work to connect it to an oscilloscope, it looked like the radio was mucking up the timing of the PWM signal for the flight controller. Finally, I decided to give an alternative NRF library a try, and from the Arduino playground site I selected this one. As per iforce2d, I think.

Well, that fixed it. Although at the same time I cleaned up my code, pulled lots of debugging stuff out and changed one if statement to a while loop, so there is a chance that changing the library was not the answer. Anyhow, it works well now. I just need some more bits to turn up and then I can start on the actual copter!
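
The post links to the library rather than showing any code. Assuming the library in question is the widely used RF24 library, a minimal transmit-side sketch looks roughly like the following; the CE/CSN pins, pipe address and payload are all illustrative rather than taken from my actual transmitter:

    #include <SPI.h>
    #include <RF24.h>

    RF24 radio(9, 10);                  // CE, CSN pins (adjust to your wiring)
    const byte address[6] = "1Node";    // 5-byte pipe address, shared with the receiver

    int sticks[4] = {1500, 1500, 1000, 1500};  // placeholder roll/pitch/throttle/yaw values

    void setup() {
      radio.begin();
      radio.openWritingPipe(address);
      radio.stopListening();            // put the module into transmit mode
    }

    void loop() {
      // In the real transmitter these values would be read from the stick pots.
      radio.write(&sticks, sizeof(sticks));
      delay(20);                        // roughly a 50 Hz update rate
    }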

by stuphi (noreply@blogger.com) at January 29, 2015 04:28 PM