Next week my wife and I shall be mostly visiting Poland, and spending a week in Kraków.
It has been a while since I've had a non-Helsinki-based holiday, so I'm looking forward to the trip.
In other news, I've been rationalising DNS entries and domain names recently. All being well this zone should be served by Amazon shortly, subject to the usual combination of TTLs and resolution-puns.
Much has already been written about the recent bash security problem, allocated the CVE identifier CVE-2014-6271, so I'm not even going to touch it.
It did remind me to double-check my systems to make sure that I didn't have any packages installed that I didn't need, though, because having fewer packages installed and fewer services running obviously reduces the potential attack surface.
I had noticed in the past that I had Python installed, and just thought "Oh, yeah, I must have Python utilities running". It turns out, though, that on 16 out of the 19 servers I control I had Python installed solely for the `lsb_release` command.
So I hacked up a horrible replacement for `lsb_release` in pure shell, and then became cruel:
    ~ # dpkg --purge python python-minimal python2.7 python2.7-minimal lsb-release
That horrible replacement is horrible because it defers detection of all the names/numbers to `/etc/os-release`, which wasn't present in earlier versions of Debian. Happily all my Debian GNU/Linux hosts run Wheezy or later, so it all works out.
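For the curious, the replacement amounts to little more than sourcing `/etc/os-release` and echoing the fields back in `lsb_release` style. This is a minimal sketch of the idea rather than my actual script; the function name and output format here are made up:

```shell
# lsb_describe: a tiny lsb_release(1) stand-in which reads its fields
# straight from an os-release file.  The path defaults to the real
# /etc/os-release, but may be overridden (e.g. for testing).
lsb_describe() {
    file="${1:-/etc/os-release}"

    # os-release uses plain KEY="value" shell syntax, so we can simply
    # source it to populate ID, VERSION_ID, PRETTY_NAME, etc.
    . "$file"

    printf 'Distributor ID:\t%s\n' "$ID"
    printf 'Description:\t%s\n'    "$PRETTY_NAME"
    printf 'Release:\t%s\n'        "$VERSION_ID"
}
```

Since Wheezy ships `/etc/os-release` this avoids the Python dependency entirely.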
So that left three hosts that had a legitimate use for Python:
`qemu-kvm` depends on Python solely for a single script.
So now 1/19 hosts has Python installed. I'm not averse to the language, but given that I don't personally develop in it very often (read: once or twice in the past year), and that by accident I had no Python scripts installed, I see no reason to keep it around on the off-chance.
My biggest surprise of the day was that although we can now use `dash` as our default shell we still can't purge `bash`, since it is marked as Essential. Perhaps in the future.
I (grudgingly) use the Calibre e-book management software to handle my collection of books, and copy them over to my kindle-toy.
One thing that has always bothered me was the fact that when books are imported their ratings are too. If I receive a small sample of ebooks from a friend their ratings are added to my collections.
I've always regarded ratings as things personal to me, rather than attributes of a book itself; as my tastes might not match yours, and vice-versa.
On that basis, the last time I was importing a small number of books and getting annoyed at having to manually reset all the imported ratings, I decided to do something about it. I started hacking and put together a simple Calibre plugin which automatically zeroes ratings when books are imported to the collection (i.e. it sets the rating to zero).
Sadly this work wasn't painless, despite the small size, as an unfortunate bug in Calibre meant my plugin method wasn't called. Happily Kovid Goyal helped me work through the problem, and he committed a fix that will be in the next Calibre release. For the moment I'm using today's git-snapshot and it works well.
Similarly I've recently started using extended file attributes to store metadata on my desktop system. Unfortunately the GNU `findutils` package doesn't allow you to do the obvious thing:

    $ find ~/foo -xattr user.comment
    /home/skx/foo/bar/t.txt
    /home/skx/foo/bar/xc.txt
    /home/skx/foo/bar/x.txt
There are several `xattr` patches floating around, but I had to bundle my own in `debian/patches` to get support for finding files that have particular attribute names.
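In the absence of such a patch you can get the same effect, albeit more slowly, by pairing `find` with `getfattr` from the `attr` package. This is a sketch of the workaround; the helper name is invented, and the path and attribute match the example above:

```shell
# files_with_attr DIR ATTR: print files beneath DIR which carry the
# named extended attribute.  getfattr(1) exits non-zero (and prints a
# warning we discard) when the attribute is absent, so it serves as a
# per-file test.
files_with_attr() {
    find "$1" -type f -exec sh -c \
        'getfattr -n "$2" "$1" >/dev/null 2>&1 && printf "%s\n" "$1"' \
        sh {} "$2" \;
}

# The equivalent of the wished-for `find ~/foo -xattr user.comment`:
# files_with_attr ~/foo user.comment
```

Spawning a shell per file is ugly, which is rather the point: this belongs in `find` itself.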
Maybe one day extended attributes will be taken seriously. (`cp`, etc. will preserve them. I'm hazy on the compatibility with `tar`, but most things seem to be working.)
Assuming this post shows up then I'll have successfully migrated from Chronicle to a temporary replacement.
Chronicle is awesome, and despite a lack of activity recently it is not dead. (No activity because it continued to do everything I needed for my blog.)
Unfortunately, though, there is a problem with chronicle: it suffers from a performance problem which has gradually become more and more vexing as the number of entries I have has grown.
When chronicle runs it:
In the general case you rebuild a blog because you've made a new entry, or received a new comment. There is some code which tries to use `memcached` for caching, but in general chronicle just isn't fast, and it is certainly memory-bound if you have a couple of thousand entries.
Currently my test data-set contains 2000 entries and to rebuild that from a clean start takes around 4 minutes, which is pretty horrific.
So what is the alternative? What if you could parse each post once, add it to an SQLite database, and then use that for writing your output pages? Instead of the complex in-RAM data-structure and the need to parse a zillion files, you'd have a standard/simple SQL structure you could use to build a tag-cloud, an archive, and so on. If you store the contents of the parsed blog, along with the `mtime` of the source file, you can update an entry if it is changed in the future, as I sometimes make typos which I only spot once I've run `make steve` on my blog sources.
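The core of the idea is tiny; in shell plus the `sqlite3` command-line tool it might look something like the following. The schema, table name, and `data/` layout are illustrative only, not what the real code uses:

```shell
# Cache parsed blog entries in SQLite, keyed on the source filename,
# recording each file's mtime so unchanged entries are skipped on the
# next run.
db=blog.db

sqlite3 "$db" 'CREATE TABLE IF NOT EXISTS posts (
    file  TEXT PRIMARY KEY,
    mtime INTEGER,
    body  TEXT
);'

# Two sample entries, standing in for the real source directory.
mkdir -p data
printf 'First post\n'  > data/first.txt
printf 'Second post\n' > data/second.txt

for f in data/*.txt; do
    m=$(stat -c %Y "$f")
    old=$(sqlite3 "$db" "SELECT mtime FROM posts WHERE file = '$f';")
    if [ "$old" != "$m" ]; then
        # This is where a real tool would parse titles, tags, and
        # markup; here we just store the raw text, with single-quotes
        # doubled for SQL.
        body=$(sed "s/'/''/g" "$f")
        sqlite3 "$db" "INSERT OR REPLACE INTO posts VALUES ('$f', $m, '$body');"
    fi
done

sqlite3 "$db" 'SELECT COUNT(*) FROM posts;'
```

Building the tag-cloud or archive then becomes a simple `SELECT` over this table rather than a re-parse of every source file.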
Not surprisingly the newer code is significantly faster if you have 2000+ posts. If you've imported the posts into SQLite the most recent entries are updated in 3 seconds. If you're starting cold, parsing each entry, inserting it into SQLite, and then generating the blog from scratch the build time is still less than 10 seconds.
The downside is that I've removed features, though obviously nothing that I use myself. Most notably the calendar view is gone, as is the ability to use date-based URLs. Less seriously there is only a single theme, which is the one used on this site.
In conclusion, I've written something last night which is a stepping stone between the current `chronicle` and the `chronicle2` which will appear in due course.
PS. This entry was written in
markdown, just because I wanted to be sure it worked.
I made damson jam; it's something I've not made for years. It's the first year we've had fruit from our damson tree. Sadly we were on holiday when the fruit was at its prime, so we lost some to birds and age. In the end we had about 1.5 kg of fruit that was okay, to which we added a further 1.5 kg of cooking apples we collected from a box in the village.
I gave everything a wash and picked out the fruit that was past redemption, then put the cleaned damsons and roughly chopped apples in the jam pan and added 400 ml of water. I then heated the mixture and gave it a good mashing until it was all broken up and soft. Previously I've tried making jelly, but the yield is dreadful, so instead I forced the mush through a colander to hold back the stones, pips and coarse material.
Next I cleaned everything up and started on the jam-making process. I added 1.5 kg of sugar, 300 ml of water and the juice of a lemon to the jam pan and heated to boiling. I then poured in my 1.5 kg of apple and damson puree, and brought it quickly back to the boil. I let it have a full rolling boil for about three minutes (damsons and apples have a lot of pectin, and if you boil for too long you just get rubber). I then potted it up and left it to cool.
From this batch I got 9.75 jars of jam, which isn't too bad at all. It's not as clear as jelly, but it tastes plenty good, and it's a lot easier to make than proper whole-fruit jam. When I put it in the jars it was very runny and didn't look like jam at all; this morning when I inspected it, it had set well. I'll find out tonight if it's overcooked!
Personally I believe that any application packaged for Debian should not phone home, attempt to download plugins over HTTP at run-time, or update itself.
On that basis I've filed #761828.
As a project we have guidelines for what constitutes a "serious" bug, which generally boil down to a package containing a security issue, causing data-loss, or being unusable.
I'd like to propose that these kinds of tracking "things" are equally bad. If consensus could be reached, that would be a good thing for the freedom of our users.
(Oops, I slipped into "us" and "our users"; I'm just an outsider looking in. Mostly.)