Using a different tool for each job and linking them together with MQTT - part 4

Something needs to happen on the screen when you push the buttons you can see here. Except the buttons are connected to an Arduino, not the Raspberry Pi that is connected to the screen.

The sequence goes something like this:

  1. Push the green button and the little trapdoor opens.
  2. Put a container in the revealed space and push the button again.
  3. It is drawn inside the machine.
  4. The trapdoor closes behind it.
  5. An 'analysis' of the contents of the container begins.

I did a little video while I was testing the mechanism. The mechanism is printed in natural translucent PLA and there are some Neopixels glued on the outside so that it can indicate the 'status' of the process. If you put in a container that has already been analysed it spits it out again while glowing red.

Going back to the start though, the Raspberry Pi needs to know you've pushed the green button. We've already established in parts 1 & 2 that the Arduino is plugged into the Raspberry Pi over USB and that a small Python script watches for serial output from the Arduino, pushing anything it receives into the MQTT topic 'arduino/out'. Push the green button and it sends 'openLid'.

The user interface is running as a set of web pages in the Chrome browser. Left to themselves they'll sit there stoically until the user clicks on a link. What's great is that Paho has made a simple Javascript MQTT client library available, and this can be used to automate interaction with these pages. The library connects over Websockets rather than a direct MQTT connection, but we've already made sure the server supports this.
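For consistency with the Python used elsewhere in this series, here's a sketch of what that browser-side client boils down to: subscribe to 'arduino/out' over Websockets and react when 'openLid' arrives. This is a sketch, not the actual page code: it assumes the paho-mqtt Python package and the port 9001 Websockets listener set up in part 1, and handle_payload() is a hypothetical stand-in for the page logic.

```python
def handle_payload(payload):
    # Hypothetical helper mapping a message from the Arduino to a UI action.
    if payload == "openLid":
        return "navigate-to-lid-open-page"
    return None

def run_client(host="localhost", port=9001):
    # Deferred import so the helper above works without paho-mqtt installed.
    import paho.mqtt.client as mqtt

    def on_message(client, userdata, msg):
        action = handle_payload(msg.payload.decode())
        if action:
            print("UI action: " + action)

    client = mqtt.Client(transport="websockets")  # same transport as the JS client
    client.on_message = on_message
    client.connect(host, port, 60)
    client.subscribe("arduino/out", 0)
    client.loop_forever()
```

run_client() blocks forever servicing messages; the real Javascript equivalent just runs alongside the page doing the same job.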

Using a different tool for each job and linking them together with MQTT - part 3

Things you have to think about when putting a Raspberry Pi inside a prop include some you don't normally have to worry about, like graceful startup and shutdown. Also, getting enough power out of a battery for a Pi 3 and a 5" display is not a trivial problem.

I'll deal with the latter first. I luckily had a 5V/3A Turnigy branded battery eliminator (BEC) kicking around in a parts drawer. These are designed for RC vehicles and nicely self-contained in a shrink-wrapped module with inline inductor to smooth things out. I paired this up with a big chunk of 12 AA NiMH cells wired series/parallel to give me ~7.2V which is about right to run the BEC. The more modern solution would be some LiPo cells but then you need to build a balance charger into the device or expose the connectors so you can charge the cells individually. I've got a bunch of stuff that runs off NiMH cells and a nice high current smart charger that automatically selects voltage/current and this also influenced my decision to go this way.
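As a quick sanity check on that pack, the arithmetic works out like this (the per-cell capacity is an assumed figure for illustration, not a measurement):

```python
# 12 NiMH AA cells arranged as two parallel strings of six in series.
CELLS_IN_SERIES = 6
PARALLEL_STRINGS = 2
NOMINAL_CELL_VOLTAGE = 1.2   # volts, nominal for NiMH
ASSUMED_CELL_MAH = 2000      # an assumed capacity for illustration

pack_voltage = CELLS_IN_SERIES * NOMINAL_CELL_VOLTAGE  # ~7.2V, in the BEC's input range
pack_mah = PARALLEL_STRINGS * ASSUMED_CELL_MAH         # parallel strings add capacity

print("Pack: %.1fV, %dmAh nominal" % (pack_voltage, pack_mah))
```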

Then there's the switch. People expect things to just turn on/off at the flick or press of a switch but you can't give them easy direct access to the power or they'll switch things off when they aren't ready. The Raspberry Pi really doesn't like that. There are workarounds you can do with mostly read-only filesystems but at the end of the day the Pi really should be shut down cleanly.

Luckily I had some Pololu solid state power switches lurking from a temporarily shelved project. My house may be overflowing with stuff but sometimes it comes in useful. I wired this so the power switch was connected to the 'on only' contact; that gets everything powered up. When you flick the switch to off, the alternate contact takes a pin on the Arduino low and the Arduino initiates a shutdown sequence. You saw the code for that in part 2. Here's the Python script for the shutdown of the Pi, which is just a variant on the old chestnut about fitting a shutdown switch to the case of a Pi.

import time
import os

while 1:
  if os.path.isfile('/tmp/reboot'):
    os.system("sudo reboot")
  elif os.path.isfile('/tmp/poweroff'):
    os.system("sudo poweroff")
  # Check once a second rather than spinning flat out
  time.sleep(1)
Not much to say about this; it just looks for files in '/tmp/' and reboots or shuts down depending on which file it sees. Stick it in /etc/rc.local again and the job's a goodun.

Things I didn't get right: I used a latching switch for on/off, and I should have fitted a way to both charge the batteries and power the device externally.

The batteries lasted a couple of hours, but then the prop had to be charged before it could be used again. We had mains power in the end, so being able to run from a PSU would have been good. Connecting the smart charger while everything was powered up made the charger shut down, otherwise that would have been an option.

The latching switch prevented me from making the box shut down when inactive. It would try to shut down (the Pi itself would shut down), but with the switch physically in the 'on' position the power stayed on until you flicked the switch. I spotted this in time to fix it but couldn't find a nice momentary switch without resorting to mail order, which would have arrived too late. It's a lesson learned for next time.

Using a different tool for each job and linking them together with MQTT - part 2

Now we have a working MQTT server, it's time to start making use of it.

As I was planning to use MQTT to broker messages between web server CGI scripts, Javascript in the Chrome web browser and an Arduino, the obvious way to reach the Arduino is to push data at its USB serial port.

Normally this port is used for uploading the Arduino sketch to the board and sometimes people end up filling it with debug/status messages once the sketch is running. However there's a long history of it being used as an actual way to control stuff.

So I threw together a piece of 'middleware' that shuffles data between some MQTT topics and the USB serial port.

import time
import os
import mosquitto
import serial

# Commands to the Arduino are as follows...
# R red LEDs
# G green LEDs
# B blue LEDs
# N no LEDs
# O open/go
# P power off

arduinoPort = '/dev/ttyUSB0'
arduinoBootupTime = 5
arduinoIsBooted = 0
debug = True

# Define the various MQTT callback functions as this is an event driven model
def on_connect(mosq, obj, rc):
    if debug:
        print("rc: " + str(rc))

def on_message(mosq, obj, msg):
    if debug:
        print(msg.topic + " " + str(msg.payload))
    # Anything arriving on 'arduino/in' goes straight out of the serial port
    arduino.write(str(msg.payload))

def on_publish(mosq, obj, mid):
    if debug:
        print("mid: " + str(mid))

def on_subscribe(mosq, obj, mid, granted_qos):
    if debug:
        print("Subscribed: " + str(mid) + " " + str(granted_qos))

def on_log(mosq, obj, level, string):
    if debug:
        print(string)

# Connect to the local MQTT server to pass stuff to/from a browser
mqttc = mosquitto.Mosquitto()
mqttc.on_message = on_message
mqttc.on_connect = on_connect
mqttc.on_publish = on_publish
mqttc.on_subscribe = on_subscribe
mqttc.connect("localhost", 1883, 60)

# Subscribe to the topic that sends commands TO the Arduino
mqttc.subscribe("arduino/in", 0)

while 1:
    if debug:
        print("Trying to connect to Arduino")
    if os.path.exists(arduinoPort):
        arduino = serial.Serial(arduinoPort, 115200, timeout=1)
        while arduino.isOpen():
            if arduinoIsBooted == 0:
                if debug:
                    print(arduinoPort + " connected - Giving " + str(arduinoBootupTime) + "s for it to bootstrap")
                time.sleep(arduinoBootupTime)
                arduinoIsBooted = 1
            arduinoOut = arduino.readline()
            if len(arduinoOut) > 0:
                if debug:
                    print("Received " + arduinoOut.rstrip() + " from the Arduino")
                if arduinoOut.rstrip() == 'powerDown':
                    # Flag the shutdown watcher script by creating a file in /tmp/
                    open('/tmp/poweroff', 'a').close()
                else:
                    mqttc.publish("arduino/out", arduinoOut.rstrip())
            # Service the MQTT connection so incoming messages get handled
            mqttc.loop()
    else:
        arduinoIsBooted = 0
        time.sleep(1)

This is about as simple as things can be and mostly cribbed from example scripts. It waits a while to give the Arduino time to boot, as attaching to the Raspberry Pi's USB port causes it to reset. Then it tries to connect and loops round forever, passing messages back and forth.

The meat of this is in two functions, the first of which gets triggered as a callback when an MQTT message arrives...

def on_message(mosq, obj, msg):
    arduino.write(str(msg.payload))
This simply sends the message straight to the Arduino. Coming the other way is almost an exact reverse, except we're using newlines to mark the end of messages so we can read them with 'readline'. This means we need to strip them off before publishing to MQTT. Python has a handy function for this in 'rstrip'.

arduinoOut = arduino.readline()
if len(arduinoOut) > 0:
    if arduinoOut.rstrip() == 'powerDown':
        open('/tmp/poweroff', 'a').close()
    else:
        mqttc.publish("arduino/out", arduinoOut.rstrip())
There's another little if/else in here to handle the shutdown sequence. I used the VERY simple method of writing a file to /tmp/ if the Raspberry Pi needs to shut down, which is controlled by the Arduino. I could have had a second script subscribed to the topic for this, but this keeps all the MQTT code in one place. Because the Pi uses memory for /tmp/ instead of saving to its SD card, this safely disappears once the machine is shut down.

That's about it: things coming out of the Arduino USB serial port end up in the topic 'arduino/out' and stuff in the topic 'arduino/in' gets sent to the serial port. The test for the existence of the Arduino serial port '/dev/ttyUSB0' means it loops round forever trying to reconnect if the Arduino gets unplugged or resets.

To make this run at startup I put it into /etc/rc.local, which is another quick and dirty 'make it work' solution. It works. This is not a server, it's a prop, so I'm being cavalier about scalable/secure ways of doing things.

Using a different tool for each job and linking them together with MQTT - part 1

I've recently finished a quite complicated prop that needed to have a 'user interface' and thought I'd put down my thoughts on how I built it.

At a high level what we've got is a Raspberry Pi doing the user interface using Chromium web browser in kiosk mode, an Arduino Nano doing the 'physical computing' and a Teensy microcontroller acting as a custom keyboard for interaction with the Raspberry Pi.

What I've done is in principle inefficient as I could have done it all with the Raspberry Pi. However using different technologies like this is a way to compartmentalise bits of the project and use the technology you're most comfortable with for each part.

I've already used MQTT to tie things together before so it was an obvious thing to use again. It is implemented with a very simple protocol that many things are capable of understanding, including diminutive memory-constrained microcontrollers like an Arduino or ESP8266. Also, the idea of bringing data into MQTT from a hodgepodge of sources to tie things together is not a foreign concept at all, it's pretty much designed for this.

What is MQTT?

In principle the MQTT Wiki is a good place to start, but like a lot of documentation in the open source community it assumes a chunk of pre-knowledge and is full of gaps. So I'm going to re-invent the wheel here and describe it again in fairly plain language...

MQTT is just a way to send and receive messages. These messages can be pretty much anything you want, text, images, sounds, any arbitrary binary data. In principle they can be as large as you like but if something listening on the other end doesn't have enough memory to receive it then it'll definitely fail and probably crash or lock up.

For MQTT to work it needs a server (called a broker) that everything connects to. The broker makes sure that messages get where they need to go. A commonly used server is Mosquitto and it's what I've used. Be aware that the version which installs by default on a Raspberry Pi (probably other Linux distributions too) is old and compiled without Websockets support. You should use the current version from the Mosquitto developers or you will be limited in which clients you can use. More on that in a bit.

Topics, subscribe, publish, LWT and QoS

There are a load of clients available, Python, Javascript, Arduino, NodeMCU/Lua etc. etc. and they all use the same terminology when you want to do something with them.


A topic is a 'channel' for messages. There can be an arbitrary number of these on a broker. Unless you do specific configuration on the broker to lock things down, you can create/destroy them arbitrarily by sending or listening for data on a topic. In my application I've got topics called 'arduino/in' and 'arduino/out' for sending to and receiving from the Arduino respectively. All topics can be 'bidirectional', i.e. you can send and receive on the same topic from a client, but that can complicate your code as you need to process your own messages coming back at you, which is why I'm using topics in a 'unidirectional' manner.

The broker normally handles creating topics and tidying up afterwards automatically, again unless you want to control this.


When you want to receive messages from a topic, you 'subscribe' to it. Depending on the programming language you use it is likely this happens as a 'callback'. This means that when you subscribe you create a function that gets run every time a message comes in on the topic. Your code needs to be able to deal with being arbitrarily interrupted when this happens.
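The shape of that callback model can be sketched like this (assuming the paho-mqtt Python package; the dispatch() helper and topic names are made up for illustration):

```python
def dispatch(handlers, topic, payload):
    # Route an incoming message to the handler registered for its topic,
    # mirroring the sort of thing you'd do inside an on_message callback.
    if topic in handlers:
        return handlers[topic](payload)
    return None

def run_subscriber(topic="arduino/out"):
    # Deferred import so the helper above works without paho-mqtt installed.
    import paho.mqtt.client as mqtt

    handlers = {"arduino/out": lambda payload: "UI action for " + payload}

    def on_message(client, userdata, msg):
        # This can fire at any point once the network loop is running, so
        # anything it touches needs to cope with being interrupted.
        result = dispatch(handlers, msg.topic, msg.payload.decode())
        if result:
            print(result)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883, 60)
    client.subscribe(topic, 0)
    client.loop_forever()
```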


When you want to send a message to a topic you 'publish' it. There's very little more to be said, you publish and it appears.


The broker periodically checks to make sure any clients it has are contactable. You can optionally set a 'Last Will and Testament' when you connect the client, which publishes a message to the topic of your choice if the client is no longer contactable. This is a very simple way to check if a particular client is online. I did not use this in my application.
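I didn't use it, but for completeness, setting a Last Will and Testament with the paho-mqtt Python client looks something like this. The 'clients/.../status' topic convention here is made up for illustration:

```python
def status_topic(client_name):
    # Hypothetical convention for per-client status topics.
    return "clients/" + client_name + "/status"

def connect_with_lwt(client_name, host="localhost"):
    # Deferred import so status_topic() works without paho-mqtt installed.
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    # The will must be set BEFORE connecting; the broker publishes it on our
    # behalf if the client disappears without a clean disconnect.
    client.will_set(status_topic(client_name), "offline", qos=0, retain=True)
    client.connect(host, 1883, 60)
    # Announce we're alive; 'retain' means late subscribers see it too.
    client.publish(status_topic(client_name), "online", qos=0, retain=True)
    return client
```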


When you publish or subscribe to a topic you can specify the 'quality of service' on the connection. This comes as...
  • QoS 0: At most once. Unreliable; the client will probably receive the message but this is not guaranteed.
  • QoS 1: At least once. Reliable, but the client may receive duplicates.
  • QoS 2: Exactly once. Reliable, with no duplicates.
Given I was dealing with two clients running on the same device as the broker I left things as QoS 0 for my application. Some clients such as the NodeMCU one only support QoS 0.
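One way to think about choosing a level, sketched in Python (pick_qos() is a made-up helper; the publish side assumes the paho-mqtt package):

```python
def pick_qos(must_arrive, duplicates_ok):
    # Map the delivery guarantees described above onto a QoS level.
    if not must_arrive:
        return 0    # at most once: fire and forget
    if duplicates_ok:
        return 1    # at least once: guaranteed, but may duplicate
    return 2        # exactly once: guaranteed, no duplicates

def publish_with_qos(topic, payload, qos):
    # Deferred import so pick_qos() works without paho-mqtt installed.
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("localhost", 1883, 60)
    client.publish(topic, payload, qos=qos)
    client.disconnect()
```

For everything in this project, fire and forget on the local machine was plenty, i.e. QoS 0.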

Installing Mosquitto on a Raspberry Pi

Do not be tempted to install it from the standard Raspbian repository with 'apt-get install mosquitto'. This will work, but some clients will fail with unhelpful errors, particularly the NodeMCU and Javascript ones. The Javascript client requires Websockets support and NodeMCU needs MQTT v3.1.1, both of which are missing from the standard build.

There's a handy guide to installing a newer version with Websockets support here, but in case this goes away here's the potted recipe for the current version of Raspbian (Jessie) at time of writing.

sudo apt-key add mosquitto-repo.gpg.key
cd /etc/apt/sources.list.d/
sudo wget
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install mosquitto
Once this is installed, have a look in the file /etc/mosquitto/mosquitto.conf and add the following lines below the default 'listener'. We don't need Websockets yet, but we will later.
listener 1883

listener 9001
protocol websockets
Then restart the service with...
 sudo service mosquitto restart

Testing Mosquitto works

This is fairly easy, but first you need to install some Mosquitto clients...
sudo apt-get install mosquitto-clients
Start one session on your server and run the following command. It'll sit there waiting for messages to come in on the topic called "test".
mosquitto_sub -t 'test'
In another session, run the following command to send a message to the same topic.
mosquitto_pub -t "test" -m "Hello world!"
You should see "Hello world!" come up in the first window. It really is that simple to send messages back and forth; the clients default to connecting to the local machine.

That was a fair amount of work for a simple demo, but now you've done it you could have two wirelessly connected NodeMCU microcontrollers sending messages to each other via the broker and making things happen remotely. Or, as we'll see in the next part, clicking a button on a web page can make things move that aren't directly connected to the web server.

This is all without having to create your own protocol to do this because MQTT is widely supported. I've done that kind of thing in the past and it was significantly time consuming.

Rapid prototyping

Another piece of making for College of Wizardry. I was asked if I could make some badges from the logo on the IC website.

This neatly demonstrates how good a 3D printer is for 'rapid prototyping' even with a simple thing like this.

The first step was to take the source logo and put some spacing between the various elements with a bitmap editor. This was so they would come out as separate objects once traced.

Then I used Inkscape to trace it into a series of paths. There are several ways to do it, but I got the best results from tracing at two different levels of brightness, then selecting the resulting objects one by one and tidying up/removing them manually.

Do not underestimate how long this takes; I spent something like four hours doing it. Then I turned the result into an OpenSCAD file using an export filter I found on Thingiverse.

With an OpenSCAD file I was then free to hack the design around, altering which bits got printed or not, relative heights and so on. This is where the fast prototyping came in, as I could print it quickly and look at the result as a physical object. Although with hand editing of the OpenSCAD file it was still a slow process.

Eventually we settled on just the lion's head part of the logo and played with the size a bit.

Even with a decent looking design, it was still always going to look a bit plastic, but a quick dab of coloured varnish sorted them out. I wiped it off the highlights to create some basic shading and bring through the base colour. Then it was a case of sticking badge pins on the back of each one with hot glue and they're done.

The design process took a while but then I could batch produce them easily. So we ended up with one each for most of one of the player factions, limited only by running out of filament after one of the print runs went bad. Which still happens very occasionally, my 3D printer is definitely an enthusiast/hobby device rather than a consumer item.

Kiosk setup on a Raspberry Pi running Jessie and the Pixel Desktop

I've started playing around with embedding a Pi with a little 2.4" screen into a prop and I've noticed all the howtos for this kind of thing are old and don't apply circa March 2017.

Now we have the Pixel desktop it's very similar but you need to edit the following file...


#@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
#@xscreensaver -no-splash
@xset s off
@xset -dpms
@xset s noblank
@chromium-browser --kiosk --incognito

Make it look like this by commenting out the lines for lxpanel and xscreensaver, then add an entry at the end to start Chromium, opening whatever page you'd like it to start with.


Magic Wand

My partner is off to the very high profile LARP College of Wizardry at the end of March and there's been a lot of gathering of costume and props for it. She suggested a wand with a bare LED that lights with a push / push switch but I decided to have a go at adding some more interactivity to it.

What's ensued is a lengthy exercise in making a prop from scratch using my 3D printer and trying to do as little hand crafting as possible, apart from final finishing. This is to keep it compact and fit all the components, especially the batteries and wiring, into as small a space as I can.

It's nothing spectacular on the tech front: an Arduino Nano, a GY-521 6DOF accelerometer breakout board and 21 Neopixels. On that front it's very much a blinkenlights 101 project.

The entire thing ended up being scaled around getting four alkaline cells in the body and being able to print it on my printer, which has a build volume of 20x20x20cm. In principle I could have done something with a single 18650 Li-ion cell, but these are very chunky and a step-up PSU to drive the Neopixels would be a pain as they're hungry beasts that need 5V. Smaller cylindrical Li-ion cells seem to be unusual and I like the idea of easily swapped disposable cells.

The obvious starting point was making the shaft the place to hold the cells, but AAs are too chunky and AAA cells make for a very long, thin wand, so I settled on N cells. These are easily available and a good compromise on size that matches the space needed for the Neopixels in the tip. This meant the wand ended up about 35cm long in total, pretty much exactly what I wanted. It's still a little chunky, but short of swapping to alternative lighting (maybe 3030 rather than 5050 Neopixels) this is about as small as I can get it.

The Neopixels are beautifully bright, so much so that the translucent 'natural' PLA I used to print the tip doesn't really manage to diffuse them very well but as it's going to do very occasional effects I'm not worried about this.

I've used the accelerometer to do very basic gesture detection. Once the wand is awake, holding it level and twisting it changes the colours. Raise it up and you get a flame-like flickering up the tip, swipe it down and the effect rushes forwards and fades out. Leave it pointing down and it goes to sleep.

For finishing I hacked at the printed article with a Dremel, sandpaper and knife then covered it with some coloured varnish. The result is pretty wood-like from a distance.

She's off to the event soon, let's see how well it is received. I've done a very basic demo video of the working code, but given it's easy to reprogram and I'm barely using the features of the gyro I reckon there's scope to improve on this significantly.

More laser

Having bought a CNC engraver I got very caught up in messing around with the optional laser module.

I'd previously not considered a cheap diode laser would be worth having, especially due to their reputation for rubbish software.

However once I'd played with it and set up Laserweb this changed my mind and I ordered a 2.5W A3 engraver. The 5.5W models are twice the price and probably not twice as good, online discussion suggests they are far worse at intermediate power levels which are useful for engraving. If the laser modules come down in price later I can always upgrade.

It's a much simpler machine to build than the desktop CNC, with steppers directly driving a belt and dragging a carriage up and down the rails. So I got it put together over an hour or so and simply connected it to my existing Laserweb setup. The drive ratios needed setting but apart from that it 'just worked'. Now I need to tweak things like the maximum speed and acceleration as it draws quite slowly but straight away I started to get usable output from it.

Here's a little video of it engraving at max 25% power. This burns really well so I'm thinking 100% will cut some thin stuff quite nicely, just need to start experimenting.

My experience with the laser module in the desktop CNC has made me less paranoid around it, which is probably a bad thing. So I'm going to set this aside until I can find some time to make an enclosure with a safety interlock on the laser power. I'll just stick a normally open switch inline with the 12V power so the moment you lift the lid the steppers and laser get cut. The microcontroller onboard takes its power from the system driving it.

It'll have a Raspberry Pi inside to run Laserweb so hanging a Webcam off this to give an internal view when the lid is closed should be trivial. I quite fancy making a bit of a control panel on the front to control the Pi and also control power to the laser perhaps via a GPIO driven relay so the whole thing has a single on/off switch but this is verging on yak shaving.

Miniature particle accelerator

In a conversation with a friend about making a compact very retro computer thing, the old 'pocket' CRT TVs from the 80s sprang to mind as a good option for the display.

It seems they're available for little money on eBay and nothing says retro better than a genuine CRT. The display just has a different look to it, especially a monochrome tube. Monochrome tubes have no shadow mask to make red, green and blue 'pixels'; it's just a uniform coating of phosphorescent material that the electron beam scans over uninterrupted, and I always preferred this look. It's less tiring than a colour CRT in much the same way the e-ink display of a Kindle is easier on the eyes than the standard screens in most tablets and laptops.

After about ten minutes of messing about, mostly taken up by finding suitable cables, I got a Raspberry Pi desktop displayed on it. No I don't really know what use this is yet but I love it.

Going fully open source with the CNC engraver

I've been playing with the CNC engraver and discovered a few things. First up, the controller is based on an Arduino Nano running open source code (grbl), but an older, stable version (0.9). Another is that the z-axis was miswired, even for the software it shipped with, so I had to reverse the pins on one end of the z-axis cable. It's possible to fix this in software but it irked me.

Somebody from London Hackspace pointed me at LaserWeb as a good front end to the laser engraver side of things so I've started to investigate it.

Converting these Chinese laser engraver/cutter devices away from the slightly cheesy software they ship with to open source is a common question and an Australian woodworker has produced a breezy intro to doing this. It's for a different device but the fundamentals are the same.

The LaserWeb server will happily run on a Raspberry Pi and I've several sitting in a box. After a couple of hiccups going through the howto it's up and running. This needs a newer version of grbl so I've programmed the Nano in the controller with grbl 1.1e. It's easy to put the original software back on as part of the original CNC build is to program the Nano with a file they provide.

A big gotcha I did find is that the LaserWeb GUI needs ALL the components to be working. So if the interface is displaying error messages in the bottom right log window, particularly relating to WebGL, it just won't work at all. It looks like a server error because it says you can't connect, but this can be down to the graphics library, as it breaks a load of things in the app, not just the graphics previews. A very helpful person from the LaserWeb3 support community on G+ helped me work this one out.

Now I was down the rabbit hole of playing with the laser side of things I decided I wanted to be able to work safely while testing and sticking the whole thing in a box was going to make that irritating. I've got a load of cheap ~5mW laser pointer modules kicking around so I 3D printed a 'dummy' housing the same size as the 'real' laser module and fitted one of these.

This left me free to mess around without blinding myself or starting a fire. The laser may only pack a 500mW punch but that's enough to do both these things.

Having proved to myself that the thing wasn't going to smash into the end of its travel, and that LaserWeb turned the laser on and off reliably in a fashion that looked like it would produce sane activity instead of melting something I didn't want melted, I had a go at the real thing.

I found an .svg of the Coke logo on Wikipedia, scaled it appropriately in the LaserWeb interface and started burning. At first this had zero effect and I was wondering if the laser module was defective. As I had a box over the whole thing to prevent any chance of blinding myself I couldn't really see what was going on.

I don't really trust the cheap laser safety goggles I got from Banggood. I had a webcam pointed at it, but the laser simply oversaturated the picture so I couldn't see much.

Using the 'test laser' button in LaserWeb, which runs it very briefly at minimal power, with the box removed I could see the beam was totally unfocused. There wasn't any obvious way to fix this, but some experimental twisting of the module showed it can be focused. This is not obvious at all and not mentioned in the minimal documentation. Once adjusted carefully to make a tiny bright spot it engraves random wood quite nicely. This Coke logo was done by running it quite quickly doing multiple passes. I need to experiment somewhat with the engraving speed and at some point try to cut some thin material.

In principle LaserWeb has a 'CNC' mode too, where it will handle CNC milling/engraving, which I'll investigate as it would be nice to use the same software for both jobs. The more I play with this, though, the more I think I want some limit switches so it can home itself automatically and definitely prevent crashing into the end of travel. This is a bit of modding that shouldn't be too hard to do.

A productive evening

I've had a good evening of mostly tightening M3 nylocs. Eventually after about three hours my desktop CNC engraver went from looking like the first picture to the second one.

There were only a couple of moments of retracing my steps. The set of picture instructions you download was OK; however, they've changed the design slightly and substituted some fasteners, so you run out of shorter ones if you use them everywhere they suggest.

I've no idea if it works at all as I haven't powered it on, let alone installed the software. This is a job for tomorrow evening.

Desktop CNC unboxing

It's been a while since I posted. Mostly I got involved in running a LARP, which sucked a lot of my energy through August and September then suddenly it was Christmas.

Speaking of which, my present to myself was one of these tiny desktop CNC mill/engraver/laser machines from Banggood.

It's taken me a couple of weeks to get around to ripping the box open and here it is. It's going to be a smaller, simpler project than my 3D printer to assemble but I suspect it'll still take a while.

It's a curiosity in that it can do both engraving using a cutting tool and with a laser that fits in the same mount. The laser is only 500mW so it's not going to be cutting anything but I reckon for surface marking stuff it'll be great.

Right now I'm building a 'disposable' PC to run the accompanying software as I don't trust it not to be flaky and/or virus infested.