Chindōgu - Raspberry Pint presentation
I gave a presentation about my Raspberry Pi-powered Sony Watchman project at Raspberry Pint, a London Raspberry Pi meetup.
Chindōgu - part 2
This weekend I spent a chunk of time building a second Watchman based device.
The initial plan was to add more resources by fitting a Raspberry Pi 3A+, giving it a big performance increase, but after desoldering some of the headers from the board and poking around, I just couldn't fit it in the case. It might be possible with extreme measures, like removing the camera header and soldering the ribbon cable directly to the board, but that feels like it would be a failure anyway.
I ended up making the simple incremental change of fitting a camera and sticking with the Raspberry Pi Zero W, which can just about handle video calling with Ekiga.
As standard, the Pi Zero camera comes with a cable that's far too short for this to work, and there's no way a full-size Pi camera will fit in the space under the CRT. You can't buy Zero camera extension cables, but you can convert to a full-size ribbon and back again, so that's what I ended up doing. Three cables and two joints, but it still works just fine. The cable is routed all the way round the left-hand side of the case and emerges back on the right side, which makes it possible to keep the original slide on/off switch and volume control.
The tuner wheel has been converted to a 'rocker' type switch with a couple of tactile switches, epoxy and luck. I also made a better job of the power switch, fitting an actual slide switch rather than a momentary one pushed by a long, cruddy lever. This all works much better than my first prototype, and I may go back and rework that one in the same style. I'll need to buy more camera cables and joints first though.
I've still done nothing about audio. I dug out a little USB microphone dongle and may see if I can slim it down and hardwire it to a USB cable plugged into the port on the Zero. I'm not keen on desoldering the USB port, just in case.
For audio output I need to test the PWM output method and build a suitable filter.
Oh, and then there's all the software I need to tweak: while Ekiga works, there's no realistic way to control it, so I'm dialling in from a laptop to test.
Obscure Arduino tips #2
This one may not be so obscure if you're a competent C++ programmer, but if you're somebody just finding their way in writing their own Arduino libraries it can be a major roadblock.
It is not uncommon for libraries that let you do work triggered by external events, or work that happens asynchronously, to use callback functions. Your code will work fine as a sketch, but once you start turning that sketch into a C++ class as a library, it won't compile.
The example I'll use is the ESP8266 WiFi scanning class, as this is where I encountered the problem.
While my code was a monolithic Arduino sketch, I could kick off a scan with code like this:
WiFi.scanNetworksAsync(myCallbackFunction, true);

Where 'myCallbackFunction' is just the name of the callback function in my sketch. This is nice and easy.
Once you create a class, you can in principle have multiple instances of that class. Not to get into the detail of C++ classes (because I'm not a real C++ programmer), but when you need to refer to the specific instance of a class there is a 'magic' extra parameter, 'this', passed implicitly to every member function.
A lot of the time you can ignore 'this' as a naive Arduino programmer when you make a library but you will bash up against it eventually, probably when you go to use a callback or function from another library.
One of the standard C++ function adaptors, std::bind, can help you here: it creates a new function that refers to the old one but changes its arguments. This lets you pass 'this' into the callback without changing your original function at all.
So the code now looks like this:

WiFi.scanNetworksAsync(std::bind(&MyClass::myCallbackFunction, this, std::placeholders::_1), true);

The callback passed to WiFi.scanNetworksAsync is given an integer telling it how many networks were found, and to make sure that argument is still there in the new function it needs a placeholder, 'std::placeholders::_1'.
For every argument in your original function you need a matching 'std::placeholders::_X' argument, so add extra parameters of 'std::placeholders::_2', 'std::placeholders::_3' and so on.
That's it, unless you're actually trying to use a C library, in which case it gets more convoluted.
ESP-Now BATMAN data encapsulation
I've been working on other stuff recently but haven't totally forgotten my mesh network library.
Over the last couple of days I've been fiddling around with encapsulating data in a way that's very easy for somebody writing an Arduino sketch.
The first third of this is done in that I've written a set of overloaded functions for the common Arduino data types including both char arrays and the much maligned String type.
There's a single function, 'add', which starts building a packet for you and works out how to pack the data based on the type of its single argument.
Adding some data and then sending it (flooding the network) takes just a few lines of code.
mesh.add(value1);
mesh.add(value2);
mesh.add(value3);
mesh.add(value4);
mesh.add(value5);
mesh.add(value6);
mesh.send();
Both functions return true or false depending on whether they are successful. You can add as many values as will fit in an ESP-Now packet and if there's no space left, 'add' returns false.
I've written a small sketch to send random amounts of random data of random types every ten seconds. It can fit about 30 values in each packet depending on exactly which types are involved.
This has been running for about eighteen hours without any hiccups so I'm happy with it.
My next task is to write a set of functions to give access to this data from a sketch when it arrives. I've written the logic to decode packets and print out the contents nicely but need to think through how to present it to a person writing their own code with the library.
It's not as easy as the 'add' function, because C++ can't overload a function on its return type alone. I could make people pass pointers to their variables instead, but I don't think that's very friendly, however efficient it might be.
There's also the matter of sending messages to specific nodes, which means a whole load more public functions to find out about the mesh and retrieve MAC addresses. That again feels unfriendly, because you'll end up with hard-coded MAC addresses unless somebody layers on their own way of mapping MAC addresses to their various nodes' identities or functions.
I might implement my idea of giving nodes names that are simple text strings. Who cares what the MAC address is, after all; it's what code is running that matters.
So you could call one node 'Daylight sensor' and another 'Light switch', and the former could tell the latter to switch on when it gets dark. I'm not expecting this library to be used for IoT things like this, but I think it's a good example of why human-readable names are desirable. I could just add the name to the 'status' protocol packets I already send.
It's the slow discovery of requirements as I write that demonstrates why this isn't a professional piece of software engineering, and also why it's taking me so long. :-)
Chindōgu - part 1
A friend called this a Chindōgu and I had to look the term up. He got it dead right.
For some time I've been meaning to fit a Raspberry Pi inside a vintage Sony CRT portable TV, making the chunkiest, most impractical, yet still portable Raspberry Pi. I've been spending all my time coding my mesh network and last weekend needed a break, so this practical project called out to me.
Squeezing the Pi inside was comparatively easy. Bigger than the more common 'oblong' models, this FD-250B has a rounded case, offering a bit of space round the edges, a bigger screen, and an AV-in socket on the side that made connecting the Pi trivial. There was no hacking into the tuning circuit; I just desoldered the AV socket and connected the Pi's composite output up.
I'm using a Raspberry Pi Zero W, which has no sound output by default, so at the moment this is silent, but there's a way to generate sound by re-allocating some PWM pins and feeding them through a low-pass filter. I may get back to this later, but first I need to decide whether I'm sticking with the Zero or upgrading to a 3A+ with the sockets desoldered, which would give me audio.
I'm trying to keep this externally as original looking as I can so it's still running off the four AA batteries and uses the original slide on/off switch, but repurposed to trigger a smart power switch so the Pi can shut down gracefully when you switch off.
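(As an aside for anyone building something similar without a dedicated smart power board: Raspberry Pi OS ships a gpio-shutdown device tree overlay that requests a clean shutdown when a GPIO pin is pulled low. A sketch for /boot/config.txt, assuming the default pin; check the overlay README for your firmware's parameters.)

```ini
# Trigger a graceful shutdown when GPIO 3 is connected to ground
# (gpio_pin is optional; 3 is the default, and also allows wake-up)
dtoverlay=gpio-shutdown,gpio_pin=3
```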
At the moment the tuning wheel is completely removed and I plan to turn it into a rocker switch for scrolling up/down, selecting things etc.
You can just about use the GUI with a paired Bluetooth keyboard and mouse. Web browsing sucks, not just because of the screen size and distortion, but also because the Zero doesn't really have enough horsepower. Here's a little video of it in use.
After this I tried playing a couple of 4:3 video clips, and it excels at this, which is right and proper given its original intended purpose. Right now you have to start omxplayer manually from the command line, but I can see how a simple file-browsing GUI, probably written in Pygame, could turn this into a decent super-retro media player.
I have plans beyond this though: I want to squeeze one of the tiny Pi cameras inside, perhaps where the tuning pointer was, and it's occurred to me I might be able to fit a resistive touchscreen if I replaced the round bezel. I'd like to replace the bezel anyway if I fit the camera, to remove all trace of the tuning pointer.