Home Automation and Voice Control

HAL-9000 (Space Odyssey), Mother (Alien), The Matrix, Jarvis (Iron Man), KITT – who doesn't know them? And as of a few days ago there's Jasper, voice control for the Raspberry Pi.
An RPi, a microphone, a speaker and a network connection are all you need (and the Jasper software package, of course).

Interacting with computers by voice has always been a very appealing feature to have in my Home Automation System. There’s a button on the touchscreen in the living room which controls a light bulb – when you press that button, you hear Darth Vader saying “Yes, Master“. My son and I liked it; it was funny. But there had to be more…

So when 2 Princeton students released 'Jasper' a few days ago, I was prompted to revisit the subject of voice control once again.

My first thought was to give Jasper a try as soon as I had the time, but after reading parts of the API documentation I became a bit hesitant: defining in the code which words the user is allowed to speak (or rather, which words will be recognized and processed further by Jasper) is not how I'd like to do things. Another thing I didn't like is that it would become a more or less isolated 'sub-system' next to my HA system – answering questions, controlling Spotify and such. Create a module for every type of hardware here in our house? Nah, no chance.

Maybe it would be better to revisit Voicecommand, a tool from Steven Hickson's PiAUISuite which I read about a year ago or so. Voicecommand (judging by the demo videos, at least) seems to be made primarily to initiate actions (playing video or music, starting the browser) on the local computer/Raspberry. But why not try to extend it, remove some of the local action-initiation parts of the code and replace them with an MQTT client?

That would make it a perfect fit for my HA system – this way my rules engine receives the voice commands, and the rules engine defines what is accepted as a valid command and which actions should be executed.

So I ‘freed’ a Raspberry Pi and downloaded the PiAUISuite. The first problem was that I didn’t have a USB microphone – ahh, but our kids do, for things like Skype, online gaming and other things I never do. I found an old speaker set in the garage and I was good to go.

After some tinkering with the Voicecommand tool as-is – its configuration, trying different keywords and stuff like that – it was time to change some things.

First thing I wanted to change was the language. Voicecommand uses the Google Speech API, so using Dutch as language should not be a problem; all I had to do was change lang = “en” to lang = “nl”. Done! It improved the voice recognition quite a bit too! 😉

I also wanted to change the response ("Yes, Sir?") into a simple short beep. This would significantly shorten the duration of the whole conversation, which was a bit too long for my taste. I searched the internet for a 'beep' MP3 that was short and loud enough to be noticed, searched the Voicecommand code for Speak(response) and replaced that call with Play(beep), a new function that I added to the code.

Another thing I changed was the matching of the spoken command against a list of predefined commands (and their associated actions) in ~/.commands.conf. Right now I just send every word to my HA system and let that system decide whether the spoken command contains something useful.

The last thing I did to get communication between Voicecommand and my HA system going was building the Mosquitto MQTT client on the Raspberry Pi and calling that client (mosquitto_pub) with the right parameters from Voicecommand through a system() call. It's a bit of a quick & dirty trick to get things going; it would be much better to incorporate the MQTT protocol into the Voicecommand code itself, but that's too much work for now – first I want to see how this works out in practice with a better microphone and some useful commands & rules…
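Conceptually, all that system() call does is push the recognized sentence to the broker. A Node.JS equivalent of just that step (the broker address is an example, and I'm assuming the plain text is published on a 'voice' topic, which is what the rule below listens to) would be:

var mqtt  = require('mqtt');
var mqttc = mqtt.createClient(1883, '192.168.10.17');

// publish whatever Voicecommand recognized, e.g. 'licht aan'
function publishSpokenText(text) {
  mqttc.publish('voice', text);
}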

The only rule I have right now is this one, for controlling a small night lamp in the office:

rule office_test_light {
  when {
    m1: Message m1.t == 'voice' && m1.contains('licht');
  }
  then {
    if (m1.contains('aan')) {
      publish("command",'{"address":"B02", "command":"ON"}');
    } else
    if (m1.contains('uit')) {
      publish("command",'{"address":"B02", "command":"OFF"}');
    } else {
      log('Snap het niet');
    }
  }
}

Voicecommand has, as far as I can see now, one drawback: no Internet connection means no voice control. The (very!) big plus is that the TTS voice is superior to what I've heard from Jasper.

Future plans:

  • sending textual (MQTT) messages to Voicecommand and let it speak them;
  • returning an error message when the rules engine was not able to process the command;
  • adding the RPi hostname to the message that goes to my HA system, which can be useful when having multiple Voicecommand RPi's throughout the house – cause a "light off" command in the garage implies a different action than "light off" in the kitchen.. 😉 (a rough sketch of the first and last ideas follows below)
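A rough Node.JS sketch of those first and last ideas – speaking text that arrives over MQTT and using the hostname to tell the different RPi's apart. The topic layout and the use of espeak for the TTS part are assumptions on my side, not something Voicecommand offers out of the box:

var os   = require('os');
var exec = require('child_process').exec;
var mqtt = require('mqtt');

var host  = os.hostname();                       // e.g. 'rpi-garage', 'rpi-kitchen'
var mqttc = mqtt.createClient(1883, '192.168.10.17');

// speak any text the rules engine sends to this particular RPi
mqttc.subscribe('speak/' + host);
mqttc.on('message', function (topic, message) {
  exec('espeak -v nl "' + message.toString() + '"');   // naive quoting; sketch only
});

// outgoing voice messages would then go to a per-host topic, e.g.:
// mqttc.publish('voice/' + host, recognizedText);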

Right now, after a few hours of tinkering, I think I’ve got something that’s worth spending more time on. We’ll see! Here’s a video of what I’ve accomplished so far:

 

 

Home Automation with Node.JS & MQTT

Shutting down my old homebrew Windows-based Home Automation system and letting the new Node.JS/MQTT based HA system take full control last Saturday was done without a glitch! Better, faster, smoother than I expected.

I had planned to start with this on Saturday morning, so that I would have ~36 hours to fix any problems that would arise, but that didn’t go as planned – some other things I had to do on that Saturday made me postpone the big switch to Saturday evening.

Around 7 o'clock in the evening I told the rest of the family that they just had to pretend I was away and unreachable until I told them I was back again – the 'do not disturb' door hanger 😉

The main concern in this whole exercise was not losing any historical data. First I did a test run of copying all the historical data from MS SQL to MySQL and checked whether this still worked like it should; it did. I checked the information in the MySQL database for consistency, correctness and so forth. Great. I ran the historical-data copy again, renamed some tables in MySQL, changed some configuration settings (database names), restarted some Node services and checked whether storing the historical data worked as before, but now done by Node.JS, directly into the production database. This was the point where I had to make the go/no-go decision; within just a couple of minutes I knew it was a GO!

I copied a file with the new configuration settings to the Raspberry Pi’s and after that, all I had to do was restart all the services so that they would switch from the MQTT broker running on the Windows VM to the broker on the Cubietruck. Great!

About an hour later, after testing some things by walking through the house, pushing some buttons, seeing lights being switched on by detected motion and other stuff like that, I knew I would have a huge amount of free time to do a lot of other things again! 🙂

The day after went just as smoothly and relaxed – no issues, just a minor thing I had forgotten about, which was fixed in 5 minutes… Now, on Wednesday evening, 96 hours later, everything's still 110% OK.

As of now, my Node.JS/MQTT based HA system has the following ‘features‘:

  • Smart meter

The output of the so-called P1 port of our Smart meter is being parsed by a small script which publishes the relevant information (power usage, gas usage) to the MQTT broker.

  • Roomba

Our Roomba 563 robot vacuum cleaner is monitored and controlled with a Thinking Cleaner module which is plugged into the SCI port of the Roomba.

  • RFXCOM receiver

Two RFXCOM receivers are used to collect information from various sensors (mainly Oregon Scientific temp/humidity, Visonic door/window, motion).

  • PLCBUS

This is one of the drivers that's controlling a lot of lights in our house, but is also used for things like controlling the garage door opener. I use 2 PLCBUS controllers – one in the meter cabinet, the other one is located at the other end of the house.

  • EISCP protocol

The RS-232 port on our Onkyo AV Receiver enables us to control every aspect of the device – switching HDMI inputs, volume level, on/off, mute…

  • NMA

Notify My Android is being used to send notifications to my cell phone – stuff like new LED Bar messages, and warnings about things that might need my attention.

  • Mobotix

Our Mobotix D22 security camera has a great light sensor; I use this light sensor to determine when the time has come to switch some outdoor lights (front door, back door, garden, gazebo) on or off. The HTTP interface of the camera enables me to take snapshots.

  • LED Bar

Just a funny gadget..

  • IRTrans

The IRTrans LAN module is used to control our UPC media box and to turn on our Dune media player.

  • Dune

The Dune IP Control protocol enables the system to control our Dune HD Max media player.

  • HA7Net

The HA7Net provides information about in- and outgoing temperatures of our 5 floor heating groups with 1-Wire sensors and about the amount of DHW (hot water) we use (water meter with pulse output + Dallas 1-Wire DS2423 counter).

  • Remeha Calenta

One of the most informative devices… even things like the fan speed can be monitored with its ‘service‘ port! More interesting of course are things like modulation level, water pressure, operating mode (central heating, DHW).

  • Alphatronics

A great receiver for picking up Visonic keyfob signals and sensor RSSI information.

Primarily used for the Philips Pronto TSU9600, our single remote solution for all our AV equipment.

  • Somfy RTS

An RS-485 Somfy RTS transmitter enables us to control 12 roller shutters.

  • Rules engine

The part of the system that does the real automation: based on inputs (sensors), this engine can initiate all kinds of actions with all the hardware (actuators) connected to the system.

  • RGB LED

A DMX based RGB LED driver controls 6 RGB LED lights under our gazebo;

This great device enables us to set the room temperature to what we want it to be, without having to walk to the room thermostat or even being at home.

  • Nemef Radaris Evolution

This RFID lock on our front door, controlled by a Nemef RF controller, gives detailed status information about the RFID tags being used and provides access control and remote access.

  • Conrad MS-35 LED driver

A couple of these are used to control warm white LED strips. I made them wireless with small TTL-to-Wifi adapters.

  • Siemens M20T GSM modem

This GSM modem is being used to send SMS messages to my cell phone, but its task is gradually being taken over by NMA.

  • Email

Emails are primarily used to notify me about sensors that need new batteries.

  • ELV MAX!

The ELV MAX! Radiator Thermostats are used to control the temperatures in all the rooms in our house: bedrooms, bathroom, etcetera.

Why? Because I can! 😉

  • Zigbee

I have several sensors based on a combination of JeeNodes with XBee ZB modules: motion, temperature, pressure. I also use those XBees to make 3 Chromoflex LED drivers wireless by connecting an XBee to the serial port of the chromoflex – works great.

  • 16-channel LED driver for staircase lighting

Homebrew LED driver to control 13 (or is it 12?) LED strips which light the stairs. The Node driver mainly controls how an Arduino sketch should behave.

  • RFXCOM transmitter

This RF transmitter has only one purpose: controlling two 433 MHz doorbell chimes.

  • Plugwise

12 Plugwise Circles are used for monitoring power usage and to detect whether the washing machine or dryer has finished its program.

  • Chromoflex

This service calculates the payload that has to be sent to the Chromoflex LED drivers to control the LED strips. This payload is then forwarded to an XBee radio to which the Chromoflex driver is connected.

  • Btraced

Btraced is an app for iPhone/Android that enables you to send your location to your own server; this service adds some additional information (by reverse geocoding) so that the location can be displayed on our touchscreen using Google Maps.

  • Visonic PowerMax Plus

Our security system is connected too; we're no longer limited to using keyfobs, the panel or a keypad to control it. Additionally, all the sensor information (open, closed, motion, battery status, tampering) is available in my system.

In use since I ditched A-10/X-10 recently. It has been on the shelf for some time after doing some small tests with it, but now it’s an excellent replacement for the small amount of X-10 stuff I was still using.

  • Doorbell

Our Ethernet-enabled doorbell communicates with a Node script to report ‘rings’ and query daylight status (used for switching a LED on & off for visibility of the button).

Quite a list, if I may say so … and it’s all running great!

My ASP.Net website still uses an MS SQL Server for its data – this database is now kept up to date by a Node.JS script, just like the MySQL database on the Cubietruck.

So, what's next? Well, first I'm gonna take a break, that's for sure… I've done enough JavaScript in the past 8 months, so it's time for something different; I've also neglected some other things that really do need my attention now. And of course I'll have to start working on the User Interface, for which I'll have to learn a lot before I can start developing.. never a dull moment!

Onwards!

The All-in-1 solution for Btraced Track & Trace

Somehow the Btraced post I wrote some time ago was a bit half-finished. That post explained how to use a Raspberry Pi as your private Btraced upload server and briefly showed what you can do with your GPS data. What I did not cover was how to get the Btraced information into a web page.

Btraced on Google Maps

For me personally, getting the Btraced information to an MQTT broker was the only thing I needed to do to get it all working, because the rest was already covered – and I figured that creating a web page that could receive the Btraced information and display it was a bit too trivial to mention. Nevertheless, last week I did a small 'remake' of the Btraced code I wrote, because I thought it would be nice to also create an All-in-1 solution for Btraced for those who don't use MQTT or don't have a web server (or both ;-)).

Combining the Btraced upload server, reverse geocoding and a web server in one 'package' running on a single Raspberry Pi, fully self-supporting – that was the idea. No need for a separate web server, nothing – just a Raspberry Pi and the Btraced app. And of course everything works in real time, because nowadays you don't have to settle for less anymore.

Btraced code

Most of the code was already finished; all that needed to be added was a web server and a way to get the uploaded, parsed & extended Btraced information to the web clients (the browsers).

And of course some static content: an HTML page, a style sheet and some JavaScript.
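To give an idea of the 'glue' that was needed (this is a minimal sketch, not the actual Btraced code – the topic name, payload and port are assumptions on my part): a Node.JS server that serves the static map page and pushes every incoming position to all connected browsers with Server-Sent Events.

var http = require('http');
var fs   = require('fs');
var mqtt = require('mqtt');

var clients = [];                                   // open SSE connections
var mqttc   = mqtt.createClient(1883, '127.0.0.1');
mqttc.subscribe('/value/btraced/position');

mqttc.on('message', function (topic, message) {
  // forward every position update to all connected browsers
  clients.forEach(function (res) {
    res.write('data: ' + message + '\n\n');
  });
});

http.createServer(function (req, res) {
  if (req.url === '/events') {                      // browsers subscribe here with EventSource
    res.writeHead(200, { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' });
    clients.push(res);
    req.on('close', function () { clients.splice(clients.indexOf(res), 1); });
  } else {                                          // serve the static map page
    fs.readFile(__dirname + '/index.html', function (err, html) {
      res.writeHead(err ? 404 : 200, { 'Content-Type': 'text/html' });
      res.end(err ? 'not found' : html);
    });
  }
}).listen(8080);

On the browser side, a new EventSource('/events') plus a bit of Google Maps JavaScript is then enough to move the marker in real time.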

And the result is great – a lightweight web page, served by the Raspberry Pi, that gives you real-time access to the Btraced GPS information on any device with a web browser on board – and all with the power usage of a single Raspberry Pi. Saving the uploaded data in a database is not supported yet, but should be very easy to accomplish. And there are more things that can easily be added once you have a Raspberry running, not just Btraced-related; Pieter and I are currently discussing some options.

Interested? Let me know. In the mean time, happy tracking!

Staircase project, part 2: Raspberry Pi and i2c

After rediscovering the Dimmer Plug  a few days ago, the time had come to see if I could use the Dimmer Plug directly from a Raspberry Pi.

Before I could connect the Dimmer Plug to the Raspberry Pi, I had to enable i2c, because out of the box i2c is disabled (because there's not much to learn about i2c? ;-))

Here’s a short list of what had to be done:

  • Remove i2c from blacklist

The file /etc/modprobe.d/raspi-blacklist.conf had to be modified so that it looked like this:

#blacklist spi-bcm2708
#blacklist i2c-bcm2708

The ‘#’ turns the 2 lines into comment lines, so that those 2 modules are no longer blacklisted/disabled; I also enabled spi which is not really necessary for this.

  • 2 lines had to be added to /etc/modules:

i2c-bcm2708
i2c-dev

  • Installing i2c-tools:
sudo apt-get install i2c-tools
  • Sufficient access rights for user pi:
sudo adduser pi i2c && reboot

After the reboot, I had 2 new devices:

pi@rpi3 ~ $ ls -rtl /dev/i2c*
crw-rw---T 1 root i2c 89, 0 Sep 12 21:43 /dev/i2c-0
crw-rw---T 1 root i2c 89, 1 Sep 12 21:43 /dev/i2c-1

And after I connected the Dimmer Plug, a scan of the i2c bus resulted in the following:

pi@rpi3 ~ $ i2cdetect -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          03 -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: 40 -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
pi@rpi3 ~ $

Well that wasn't too hard… The Dimmer Plug with address 0x40 is detected, so we're ready to go… The next thing that had to be done was getting i2c working from Node.JS. Fortunately there's a Node module for i2c, so that part is already covered. A driver for the PCA9635 IC wasn't hard either – all I had to do was send the same bytes to the PCA9635 as the JeeLib driver does.

The things I want to do with the LED strips involve sending certain 'patterns' to the LED driver at specific intervals. Will the Raspberry be able to do this reliably? I don't know yet, because right now the Raspberry I'm using to test this LED driver has (almost) nothing else to do than run a single Node app that uses it. But what if more drivers are running, all consuming CPU cycles, or what if the OS feels it's time to do something else for a change, just when the LED strips need to be adjusted in brightness? We'll see.. maybe not now, but too much delay or other irregularities should be visible right away, so I decided to just wait and see how this turns out in practice.

The PCA9635 driver code is quite small and simple, actually:

/**
 * PCA9635 LED driver I2C library for Node.JS
 * Based on the JeeLib Dimmer Plug driver
 * (https://github.com/jcw/jeelib)
 */

var i2c = require('i2c');

var modes = {
        MODE1:0, MODE2:1,
        PWM0:2, PWM1:3, PWM2:4, PWM3:5, PWM4:6, PWM5:7, PWM6:8, PWM7:9,
        PWM8:10, PWM9:11, PWM10:12, PWM11:13, PWM12:14, PWM13:15, PWM14:16, PWM15:17,
        GRPPWM:18, GRPFREQ:19,
        LEDOUT0:20, LEDOUT1:21, LEDOUT2:22, LEDOUT3:23,
        SUBADR1:24, SUBADR2:25, SUBADR3:26, ALLCALLADR:27};

function DimmerPlug(device, address) {
  this.device = device || '/dev/i2c-1';
  this.address = address || 0x40;
}

DimmerPlug.prototype.initialize = function() {
  this.i2cdev = new i2c(this.address, {device : this.device});
  this.setReg(modes.MODE1, 0x00); // normal
  this.setReg(modes.MODE2, 0x14); // inverted, totem-pole
  this.setReg(modes.GRPPWM, 0xff); // set group dim to max brightness
  this.setMulti(modes.LEDOUT0, [0xff, 0xff, 0xff, 0xff]); // all LEDs group-dimmable
}

DimmerPlug.prototype.setReg = function (reg, value) {
  this.i2cdev.writeBytes(reg, [value]);
}

DimmerPlug.prototype.setMulti =  function(reg, values){
  this.i2cdev.writeBytes(reg | 0xe0, values);
}

module.exports = DimmerPlug;
module.exports.modes = modes;

Now the Node app… I decided to follow the dimmer_demo sketch, because if I got the same results with the RPi as with the JeeNode I'd know everything was OK. And here the code for the Node app becomes a bit awkward… it's a demo, constantly changing the brightness of all the LEDs, but there are no real events, so there's nothing to trigger on.

This app does its job totally isolated from the rest of the world… and things get even 'uglier', because the JeeLib demo contains a few delay() statements – which (for obvious reasons) are not available in Node. Yes, there's setTimeout(), but that doesn't pause execution! This was the first time I really had to think about how to do something in Node: I needed something to control the program flow. Normally (in my case) a driver gets its events from either MQTT messages or incoming data from the hardware, but neither is the case here (yet). I decided to use the async module for that and create a series of functions that are executed one after the other. This is what the Node version of the dimmer demo looks like:

var dimmerplug = require('./dimmerplug');
var async = require('async');

var level = 0x1fff;
var dimmer = new dimmerplug();

async.series(
  [
    function(callback) {
      dimmer.initialize();
      dimmer.setMulti(dimmerplug.modes.PWM0, [255, 255, 255, 255,
                                        255, 255, 255, 255,
                                        255, 255, 255, 255,
                                        255, 255, 255, 255]);
      // set up for group blinking
      dimmer.setReg(dimmerplug.modes.MODE2, 0x34);
      // blink rate: 0 = very fast, 255 = 10s
      dimmer.setReg(dimmerplug.modes.GRPFREQ, 50);
      // blink duty cycle: 0 = full on, 255 = full off
      dimmer.setReg(dimmerplug.modes.GRPPWM, 100);
      // let the chip do its thing for a while
      setTimeout(function(){callback(null, '1');},10000);
    },
    function(callback) {
      // set up for group dimming
      dimmer.setReg(dimmerplug.modes.MODE2, 0x14);
      // gradually decrease brightness to minimum
      for (i = 100; i < 255; ++i) {
          dimmer.setReg(dimmerplug.modes.GRPPWM, i);
      }
      setTimeout(function(){callback(null, '2');},2000);
    },
    function(callback) {
      while(true){
        brightness = ++level;
        if (level & 0x100){
            brightness = ~ brightness;
        }
        r = level & 0x0200 ? brightness : 0;
        g = level & 0x0400 ? brightness : 0;
        b = level & 0x0800 ? brightness : 0;
        w = level & 0x1000 ? brightness : 0;
        // set all 16 registers in one sweep
        dimmer.setMulti(dimmerplug.modes.PWM0, [w, b, g, r,
                                      w, b, g, r,
                                      w, b, g, r,
                                      w, b, g, r]);
      }

    },
  ],
  function(err, response) {
    console.log(response);
  }
);

A bit weird, but it works. The Node demo worked just as well as the JeeNode version, so another part of this staircase project is done – the power MOSFETs have already been ordered, so as soon as they arrive I can connect real LED strips to the Dimmer Plug – can't wait to see the results!
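And just to illustrate the kind of interval-based 'pattern' I have in mind for the staircase later on (a sketch only, nothing I have tested on real strips yet – the timing and the number of channels are guesses):

var DimmerPlug = require('./dimmerplug');

var dimmer = new DimmerPlug();
dimmer.initialize();

// sweep: switch the strips on one after another, bottom to top,
// one strip every 300 ms (both numbers are just placeholders)
function sweepUp() {
  var strip = 0;
  var timer = setInterval(function () {
    dimmer.setReg(DimmerPlug.modes.PWM0 + strip, 255);   // this strip fully on
    if (++strip >= 12) clearInterval(timer);             // all 12 strips done
  }, 300);
}

sweepUp();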

Dimmer Plug connected to Raspberry Pi

Oh, and in the meantime, it might be good to start working on the real renovation job as well.. 😉

Staircase project, part 1: the Dimmer Plug

Renovation is always a good opportunity to think about some nice automation projects, because renovating almost always means (partly) breaking things down and then rebuilding them in a somewhat different way. That's the right time to do a bit extra – drilling holes, pulling wires, installing cable ducts and so forth.

So when we decided to renovate our staircase, I immediately had a great plan: mounting LED strips under the steps to light the stairs. Automatically of course, and with some cool gadgets thrown in as well 😉

I also got the idea to use 2 light barriers, one at each end of the stairs, to detect whether someone is walking up or down and to control all 12 LED strips independently; so if someone goes upstairs, the LED strips increase in brightness one after another, starting with the lowest strip and going up. And if someone goes downstairs, the strips go on from top to bottom. Sounds nice…

OK, but how do I control 12 LED strips independently? And not just on/off, but also in brightness. I didn’t like the idea of using 4 RGB LED controllers, so I started searching for a >12 channel LED controller. After some days I finally got the inspiration I needed – a JeeLabs Dimmer Plug! The Dimmer Plug uses a PCA9635 IC to drive and dim up to 16 LEDs independently.. that should do it! The Dimmer Plug uses I2C to communicate with the outside world, so that shouldn’t be a problem either.

Yesterday evening I soldered a JeeNode, finished the Dimmer Plug (headers etc.), connected four 3 mm LEDs to the Dimmer Plug, uploaded the dimmer_demo sketch from the JeeLib library to the JeeNode, and saw all 4 LEDs doing their own 'thing' independently; so far so good.

JeeNode + Dimmer Plug

But why should I use a JeeNode to control the Dimmer Plug? I mean, I have a couple of Raspberry Pi‘s running here, I could just as well use a Raspberry Pi (RPi) for the I2C communication, right? That saves me the hassle of talking from a RPi to the JeeNode, which in turn talks to the Dimmer Plug… so I decided to leave the JeeNode out and connect the Dimmer Plug directly to the Raspberry Pi.

Connecting the Dimmer Plug to the Raspberry Pi will be done in the next couple of days…  part 2 of the ‘staircase project‘ !

Migrating to the future has begun

I think I’ve got it. For now… Almost a year ago I realized that something had to change; my Domotica system grew too fast, became too big to keep it all in a single executable: stability and flexibility were the two main issues that had to be addressed.

During the last 12 months I tried several things to find a better solution on how to gradually rebuild my system; ZeroMQ, SimpleCortex, MQTT, FEZ, Raspberry Pi, Node.JS, Python, Netduino, they were all tried and tested for some time. And (for me) the winners are: Raspberry Pi, MQTT and Node.JS.

The power of Node.JS enables me to very quickly develop a stable hardware driver and combining this with a very small, low power & cheap Linux box like the Raspberry Pi to run those drivers is absolutely unique in my opinion; and MQTT is the super-glue that makes all the different parts of the system talk to each other, from Arduino sketch to VB.Net application.

The last weeks have been quite busy for me, so very little time was left for working on Domotica, but with some hours here and there I still managed to write 5 drivers for parts of the hardware I'm using in my Domotica system. So since a week or two I have a Raspberry Pi here that has been running those 5 replacement drivers flawlessly – for my RooWifi (Roomba), RFXCOM RF receiver, Mobotix security camera light sensor, HA7Net with 10 1-Wire sensors attached to it, and for my Remeha Calenta boiler. The last one is one of the most CPU- & I/O-intensive drivers I have, but the Node versions of all those drivers work perfectly on a single RPi:

uptime

Still enough processing power left to add some more drivers, don’t you think?

I made some changes to my monolithic Domotica system so that it accepts 'raw' device information from the outside world by means of MQTT, I automated starting the drivers after booting the RPi, and everything has been running great so far. I still have a lot of things to do though, mostly regarding maintenance & ease of use; some of those issues have already been addressed, others need some more time to find the best solution:

  • backup procedure for the RPi;
  • remotely controlling the RPi and its drivers;
  • supervising the RPi and its drivers;
  • storing global configuration data elsewhere.

So I still have some things to do before I can concentrate on migrating all my hardware drivers to the RPi, but I think I should be able to do one driver per week (that includes developing, testing and accepting it as a reliable replacement). The advantage I have is that I already have thoroughly tested Delphi code with all the workarounds for hardware-specific peculiarities in it; for example, I know that the Remeha Calenta sometimes doesn't respond to a query, so I already know the new driver on the RPi needs to be able to handle that – knowing all those peculiarities will certainly speed things up.
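For that Calenta peculiarity, the Node drivers will probably end up with a pattern like this – query, timeout, retry. Just a sketch of the idea; the function names and the 2-second timeout are made up:

// Sketch: query a device that sometimes stays silent. sendQuery() stands for
// whatever actually writes the request to the boiler and calls back on a reply.
function queryWithRetry(sendQuery, onReply, retriesLeft) {
  var timer = setTimeout(function () {
    if (retriesLeft > 0) {
      queryWithRetry(sendQuery, onReply, retriesLeft - 1);   // no answer: try again
    } else {
      console.log('no response, giving up for now');
    }
  }, 2000);

  sendQuery(function (reply) {         // a reply did come in
    clearTimeout(timer);
    onReply(reply);
  });
}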

Another advantage is that all my hardware interfaces are connected to my LAN, so I don’t have to worry about RS-232, -485 or USB stuff, an IP address is enough for the RPi to communicate with the hardware interface.

So if all goes well, my Domotica system will be stripped of all its built-in drivers in about 30 weeks or so (cause that’s about the number of drivers I’m using right now) and all those drivers will be migrated to the RPi. Sounds doable, and I hope this will still leave some time to do other things as well, like adding more hardware, software and features to my system.

Yeah I know, I’m always too optimistic… 😉

RooWifi, Node.js and MQTT working together

As I wrote yesterday, the RooWifi has a Web Interface and it can also connect to a remote TCP server. The RooWifi gives priority to the TCP server connection over the Web Interface, so if there's a remote TCP server running and accepting connections, that TCP server is in full control. The IP address and port number can be set from the RooWifi Web Interface:

RooWifi TCP server settings

This afternoon I tried to get the RooWifi to connect to a TCP server. I decided to use Node.js this time. I've used Node.js before and I wanted to see if I could get it running on one of my Raspberry Pi's, create a TCP server, accept connections, parse a JSON payload and send the results to an MQTT broker (mosquitto).

I remembered a set of commands on JeeLabs to install Node.js, so I used that:

sudo usermod -aG staff pi && sudo reboot
v=v0.10.7
cd
sudo curl http://nodejs.org/dist/$v/node-$v-linux-arm-pi.tar.gz | tar xz
cp -a node-$v-linux-arm-pi/{bin,lib} /usr/local/

The first script I made was a TCP server that kept track of the clients that connected and disconnected and wrote the incoming data to the console:

var net = require('net');
var connections = [];

net.createServer(function (socket) {
  socket.name = socket.remoteAddress + ":" + socket.remotePort;
  connections.push(socket);

  socket.on('data', function (data) {
    var newDate = new Date();
    var time = newDate.toLocaleTimeString();
    console.log(time + " | " + socket.name + " < " + data);
    var obj = eval('(' + data + ')');
    console.log(time + " | temp=" + obj.roomba.temp);
    console.log(time + " | dirt=" + obj.roomba.dirt);
  });

  socket.on('end', function () {
    var newDate = new Date();
    var time = newDate.toLocaleTimeString();
    console.log(time + " | " + socket.name + " ended the connection");
    connections.splice(connections.indexOf(socket), 1);
  });
}).listen(8001);

console.log("Server listening on port 8001\n");

That’s it?? Yep.. 26 lines of code, wow. The output looked like this:

19:57:29 | 192.168.10.201:1561 < { "roomba": { "status": "3", "cleaning": "0", "battery": ":0", "temp": "36", "dirt": "0" } }
19:57:29 | temp=36
19:57:29 | dirt=0
19:57:34 | 192.168.10.201:1561 ended the connection

I also installed the MQTT package from adamvr:

npm install mqtt

I added an MQTT client to the script so I could publish all the information inside the JSON data, and made some preparations to receive commands from the outside world as well:

var net = require('net');
var mqtt = require('mqtt');
var connections = [];

var mqttc = mqtt.createClient(1883, '192.168.10.17', {
  keepalive: 30000
});

mqttc.on('connect', function () {
  mqttc.subscribe('command/roomba');
  mqttc.on('message', function (topic, message) {
    console.log('topic: ' + topic + ' payload: ' + message);
  });
});

net.createServer(function (socket) {
  socket.name = socket.remoteAddress + ":" + socket.remotePort;
  connections.push(socket);

  socket.on('data', function (data) {
    var newDate = new Date();
    var time = newDate.toLocaleTimeString();
    console.log(time + " | " + socket.name + " < " + data);
    var obj = eval('(' + data + ')');
    for (var key in obj.roomba) {
      console.log(time + " | " + key + " " + obj.roomba[key]);
      mqttc.publish('/value/roomba/' + key, obj.roomba[key]);
    }
  });

  socket.on('end', function () {
    var newDate = new Date();
    var time = newDate.toLocaleTimeString();
    console.log(time + " | " + socket.name + " ended the connection");
    connections.splice(connections.indexOf(socket), 1);
  });
}).listen(8001);

console.log("Server listening on port 8001\n");

Still a very tiny script! I started the script in the background and let it run for a while (it’s still running):

node tcpmqtt.js &

Now let’s see if I can get the RooWifi information visible on another machine, my Win7 PC for instance:

RooWifi data published

Bingo… Now it's easy to write a web page to display the information, store certain values as historical data in my SQL database, or control the Roomba from any user interface (touchscreen, smartphones, tablets) – whatever you can think of!

Of course, I still have to add the code to control my Roomba, but that’s just a matter of time and will probably add just a couple of lines of code to the script.
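It will probably boil down to something like this – forwarding whatever arrives on the command topic to the connected RooWifi socket. The exact command syntax the RooWifi expects is something I still have to figure out, so consider this a sketch:

// Sketch: pass MQTT commands on to the Roomba; only the forwarding idea
// counts here, the actual RooWifi command format is an assumption.
mqttc.on('message', function (topic, message) {
  if (topic === 'command/roomba') {
    connections.forEach(function (socket) {
      socket.write(message + '\r\n');
    });
  }
});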

What I've learned today is that the remote TCP server feature of the RooWifi is the best way to monitor & control your Roomba, and that Node.js is very powerful and, most importantly, works the way I think: event-driven.

Time-lapse Video with the Raspberry Pi

Last Thursday I taped the Raspberry Pi Camera Board to the glass, pointed it towards the sky and just let it run for 24 hours… the result is at the bottom of this post.

Camera taped to the window

I updated the RPi to the latest firmware, because I read that this would bring back the feature of turning off the camera LED with the 'disable_camera_led=1' setting in config.txt. And since I had never done a firmware update before, I thought this was a good reason to give it a try.

sudo apt-get update && sudo apt-get -y dist-upgrade
sudo apt-get install rpi-update
sudo rpi-update
sudo reboot

There's another workaround for disabling the LED though: a small Python script. I mounted a NAS share on my RPi so that I didn't have to worry about the size of all the stills that would be generated by this time-lapse adventure; for that I had to edit /etc/fstab and add the following line:

192.168.10.1:/volume1/exch /mnt/exch nfs nouser,atime,auto,rw,dev,exec,suid 0 0

… and of course I had to add the appropriate privileges on my NAS for the RPi.

On Wednesday around 23:00 I started the time-lapse run:

cd /mnt/exch/stills
raspistill  -o still%06d.jpg -t 999999999 -tl 10000 &

This will start raspistill and put it in the background. I closed the connection to the RPi, shut down my PC and went to bed. The next morning I checked the ‘stills’ directory on my NAS from my Android phone with ES File Explorer and saw a couple of thousand files – it’s still working, great 😉

Thursday evening, after 10 o'clock or so, I killed the raspistill process and I had about 8000 stills. But unfortunately there was no time to do something with them; for that I had to wait until Friday evening.

For creating the time-lapse video from the stills I used Sony Vegas Movie Studio HD and opened the huge set of stills this way (Project > Import Media):

Opening the stills

This way you’ll see all the stills as 1 single ‘file’ in the Project Media. I dragged the media to the Video track, set the rendering quality I wanted to use and about 8 minutes later the video was ready, 547 MB in size. Uploading to Youtube took a bit longer though… about 5 hours…

It was very cloudy that day, so the video is not really that entertaining.. but it was fun to do; let’s hope the weather this summer will give me the opportunity to make some nicer videos than this one… 😉

The Raspberry Pi Camera Board

More for fun than anything else, I bought a Raspberry Pi Camera Board about 10 days ago. Yesterday it arrived. It’s time to play 😉

The first thing I did was connect the Camera Board to one of my RPi's. It's relatively easy to do. The connector that has to be used is the one between the HDMI and Ethernet connectors. Gently pull it up and insert the cable so that the silver-colored side faces the HDMI connector. When the cable is inserted, push the connector down again (there's a video here explaining it all in more detail).

I started with a fresh SD card (I have to start labeling them!) and opened an SSH session to the RPi. The following sequence of commands is necessary to get your RPi into shape (i.e. fully upgraded) so that it can handle the Camera Board:

sudo apt-get update

This will synchronize the package index files on your RPi, so that once this command has completed, your RPi ‘knows’ all about the available packages. This is necessary to perform the next command:

sudo apt-get upgrade

This will install the newest versions of all packages currently installed on the RPi, based on the information retrieved with the previous command. You can get yourself a cup of coffee now, this will take a while 😉

sudo raspi-config

To enable camera support, which is not enabled by default, you'll have to run raspi-config, enable camera support (the option is available in the main menu) and do a reboot after that.

From here on you can use the Camera Board to capture JPEG images, videos and so on. However, assuming that would all work out of the box, and because I wanted to stream video to my PC, I also installed VLC:

sudo apt-get install vlc

OK, all is still going smoothly… I found a nice How-To that helped me get raspivid & cvlc to produce a video stream on port 8090 of the RPi:

raspivid -o - -t 99999999999 -hf -fps 25 | cvlc -vvv stream:///dev/stdin --sout '#standard{access=http,mux=ts,dst=:8090}' :demux=h264

Now all that was left to do was starting VLC player on my PC, tell VLC where to get the video stream (http://<RPi-IP-address>:8090) and there it was:

VLC snapshot

The image above was created with the snapshot-feature of the VLC player; clicking the image will show the full-res HD snapshot of 1920 x 1080 pixels, converted to jpeg to reduce the file size a bit. Not bad… I mean, I’ve seen worse on >100 Euro IP cameras.

So, what's next? Setting up the Camera Board was easy and I like what I see, so what am I going to use it for… cause this camera board is too neat to end up on a shelf! But first there are a couple of things I want to explore further. For instance, I would like a somewhat longer ribbon cable between the board and the RPi; I want to reduce the lag in the video stream; and I also want to be able to control the RPi/camera remotely (start/stop streaming, taking snapshots triggered by whatever goes on in and around our house, etcetera). Adding PT (Pan & Tilt) control to the camera would be nice too, like I did some years ago with Arduino & XBee modules. Easy access to the images and captured video from my network would be handy as well. Oh, and a time-lapse video is something I'd also like to try.

But before all that, I first need to find a way to protect the Camera Board from wet cat noses, cause with 3 curious cats running around in our house I know that leaving it unprotected will result in a short life of the Camera Board. In short: enough things to keep me busy for quite some time!

Backing up Raspberry Pi to Synology NAS

Given what could have happened 2 weeks ago if I hadn't had a good backup plan, I realized that I had to do something about those Raspberry Pi's that have started invading our house and about how I'm backing up the code running on those small Linux computers.

Cause what happens in practice is that you open a telnet/ssh session on the RPi, start coding, debugging & testing until it works. And by the time you’re finished with that and made sure that everything starts automatically after a reboot etcetera, you close the session and you’re done – on to the next adventure; right? Wrong.

This leaves all the code on the RPi, on a 4GB SD card with no backup at all… sounds dangerous! I just had to do something about that, before I would lose something.

The first thing that popped into my mind was rsync, a tool to sync files and directories to another machine. And since I have 2 Synology NAS devices, I thought it would be smart to back up my RPi's to one of those instead of making images of the SD cards, because the latter option would result in downtime each time I want to do a backup; yuck. So rsync it will be.

All the examples I found that worked with rsync made use of ssh, so that was the first hurdle I had to take – I had never done much with public & private keys, key generators and so on. But with some help I managed to get things going. First you have to generate a key with 'ssh-keygen -t rsa' on the NAS. The resulting file with the public key (default file name id_rsa.pub) has to be added to the authorized_keys file on the RPi. Normally you should be able to use the ssh-copy-id command for that, but for some reason that didn't work on my NAS, so I had to manually copy the file to the RPi and do a 'cat id_rsa.pub >> authorized_keys' to make that happen.

After that I could ssh from my NAS to the RPi without having to enter a password (which is necessary for the script to run unattended!).

OK, the next item on the list was making an rsync script.

After some tries I came up with this:

/usr/syno/bin/rsync -avz --delete --exclude-from=/volume1/backup/_scripts/rsync-exclude.txt -e "ssh -p 22" root@raspdev.hekkers.lan:/ /volume1/backup/raspdev/ >> /volume1/backup/raspdev_backup.log 2>&1

And it worked! In about 10 minutes the rsync script completed its work:

RPi backup on Synology

Last but not least, scheduling the backup job. It seems that since DSM 4.2 the Synology Task Scheduler has an option to run user-defined scripts. Just what I need:

Synology Task Scheduler

Cool.. so now my 'master' Synology NAS (a DS209) takes care of backing up the RPi and 1 hour later a Network Backup job makes a copy to my other NAS (DS109); so now I have 3 versions – the original and 2 copies. Try to beat that, Mr. Murphy! 😉

It's roughly the same way I back up my Windows servers – robocopy copies the important directories to the first NAS and Network Backup does the rest.

Now some statistics on how rsync is performing. Here’s a part of the initial rsync backup log:

sent 972675 bytes  received 1363869597 bytes  2347106.23 bytes/sec
total size is 1360002151  speedup is 1.00

It does take some processing power to perform the backup though:

Backup process load

70% CPU for process IDs 3914 and 3921 (3rd and 4th line) – that's quite a lot. Removing the 'z' option (compression during transfer) from the rsync command resulted in lower CPU usage, and I might try running the backup with a lower priority ('nicer', with the nice command) to reduce the CPU impact even more – the backup is not that time-critical, but other processes running on the RPi might be… I'll have to do some tests to see what works best.