Making a scene

Being able to define scenes is one of the most important aspects of a Home Automation (HA) system. Without the ability to define your own scenes (things you want to happen automatically), a HA system degrades to nothing more than a big, expensive monitoring system or just a remote control. That’s not Home Automation, that’s Home Control!

Scenes let you automate things: lowering the roller shutters so that the temperature inside doesn’t become unpleasantly high, switching lights on or off based on motion, sunset or sunrise, sending notifications to your smartphone, etcetera. Anything is possible, as long as your HA system is equipped with the right sensors (information) to trigger the specific scene you’d like it to execute for you.

When I started (re)developing my system in NodeJS, one of the first items on my to-do list was finding a starting point for creating my own scenes. My most important requirements: preferably developed for Node.JS, with its own DSL (Domain Specific Language), the ability to embed simple if/else statements and calculations inside the scene definitions, and crontab-ish triggered scenes. While exploring whether NodeJS was a good choice for rewriting my HA system, I looked at HomA (a NodeJS based framework for building smart homes) and noticed that it uses nools as a rules engine. Nools is a Rete based rules engine, and when I read the documentation it looked like a good starting point. HomA also provided a good starting point in terms of combining MQTT & nools.

In fact, I shamelessly copied this file from the HomA repository, added the things I missed and changed what I wanted to do differently. Here’s a couple of things worth mentioning about how I integrated nools in my HA system and how I got scenes to actually automate things.

Let’s have a quick look at how easy it is to use nools as the scene (or rules) engine. I’ll take some of my own scenes as examples.

rule testrule {
  when {
    m1: Message m1.t == 'testsensor/status' && m1.changedTo('open');
  }
  then {
    unchange(m1);
    log('Execute rule Office testlight on');
    execute('command','{"address":"testactor", "command":"ON"}');
  }
}

Easy, right? When the status of the testsensor (in this case a Visonic door/window sensor) on my desk changes to ‘open’, the testactor (a PLCBUS on/off module), which is also on my desk, will switch on a light.

But with this scene, when I close the sensor again, nothing happens. I would need a second rule, similar to the one above, to switch off the light when the sensor changes back to ‘closed’.

But that won’t be necessary, cause I can add some code to the rule, like this:

rule testrule {
  when {
    m1: Message m1.t == 'testsensor/status' && m1.changed;
  } then {
    unchange(m1);
    log('Execute rule Office test open');
    if (m1.p === 'open') {
      execute('command','{"address":"testactor", "command":"ON"}');
    } else {
      execute('command','{"address":"testactor", "command":"OFF"}');
    }
  }
}

With this scene, the light ‘follows’ the door/window sensor. And it’s totally independent of any particular brand or technology, cause all those devices are virtually interconnected by my HA system.

Here’s another one; this one switches off the pump (via a Plugwise Circle) of the floor heating when the temperature in the living room exceeds the thermostat setpoint of my Honeywell Chronotherm:

rule floorpump_off {
  when {
    c: Clock isTrue(c.inMinutes([0,30]));
    m1: Message m1.t == 'otgw/roomtemp';
    m2: Message m2.t == 'otgw/roomsetpoint';
  } then {
    unchange(c);
    log('Execute rule Floor pump off?');
    if(m1.p > m2.p){
      execute('command','{"address":"236D7E", "command":"off"}');
    }
  }
}

Especially the ‘code’ part is very nice to have – you can even call functions defined elsewhere in the rules file; nice!

As I said earlier, there were some things I had to do to make nools work; here are some of the things I had to take care of.

The amount of topics

I’ve got a MQTT root topic called value/. This topic contains all the so-called device values available in my system. They’re all retained, so by letting the rules engine subscribe to the value/# topic, the rules engine will always have access to all the last-known device values. With device values I also mean a great deal of virtual device values, values not produced by physical devices, but mostly calculated ones. Examples of those virtual device values are: GPS locations of our house and ourselves, todays usages of power and gas, position of sun, moon, house mode (eco, deepsleep, awake) and so forth.

All this adds up to a total of more than 1000 device values, so in practice there’s always something changing, almost every second: a temperature, usage value, motion, location, whatever. But nools was just too busy with all those ever-changing values it had subscribed to. So the first task for me was to reduce the number of topics it would subscribe to: no longer the complete value/# MQTT topic, but only those topics that nools really needs, based on the contents of my rules (scenes) file. So I wrote a small routine that parses the rules file and extracts the topics being actively used in it. This reduced the number of topics nools had to subscribe to, to just 15. That was a big relief for the Raspberry Pi on which nools initially ran, cause it drastically reduced the CPU load caused by nools ;-)
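That routine boils down to scanning the rules file for topic literals; a minimal sketch of the idea (the regex and the `m1.t == '...'` pattern are assumptions based on the rules shown in this post):

```javascript
// Extract the MQTT topics that are actually used in a nools rules file,
// so we only subscribe to those instead of the whole value/# tree.
function extractTopics(rulesText) {
  var topics = {};
  var re = /m\d+\.t\s*==\s*'([^']+)'/g;
  var match;
  while ((match = re.exec(rulesText)) !== null) {
    topics[match[1]] = true; // de-duplicate
  }
  return Object.keys(topics);
}

// Example: feeding it the testrule from above yields a single topic.
var rules = "rule testrule { when { m1: Message m1.t == 'testsensor/status' && m1.changed; } }";
console.log(extractTopics(rules)); // [ 'testsensor/status' ]
```

Subscribing to only the extracted topics instead of value/# is what brought the count down to 15.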

Time related stuff

In HomA, the matching of the rules with the facts was only performed when a new MQTT message was received, which made it hard to implement rules with a time trigger.
Taking into account that I’m doing my best to keep the workload for nools as small as possible, I added some code so that matching is also performed at a minimal interval of 1 minute.
Now I could do things like shown below, executing a script every 4th minute of the hour:

rule bwiredxml {
  when {
    c: Clock isTrue(c.isMinute(04));
  } then {
    forget(c);
    log('Execute rule Bwired XML');
    execute('command','bwiredxml');
  }
}
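Under the hood this comes down to (re)asserting a Clock fact into the nools session once a minute. A sketch of what such a Clock fact could look like; the helper names match the ones used in my rules, the rest (constructor, timer) is an assumption:

```javascript
// A Clock fact, re-asserted every minute, so time-based rules get a
// chance to match even when no MQTT message arrives.
function Clock(date) {
  var d = date || new Date();
  this.hour = d.getHours();
  this.minute = d.getMinutes();
}

Clock.prototype.isMinute = function (m) {
  return this.minute === m;
};

Clock.prototype.inMinutes = function (list) {
  return list.indexOf(this.minute) !== -1;
};

Clock.prototype.hoursIsBetween = function (from, to) {
  return this.hour >= from && this.hour <= to;
};

// Asserting it once a minute would look something like:
//   setInterval(function () { session.assert(new Clock()); session.match(); }, 60000);

var c = new Clock(new Date(2014, 5, 20, 16, 30)); // 2014-06-20 16:30
console.log(c.inMinutes([0, 30]), c.hoursIsBetween(15, 22)); // true true
```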

Triggers & conditions

A scene is triggered by something: a door opening or closing, motion, temperature, position of the sun, your smartphone, whatever. But that’s just a small part of the story; 90% of the scenes also need one or more conditions to be met before the scene is allowed to execute. Think of lowering a roller shutter while the window is open: a temperature triggers the roller shutter going down, but when the (outward opening) window is open, you don’t want that to happen of course.

Or take the sunset scene for example; it will probably turn on some lights for you (front and back door, garden), make a couple of roller shutters go down and do some more stuff you’d otherwise do manually. Using the calculated sunset time as a trigger is not good enough, cause that doesn’t take into account whether it’s cloudy or not (and therefore the brightness outside & inside). OTOH, what if there’s a heavy storm with lots of dark clouds in the middle of the day? You don’t want the sunset scene to execute then either; so a simple light sensor alone won’t suffice.

The scene should only be allowed to execute when the measured light outside drops below a certain minimum value and the current time is within a certain margin of the calculated sunset; for that, conditions come to the rescue. This is what a sunset rule could look like:

rule sunset {
    when {
        m1: Message m1.t == 'mbtxls1/light' && m1.droppedBelow(170);
        c : Clock;
    } then {
      if(c.hoursIsBetween(15,22)) {
       ...
       ...
       ...
      }
    }
}

That’s better; now the scene will only be executed when it’s getting dark and time is between 15:00 and 22:59.

Even better would be this:

rule sunset {
    when {
        m1: Message m1.t == 'mbtxls1/light' && m1.p < 170;
        c : Clock;
    } then {
      if ((abs(sunset - c) < 1800) || (c - sunset > 1800)){
       ...
       ...
       ...
      }
    }
}

With this rule, the sunset scene would only be executed when dusk set in and:

  • the current time is within half an hour (1800 seconds) of the calculated sunset, or
  • the calculated sunset is more than half an hour ago.

That should do the trick ;-)
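Written out as a plain function (a sketch; times are Unix timestamps in seconds):

```javascript
// Should the sunset scene be allowed to run? Both arguments are Unix
// timestamps in seconds: the current time and the calculated sunset.
function sunsetWindowOpen(now, sunset) {
  return Math.abs(sunset - now) < 1800 || // within half an hour of sunset
         now - sunset > 1800;             // or sunset was over half an hour ago
}

var sunset = 1403283600; // some calculated sunset time
console.log(sunsetWindowOpen(sunset - 900, sunset));  // true: 15 min before sunset
console.log(sunsetWindowOpen(sunset - 7200, sunset)); // false: 2 hours too early
console.log(sunsetWindowOpen(sunset + 7200, sunset)); // true: long after sunset
```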

Creating scenes that will always do the right thing is probably the hardest part. Every time I think of a new scene that should automate things for us, the first thing I realize is that I’d need more sensors to make it really work: always when it should, never when it shouldn’t. That last part is the hardest and most important, cause there’s nothing more irritating than scenes getting in your way, for example switching off lights when you don’t want that to happen…

Sticking to the facts

HomA retracts a fact after it has triggered a rule (if you’ve come this far and don’t know what that means, start reading the nools documentation now); but that’s not that handy actually, cause you might wanna use that same fact (e.g. that door sensor) as a condition in other rules as well – in that case, retracting will break things. So instead of doing

  when {
    m1: Message m1.t == 'testsensor/status' && m1.p == 'open';
  } then {
    forget(m1);
    ...
  }

it’s better to do it this way:

  when {
    m1: Message m1.t == 'testsensor/status' && m1.changedTo('open');
  } then {
    unchange(m1);
    ...
  }

This no longer retracts (forgets) the fact, but just stops it from triggering more than once (unchange) and keeps the fact available to be used in other scenes as a condition.
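Conceptually the difference between forget and unchange looks like this; a sketch, not the actual implementation (session.modify is the nools call I’d expect to use here):

```javascript
// forget() retracts the fact completely; unchange() only clears the
// 'changed' flag, so the fact stops re-triggering rules but stays
// available as a condition in other rules.
function Message(topic, payload) {
  this.t = topic;
  this.p = payload;
  this.changed = true;
}

Message.prototype.changedTo = function (value) {
  return this.changed && this.p === value;
};

// 'session' stands for the nools session this fact lives in
function unchange(session, fact) {
  fact.changed = false;    // no more (re)triggering...
  session.modify(fact);    // ...but keep the fact in working memory
}

var m = new Message('testsensor/status', 'open');
console.log(m.changedTo('open')); // true: triggers the rule once
m.changed = false;                // what unchange() effectively does
console.log(m.changedTo('open')); // false: no re-trigger, fact still usable
```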

One small disadvantage (the way it’s working now) is that it’s still not possible to define multiple rules with the same trigger(s): after the first rule fires, the fact is modified to not trigger again, while more rules may still be waiting to be evaluated; I’ll have to come up with some sort of solution for that.

If you want to see how powerful nools really is, have a look at these (non-Home Automation related) examples – they show how great nools really is; have fun!

 Phew, it was really hard to finish a post with all that soccer on TV ;-)

Adding a smart meter to the Fibaro HC2

This is (sort of) a follow-up on my previous post about the cheap Serial to Ethernet converter. It has kept me busy for a couple of evenings, for several reasons:

  • I wanted to use the Arduino Nano as a web server;
  • I wanted the Nano to return a JSON object containing the smart meter data;
  • I had to learn more about the Fibaro HC2 on the job;
  • the new sketch was unstable.

It was mostly that last item that kept me busy for some time – well, waiting describes it better actually, cause the sketch I initially wrote did what it should, but wasn’t stable: it stopped working after 2 hours, the next time after 20 hours… totally unpredictable. I also saw the Nano sometimes missing incoming serial P1 data. And of course, these sorts of things never happen while you’re there to witness them: I was always either at work or asleep (or both ;-) ).

So for a few days I tried to improve things and had to wait until the next day to see if the changes fixed the issue. It wasn’t lack of free RAM, it wasn’t the USB hub misbehaving, it wasn’t RF interference, nor was it a grounding issue. After I found a solution, the cause was very obvious actually: the Nano doesn’t do multitasking. Somehow, receiving and processing serial data and handling an Ethernet shield interfered with each other – I still don’t know why, but after I prevented the web server part of the sketch from servicing HTTP requests while smart meter data was being received, the problem was solved. Detecting the beginning and end of a P1 packet is easy: it starts with ‘/’ and ends with ‘!’. So now the loop() looks like this:

void loop() {
  c = mySerial.read();
  switch (c) {
    case '/':
      receiving = true;
      ...
      break;
    case '!':
      receiving = false;
      ...
      break;
  }
  ...
  if(!receiving) {
    if (ether.packetLoop(ether.packetReceive())) {
      ...
      ether.httpServerReply(readingsPage());
    }
  }
}

With a P1 packet being about 500 bytes in size at 9600 baud, this ‘stalls’ the response for about half a second (max.), which should not be a problem.
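The back-of-the-envelope math, assuming 8N1 framing (10 bits on the wire per byte):

```javascript
// Worst-case time the HTTP response is stalled while a P1 packet arrives:
// at 9600 baud with 8N1 framing, each byte takes 10 bits on the wire.
var packetBytes = 500;
var bitsPerByte = 10; // start bit + 8 data bits + stop bit
var baud = 9600;
var stallSeconds = packetBytes * bitsPerByte / baud;
console.log(stallSeconds.toFixed(2) + ' s'); // 0.52 s
```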

After I finally had a sketch that kept working for >48 hours, it was time to have a look at the Fibaro HC2 and try to get the smart meter information (visibly) available in there.

Because I’ve spent very little time with the Fibaro HC2, please don’t put too much weight on what I have to say about it, but there’s one word that’s now associated with the HC2 in my head: impressive. The HC2 case feels solid, looks good (though it gets warm), the UI looks good, the power consumption is about 13.5 W doing nothing (I had no Z-Wave hardware to add), and the HC2 has something called Virtual Devices. That’s what interests me most: using the HC2 is not my thing, but exploring what I can do to add my own things to it is.

So after I hooked up the HC2 to my LAN, performed some upgrades and changed some settings, I immediately focused on those Virtual Devices. I know I’ve just scratched the surface so far, but being able to create a Virtual Device within minutes, get the smart meter data from the Nano web server into Virtual Device properties and have it displayed nicely as well, with just a lua script of ~20 lines of code – that’s impressive! Not bad for a beginner ;-) And for a price <20 Euro! Add a database solution and charting, and I think the HC2 has a great future ahead.


Although I liked what I’ve seen so far, I did notice some things that might not be so handy.

The lua script I wrote was only 20 lines or so, but editing code from within the browser, in a small window, is not as comfortable as editing code can/should be. More importantly, everything a script needs has to be inside it (AFAIK): suppose you have to calculate a CRC, then the CRC code has to be inside every script that needs it; there’s no way of including source code files that will probably be used in more than just 1 ‘main loop’. I’d really like to see some sort of ‘Plugin’-ish way to add support for exotic hardware: a bunch of files with code, UI pages that can be embedded in the HC2 UI (e.g. for configuration stuff), creating plugin-specific triggers, etcetera. In other words: really embed your own creations into the HC2.

If Fibaro can accomplish that, then the HC2 can become the killer solution for Home Automation for a lot of people; with Z-Wave as the base technology, yet still expandable with support for other popular hardware.

Oh my, almost forgot the lua script, here it is:

--[[
%% properties
%% globals
--]]
fibaro:debug("start")
if (nano == nil) then
  nano = Net.FHttp("192.168.10.191", 80)
end
response, status, errorcode = nano:GET("/")
fibaro:debug(response)
local data
data = json.decode(response)
-- not 0 based!
jso = data[1]
v181  = jso.v181
v182  = jso.v182
v170  = jso.v170 * 1000
v2421 = jso.v2421
-- deviceid 7
fibaro:call(7, "setProperty", "ui.lbl181.value", v181.." kWh")
fibaro:call(7, "setProperty", "ui.lbl182.value", v182.." kWh")
fibaro:call(7, "setProperty", "ui.lbl170.value", v170.." W")
fibaro:call(7, "setProperty", "ui.lbl2421.value", v2421.." m3")
fibaro:sleep(10*1000)

Have fun!

Flexible Serial to Ethernet for less than 20 Euro

Nano v3 with Ethernet Shield

Wired connections have always been my favorite for connecting hardware to my Home Automation System. This week I found a new way of doing that – costing only 17 Euros, purchased at AliExpress.

What you see in the image above is an ENC28J60 based Ethernet shield with an Arduino Nano 3 on top of it, measuring about 70 x 20 x 37 mm (with the lower pins cut to half their length). Very small and programmable ;-)

Last Friday 2 sets (of shield & Nano) arrived and I just couldn’t resist giving them a try. Since I’ve still got a Raspberry Pi near the smart meter for the sole purpose of collecting the data it produces, it looked like a good idea to see if I could replace the RPi with this Ethernet-Nano.

Finding a library for the ENC28J60 based shield wasn’t hard: I had already worked with Jean-Claude Wippler’s EtherCard before (it’s still in use in our doorbell controller), so that wouldn’t be any problem. But first I had to solve the problem I had when I connected the Nano to my PC: ‘Device enumeration failed’ is what USBView told me. It took a while before I got the idea to place a powered USB hub in between the two… problem solved.

This Ethernet shield uses pin 10 for CS (Chip Select), but this was easy to change in the code; instead of

if (ether.begin(sizeof Ethernet::buffer, mymac) == 0)

all I had to do was supply an extra parameter and it all worked instantly:

if (ether.begin(sizeof Ethernet::buffer, mymac, 10) == 0) 

There’s one small problem I still have to fix: DNS queries on the Nano time out and I have no clue why; for now I’ve added a ‘fallback’ IP address in the code until it’s fixed.

I used a 2nd Arduino to play the smart meter ‘role’ by sending a P1 datagram to the Nano over a serial connection with an interval of 10 seconds – as close to reality as it could get. I connected the TX pin of the Arduino Ethernet (the bottom one) to the RX pin of the EthernetNano, and all that was left to do was writing a sketch.

Sometimes it looks like everything has already been developed before; the only thing you have to do is find it, or remember where you saw it. That was the case here too. The EtherCard library comes with a couple of examples that I could use for the Ethernet part of the sketch, and I knew Jean-Claude Wippler had blogged about P1 data a couple of times: strip the RF12 code, replace it with Ethernet code and I would be ready to go… and that’s exactly what I did ;-)

Here’s the code for my first EthernetNano handling the smart meter data and uploading it (with HTTP) to my HA system! Bye bye RPi…

/// @dir p1scanner
/// Parse P1 data from smart meter and send as compressed packet over RF12.
/// @see http://jeelabs.org/2013/01/02/encoding-p1-data/
// 2012-12-31 <jc@wippler.nl> http://opensource.org/licenses/mit-license.php

// Changed to work with Ethernet shield, 2014-06 Robert Hekkers

#include <SoftwareSerial.h>
#include <EtherCard.h>

#define DEBUG 1   // set to 1 to use fake data instead of SoftwareSerial
#define LED   0   // set to 0 to disable LED blinking

SoftwareSerial mySerial (7,17 /*, true*/); // rx, tx, inverted logic
#define NTYPES (sizeof typeMap / sizeof *typeMap)
// list of codes to be sent out (only compares lower byte!)
const byte typeMap [] = {181, 182, 281, 282, 96140, 170, 270, 2410, 2420, 2440};

byte Ethernet::buffer[700];
static byte mymac[] = { 0x74,0x69,0x69,0x2D,0x00,0x01 };
const char website[] PROGMEM = "devbox.hekkers.lan";
uint8_t hisip[] = { 192,168,10,179 };
Stash stash;

byte type;
uint32_t value;
uint32_t readings[NTYPES+1];

static bool p1_scanner (char c) {
  switch (c) {
    case ':':
      type = 0;
      value = 0;
      break;
    case '(':
      if (type == 0)
        type = value; // truncates to lower byte
      value = 0;
    case '.':
      break;
    case ')':
      if (type)
        return true;
      break;
    default:
      if ('0' <= c && c <= '9')
        value = 10 * value + (c - '0');
  }
  return false;
}

static void collectData (bool empty =false) {
  if (!empty) {
    for (byte i = 0; i < NTYPES; ++i) {
      Serial.print("@ ");
      Serial.print(typeMap[i]);
      Serial.print('=');
      Serial.println(readings[i]);
    }
    byte sd = stash.create();
    stash.print("0,");
    stash.println(millis() / 1000);
    for (byte i = 0; i < NTYPES; ++i) {
      stash.print(typeMap[i]);
      stash.print(",");
      stash.println(readings[i]);
    }
    stash.save();
    // generate the header with payload - note that the stash size is used,
    // and that a "stash descriptor" is passed in as argument using "$H"
    Stash::prepare(PSTR("GET http://$F/p1/ HTTP/1.0" "\r\n"
                        "Host: $F" "\r\n"
                        "Content-Length: $D" "\r\n"
                        "\r\n"
                        "$H"),
            website, website, stash.size(), sd);
    // send the packet - this also releases all stash buffers once done
    ether.tcpSend();
  }
}

void setup () {
  if (DEBUG) {
    Serial.begin(115200);
    Serial.println("\n[p1poster]");
  }
  mySerial.begin(9600);
  // digitalWrite(7, 1); // enable pull-up
  collectData(true); // empty packet on power-up

  delay(2000);
  if (ether.begin(sizeof Ethernet::buffer, mymac, 10) == 0)
    Serial.println( "Failed to access Ethernet controller");
  if (!ether.dhcpSetup())
    Serial.println("DHCP failed");

  ether.printIp("IP:  ", ether.myip);
  ether.printIp("GW:  ", ether.gwip);
  ether.printIp("DNS: ", ether.dnsip);  

  if (!ether.dnsLookup(website)) {
    Serial.println("DNS failed");
    ether.copyIp(ether.hisip, hisip);
  }
  ether.hisport = 80;
  ether.printIp("SRV: ", ether.hisip);
}

void serialLoop() {
  byte c;
  while(mySerial.available()){

    c = mySerial.read();
    if (c > 0) {
      //c &= 0x7F;
    }
    switch (c) {
      case '/':
        break;
      case '!':
        collectData();
        memset(readings, 0, sizeof readings);
        break;
      default:
        if (p1_scanner(c)) {
          for (byte i = 0; i < NTYPES; ++i)
            if (type == typeMap[i]) {
              readings[i] = value;
              break;
            }
        }
    }
  }
}

void loop () {
  ether.packetLoop(ether.packetReceive());
  serialLoop();
}

I’m sure I’m going to use this EthernetNano more in the future – the very small size, price, built-in flexibility make this a great solution for a lot of things!


Meet the RFXCOM RFXtrx433E


Here’s the newest addition to the ever-growing list of hardware that makes my HA system what it is: the RFXCOM RFXtrx433E USB 433.92MHz Transceiver. The grey enclosure measures 82 x 58 x 22 mm, which is really small compared to the other RFXCOM transmitters and receivers I bought back in 2007. But size doesn’t matter at all in this case: the list of supported actors and sensors seems to have exploded in the last 7 years! The same goes for the number of Home Automation systems that support the RFXCOM products. And if the Home Automation system you use is not listed, there’s an Open Source SDK to make your own plugin/driver for RFXCOM.

The RFXCOM has a USB-B connector and 2 LEDs: the left (red) one is lit while the RFXtrx is booting, while the other (yellow) one lights up when the RFXtrx has received an RF transmission it could decode.

Along with the hardware comes a range of documentation and software to test, configure and use the RFXCOM products; RFXMngr is probably the most important one for most – look here for a complete list of all the available downloads.

So let’s connect this small yet very powerful transceiver to one of my SBCs and see how it works.

I’ve been into Node.JS for a year or so, so the first thing I did after unpacking the transceiver was search for a NodeJS module for it; I found this one, made by Kevin McDermott. Well, life just can’t get easier than this: install the module and a script of <100 lines of code will suffice to receive all your sensors!

Here you see the information from a received RF packet, transmitted by one of my Oregon Scientific Temperature/Humidity sensors – all stored in a convenient JSON object and ready for further processing. What more can you wish for? It just doesn’t get better than this :-)


Well… there is a special reason why this new RFXtrx433E is very interesting in my case: it supports the Somfy RTS protocol. That would mean I can replace an old RFXLAN receiver, an old RFXLAN transmitter and my expensive Somfy RTS485 transmitter with a single product! That would be really nice.

But for that, the reliability of the Somfy RTS protocol is crucial, so I focused on that during my first hours with the RFXtrx433E. Today I ‘paired’ our 12 Somfy roller shutters with the RFXtrx433E so that I could use it to control them. For each shutter I sent a ‘PROG’ command to the Somfy RTS485 transmitter and, after the roller shutter responded with 2 small movements, followed up with an RFXCOM RFY ‘program’ command.

    switch(command.toLowerCase()){
      case 'up':
        rfy.up(deviceID);
        break;
      case 'down':
        rfy.down(deviceID);
        break;
      case 'stop':
        rfy.stop(deviceID);
        break;          
      case 'program':
        rfy.program(deviceID);
        break;          
      default:
        console.log('Command unknown: '+command);        
        break;
    }

After that I tested whether the roller shutters obeyed the ‘up’ and ‘down’ commands sent by the RFXtrx433E; 2 hours later I was done programming all 12 roller shutters. Now I can use my new RFXtrx433E as a remote for all of ‘em – what’s left to do is ‘embed’ the RFXtrx433E in my HA system and see what happens during the next couple of weeks.

Exciting, although I think I already know what the result will be – cause RFXCOM products have never let me down before!

IRTrans: CCF on ARM is a no-go

It’s very easy to forget about certain components of your Home Automation system – especially those that never give any problem… fire and really forget :-)

An example of that is my IRTrans LAN infrared transceiver, which I’ve been using since 2009. The IRTrans needs a so-called irserver program, which is the gateway between your application and the IRTrans hardware. Some time ago I found out that I still had irserver running on my Windows server as a service instead of on one of my Cubietrucks. And since I want to get rid of the Windows server (a VM running on Hyper-V), I had to find a way to move irserver to something Linux based: either a Raspberry Pi, Cubietruck or Odroid.

So I downloaded the Linux source code for irserver and tried to build it – no luck. Hmm, what’s wrong here? It didn’t take long to find out that I was missing a file: ccf.o. The only thing that worked was ‘make irserver_arm_noccf’, but that didn’t sound very hopeful – no CCF? So I searched the IRTrans forum and found a post saying that a ccf.o is available, although not suitable for all types of ARM processors (I assume). I posted a question on the IRTrans forum, waited for about 7 weeks, but got no (useful) response from the IRTrans support department. Strange…

Conclusion: CCF on ARM is a no-go, so I decided to create my own solution. Irserver without CCF meant I had to convert all my Philips Pronto CCF codes to hex. Luckily I found an easy way to do that with the IRTrans GUI Client:

IRTrans CCF to Hex

By not selecting a command in the Command combobox shown above, all the CCF codes for the selected remote are converted to hex. OK, now we’re getting somewhere… I copied all the output to a text file and repeated this for all the remotes. Now I have a single file with all the remotes, commands and hex codes. This file is very easy to parse by my Node.JS IRTrans driver, so with some additional changes in the code this should work.

In the CCF-enabled irserver situation I queried irserver with Agetremotes and Agetcommandlist to find out what commands were available (instead of defining those myself somewhere), but that method is useless with a CCF-disabled irserver in combination with CCF codes. So now that I have to work with a somewhat crippled irserver, I’m gonna use it as a simple hexcode transmitter.

And where I used to send commands to irserver with the Asnd command, I now have to use Asndhex.

My IRTrans driver now parses the new ‘hex code’ file, keeps it in memory and sends the appropriate hex codes based on the commands it receives. So if some UI transmits “upc,yellow”, the driver sends an “Asndhex H3E01000000…” command to irserver instead of “Asnd upc,yellow”.
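That lookup is simple enough; a sketch of the idea, assuming a ‘remote,command,hexcode’ line format (the actual file produced by the GUI client may differ):

```javascript
// Build a lookup table from the exported hex code file, so an incoming
// "upc,yellow" can be turned into an "Asndhex H..." command for irserver.
// The 'remote,command,hexcode' line format is an assumption for this sketch.
function parseHexFile(text) {
  var codes = {};
  text.split('\n').forEach(function (line) {
    var parts = line.trim().split(',');
    if (parts.length === 3) {
      codes[parts[0] + ',' + parts[1]] = parts[2];
    }
  });
  return codes;
}

function buildIrCommand(codes, key) {
  return codes[key] ? 'Asndhex ' + codes[key] : null;
}

// example entries; the hex codes here are made up
var codes = parseHexFile('upc,yellow,H3E01000000\nupc,red,H3E02000000');
console.log(buildIrCommand(codes, 'upc,yellow')); // Asndhex H3E01000000
```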

No changes needed in the User Interfaces, everything is still working as before and all the hex codes I tested do what they’re supposed to do – I haven’t tested them all yet, but the most important ones are working.

Another reason to keep my Windows server ‘up’ is gone – on to the next one!

FTTH arriving soon!

Fiber optic

OK, it will still take some time before we can really start using it, but it’s irreversible now – FTTH (Fiber to the Home) is coming!

And I must say that I’m very pleased about that, because an 8/1 Mbps ADSL internet connection just isn’t enough for us anymore. For a year or so now, our son seems to be constantly downloading games from Steam and/or PSN, our daughter likes to Skype, and both of them behave like YouTube addicts… they’re able to generate so much traffic that everything else (my website, weblog, …) sometimes takes minutes to load :-( . I just can’t handle that; it’s becoming very annoying – sometimes it feels as if we’re still living in the 64 kbps ISDN era… But that’s gonna change soon: with a 50/50 or 100/100 Mbps internet connection those problems should be gone, right? ;-)

I know I could have ‘upgraded’ our internet connection a long time ago by using cable, but when I read statements from our cable company UPC like “Consumers don’t need much upload bandwidth, they only want download” I know that’s not going to work – I want both ways!

Last month the fiber company that’s responsible for connecting the village in which we live to fiber, started with the job. We however, will have to wait until the last quarter of this year before we’ll be connected. Can’t wait!


Home Automation and Voice Control

HAL-9000 (Space Odyssey), Mother (Alien), The Matrix, Jarvis (Iron Man), KITT – who doesn’t know them? And since a few days there’s Jasper: voice control for the Raspberry Pi.
An RPi, microphone, speaker and network connection are all you need (plus the Jasper software package, of course).

Interacting with computers by voice has always been a very appealing feature to have in my Home Automation System. There’s a button on the touchscreen in the living room which controls a light bulb – when you press that button, you hear Darth Vader saying “Yes, Master“. My son and I liked it; it was funny. But there had to be more…

So when 2 Princeton students released ‘Jasper’ a few days ago, I was triggered to revisit the subject ‘voice control’ once again.

My first thought was to give Jasper a try as soon as I had the time, but after reading parts of the API documentation I became a bit hesitant: defining the words the user is allowed to speak (or better: which will be recognized and processed further by Jasper) in the code is not how I’d like to do things. Another thing I didn’t like is that it would become a more or less isolated ‘sub-system’ next to my HA system, answering questions, controlling Spotify and such. Create a module for every type of hardware here in our house? Neh. No chance.

Maybe it’s better to revisit Voicecommand, a tool by Steven Hickson and part of his PiAUISuite, which I read about a year ago or so. Voicecommand (at least in the demo videos) seems to be made primarily to initiate actions (playing video, music, starting the browser) on the local computer/Raspberry. But why not try to extend it: remove some of the (local) action initiation parts of the code and replace them with an MQTT client?

That would make it a perfect fit for my HA system – this way I can let my rules engine receive the voice commands and let the rules engine be the definition of what has to be accepted as valid command and what actions should be executed.

So I ‘freed’ a Raspberry Pi and downloaded the PiAUISuite. The first problem was that I didn’t have a USB microphone – ahh, but our kids do, for things like Skype, online gaming and other things I never do. I found an old speaker set in the garage and I was good to go.

After some tinkering with the Voicecommand tool as-is – its configuration, trying different keywords and stuff like that – it was time to change some things.

First thing I wanted to change was the language. Voicecommand uses the Google Speech API, so using Dutch as language should not be a problem; all I had to do was change lang = “en” to lang = “nl”. Done! It improved the voice recognition quite a bit too! ;-)

I also wanted to change the response (“Yes, Sir?”) into a simple short beep. This would significantly shorten the duration of the whole conversation, which was a bit too long for my taste. I searched the internet for a ‘beep’ MP3 that was short and loud enough to be noticed, searched the Voicecommand code for Speak(response) and replaced that call with Play(beep), a new function that I added to the code.

Another thing I changed was the matching of the spoken command against a list of predefined commands (and their associated actions) in ~/.commands.conf. Right now, I just send every word to my HA system and let my system decide whether the spoken command contains something useful.

The last thing I did to get communication between Voicecommand and my HA system going was building the Mosquitto MQTT client on the Raspberry Pi and calling that client (mosquitto_pub) with the right parameters from Voicecommand with a system() call. It’s a bit of a quick & dirty trick to get things going; it would be much better to incorporate the MQTT protocol into the Voicecommand code, but that’s too much work for now – first I want to see how this works out in practice, with a better microphone and some useful commands & rules…
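On the HA side, the spoken text simply arrives as an MQTT message. As a sketch of what the publishing step amounts to (the ‘voice’ topic name, payload shape and normalization below are my assumptions, not Voicecommand’s actual code):

```javascript
// Hypothetical sketch: normalize the raw speech-to-text output into the
// payload that gets published on the 'voice' MQTT topic. Voicecommand's
// real code simply shells out to mosquitto_pub via system().
function toVoicePayload(rawText) {
  // lowercase, strip punctuation, collapse whitespace
  var words = rawText
    .toLowerCase()
    .replace(/[^a-z0-9\u00e0-\u00ff ]/g, ' ')
    .split(/\s+/)
    .filter(function (w) { return w.length > 0; });
  return words.join(' ');
}

// The actual publish is then just one shell invocation, e.g.:
//   mosquitto_pub -h <broker> -t voice -m "<payload>"
console.log(toVoicePayload('Licht aan, graag!'));  // -> 'licht aan graag'
```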

The only rule I have right now is this one, for controlling a small night lamp in the office:

rule office_test_light {
  when {
    m1: Message m1.t == 'voice' && m1.contains('licht');
  }
  then {
    if (m1.contains('aan')) {
      publish("command", '{"address":"B02", "command":"ON"}');
    } else if (m1.contains('uit')) {
      publish("command", '{"address":"B02", "command":"OFF"}');
    } else {
      log('Snap het niet');
    }
  }
}
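The contains() helper the rule uses isn’t part of nools itself – it comes with the Message fact type. A minimal sketch of what such a helper could look like (the Message shape here is hypothetical; the real class in my system may differ):

```javascript
// Hypothetical sketch of the Message fact the rule matches on.
// contains() boils down to a case-insensitive word test on the payload.
function Message(topic, payload) {
  this.t = topic;            // e.g. 'voice'
  this.payload = String(payload);
}

Message.prototype.contains = function (word) {
  // match whole words, so 'aan' does not accidentally match 'baan'
  var words = this.payload.toLowerCase().split(/\s+/);
  return words.indexOf(word.toLowerCase()) !== -1;
};

var m = new Message('voice', 'licht aan');
console.log(m.contains('licht') && m.contains('aan'));  // -> true
```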

Voicecommand has, as far as I can see now, one drawback: no Internet connection means no voice control. The (very!) big plus is that the TTS voice is superior to what I’ve heard with Jasper.

Future plans:

  • sending textual (MQTT) messages to Voicecommand and let it speak them;
  • returning an error message when the rules engine was not able to process the command;
  • adding the RPi hostname to the message that goes to my HA system, which can be useful when having multiple Voicecommand RPi’s throughout the house – because a “light off” command in the garage implies a different action than “light off” in the kitchen… ;-)

Right now, after a few hours of tinkering, I think I’ve got something that’s worth spending more time on. We’ll see! Here’s a video of what I’ve accomplished so far:

 

 

AngularJS and Primus, a perfect couple

Some time ago I shut down my old Home Automation system and the current one is doing just fine. All User Interfaces have had their updates and have been working better than ever since I started using Primus. Now the time has come to give my website a face-lift.

And as the title of this post suggests, the combination of AngularJS and Primus seemed like a good choice to accomplish that. But first I’d like to see it working with my own data – closer to reality, with the data to be displayed not defined inside the Controller but delivered by Primus, with my Home Automation system as the source.

I’ve been using Primus for a couple of months now and it’s working great. An example: a very cheap Android tablet on the 2nd floor serves as a User Interface (UI) to control the usual stuff like the roller shutters, lights, front door and security system from there. This tablet loses its WiFi connection about 2 or 3 times a week, resulting in a disconnected websocket – and all the buttons on the UI are ‘dead’ when that happens. Refreshing the page brings back the websocket connection of course, but having to do that was annoying. Since I implemented Primus, its built-in reconnect feature keeps this tablet ready for use 24/7 without refreshing. Cool. Couldn’t have done it better ;-)

I also switched to another reverse proxy in the process. This used to be Apache running on a Linux VM but since a week or so I’m using nginx, currently running on a Raspberry Pi.

On to AngularJS. AngularJS “lets you extend HTML vocabulary for your application” as the website says. It came to my attention in the summer of 2013 and it has been on the to-do list ever since. I saw some examples and immediately knew I had to learn how to use it.

The last couple of days I tried to do just that. After initially ‘wrestling’ with some new terminology like Controllers, Services, Providers and Directives, I bought the ng-book and made my first (almost) self-made web page. Great.

But as already mentioned above, I wanted to see Primus and AngularJS working with my data, and I wanted to see some ‘building blocks’ (like grids, charts, labels) of my website being turned into dynamically updating components – without any refresh triggered by a button or time interval. Yuck… what I see must always be the latest information available.

Now all I needed was some way to make Primus, the real-time data transporter, and AngularJS cooperate. For that I found angular-primus. And I had some extra demands: I should be able to create a chart ‘pre-loaded’ with the history of the last x minutes/hours, grids should contain all their items right away, and I should be able to highlight changing values to make them more noticeable. And more…
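Under the hood, that ‘pre-loaded’ demand boils down to merging both the replayed history and the live Primus updates into the same model that Angular watches. A minimal sketch of that upsert logic (function and field names are mine, not angular-primus’s):

```javascript
// Hypothetical sketch: merge an incoming update into the grid model.
// 'rows' is the array Angular watches; updates carry a unique 'id'.
// Known ids are updated in place (and flagged so a directive can
// highlight the changed value for a moment); new ids are appended.
function applyUpdate(rows, update) {
  for (var i = 0; i < rows.length; i++) {
    if (rows[i].id === update.id) {
      rows[i].value = update.value;
      rows[i].changed = true;   // directive highlights, then clears this flag
      return rows;
    }
  }
  rows.push({ id: update.id, value: update.value, changed: false });
  return rows;
}

// Pre-loading is then just replaying the history the server sends first:
// history.forEach(function (u) { applyUpdate($scope.rows, u); });
```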

After fiddling with Angular, Primus, Providers, Directives and Controllers for a couple of evenings I came up with this (click the image to go to the live website):

ng-primus

Brilliant… every displayed value is updated automagically – the Smart Meter data, the line chart (with the help of HighCharts, BTW), the Temperature column values in the upper grid and new events being added to the lower grid. Just take a peek and see everything changing & moving… just what I always wanted! The first item I built was the hardest; the ones after that went much quicker.

Now that I finally see what AngularJS and Primus can do with my data, I think it’s safe to say that those 2 make a perfect couple for me!

Onwards!

 

Meet the Odroid U3

Last Monday a new ARM based SBC mini computer arrived; the Odroid U3 (Community Edition). Some highlights:

  • 1.7 GHz Quad-Core Cortex-A9 Samsung Exynos4412 processor
  • 2 GByte RAM
  • 3D Accelerator Mali-400 Quad Core 440 MHz
  • 10/100 Mbps Ethernet
  • Storage: MicroSD Card Slot, eMMC socket
  • HDMI
  • Power: 5V/2A
  • 3 x High speed USB 2.0 Host ports
  • PCB Size 83 x 48 mm
  • XUbuntu or Android OS

And that’s not all – the rest of the details can be found here.

I ordered the following set, a week before they arrived:

  • Odroid U3 Community Edition;
  • 5V/2A power supply;
  • 8GB eMMC module with Linux on it;
  • Case.

The total amount, including shipping, was $118.50 (~EUR 85). I just couldn’t resist buying one to give it a try ;-) I read a comparison between a Raspberry Pi and the Odroid U3, and if the Odroid really outperforms the Raspberry that much, I’d be better off with an Odroid.

However, performance is not the primary reason to ‘broaden my horizon’; the real reason is that I’ve had 3 SD card crashes since June last year, with an average of 2 Raspberry Pi’s running 24/7. That’s too much for me – I just can’t handle that. I have 4 RPi’s running now and this hardware overkill makes it very easy to move some tasks to the remaining 3 RPi’s when 1 of them breaks down, but I don’t want that to happen… apparently the SD card is the weak point of the RPi concept (yes, all my RPi’s are up to date), so looking for alternatives was my primary reason for buying this Odroid U3.

While unpacking the Odroid shipment, the first 2 remarkable things were the PCB size (smaller than an RPi and also smaller than half a Cubietruck) and the huge heatsink – which is there for a reason, but more on that later. Normally, the first thing I do while preparing for the 1st boot is connecting an HDMI screen and a keyboard – but I totally forgot to buy a micro-HDMI adapter, so all my HDMI cables were useless. Oops… well, let’s just boot the thing and hope that ssh is installed and working properly out of the box… and it was. So I could use the Odroid headless right away, like I do with all my mini computers.

After doing the usual stuff like expanding the file system, disabling the graphical interface and so on (“sudo odroid-config“) I shut it down and hooked up my Power Monitor to get some information about the power usage. During boot, the power usage is around 3~4 W with peaks of 5 W but after that it settles down to a measly 1.9~2.0 W. That’s comparable to the power usage of the Cubietruck which uses 2.3 W (including an SSD).

Another thing I noticed was that the Odroid boots very fast – push the power button, start Putty and the first login attempt is already successful! Try that with an RPi… wow.

As I mentioned earlier, I was very curious why the Odroid has such a big heatsink, so I tried to find a way to get some more information about the CPU temperature, because even while running idle I could feel that the heatsink was already a bit warmer than the ambient temperature. Not much, but noticeable. ‘cat /sys/devices/virtual/thermal/thermal_zone0/temp‘ gave me the information I needed: 51 degrees Celsius, idle.

So, what’s going to happen when the Odroid gets really busy? On the Odroid forum I found a way to stress the CPU to the max: ‘sudo openssl speed -multi 4‘ – that made the CPU usage go through the roof, and the temperature as well – I saw it reach 102 degrees Celsius!! Power usage went up to 7.5~8 W… oh my! Fortunately some thermal protection kicked in: I saw the temperature and power usage dropping automatically, then increasing, dropping, increasing… some throttling mechanism must be doing that. While stressing the CPU to the max, the heatsink became unpleasantly warm – after 10 seconds or so I really wanted to remove my finger from it. Now I know why it’s there – for a very good reason ;-) Now I’m wondering: how much CPU power can be used constantly before throttling kicks in? I don’t know yet. I did see that while compiling Node.JS the temperature went up considerably as well (into the 70s, with just one core busy? I didn’t check that, though).

Next thing I did was downloading, compiling and installing some additional software I’ll be playing with in the next couple of weeks: Node.JS with a lot of modules, and nginx (Engine-X; for static content, reverse proxying and stuff like that). And since I like comparing things (especially apples and oranges), I checked how the Odroid/nginx combination compares to a (virtualized) W2k3/IIS6 server.

I changed the standard nginx.conf and added the following:

worker_processes  4;
worker_cpu_affinity 1000 0100 0010 0001;

This creates 4 worker processes, each pinned to a dedicated core. I copied a 2 KB html file to the Odroid and to the IIS server and used Openwebload to see what numbers both would produce (results after 20 seconds, IIS first):

URL: http://domoticaserver.hekkers.lan:80/smart.html
Clients: 5
MaTps 20.68, Tps 24.85, Resp Time 0.229, Err 0%, Count 438

URL: http://odroid1.hekkers.lan:80/smart.html
Clients: 5
MaTps 1136.02, Tps 1078.00, Resp Time 0.005, Err 0%, Count 23347

Eehh.. this can’t be true? Testing this while logged in on the IIS server gave these numbers:

URL: http://localhost:80/smart.html
Clients: 5
MaTps 144.42, Tps 137.72, Resp Time 0.036, Err 0%, Count 2953

A lot better, but still about 7 times slower than the Odroid… I’ll have to do some more testing to find out what’s causing this, because I can hardly believe it… maybe I’m just running too many VMs on a modest CPU (Intel Core i3 530)?

Again, the CPU temperature of the Odroid went up to 74 degrees Celsius (board in closed case, ambient temperature 24.8 degrees) during this test, with a system load of about 0.40 and power usage between 3 and 3.5 W.

Now, almost a week later, the Odroid is still running fine, running a test website based on Node.JS, Primus and AngularJS, without any throttling; I think I like the Odroid U3, but time will tell whether I really do …

Home Automation with Node.JS & MQTT

Shutting down my old homebrew Windows-based Home Automation system and letting the new Node.JS/MQTT based HA system take full control last Saturday was done without a glitch! Better, faster, smoother than I expected.

I had planned to start with this on Saturday morning, so that I would have ~36 hours to fix any problems that would arise, but that didn’t go as planned – some other things I had to do on that Saturday made me postpone the big switch to Saturday evening.

Around 7 o’clock in the evening I told the rest of the family that they just had to pretend that I was away, unreachable, until I would tell them that I was back again – the ‘do not disturb’ door hanger ;-)

The main concern in this whole exercise was not losing any historical data. First I did a test-run of copying all the historical data from MS SQL to MySQL and checked if this still worked like it should; it did. Checked the information in the MySQL database for consistency, correctness and so forth. Great. I ran the historical-data copy again, renamed some tables in MySQL, changed some configuration settings (database names), restarted some Node services and checked if storing the historical data worked as before, but now by Node.JS, directly into the production database. This was the point where I had to decide for the go/no-go; within just a couple of minutes I knew it was a GO!

I copied a file with the new configuration settings to the Raspberry Pi’s and after that, all I had to do was restart all the services so that they would switch from the MQTT broker running on the Windows VM to the broker on the Cubietruck. Great!

About an hour later, after testing some things by walking through the house, pushing some buttons, seeing lights being switched on by detected motion and other stuff like that, I knew I would have a huge amount of free time to do a lot of other things again! :-)

The day after went just as smooth and relaxed – no issues, just a minor thing I forgot about and which was fixed in 5 minutes… Now, Wednesday evening and 96 hours later, everything’s still 110% OK.

As of now, my Node.JS/MQTT based HA system has the following ‘features‘:

  • Smart meter

The output of the so-called P1 port of our Smart meter is being parsed by a small script which publishes the relevant information (power usage, gas usage) to the MQTT broker.
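As a sketch of what that parsing step amounts to (the OBIS codes below are the common DSMR ones for actual power draw and total consumption – verify them against your own meter’s telegram; the MQTT publish itself is left out):

```javascript
// Hypothetical sketch of the P1 parsing step. A P1 telegram is a block
// of lines like '1-0:1.7.0(0000.95*kW)'; we pull out the values we care
// about and would publish the result as one MQTT message.
function parseTelegram(telegram) {
  var result = {};
  var patterns = {
    power: /1-0:1\.7\.0\(([\d.]+)\*kW\)/,     // actual power draw, kW
    energy: /1-0:1\.8\.1\(([\d.]+)\*kWh\)/    // total consumption, tariff 1
  };
  Object.keys(patterns).forEach(function (key) {
    var m = telegram.match(patterns[key]);
    if (m) result[key] = parseFloat(m[1]);
  });
  return result;
}

var sample = '1-0:1.8.1(012345.678*kWh)\r\n1-0:1.7.0(0000.95*kW)\r\n';
console.log(parseTelegram(sample));  // -> { power: 0.95, energy: 12345.678 }
```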

  • Roomba

Our Roomba 563 robot vacuum cleaner is monitored and controlled with a Thinking Cleaner module which is plugged into the SCI port of the Roomba.

  • RFXCOM receiver

Two RFXCOM receivers are used to collect information from various sensors (mainly Oregon Scientific temp/humidity, Visonic door/window, motion).

  • PLCBUS

This is one of the drivers controlling a lot of lights in our house, but it’s also used for things like controlling the garage door opener. I use 2 PLCBUS controllers – one in the meter cabinet, the other located at the other end of the house.

  • EISCP protocol

The RS-232 port on our Onkyo AV Receiver enables us to control every aspect of the device – switching HDMI inputs, volume level, on/off, mute…

  • NMA

Notify My Android is being used to send notifications to my cell phone – stuff like new LED Bar messages, and warnings about things that might need my attention.

  • Mobotix

Our Mobotix D22 security camera has a great light sensor; I use this light sensor to determine when the time has come to switch some outdoor lights (front door, back door, garden, gazebo) on or off. The HTTP interface of the camera enables me to take snapshots.

  • LED Bar

Just a funny gadget..

  • IRTrans

The IRTrans LAN module is used to control our UPC media box and to turn on our Dune media player.

  • Dune

The Dune IP Control protocol enables the system to control our Dune HD Max media player.

  • HA7Net

The HA7Net provides information about the in- and outgoing temperatures of our 5 floor heating groups (with 1-Wire sensors) and about the amount of DHW (hot water) we use (water meter with pulse output + Dallas 1-Wire DS2423 counter).

  • Remeha Calenta

One of the most informative devices… even things like the fan speed can be monitored with its ‘service‘ port! More interesting of course are things like modulation level, water pressure, operating mode (central heating, DHW).

  • Alphatronics

A great receiver for Visonic keyfobs and sensor RSSI information.

Primarily used for the Philips Pronto TSU9600, our single remote solution for all our AV equipment.

  • Somfy RTS

An RS-485 Somfy RTS transmitter enables us to control 12 roller shutters.

  • Rules engine

The part of the system that does the real automation: based on inputs (sensors) this engine can initiate all kinds of actions with all the hardware (actors) which is connected to the system.

  • RGB LED

A DMX based RGB LED driver controls 6 RGB LED lights under our gazebo;

This great device enables us to set the room temperature to what we want it to be, without having to walk to the room thermostat or even being at home.

  • Nemef Radaris Evolution

This RFID lock that’s on our frontdoor, controlled by a Nemef RF controller, gives detailed status information about the RFID tags being used, access control and remote access.

  • Conrad MS-35 LED driver

A couple of these are used to control warm white LED strips. I made them wireless with small TTL-to-Wifi adapters.

  • Siemens M20T GSM modem

This GSM modem is being used to send SMS messages to my cell phone, but its task is gradually being taken over by NMA.

  • Email

Emails are primarily used to notify me about sensors that need new batteries.

  • ELV MAX!

The ELV MAX! Radiator Thermostats are used to control the temperatures in all the rooms in our house: bedrooms, bathroom, etcetera.

Why? Because I can! ;-)

  • Zigbee

I have several sensors based on a combination of JeeNodes with XBee ZB modules: motion, temperature, pressure. I also use those XBees to make 3 Chromoflex LED drivers wireless by connecting an XBee to the serial port of the chromoflex – works great.

  • 16-channel LED driver for staircase lighting

Homebrew LED driver to control 13 (or is it 12?) LED strips which light the stairs. The Node driver mainly controls how an Arduino sketch should behave.

  • RFXCOM transmitter

This RF transmitter has only one purpose: controlling two 433 MHz doorbell chimes.

  • Plugwise

12 Plugwise Circles are used for monitoring power usage and to detect whether the washing machine or dryer has finished its program.

  • Chromoflex

This service calculates the payload that has to be sent to the Chromoflex LED drivers to control the LED strips. This payload is then forwarded to an XBee radio to which the Chromoflex driver is connected.

  • Btraced

Btraced is an app for iPhone/Android that enables you to send your location to your own server; this service adds some additional information (by reverse geocoding) so that the location can be displayed on our touchscreen using Google Maps.

  • Visonic PowerMax Plus

Our security system is connected too; we’re no longer limited to keyfobs, panel or keypad for controlling it. Additionally, all the sensor information (open, closed, motion, battery status, tampering) is available in my system.

In use since I ditched A-10/X-10 recently. It has been on the shelf for some time after doing some small tests with it, but now it’s an excellent replacement for the small amount of X-10 stuff I was still using.

  • Doorbell

Our Ethernet-enabled doorbell communicates with a Node script to report ‘rings’ and query daylight status (used for switching a LED on & off for visibility of the button).

Quite a list, if I may say so … and it’s all running great!

My ASP.Net website still uses a MS SQL Server for its data – this database is now kept up to date by a NodeJS script, just like the MySQL database on the Cubietruck.

So, what’s next? Well, first I’m gonna take a break, that’s for sure… I’ve done enough JavaScript in the past 8 months, so it’s time for something different; I’ve also neglected some other things that really need my attention now. And of course I’ll have to start working on the User Interface, for which I’ll have to learn a lot before I can start developing… never a dull moment!

Onwards!