AngularJS and Primus, a perfect couple

Some time ago I shut down my old Home Automation system and the current one is doing just fine. All User Interfaces have had their updates and are working better than before after I started using Primus. Now the time has come to give my website a face-lift.

And as the title of this post suggests, the combination of AngularJS and Primus seemed like a good choice to accomplish that. But first I’d like to see it working with my own data – closer to reality, with the data that has to be displayed not defined inside the Controller but delivered by Primus, with my Home Automation system as the source.

I’ve been using Primus for a couple of months now and it’s working great. An example of that is a very cheap Android tablet located on the 2nd floor that serves as a User Interface (UI) to control the usual stuff like the roller shutters, lights, front door and security system from there. This tablet loses its Wifi connection about 2 or 3 times a week, resulting in a disconnected websocket, and hence all the buttons on the UI go ‘dead’ when that happens. Refreshing the page brings back the websocket connection of course, but it was annoying having to do that. Since I implemented Primus, its built-in reconnect feature makes this tablet ready for use 24/7 without having to refresh. Cool. Couldn’t have done it better 😉
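For those who haven’t seen Primus: the reconnect behavior comes for free on the client side. A rough sketch of what the client code looks like – the URL and the reconnect values below are just example values, not necessarily the ones I use:

// Client side; the primus.js client library is served by the Primus server itself.
// The URL and reconnect values are example values only.
var primus = Primus.connect('http://example.org', {
  reconnect: {
    max: Infinity,   // never give up
    min: 500,        // wait at least 500 ms before retrying
    retries: 10      // attempts per reconnect cycle
  }
});

primus.on('reconnect', function () {
  console.log('connection lost, trying to reconnect...');
});

primus.on('open', function () {
  console.log('(re)connected, the UI is live again');
});

primus.on('data', function (data) {
  // handle incoming real-time data here
});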

I also switched to another reverse proxy in the process. This used to be Apache running on a Linux VM but since a week or so I’m using nginx, currently running on a Raspberry Pi.

On to AngularJS. AngularJS “lets you extend HTML vocabulary for your application”, as the website says. It came to my attention in the summer of 2013 and it has been on the to-do list ever since. I saw some examples and immediately knew I had to learn how to use it.

The last couple of days I tried to do so. After initially ‘wrestling’ with some new terminology like Controllers, Services, Providers and Directives I bought the ng-book and made my first (almost) self-made web-page. Great.

But as already mentioned above, I wanted to see Primus and AngularJS working with my data and I wanted to see some ‘building blocks’ (like grids, charts, labels) of my website being turned into dynamically updating components – without any refresh triggered by a button or time interval. Yuck… what I see must always be the latest information available.

Now all I needed was some way to make Primus, the real-time data transporter, and AngularJS cooperate. For that I found angular-primus. And I had some extra demands: I should be able to create a chart ‘pre-loaded’ with the history of x minutes/hours, grids should also contain all the items right away, and I should be able to highlight changing values to make them more noticeable. And ….
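The basic idea of gluing the two together can be sketched in a few lines. This is a generic sketch, not the actual angular-primus API; the service, controller and message format below are made up:

// Generic sketch of wiring Primus into AngularJS; names and message format are made up.
angular.module('myApp', [])
  .factory('liveData', ['$rootScope', function ($rootScope) {
    var primus = Primus.connect('http://example.org');
    var listeners = [];

    primus.on('data', function (msg) {
      // wrap the callbacks in $apply() so Angular picks up the model changes
      $rootScope.$apply(function () {
        listeners.forEach(function (cb) { cb(msg); });
      });
    });

    return {
      onData: function (cb) { listeners.push(cb); }
    };
  }])
  .controller('SmartMeterCtrl', ['$scope', 'liveData', function ($scope, liveData) {
    liveData.onData(function (msg) {
      if (msg.type === 'smartmeter') {   // hypothetical message format
        $scope.power = msg.power;        // the view updates automatically
      }
    });
  }]);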

After fiddling with Angular, Primus, Providers, Directives, Controllers for a couple of evenings I came up with this (click the image to go to the live web-site):

ng-primus

Brilliant… every value displayed is being updated automagically – the Smart Meter data, the line chart (with the help of HighCharts, BTW), the Temperature column values in the upper grid and new events being added to the lower grid. Just take a peek and see everything changing & moving.. just what I always wanted! The first item I built was the hardest, the ones after that were done much quicker.

Now that I finally see what AngularJS and Primus can do with my data, I think it’s safe to say that those 2 make a perfect couple for me!

Onwards!

 

Home Automation with Node.JS & MQTT

Shutting down my old homebrew Windows-based Home Automation system and letting the new Node.JS/MQTT based HA system take full control last Saturday was done without a glitch! Better, faster, smoother than I expected.

I had planned to start with this on Saturday morning, so that I would have ~36 hours to fix any problems that would arise, but that didn’t go as planned – some other things I had to do on that Saturday made me postpone the big switch to Saturday evening.

Around 7 o’clock in the evening I told the rest of the family that they just had to pretend that I was away, unreachable, until I would tell them that I was back again – the ‘do not disturb’ door hanger 😉

The main concern in this whole exercise was not losing any historical data. First I did a test-run of copying all the historical data from MS SQL to MySQL and checked whether this still worked like it should; it did. I checked the information in the MySQL database for consistency, correctness and so forth. Great. Then I ran the historical-data copy again, renamed some tables in MySQL, changed some configuration settings (database names), restarted some Node services and checked whether storing the historical data worked as before, but now done by Node.JS, directly into the production database. This was the point where I had to make the go/no-go decision; within just a couple of minutes I knew it was a GO!

I copied a file with the new configuration settings to the Raspberry Pi’s and after that, all I had to do was restart all the services so that they would switch from the MQTT broker running on the Windows VM to the broker on the Cubietruck. Great!

About an hour later, after testing some things by walking through the house, pushing some buttons, seeing lights being switched on by detected motion and other stuff like that, I knew I would have a huge amount of free time to do a lot of other things again! 🙂

The day after went just as smooth and relaxed – no issues, just a minor thing I forgot about and which was fixed in 5 minutes… Now, Wednesday evening and 96 hours later, everything’s still 110% OK.

As of now, my Node.JS/MQTT based HA system has the following ‘features’:

  • Smart meter

The output of the so-called P1 port of our Smart meter is parsed by a small script which publishes the relevant information (power usage, gas usage) to the MQTT broker; a minimal sketch of such a driver script follows right after this list.

  • Roomba

Our Roomba 563 robot vacuum cleaner is monitored and controlled with a Thinking Cleaner module which is plugged into the SCI port of the Roomba.

  • RFXCOM receiver

Two RFXCOM receivers are used to collect information from various sensors (mainly Oregon Scientific temp/humidity, Visonic door/window, motion).

  • PLCBUS

This is one of the drivers that’s controlling a lot of lights in our house, but it is also used for things like controlling the garage door opener. I use 2 PLCBUS controllers – one in the meter cabinet, the other one is located at the other end of the house.

  • EISCP protocol

The RS-232 port on our Onkyo AV Receiver enables us to control every aspect of the device – switching HDMI inputs, volume level, on/off, mute…

  • NMA

Notify My Android is being used to send notifications to my cell phone – stuff like new LED Bar messages, and warnings about things that might need my attention.

  • Mobotix

Our Mobotix D22 security camera has a great light sensor; I use this light sensor to determine when the time has come to switch some outdoor lights (front door, back door, garden, gazebo) on or off. The HTTP interface of the camera enables me to take snapshots.

  • LED Bar

Just a funny gadget..

  • IRTrans

The IRTrans LAN module is used to control our UPC media box and to turn on our Dune media player.

  • Dune HD Max

The Dune IP Control protocol enables the system to control our Dune HD Max media player.

  • HA7Net

The HA7Net provides information about in- and outgoing temperatures of our 5 floor heating groups with 1-Wire sensors and about the amount of DHW (hot water) we use (water meter with pulse + Dallas 1-Wire DS2423 counter).

  • Remeha Calenta

One of the most informative devices… even things like the fan speed can be monitored with its ‘service‘ port! More interesting of course are things like modulation level, water pressure, operating mode (central heating, DHW).

  • Alphatronics

A great receiver for picking up Visonic keyfobs and sensor RSSI information.

Primarily used for the Philips Pronto TSU9600, our single remote solution for all our AV equipment.

  • Somfy RTS

An RS-485 Somfy RTS transmitter enables us to control 12 roller shutters.

  • Rules engine

The part of the system that does the real automation: based on inputs (sensors) this engine can initiate all kinds of actions with all the hardware (actors) which is connected to the system.

  • RGB LED

A DMX based RGB LED driver controls 6 RGB LED lights under our gazebo;

This great device enables us to set the room temperature to what we want it to be, without having to walk to the room thermostat or even being at home.

  • Nemef Radaris Evolution

This RFID lock on our front door, controlled by a Nemef RF controller, provides detailed status information about the RFID tags being used, access control and remote access.

  • Conrad MS-35 LED driver

A couple of these are used to control warm white LED strips. I made them wireless with small TTL-to-Wifi adapters.

  • Siemens M20T GSM modem

This GSM modem is being used to send SMS messages to my cell phone, but its task is gradually being taken over by NMA.

  • Email

Emails are primarily used to notify me about sensors that need new batteries.

  • ELV MAX!

The ELV MAX! Radiator Thermostats are used to control the temperatures in all the rooms in our house: bedrooms, bathroom, etcetera.

Why? Because I can! 😉

  • Zigbee

I have several sensors based on a combination of JeeNodes with XBee ZB modules: motion, temperature, pressure. I also use those XBees to make 3 Chromoflex LED drivers wireless by connecting an XBee to the serial port of the Chromoflex – works great.

  • 16-channel LED driver for staircase lighting

Homebrew LED driver to control 13 (or is it 12?) LED strips which light the stairs. The Node driver mainly controls how an Arduino sketch should behave.

  • RFXCOM transmitter

This RF transmitter has only one purpose: controlling two 433 MHz doorbell chimes.

  • Plugwise

12 Plugwise Circles are used for monitoring power usage and to detect whether the washing machine or dryer has finished its program.

  • Chromoflex

This service calculates the payload that has to be sent to the Chromoflex LED drivers to control the LED strips. This payload is then forwarded to an XBee radio to which the Chromoflex driver is connected.

  • Btraced

Btraced is an app for iPhone/Android that enables you to send your location to your own server; this service adds some additional information (by reverse geocoding) so that the location can be displayed on our touchscreen with the use of Google Maps.

  • Visonic PowerMax Plus

Our security system is connected too; we’re no longer limited to keyfobs, panel or keypad for controlling it. Additionally, all the sensor information (open, closed, motion, battery status, tampering) is available in my system.

In use since I ditched A-10/X-10 recently. It has been on the shelf for some time after doing some small tests with it, but now it’s an excellent replacement for the small amount of X-10 stuff I was still using.

  • Doorbell

Our Ethernet-enabled doorbell communicates with a Node script to report ‘rings’ and query daylight status (used for switching a LED on & off for visibility of the button).
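To give an impression of how small these driver scripts can be, here’s the minimal Smart meter sketch promised above. It assumes the P1 output is reachable over TCP (for instance via a serial-to-LAN bridge); the device address, topic name and OBIS code are examples, not my actual configuration:

// Minimal sketch of a P1-to-MQTT driver; addresses, topic and OBIS code are examples.
var net = require('net');
var mqtt = require('mqtt');

var client = mqtt.createClient(1883, '192.168.10.13');
var buffer = '';

var meter = net.connect(2001, '192.168.10.20', function () {
  console.log('connected to the P1 port');
});

meter.on('data', function (chunk) {
  buffer += chunk.toString();
  var lines = buffer.split('\r\n');
  buffer = lines.pop();                       // keep the incomplete last line

  lines.forEach(function (line) {
    // 1-0:1.7.0 = actual power delivered (kW); codes may differ per meter version
    var match = line.match(/^1-0:1\.7\.0\(([\d.]+)/);
    if (match) {
      client.publish('raw/smartmeter/power', match[1]);
    }
  });
});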

Quite a list, if I may say so … and it’s all running great!

My ASP.Net website still uses an MS SQL Server for its data – this database is now kept up to date by a Node.JS script, just like the MySQL database on the Cubietruck.

So, what’s next? Well, first I’m gonna take a break, that’s for sure… I’ve done enough JavaScript in the past 8 months, so it’s time for something different; I’ve also neglected some other things that really do need my attention now. And of course I’ll have to start working on the User Interface, for which I’ll have to learn a lot before I can start developing.. never a dull moment!

Onwards!

Some last changes before Node.JS takes full control

X-10 LM15

The actor in the picture to the left was probably one of the first real Home Automation related pieces of hardware I bought. Everything I did before I had this LM15 installed was only about monitoring. It must have been sometime in 2007. This LM15 device, used for the outside light at the front door, was controlled by an ACT TI-213 X-10/A-10 controller. But now, 7 years later, those items have to go. Not because they’re broken, but because the A-10 signals put on the power-line by that controller don’t always reach 2 other A-10 modules anymore (which are located at the other end of our house). Result: garden lights being on all day. So it’s time to say goodbye to my A-10 controller and 3 A-10 actors. I didn’t want to use a technology that only works 70-80% of the time anymore – so why spend time on writing a new driver for it; it will be replaced with PLCBUS modules and Insteon LED bulbs.

Simplecortex

Another thing I did during the last weekend was replacing 2 Simplecortex boards. One was used for my Opentherm Gateway and the other one for the smart meter. I changed my mind about the MQTT topics these boards had to publish their information to, so to change that I had to install the CooCox CoIDE again, install libraries, change the code, build, flash, … – too much work!

So I decided to connect the Opentherm Gateway to a Raspberry Pi (RPi) using an old FTDI breakout board and do the same with the smart meter using a USB cable with an RX-inverted FTDI chip inside. Now all I had to do was write 2 small Node.JS scripts – which I can change on the fly in every aspect with just a simple text editor – notepad will do just fine, although a more advanced editor like Notepad++ or UltraEdit would be a better choice. Long live convenience!

So why am I doing all this? It’s all for the upcoming “big shutdown” of my old Home Automation system, of which the next-gen version is eagerly waiting to take over:

HA system overview

The image above shows what has become of what I started in June last year.

40 services in total, running on 4 RPi’s and a Cubietruck, based on about 15,000 lines of JavaScript code. A part of those services (processes) just share their information (sensor drivers like those for the RFXCOM and HA7Net), others also accept commands (PLCBUS, AV equipment, ELV MAX!, Zigbee) and a few are just facility services, taking care of historical data, metadata etcetera.
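Most of these services follow the same pattern: connect to the broker, publish whatever the hardware reports and, where applicable, subscribe to command topics. A stripped-down skeleton, with topic names that are only examples:

// Skeleton of a typical driver process; topic names are examples only.
var mqtt = require('mqtt');

var client = mqtt.createClient(process.env.DDMC_BROKER_PORT,
                               process.env.DDMC_BROKER_HOST,
                               { clientId: 'somedriver' });

// 1. publish the sensor data coming from the hardware
function onHardwareData(device, value) {
  client.publish('raw/' + device + '/value', String(value));
}

// 2. accept commands for the hardware this driver controls
client.on('connect', function () {
  client.subscribe('command/somedriver/#');
});
client.on('message', function (topic, payload) {
  // translate the MQTT message into whatever the hardware interface expects
  console.log('command received on ' + topic + ': ' + payload);
});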

Judging by CPU usage alone, all of this could run on a single Raspberry Pi, but that won’t work in my case: all those separate Node.JS processes use quite a lot of memory, so I had to use a few more RPi’s to get them all up and running.

Right now, both HA systems are operational – one a bit more than the other though; almost all tasks in my Windows system have been deactivated and all that’s left of it is that it’s still storing historical data in an MS-SQL database. The other one (the Node.JS based one) does it all and even more, cause it’s also feeding the MS-SQL database with historical data, but puts it into a ‘shadow’ database for now.

So now the time has come to click that close button in the upper right corner…
close

What will happen, I don’t know. Hopefully nothing serious…

Immediately after I’ve shut down my old system, I’ll have to restart a script on one of my RPi’s so that it starts storing the historical data in the ‘live’ MS SQL database instead of the shadow version – my website needs this MS SQL database because it uses it to create charts and read the current device statuses. This will stay that way until I’ve developed a new website.

What more is there to do? Nothing I can think of right now, actually… However, I still find shutting the old system down a bit scary; I don’t want to find out after a few days that I forgot something!

So what I will do is plan this action for the next weekend, on Saturday, early in the morning. That will give me time to monitor everything and, more importantly, immediately respond if something does go wrong… exciting!

MySQL pushing metadata changes with Node.JS and MQTT

From day 1 that I started using a database to store the device metadata, I had to restart my Home Automation system to refresh the metadata. Now, with a more distributed system with multiple processes running on multiple systems, this became a “problem”. So let’s automate this distributed Home Automation system.

The most common changes to the metadata are changes in hardware addresses (when one of my Oregon Scientific sensors gets new batteries) or completely new devices that have been added to the database. And with multiple processes running somewhere, I felt the need for ‘event driven propagation of altered metadata’ – which means that when I’m finished modifying the metadata, one single action should suffice to automatically update the whole system.

I’ve already developed a metadata provider for that – a Node.JS script that queries the MySQL server and creates and publishes JSON objects of the metadata. But I didn’t like the prospect of having to manually trigger this metadata provider after changing the metadata. The first thing that popped into my mind were triggers on changing data in the database. MS SQL has them, and so does MySQL. I never used triggers with MS SQL though; from what I can remember it was too much hassle to get it working, so I never implemented it – maybe it’s time to revisit triggers… I keep track of the changes I make in the metadata by updating a timestamp in the database, so that would be a good ‘trigger’, right?

I once read about User Defined Functions (UDF) being able to make system calls – let’s see if we can get this working!

The first thing I did was create a new VirtualBox Ubuntu Server 12.04 LTS Virtual Machine, cause I didn’t want to fiddle with this on my semi-production MySQL instance. Once the VM was up and running and Node.JS (built from source) and MySQL were installed, I found an article that explained pretty much what I had to do.

First I needed to install the lib_mysqludf_sys library. The accompanying install script told me I had to install libmysqlclient15-dev, so I did. I also read that the C source had to be compiled with the -DMYSQL_DYNAMIC_PLUGIN parameter, so I changed the Makefile as well. Still no luck – and then I saw that the resulting .so file was copied to the wrong directory. MySQL’s show variables like ‘plugin_dir’ showed me that the file should be in /usr/lib/mysql/plugin but it wasn’t, so I copied it there manually. After that, the install script (which also executes the MySQL CREATE FUNCTION statements) ran fine.

Next: a trigger on the metadata version table/row that should be executed after the UPDATE:
BEGIN
  DECLARE cmd CHAR(255);
  DECLARE result INT(10);
  -- sys_exec() comes from lib_mysqludf_sys; it runs the Node.JS script
  -- that signals the rest of the system that the metadata has changed
  SET cmd = CONCAT('/home/robert/local/bin/node /home/robert/signal_metadata.js');
  SET result = sys_exec(cmd);
END

BTW, I use Toad for MySQL – very nice & handy tool!

Fine… now a small Node.JS script to make it complete:
// publish a single 'changed' message to the MQTT broker and disconnect
var mqtt = require('mqtt');
var client = mqtt.createClient(1883, '192.168.10.13');
client.publish('metadata', 'changed');
client.end();

Done! Unfortunately it took me some time to find out that AppArmor was bugging me:

Jan 22 23:17:18 ubuntuvm kernel: [ 59.408794] type=1400 audit
(1390429038.266:9): apparmor="DENIED" operation="exec" parent=1039
profile="/usr/sbin/mysqld" name="/bin/dash" pid=1331 comm="mysqld"
 requested_mask="x" denied_mask="x" fsuid=106 ouid=0

Okay… typically one of those things you only forget once in your life 😉 After disabling it everything worked like it should.

So now the metadata stored in the MySQL database is automatically propagated throughout the whole system: the MySQL trigger on the ‘metadata version’ table executes a Node.JS script, this script publishes to my MQTT broker, the metadata provider picks up this message, recreates all the JSON objects based on the new metadata stored in the MySQL server and publishes those objects, so that all the processes (wherever they are) that need metadata to do their job are updated automagically – nice!
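Boiled down, the provider side of that chain looks something like this; the table, column and topic names are made up for the example, and the ‘metadata’/‘changed’ message is the one published by the trigger script above:

// Sketch of the metadata provider: query MySQL, publish JSON via MQTT.
// Table, column and topic names are made up for this example.
var mysql = require('mysql');
var mqtt = require('mqtt');

var db = mysql.createConnection({
  host: '192.168.10.13',
  user: 'metadata',
  password: 'secret',
  database: 'domotica'
});
var client = mqtt.createClient(1883, '192.168.10.13');

function publishMetadata() {
  db.query('SELECT * FROM devices', function (err, rows) {
    if (err) return console.error(err);
    rows.forEach(function (row) {
      // retained, so newly started processes get the metadata immediately
      client.publish('metadata/device/' + row.id, JSON.stringify(row), { retain: true });
    });
  });
}

client.on('connect', function () {
  // republish whenever the trigger-driven 'changed' message arrives
  client.subscribe('metadata');
  publishMetadata();
});
client.on('message', function (topic, payload) {
  if (topic === 'metadata' && payload.toString() === 'changed') publishMetadata();
});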

Metadata message

Btraced, Raspberry Pi, Node.js, MQTT to build your own GPS tracking system

Btraced-2

The Btraced app is a great tool, especially because it allows you to upload the GPS data to your own server. Once you’ve got this aspect covered, the things you can do with the GPS data are unlimited – you can do whatever you want with it!
Personally, I’m not that interested in the ‘trip’ functionality of Btraced; I’d rather use the app for positioning: constantly letting my Domotica system know where I am. And not just my location – soon other family members will be added as well.

Although uploading the GPS data to your own server is very convenient, accomplishing this might sound scary to some people: “I’ve never done that”, “don’t know how”, “don’t have a server”, “too difficult”. This post will show you that it’s none of those..

During the last few days I’ve been playing with a Raspberry Pi, Node.js and a webpage on my web server. The situation before I started with this was that I had an ASP.Net web form on my IIS web server; this page was used for uploading the Btraced GPS data to my server and storing it in my SQL Server. After a few days I started wondering why I stored all this GPS information in a database – all I did with it was SELECT-ing the newest record and displaying my last-known location. What a waste of disk space..

So I added MQTT publishing to the ASP.Net page and left the SQL stuff the way it was. A few days later I completely removed the SQL stuff and realized that using an ASP.Net page on my IIS was a bit overkill for what I was doing. There must be an easier way! And there is: I’ll show you how to create your own Btraced upload server without a big web server and at minimal cost.

Hardware

All you need is a Raspberry Pi (model B, the one with the Ethernet interface) and a LAN cable to connect it to your router or LAN. I prefer to run my Raspberry Pi’s without keyboard, video & mouse, so you’ll need a way to connect to the RPi over the LAN – my favorite tool for that is Putty. Connect your RPi to the network, power it up and find out which IP address the RPi is using.

Network

Your gateway to the Internet (modem, router, firewall) should be able to do port forwarding, cause we’ll have to forward the Btraced data to the RPi. Choose a port (e.g. 8000, 8080, 8081 but preferably not 80, cause maybe you want to run a website on your RPi one day…) and forward that port to the RPi. In the Btraced app, you’ll have to change the setting for the upload server, like this: http://ww.xx.yy.zz:pppp (assuming ww.xx.yy.zz is your external IP address and pppp is the port number).

Software

You’ll need to install Node.js and an additional Node module: xml2js. Optional: mqtt. And some extra code, which will be provided below. Just follow one of those excellent guides on ‘How to install Node.js’ that can be found on the web. Also install npm, the Node Package Manager, just to make life easier. Not comfortable with using an editor on Linux? Install Node.js and an FTP server on your PC, edit the files on your PC, test & debug them there, install an FTP client on the RPi and ‘ftp get’ the files to your RPi once the scripts are finished.

Some code

The Btraced app uploads the data as XML over HTTP. “Oops, so I need an HTTP server and have to work with XML?” Yes you do, but it only takes 75 lines of code, so don’t be scared! Keep on reading.. here’s the code that can ‘receive’ and parse the uploaded Btraced XML data:

var http = require('http');
var mqtt = require('mqtt');
var parseString = require('xml2js').parseString;
var ydate = '';
var position = {};

var mqttClient = mqtt.createClient(process.env.DDMC_BROKER_PORT, process.env.DDMC_BROKER_HOST, {clientId: 'Btraced', keepalive: 30000});

function processPoint(p){
  if(p.date > ydate){
    ydate = p.date;
    position.date = parseFloat(p.date);
    position.latitude = parseFloat(p.lat);
    position.longitude = parseFloat(p.lon);

    // convert the speed (presumably m/s) to km/h, rounded to one decimal
    position.speed = parseFloat(p.speed);
    if(position.speed > 0){
      position.speed = Math.round(position.speed * 36)/10;
    } else {
      position.speed = 0;
    }

    // a course value starting with '-' means there is no valid heading
    if(p.course.substring(0,1) == "-"){
      position.angle = 0;
    } else {
      position.angle = parseFloat(p.course);
    }
    // battery level as a percentage
    position.batt = Math.round(parseFloat(p.bat)*100);
  }
  return parseInt(p.id);
}

http.createServer(function (req, res) {
  if(req.method == 'POST') {
    var body='';
    var travelid = '';
    var ids=[];

    req.on('data', function (data) {
      body +=data;
    });

    req.on('end',function(){
      // explicitArray false for NOT getting all nodes as arrays
      parseString(body, {explicitArray:false}, function (err, result) {
        var bwiredtravel = result.bwiredtravel;
        var username = bwiredtravel.username;
        travelid = parseInt(bwiredtravel.travel.id);
        ydate = '';
        position = {};
        position.username = username;
        var points = bwiredtravel.travel.point;
        if(typeof points.length === 'undefined')
        {
          ids.push(processPoint(points));
        } else {
          for(var i=0;i<points.length;i++){
            ids.push(processPoint(points[i]));
          };
        }
        console.log(JSON.stringify(position));
        mqttClient.publish('raw/'+username+'/location/gps', JSON.stringify(position));
      });
      body = '';

      var response = {};
      response.id = 0;
      response.tripid = travelid;
      response.points = ids;
      response.valid = true;

      console.log(JSON.stringify(response));
      res.end(JSON.stringify(response));
    });
  }
}).listen(8000);

That’s it? Yep.. and all that needs to be done to get this code running is entering ‘node btraced’ at the command prompt. If you didn’t install the mqtt module mentioned earlier, just delete all the lines containing the string ‘mqtt’ and the script will keep on working; the information will then only be displayed on the console and not published anymore – you’ll have to find another way to get the information to where you want it.

With MQTT it’s very easy to create a web page that displays the information in text and also draws a map, all in real-time. I added some reverse geocoding to complement the information with an address, et voilà:

Btraced-1

I haven’t tried it, but it cannot be hard to run the web page shown above on a RPi as well.
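For the curious: such a page mainly consists of subscribing to the location topic from the browser and moving a map marker around. A rough sketch, assuming an MQTT broker with WebSocket support, the Paho MQTT JavaScript client and the Google Maps API (the port, element id and topic are examples):

// Rough sketch of a live map page; assumes a broker with WebSocket support (port 9001
// here is just an example), the Paho MQTT JavaScript client and the Google Maps JS API.
var map = new google.maps.Map(document.getElementById('map'), {
  zoom: 14,
  center: new google.maps.LatLng(52.0, 5.0)
});
var marker = new google.maps.Marker({ map: map });

var client = new Paho.MQTT.Client('192.168.10.13', 9001, 'btraced-viewer');

client.onMessageArrived = function (message) {
  var position = JSON.parse(message.payloadString);
  var latlng = new google.maps.LatLng(position.latitude, position.longitude);
  marker.setPosition(latlng);
  map.setCenter(latlng);
};

client.connect({
  onSuccess: function () {
    client.subscribe('raw/robert/location/gps');   // the username part is an example
  }
});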

So there you have it: your own, private, fully customizable GPS tracking system for the price of a Raspberry Pi!

Raising the bar with the IRTrans driver

The last few days I’ve been busy developing a driver for my IRTrans LAN IR transmitter/receiver. I chose the IRTrans because it’s not being used that much (anymore), but still enough to detect bugs within a few hours. And another nice thing is that the IRTrans LAN Gateway accepts multiple connections, so testing a new driver is easy – I won’t have to completely shut down my previous (and still actively used) driver developed in Delphi. While working on this driver, I took the time to also improve some other things (which I should have done earlier).

Settings

I didn’t like the way some drivers have IP-addresses of hardware interfaces hard-coded, so I fixed that. Mosquitto (the MQTT broker I’m using) looks like the best place to store settings like the IP-address & port the driver has to connect to, poll intervals and other parameters that influence how a driver behaves. But first, I needed a way to find out where my broker is – for that I created 2 system-wide environment variables by adding the following to the file /etc/environment on the Raspberry Pi:

DDMC_BROKER_HOST=192.168.10.13
DDMC_BROKER_PORT=1883

If you’re using a Windows PC for development, you’ll have to add those environment variables on that machine as well, or add them to the nodevars.bat that starts the Node.js environment on Windows:

@echo off
rem Ensure this Node.js and npm are first in the PATH
set PATH=%APPDATA%\npm;%~dp0;%PATH%
rem settings specific to DDMC
set DDMC_BROKER_HOST=192.168.10.13
set DDMC_BROKER_PORT=1883

A simple change in how the Node.js MQTT client connects and this issue was fixed:


var host = process.env.DDMC_BROKER_HOST;
var port = process.env.DDMC_BROKER_PORT;
mqttClient = mqtt.createClient(port, host);

When the connection has been established, the rest of the required settings can be retrieved from the MQTT broker by subscribing to a topic and waiting for a message to arrive:


// get required settings
tools.mqttClient.on('connect', function(packet) {
  tools.settings.require('host');
  tools.settings.require('port');
  tools.settings.require('subdelay');
});

Of course, the script has to wait for the required settings to arrive; this is done by keeping track of all the required settings and ‘pausing’ execution until all required settings have been ‘set’ by the incoming messages. I picked up this idea by browsing the HomA source code, which was brought to my attention in a comment recently. Now I can remove all the hard-coded stuff from the drivers and clean them up a bit.
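The idea behind that settings.require() is simple enough to sketch. This is an illustration of the concept, not my actual tools module; the topic layout is made up and the settings are assumed to be published as retained messages:

// Sketch of the idea behind settings.require(); not the actual tools module.
// Settings are expected as retained messages on settings/<driver>/<name>.
var mqtt = require('mqtt');
var events = require('events');

var driver = 'irtrans';                        // example driver name
var client = mqtt.createClient(process.env.DDMC_BROKER_PORT,
                               process.env.DDMC_BROKER_HOST);

var settings = new events.EventEmitter();
settings.values = {};
settings.pending = [];

settings.require = function (name) {
  this.pending.push(name);
  client.subscribe('settings/' + driver + '/' + name);
};

client.on('connect', function () {
  settings.require('host');
  settings.require('port');
  settings.require('subdelay');
});

client.on('message', function (topic, payload) {
  var name = topic.split('/').pop();
  settings.values[name] = payload.toString();
  settings.pending = settings.pending.filter(function (n) { return n !== name; });
  if (settings.pending.length === 0) settings.emit('ready');
});

// the driver 'pauses' here until every required setting has arrived
settings.on('ready', function () {
  console.log('all settings received:', settings.values);
  // ...now connect to the hardware interface...
});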

OK, back to the IRTrans driver. During all the years I’ve been working with the IRTrans, I added some ‘features’ which I didn’t want to lose:

Channels.

I’m used to working with TV station names instead of channel numbers for my cable STB. An example: I have a page on my Philips Pronto TSU9600 with icons for all the TV stations we’re able to receive. Let’s say that each icon (a button, actually) has an onClick() method which looks like this: SetDevice(‘upc’, ‘EUROSPORT’);

Why? Because I can’t remember that EuroSport is on channel 401, but I can remember the ‘short-codes’ like ‘EUROSPORT’, ‘NED1’, ‘BBC1’, ‘WDR’,’DISCOVERY’…

But the IRTrans doesn’t know what ‘EUROSPORT’ means, so I made a ‘mapping’ table on my SQL Server to translate ‘EUROSPORT’ to ‘401’. But because the Node driver has no way to connect to my SQL Server, I added an extra setting to the IRTrans driver, so I don’t have to change all the UIs when something changes and I can keep on using these ‘short-codes’ forever:

{"EUROSPORT":201,
"BBC1":50,
"BBC2":51,
"ANIMAL":21,
"NATGEO":18,
....
"CNN":401}

The JSON formatted data shown above is parsed and used as an associative array.

Delays

Let’s take the example above a little further. Eurosport, channel 401. All the buttons on the remote of the cable set-top box (STB) were learned with the Philips Pronto (PEP1) and added to an IRTrans .rem file. That means that I have an IR code for the button ‘1’, another one for ‘2’, ‘3’ and so forth. But I don’t have an IR code for ‘401’. So, to mimic selecting channel 401 with the remote, I have to send the IRTrans 3 IR codes: the codes for ‘4’, ‘0’ and ‘1’. And I have to add a delay between those 3 IR codes, because if I don’t, the STB doesn’t always recognize what’s being sent! Hmm… delay, event-driven, timeouts… challenge.

Adding a delay is not such a problem, because the Node.js setTimeout() function allows you to add a minimum(!) delay before something is executed. But no matter what I tried, instead of sending the ‘4’, ‘0’, ‘1’ sequence to the IRTrans, my driver code sent ‘1’, ‘1’, ‘1’… Okay, those who are familiar with JavaScript will probably know what was causing this, but for me this ‘scope’ problem was new!

After I read this excellent page, I knew what to do… well, that’s the downside of immediately starting to code instead of learning the language first.
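In short: a var declared inside a for loop is shared by every callback created in that loop, so by the time the setTimeout() callbacks fire they all see the last value. Capturing the value in its own function scope fixes it – something along these lines, where sendIR() and the delay are placeholders:

// The '4','0','1' sequence came out as '1','1','1': the var declared in the loop
// is shared by every callback, so the timeouts all see the last value.
// sendIR() and the 300 ms delay are placeholders.
var digits = ['4', '0', '1'];

// Buggy version:
for (var i = 0; i < digits.length; i++) {
  var digit = digits[i];
  setTimeout(function () {
    sendIR(digit);              // 'digit' is '1' by the time this runs
  }, i * 300);
}

// Fixed version: capture the current digit (and delay) in its own scope.
for (var j = 0; j < digits.length; j++) {
  (function (digit, delay) {
    setTimeout(function () {
      sendIR(digit);            // '4', then '0', then '1'
    }, delay);
  })(digits[j], j * 300);
}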

Toggle codes.

A more detailed explanation of the problem with our current STB is described here, as well as how I handled it, although the issue can also be fixed with newer IRTrans firmware. The new Node.js based IRTrans driver has to handle this toggle problem automatically as well of course, as if toggle codes don’t even exist. That’s because I don’t want to send up#1, up#2, up#1 from my UIs to go from channel X to X+3, but just up, up, up – it’s ridiculous and it wouldn’t work anyway with more than 1 UI – capiche?

So the IRTrans driver should keep track of the last-used IR code for ‘up’. The ‘old’ Delphi driver already retrieved a lot of information from the IRTrans gateway – the identifications of all the remotes (upc, onkyo, hdmax, …) and all the commands (up, down, vol+, poweron, ..) of those remotes – so I did the same thing in my Node driver.

So the physical STB remote has 2 different IR codes for the ‘up’ button, which I named ‘up#1’ and ‘up#2’. I use the ‘#’ in the command name to detect that we’re dealing with a so-called toggle code for the ‘up’ command. So all I had to do was collect all the commands starting with ‘up#’ into a single object instance, add an index counter, and everything happens automatically…

During startup of the new IRTrans driver I query the IRTrans gateway with ‘Agetremotes’ and ‘Agetcommands’ and store the information in an array. Here’s the code of the class that ‘hides’ the toggle code issue:

var InfraRedCommand = function(remotecommand) {
  this._command = remotecommand;
  this._index = 0;
  this._toggleCodes = [];
}
InfraRedCommand.prototype.addCode = function(code){
  this._toggleCodes.push(code);
}
InfraRedCommand.prototype.getCode = function(){
  if(this._toggleCodes.length > 0){
    // alternate between the collected toggle codes (up#1, up#2, ...)
    this._index++;
    if(this._index >= this._toggleCodes.length) this._index = 0;
    return this._toggleCodes[this._index];
  } else {
    // no toggle codes: return the plain command name
    var parts = this._command.split(",");
    return parts[parts.length-1];
  }
}

module.exports = InfraRedCommand;

Now the toggle codes are hidden, in just a few lines of code 🙂
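In use it looks roughly like this; the remote and command names are just examples:

// Example use of the InfraRedCommand class above; names and path are examples.
var InfraRedCommand = require('./infraredcommand');

var up = new InfraRedCommand('upc,up');
up.addCode('up#1');
up.addCode('up#2');

console.log(up.getCode());   // 'up#2'
console.log(up.getCode());   // 'up#1'
console.log(up.getCode());   // 'up#2' - it keeps alternating automatically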

From time to time I test this new IRTrans driver by sending an ‘off’ command to the TV or STB, or switching to ‘CNN’. Judging by the amount of uproar coming from the living-room, this new driver is working very well!

 

Migrating to the future has begun

I think I’ve got it. For now… Almost a year ago I realized that something had to change; my Domotica system grew too fast and became too big to keep it all in a single executable: stability and flexibility were the two main issues that had to be addressed.

During the last 12 months I tried several things to find a better solution on how to gradually rebuild my system; ZeroMQ, SimpleCortex, MQTT, FEZ, Raspberry Pi, Node.JS, Python, Netduino, they were all tried and tested for some time. And (for me) the winners are: Raspberry Pi, MQTT and Node.JS.

The power of Node.JS enables me to very quickly develop a stable hardware driver and combining this with a very small, low power & cheap Linux box like the Raspberry Pi to run those drivers is absolutely unique in my opinion; and MQTT is the super-glue that makes all the different parts of the system talk to each other, from Arduino sketch to VB.Net application.

The last weeks have been quite busy for me, so very little time was left for working on Domotica, but with some hours here and there I still managed to write 5 drivers for some parts of the hardware I’m using in my Domotica system. So since a week or two I have a Raspberry Pi here that has been running those 5 replacement drivers flawlessly – for my RooWifi (Roomba), RFXCOM RF receiver, Mobotix security camera light sensor, HA7Net with 10 1-Wire sensors attached to it, and for my Remeha Calenta boiler. The last one is one of the most CPU & I/O intensive drivers I have, but the Node versions of all those drivers work perfectly on a single RPi:

uptime

Still enough processing power left to add some more drivers, don’t you think?

I made some changes to my monolithic Domotica system so that it would accept ‘raw’ device information from the outside world by means of MQTT, automated starting the drivers after booting the RPi, and everything has been running great so far. I still have a lot of things to do though, mostly regarding maintenance & ease of use; some issues have already been addressed and others need some more time to find the best solution:

  • backup procedure for the RPi;
  • remotely controlling the RPi and its drivers;
  • supervising the RPi and its drivers;
  • storing global configuration data elsewhere.

So I still have some things to do before I can concentrate on migrating all my hardware drivers to the RPi, but I think I should be able to do one driver per week (that includes developing, testing and accepting it as a reliable replacement). The advantage I have is that I already have thoroughly tested Delphi code with all the workarounds for hardware-specific peculiarities in it; for example, I know that the Remeha Calenta sometimes doesn’t respond to a query, so I already know the new driver on the RPi needs to be able to handle that – knowing all those peculiarities will certainly speed things up.
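For a query that may simply go unanswered, the pattern boils down to a timeout plus a retry. A simplified sketch – the query function, timings and retry count are hypothetical:

// Simplified query-with-retry pattern for hardware that sometimes stays silent;
// sendQuery(), the timeout and the retry count are hypothetical.
function queryWithRetry(sendQuery, retries, timeoutMs, callback) {
  var answered = false;

  var timer = setTimeout(function () {
    if (answered) return;
    answered = true;                  // give up on this attempt
    if (retries > 0) {
      console.log('no response, retrying...');
      queryWithRetry(sendQuery, retries - 1, timeoutMs, callback);
    } else {
      callback(new Error('device did not respond'));
    }
  }, timeoutMs);

  sendQuery(function (response) {     // called when the answer does arrive
    if (answered) return;             // too late, a retry is already underway
    answered = true;
    clearTimeout(timer);
    callback(null, response);
  });
}

// usage: ask the boiler for its status, retry twice, 2 seconds per attempt
queryWithRetry(askBoilerStatus, 2, 2000, function (err, status) {
  if (err) return console.error(err.message);
  console.log('boiler status:', status);
});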

Another advantage is that all my hardware interfaces are connected to my LAN, so I don’t have to worry about RS-232, -485 or USB stuff, an IP address is enough for the RPi to communicate with the hardware interface.

So if all goes well, my Domotica system will be stripped of all its built-in drivers in about 30 weeks or so (cause that’s about the number of drivers I’m using right now) and all those drivers will be migrated to the RPi. Sounds doable, and I hope this will still leave some time to do other things as well, like adding more hardware, software and features to my system.

Yeah I know, I’m always too optimistic… 😉