User Interface on a breadboard

The end is near – shutting down my old Home Automation system is getting very close now; that calls for a look ahead, a glimpse of the end result: what the user will see of all the work that has been done in the last six months.

I’ve started testing one of the last big drivers that still have to be moved from my Delphi/Windows based Home Automation system to the new Raspberry Pi/Node.JS based one: Zigbee. I was lucky to find a really great xbee-api module that takes care of all the low-level stuff, like the different frame types; all I had to do was add a layer between the xbee-api and my system (e.g. MQTT functionality and parsing the sensor data inside the type 0x90 frames), and it has been running flawlessly for the last 72 hours.
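To give an idea of what such a glue layer can look like, here is a minimal sketch (not my actual driver): it wires xbee-api to an MQTT client and republishes incoming 0x90 frames. The serial device, broker address and topic layout are assumptions, and the serialport/parser wiring may differ per module version.

var SerialPort = require('serialport').SerialPort;
var xbee_api = require('xbee-api');
var mqtt = require('mqtt');

var C = xbee_api.constants;
var xbeeAPI = new xbee_api.XBeeAPI({ api_mode: 2 });

// serial port the XBee coordinator hangs on (device name and baudrate are guesses)
var serialport = new SerialPort('/dev/ttyUSB0', {
  baudrate: 9600,
  parser: xbeeAPI.rawParser()
});

// a local broker is an assumption
var client = mqtt.connect('mqtt://localhost');

xbeeAPI.on('frame_object', function (frame) {
  // only the ZigBee Receive Packet frames (type 0x90) carry sensor payloads
  if (frame.type !== C.FRAME_TYPE.ZIGBEE_RECEIVE_PACKET) return;

  // frame.data holds the raw payload; parsing it into readable values is
  // device specific, here it is simply forwarded as text
  var topic = 'sensors/zigbee/' + frame.remote64;   // made-up topic layout
  client.publish(topic, frame.data.toString('ascii'));
});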

I’ve been tinkering with a lot of different things for the last 1½ years or so, only to find what suits me best – regarding hardware, programming languages, development environments and tools. Six months ago I knew I had found my ideal combination and gradually started to replace parts of my old (and still operational) system with parts running on a Raspberry Pi, with Node.JS and MQTT as the main building blocks. Gradually, one step at a time. No rush. The users (i.e. me & the rest of the family) didn’t even notice those replacements – the service level is just as good as before and the amount of human intervention needed to keep the system healthy has decreased to what I regard as the least possible. For instance, I can now replace a LAN switch without any problem – once the new switch is operational, the system recovers within 2 minutes, all by itself. Before, the best thing to do was a complete restart… Replacing the batteries in the sensors is something I still have to do myself though, and fixing bugs in the code of course – but that’s about it 🙂

Some of the choices I made were not the ones I anticipated. And some of them should even serve as a warning for others not to be guided by what’s hot – take your time to select your own ideal mix of tools.

Back to the topic now… the last phase in this whole transition will be rebuilding the User Interface(s). A summary of what is being used to monitor & control everything, and the current status of each:

  • VB.Net application running on a touchscreen: still working fine, migration from UDP broadcast to MQTT has been completed a long time ago.
  • Philips Pronto TSU9600: done – all I had to do was change an IP address, from the old Delphi XMLRPC server to that of the Raspberry Pi which runs the new XMLRPC server.
  • Website: not started yet.
  • Mobile UI: no modifications needed, yet.

Being able to gradually transform my Domotica system from Delphi/Windows to Node.JS/Linux (and from a rather high hardware/power demand to a minimalist one) without any noticeable hiccup in terms of ‘service level’ is quite an achievement, if I may say so. Every week another component of my old system is disabled and its role taken over by the version developed in Node – seamless. But will I be able to do the same with the last part that hasn’t been done yet, the website I’m running to show what Home Automation is about? Rebuilding parts of it while keeping it fully functional looks like an even bigger challenge… but let’s first have a look at what I want for the future.

  • As real-time as possible.

The era in which I (as a user) had to wait for the next ‘poll’ to see what is happening in our house is long gone – if the back door of our house opens or closes, I want to be able to see that immediately, wherever I am.

  • One interface for all.

As I said, I’m running a website as some sort of showcase, just to show what Domotica/Home Automation is all about, what you can do with it, etcetera. Being lazy by nature, I don’t like the prospect of maintaining multiple websites – the showcase and another one for us, our family, with control added to it.

  • Responsive.

One site for all; whether it’s a smartphone, tablet, laptop or a desktop PC with 2 HD-resolution screens attached to it. The UI should adapt to what the device has to offer.

  • Easy to extend.

The road from newly added functionality working technically to it being fully deployed to the users is one of those things I tend to neglect; making things work is what makes me tick, while actually making it usable is something I just seem to ‘forget’ too often. That’s why we can listen to voice-mails on the touchscreen in the living room, but not on our smartphones. So I need to find a way to easily extend a user interface with new functions – like some sort of new widget.

So I’ve been searching the net for some time to find the right tools to build that User Interface I want and came up with the following list of components that might be of help.

Express

From the Express site: “Express is a minimal and flexible node.js web application framework, providing a robust set of features for building single and multi-page, and hybrid web applications.” I’ve done some small things with Express during the last few weeks and I think it can be useful, because I know by now how fast a website/application can grow once you’ve got things going.
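For instance, a minimal Express setup for the UI could look like this – just a sketch, with made-up paths, port and route names; the static directory would hold the AngularJS app and the JSON route would feed the charts:

var express = require('express');
var app = express();

// serve the static UI files (AngularJS app, css, images)
app.use(express.static(__dirname + '/public'));

// hypothetical JSON endpoint for historical data, e.g. to feed a chart
app.get('/api/devices/:id/history', function (req, res) {
  res.json({ id: req.params.id, values: [] });   // placeholder response
});

app.listen(3000, function () {
  console.log('UI server listening on port 3000');
});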

Primus

The easy switching between real-time frameworks (who knows how they will evolve?) and the auto-reconnect feature are what I like most about Primus. I already have some sort of auto-reconnect, but it doesn’t always work that well – so far, Primus does it better than I do 😉
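Roughly, attaching Primus to the server and letting the browser reconnect on its own looks like this (a sketch – the transformer, the reconnect settings and the pushed message are just examples):

// server side
var http = require('http');
var Primus = require('primus');

var server = http.createServer();                 // or the Express app's server
var primus = new Primus(server, { transformer: 'websockets' });

primus.on('connection', function (spark) {
  // push a (made-up) device update to this client
  spark.write({ device: 'backdoor', value: 'open' });
});

server.listen(3000);

// client side – primus.js itself is generated and served by the Primus server
var client = new Primus('http://localhost:3000', {
  reconnect: { min: 500, max: 30000, retries: 10 }   // example back-off settings
});
client.on('data', function (msg) { console.log(msg); });
client.on('reconnect', function () { console.log('trying to reconnect…'); });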

AngularJS

A Model-View-Controller (MVC) framework with a lot of features for building feature-rich and easily maintainable web applications. Once you’ve made your first working example with AngularJS, you’ll know what I mean… why did I ever do things differently!
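As a taste of it, a controller that keeps a list of device values could be as small as this (the names are made up; in the HTML it would be hooked up with ng-app, ng-controller and ng-repeat):

// app.js – AngularJS 1.x module with a single controller
angular.module('domotica', [])
  .controller('DeviceCtrl', ['$scope', function ($scope) {
    $scope.devices = [];                    // shown with ng-repeat in the view

    // called whenever a new value comes in (e.g. from a Primus message)
    $scope.update = function (name, value) {
      $scope.devices.push({ name: name, value: value, time: new Date() });
    };
  }]);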

Highcharts

Or maybe Highstock, I haven’t really chosen between those two yet. Gas-, power- & water-usage, temperatures, just to name a few, are all very good candidates for charts. Line charts, column charts, you name it. Fully customizable, very flexible. Developed in JavaScript. I’ve already created a real-time chart of the current power usage with it – as soon as our smart meter produces new information, the chart updates itself with the new data (a new point is added to the line chart, the oldest one ‘disappears’) – brilliant!
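That add-a-point, drop-a-point behaviour comes down to a single Highcharts call; a rough sketch (the container id, series setup and the 300-point window are arbitrary choices for the example):

var chart = new Highcharts.Chart({
  chart: { renderTo: 'power', type: 'line' },   // <div id="power"> in the page
  title: { text: 'Current power usage' },
  xAxis: { type: 'datetime' },
  series: [{ name: 'Power (W)', data: [] }]
});

// call this for every new smart meter reading
function onNewReading(watts) {
  var shift = chart.series[0].data.length >= 300;     // keep ~300 points on screen
  // addPoint(point, redraw, shift): shift=true drops the oldest point
  chart.series[0].addPoint([Date.now(), watts], true, shift);
}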

I could now present a link to what I’ve created so far (a mess, lol), but that would be a bit premature; I still have too many things to learn, test and think through before I can say that I’ve once again found the ideal mix for myself. So things can still change… Until then, I’ll keep all those ‘site snippets’ to myself and keep on developing some of the most essential building blocks until I can safely assume I picked the right tools.

Historical data: LevelDB versus MySQL

So, I’ve got me a Cubietruck with a 60 GB SSD attached to it. On the SSD there’s a LevelDB database with historical data for 121 device values. The total number of keys stored in this database is about 1,050,000. It takes about 8 minutes to fill this database from scratch with a MS SQL Server table as ‘source’.

That means I can put (write) more than 2,000 entries into the database per second. That will do. Just kidding – that’s more than I’ll ever need of course… but what about the get (read) performance? Being able to read 2,000 values per second from a database is not that much, so I hope the read performance is even better. A small test showed me that LevelDB could produce about 4,500 values per second. Would it be possible to improve this? A lot, please?

So I decided to do some testing, so that I won’t end up with a user interface whose charts take ages to load. I wrote a small script that queries the database and retrieves everything there is for each device value in it. The result set, retrieved with the createReadStream() function of LevelUP, can have a size anywhere in the range of 4 to ~50,000 values. By counting the number of returned values and the time needed to get them (and push them to an array for further processing), I could get an indication of how fast LevelDB really is, based on a real database (not just a bunch of “123456abcdef” keys…) and on result sets ranging from ‘very small’ to ‘very large’. As in real life 😉
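The test script boils down to something like this – a simplified sketch, not the real one; the database path, the key layout and the device value ID are assumptions:

var level = require('level');
var db = level('./history');     // the LevelDB database on the SSD

// read everything stored for one device value and time how long it takes;
// keys are assumed to look like '<deviceId>!<timestamp>'
function timeRead(deviceId, done) {
  var rows = [];
  var start = Date.now();

  db.createReadStream({ gte: deviceId + '!', lte: deviceId + '!\xff' })
    .on('data', function (entry) { rows.push(entry.value); })
    .on('error', done)
    .on('end', function () {
      var ms = Date.now() - start || 1;
      done(null, { count: rows.length, ms: ms, perSec: Math.round(rows.length / ms * 1000) });
    });
}

timeRead('42', function (err, result) {
  if (err) throw err;
  console.log(result);           // e.g. { count: 50000, ms: ..., perSec: ... }
});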

And why not do the same test with MySQL to see how that one performs as well? With a MySQL installation ‘out of the box’ (so no performance-enhancing tricks) I created a database with a single table in it. It contains the same data as the LevelDB version: device value ID, timestamp and value (and some more). And an index of course:

CREATE TABLE `data` (
  `TIME` datetime NOT NULL,
  `LGDEVICEID` varchar(25) NOT NULL,
  `START` decimal(16,6) DEFAULT '0.000000',
  `VALUE` decimal(16,6) DEFAULT '0.000000',
  `COUNT` int(11) DEFAULT '1',
  UNIQUE KEY `1` (`LGDEVICEID`,`TIME`),
  KEY `time` (`TIME`,`LGDEVICEID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Filling this table with the same data as the LevelDB database took a bit longer: 56 minutes vs. 04:19 for LevelDB… I hope this isn’t indicative of the read performance 😉
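The MySQL side of the test looks roughly like this, using the node mysql module (again a sketch – the connection details and the device value ID are made up):

var mysql = require('mysql');

// connection details are assumptions
var connection = mysql.createConnection({
  host: 'localhost',
  user: 'test',
  password: 'test',
  database: 'history'
});

// same test as the LevelDB version: fetch all rows for one device value
// and measure how long that takes
function timeQuery(deviceId, done) {
  var start = Date.now();
  connection.query(
    'SELECT `TIME`, `VALUE` FROM `data` WHERE `LGDEVICEID` = ? ORDER BY `TIME`',
    [deviceId],
    function (err, rows) {
      if (err) return done(err);
      var ms = Date.now() - start || 1;
      done(null, { count: rows.length, ms: ms, perSec: Math.round(rows.length / ms * 1000) });
    }
  );
}

connection.connect();
timeQuery('42', function (err, result) {
  if (err) throw err;
  console.log(result);
  connection.end();
});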

Well, here are the numbers:

[Table: LevelDB vs. MySQL – average read performance per result-set size range]

The numbers show the average number of rows per second each database can deliver for the specified result-set size range. An example for LevelDB: 100 createReadStream() calls resulted in a total of 62606 values in 13122 milliseconds, i.e. an average of 4771 values/second. MySQL: the same number of query() calls resulted in the same 62606 values in 5996 milliseconds, i.e. an average of 10141 values/second.

The size ranges with which I tested may seem a bit strange, but that’s because I just didn’t have any historical data for a single device between 5k and 10k values – the gap between historical data that is stored temporarily and historical data that is stored forever.

The numbers show a clear winner: MySQL. I guess that using a ‘traditional’ RDBMS for my historical data is not such a bad idea after all…

This is not what I expected, actually. All those hyper-new database engines must have some benefits – why else would they have become so popular lately? Well, I didn’t find them… at least not with my hardware, my historical data characteristics and my priorities… and since every second counts, I’m going for MySQL!

Meet the Cubietruck (aka Cubieboard3)

Last Tuesday a new board arrived: the Cubietruck, from the Dutch dealer embeddedcomputer.nl. Some specs:

  • AllWinner A20 ARM Cortex-A7 Dual-Core
  • 2GB DDR3 RAM
  • HDMI & VGA connector
  • 10M/100M/1G Ethernet
  • SATA 2.0 interface
  • Storage: NAND, MicroSD, TSD + MicroSD or 2 x MicroSD
  • 2 x USB HOST
  • Power: DC 5V @ 2.5A
  • 54 I/O pins with I2C, SPI, CVBS, UART, PWM, IRDA and more

And this is what it looks like with a 60 GB SSD on top:

Cubietruck with SSD on top

Although the ‘case’ that was included (3 acrylic plates) won’t do much to protect the board from dust, it does look kinda cool 😉 The package also contains a SATA cable, DC jack, heat sink and some USB cables. All you need is a 5V power supply, screen and keyboard, and off you go.

BTW, idle power usage with the SSD attached is about 2.3~2.4 W; not bad! The Cubietruck comes with Android pre-installed, but that’s not what I bought this board for; Linux is much more suitable for what I’ll be using it for, so I downloaded Lubuntu Server and PhoenixSuit. The latter is a Windows tool to flash the Cubietruck. Flashing the Lubuntu image was a breeze and within half an hour or so I had Lubuntu running. It’s all relatively easy and there are tutorials in case you get stuck. Furthermore, the YouTube channel of ProgramOften contains some very good footage: high-quality video and very informative; it helped me move the operating system from NAND to SSD.

After installing SSH I could remove the keyboard and screen and continue my work with PuTTY. It seems that some (not all, because mine doesn’t) Cubietrucks change their MAC address after a reboot, which can result in a different IP address when the interface is set to DHCP – it’s better to configure a static IP address in that case.

The Lubuntu server OS comes with MySQL & Apache installed, and those 2 services are started automatically at boot. I disabled the automatic part and went on installing the stuff I’ll need: primarily Node.JS and a whole bunch of modules for it, plus some tools (Monit); I also set up backups to my NAS and started tinkering with it.

You can really notice the difference between a Raspberry Pi and the Cubietruck (duh); compiling Node from source, installing packages – it all runs much faster. For example, where it took the Raspberry Pi half an hour to fill a LevelDB database on the SSD with >1M entries, the Cubietruck only needed 8 minutes.

So, why this Cubietruck – aren’t Raspberry Pis good enough anymore? Yes, they still are, but I think I’ll need a more capable machine to host my website, maybe my weblog too, take care of the historical data and still get a speedy response. For that I wanted a board with SATA, and the Cubietruck seemed like the right choice. Time will tell…

In the next couple of weeks I’ll be doing some tests to see if I made the right choice: I’ll have a look at how LevelDB performs on the Cubietruck, I’ll build some webpages with Node & AngularJS, produce some charts on those pages with Highcharts, things like that. Just to see where all this is going; did I make all the right choices, are there any bumps down the road I didn’t foresee? Will I manage to get my system running this way, or even better than it runs now? On those ‘scary little boxes’, as my wife called them a few weeks ago? (She’s used to seeing big-tower sized servers…)

First item on the list: LevelDB. I’m going to perform some tests in which I’ll try to push it to the limit (within the boundaries set by what I’ll be using it for, of course). Exciting!