Migrating to the future has begun

I think I’ve got it. For now… Almost a year ago I realized that something had to change; my Domotica system grew too fast, became too big to keep it all in a single executable: stability and flexibility were the two main issues that had to be addressed.

During the last 12 months I tried several things to find a better way to gradually rebuild my system; ZeroMQ, SimpleCortex, MQTT, FEZ, Raspberry Pi, Node.JS, Python, Netduino were all tried and tested for some time. And (for me) the winners are: Raspberry Pi, MQTT and Node.JS.

The power of Node.JS enables me to very quickly develop a stable hardware driver and combining this with a very small, low power & cheap Linux box like the Raspberry Pi to run those drivers is absolutely unique in my opinion; and MQTT is the super-glue that makes all the different parts of the system talk to each other, from Arduino sketch to VB.Net application.
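To give an idea of what that glue looks like in practice, here is a minimal sketch of the MQTT side of such a Node.JS driver. The topic layout (raw/&lt;deviceId&gt;/&lt;item&gt;) follows the naming convention mentioned further down in the comments; the device id and broker address are made-up examples, not the actual driver code:

```javascript
// Minimal sketch of a Node.JS driver's MQTT side. The topic layout
// (raw/<deviceId>/<item>) matches the convention described in the
// comments below; the device id itself is illustrative.

// Build the topic and payload for a raw sensor reading.
function rawMessage(deviceId, item, value) {
  return {
    topic: 'raw/' + deviceId + '/' + item,
    payload: String(value)
  };
}

// Publishing would then use the 'mqtt' npm package, for example:
//   var mqtt = require('mqtt');
//   var client = mqtt.connect('mqtt://192.168.1.10'); // broker address is hypothetical
//   var msg = rawMessage(7823, 'open', 1);            // door sensor opened
//   client.publish(msg.topic, msg.payload);
```

Keeping the message-building separate from the transport like this is what makes each driver small: the hardware-specific part only has to produce raw values, and MQTT does the rest.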

The last weeks have been quite busy for me, so very little time was left for working on Domotica, but with some hours here and there I still managed to write 5 drivers for some parts of the hardware I’m using in my Domotica system. So since a week or two I have a Raspberry Pi here that has been running those 5 replacement drivers flawlessly – for my RooWifi (Roomba), RFXCOM RF receiver, Mobotix security camera light sensor, HA7Net with 10 1-Wire sensors attached to it and for my Remeha Calenta boiler. That last one is one of the most CPU & I/O intensive drivers I have, but the Node versions of all those drivers work perfectly on a single RPi:

[Screenshot: uptime output of the Raspberry Pi]

Still enough processing power left to add some more drivers, don’t you think?

I made some changes to my monolithic Domotica system so that it would accept ‘raw’ device information from the outside world by means of MQTT, automated starting the drivers after booting the RPi and everything has been running great so far. I still have a lot of things to do though, mostly regarding maintenance & ease of use, of which some issues have already been addressed and others need some more time to find the best solution:

  • backup procedure for the RPi;
  • remotely controlling the RPi and its drivers;
  • supervising the RPi and its drivers;
  • storing global configuration data elsewhere.

So I still have some things to do before I can concentrate on migrating all my hardware drivers to the RPi, but I think I should be able to do one driver per week (that includes developing, testing and accepting it as a reliable replacement). The advantage I have is that I already have thoroughly tested Delphi code containing all the workarounds for hardware-specific peculiarities; for example, I know that the Remeha Calenta sometimes doesn’t respond to a query, so I already know the new driver on the RPi needs to be able to handle that – knowing all those peculiarities will certainly speed things up.
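As a sketch of how a Node.JS driver could deal with such a peculiarity, here is one way to retry a query a few times before giving up. Function names and timings are made up for illustration; this is not the actual Calenta driver:

```javascript
// Retry helper sketch: the boiler occasionally ignores a query, so
// each query is retried a few times before the driver gives up.
// queryFn, the attempt count and the delay are all illustrative.

function queryWithRetry(queryFn, maxAttempts, done) {
  var attempt = 0;
  function tryOnce() {
    attempt++;
    queryFn(function (err, result) {
      if (!err) return done(null, result);           // got a response
      if (attempt >= maxAttempts) return done(err);  // give up
      setTimeout(tryOnce, 250);                      // wait, then retry
    });
  }
  tryOnce();
}
```

A driver built this way treats a missed response as normal behavior instead of a fatal error, which is exactly the kind of workaround already proven in the old Delphi code.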

Another advantage is that all my hardware interfaces are connected to my LAN, so I don’t have to worry about RS-232, -485 or USB stuff, an IP address is enough for the RPi to communicate with the hardware interface.

So if all goes well, my Domotica system will be stripped of all its built-in drivers in about 30 weeks or so (cause that’s about the number of drivers I’m using right now) and all those drivers will be migrated to the RPi. Sounds doable, and I hope this will still leave some time to do other things as well, like adding more hardware, software and features to my system.

Yeah I know, I’m always too optimistic… 😉


9 Responses to Migrating to the future has begun

  1. Frank says:

    Hi Robert,

    I just stumbled upon your website and blog. Wow! Great stuff! Thanks for sharing your project and thoughts.

    I am running a Linux/mosquitto based home monitoring/automation system myself. Currently not as big as yours, but maybe one day 🙂 I am using sqlite for persistence and php as glue. I am thinking about redoing parts of my setup as I’m starting to look into visualization and if-this-then-that rules. Node.js might be a better choice than php?

    How are you planning to handle persistence after migration? For example, daily and monthly totals on your website? Do you fetch historic or aggregated values through MQTT as well? How does your GUI query the topic tree? Do you have a naming convention in place for topics? In your code examples I see topics like ‘raw/…’, ‘value/…’ and ‘control/’. Maybe something along the lines of what homA uses?

    Frank

    • Hi Frank,

      The choice between Node.js or php is very personal; I’ve done some things with php in the past, but I’m not experienced enough to just start coding whatever pops up in my mind, far from it 😉 I know someone who has built a very large Domotica system in php, so php can’t be a bad choice; that’s about all I can say about it. Use what fits you best, I’d say…

      Historical data will always stay in a database, even when the migration that’s going on right now has finished. Currently I’m using SQL Server 2005 but maybe I’ll make a switch to something else like MySQL and run it on my NAS. We’ll see about that in the near future..

      I use my own topic naming conventions, like raw/ for raw sensor data; for example a door sensor could be topic raw/7823/open with possible values 0 or 1: 0 would mean closed and 1 would mean open. But “closed” and “open” are too specific because they’re localized; therefore I also use a value/ topic for the human readable values like “open|closed”, “open|gesloten” or “geöffnet|geschlossen”, depending on the language the user prefers. That’s not done in the driver code but ‘somewhere else’, based on a table full of value/text pairs per information type (e.g. door sensor, motion, …) and language. And I use topics like logging/ etcetera. I try to follow the DomoMQTT standard where it’s useful and possible.
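      A small sketch of that value/text mapping; the table contents below are illustrative (the real mapping lives in a database table per information type and language):

```javascript
// Illustrative value/text table: per information type, per language,
// a mapping from raw sensor value to human-readable text.
var valueTexts = {
  doorSensor: {
    en: { 0: 'closed', 1: 'open' },
    nl: { 0: 'gesloten', 1: 'open' },
    de: { 0: 'geschlossen', 1: 'geöffnet' }
  }
};

// Translate a raw value to the text for the user's preferred language.
function toText(infoType, lang, rawValue) {
  return valueTexts[infoType][lang][rawValue];
}
```

      So a payload of 1 on raw/7823/open would be re-published on the value/ tree as ‘open’ or ‘geöffnet’, depending on the configured language.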

      • Frank says:

        Hi Robert,

        I’ll try node.js, will see if it makes things easier than in PHP. If not, at least I learned a little bit of node.

        Regarding naming conventions… Are you re-posting ‘raw’ topics into a semantic tree with a topic- and language-specific value mapping, all derived from your database? Or do you translate topic names and values when you fetch data for the GUI?

        Thanks for the DomoMQTT link – another piece of the big puzzle I didn’t know about. I just wish the standard was a little more verbose. It’s a good starting point though.

        Thanks,
        Frank

        • Yes, I re-publish the raw values as soon as they’re available. This page is a nice example – the listening mode of the AV receiver is a raw numeric value that can have about 60 different values; they’re all stored in a table with value/text pairs. The webpage displays the text; at this time it’s “Neo6:Cinema”, while the numeric (raw) value is 160.
          The raw values are much easier to use (behind the scenes, so to speak) when you start creating events or scenarios, cause they (i.e. what they stand for) never change, no matter the textual representation you give them. That’s how I’ve been working since day 1. There are just a few pages on my website that use MQTT right now, but those that do are working fine, displaying text from the ‘value’ tree 🙂
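          A sketch of what that re-publishing step could look like: the raw topic is mapped onto the value tree and the raw number is looked up in a value/text table. The listening-mode entry comes from the example above; the topic name is made up:

```javascript
// Re-publish sketch: map a raw topic/payload onto the 'value' tree
// using a value/text lookup table. The 160 → 'Neo6:Cinema' entry is
// from the post; the topic name itself is illustrative.
var listeningModes = { 160: 'Neo6:Cinema' };

function republish(topic, payload, table) {
  return {
    topic: topic.replace(/^raw\//, 'value/'),  // raw/… → value/…
    payload: table[payload]                    // numeric → text
  };
}

// republish('raw/avr/listeningmode', 160, listeningModes)
// → topic 'value/avr/listeningmode', payload 'Neo6:Cinema'
```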

          • Frank says:

            Thanks. I really appreciate your feedback. BTW, I’ve got a Calenta myself. Can’t wait to solder a cable and start monitoring live heating status :-).

            I am still struggling with my persistence architecture. So, drivers connect to real world devices, push raw data to MQTT. Some MQTT-everything-subscriber receives all raw topics, writes them to a database, maybe with a timestamp, maybe aggregates min/max/sum/count/by day/by month etc., maybe reposts into a semantic tree.

            So far so good. Now, a UI (for example a REST based web server) might get certain values from the MQTT persistence database (assuming the persistence flag was set at publishing). However, for historic data the UI would need to talk to the database directly, since MQTT doesn’t natively support a query scheme and I don’t think it’s appropriate to store millions of historic samples in MQTT persistence. The UI might as well just query historic AND current values from the database instead of MQTT. Wouldn’t this render MQTT useless? Instead of publishing to MQTT, drivers might just post their raw data to a node.js persistence server which in turn updates the database or does whatever aggregation/renaming/reposting is required. Drivers may fetch their configuration from the node.js persistence server using the same methods a UI would use for querying current states, historic data, waiting for events etc. If node.js can handle multiple connections as efficiently as they say, then what’s the value of using MQTT in between? On top of persistence/reposting, driver configuration and commands, what other MQTT subscribers are needed?

            Please forgive me if these are stupid beginner’s questions. I’m just trying to re-think the big picture.

          • Well, I don’t have all the answers either, and asking questions is never stupid 😉

            My (single) database table with all the historic data contains >880000 rows and is >500 MB in size.
            I don’t think it’s good to ‘burden’ an MQTT broker with this.
            And I don’t aggregate my historic data (for example, as in creating rows with monthly power usage or so). That’s being done in the queries.
            So for displaying charts with historic data, I think it’s best to keep on using the database.

            For displaying current values however, MQTT will be used. And because I’d like to keep the number of technologies/mechanisms that are going to be used to transport data to a minimum, I’ll use MQTT wherever possible. Even an Arduino can ‘do’ MQTT and still have some RAM for what it’s doing. That’s why I chose MQTT to be the main transport mechanism of data.

            And why use MQTT for current values anyway? After all, I do have a ‘devicestatus’ table in my database, so why bother?
            Because displaying my Homepage (http://www.hekkers.net/domotica) takes about 50 SQL queries to display all the information; but that information becomes ‘old’ very quickly: the moment the queries have finished! 😉
            You’ll have to refresh the page or use some sort of scripting to refresh the page periodically (yuck!).
            With MQTT the information you see is always real-time. I know, there are other ways to accomplish that same goal, but right now I don’t see any reason to introduce another way.

            But as I said, I don’t have all the answers and maybe, when I start working on the website, UI’s and such, I’ll come to the conclusion that doing it differently is better.
            We’ll see…

          • Frank says:

            Thanks for your insights.

            I spent the bigger part of this week’s spare time thinking about the architecture of my to-be-redesigned “domotica” system. For sure I will use a message bus and there will be independent (=decoupled) drivers for the very reasons you explain in your fantastic blog.

            Currently, MQTT is my #1 contender for the message bus. It’s lightweight, proven and has clients/bindings in almost every programming language (albeit the lighter-weight clients only seem to support unreliable MQTT QoS 0, aka “fire and forget”).

            My #2 contender is JSON over multicast UDP. With JSON over UDP I can design the messaging scheme I really need (versus being bound by MQTT pub/sub limitations). A UDP packet carrying a JSON object {"id":"component/item","set":"on"} would control an actor, another UDP packet with a JSON object {"id":"component","query":"*"} would enumerate all items and their status from a component. Sensor status changes would be posted as JSON objects {"id":"component/item","status":"27.3"}, optionally with extra fields such as timestamp, etc. Because it’s multicast I don’t need a broker, which could be a bottleneck (unlikely for home automation, but still) and would be a single point of failure. Any embedded board acting as an MQTT client today should happily send and receive JSON over UDP.
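            Sketched in Node.js, such messages would just be JSON objects carried by datagrams from the dgram module. The ids, multicast address and port below are made-up examples:

```javascript
// Sketch of JSON over multicast UDP: messages are plain JSON objects.
// The ids, multicast address and port below are made-up examples.

// Encode a control message for an actor.
function controlMessage(id, action) {
  return JSON.stringify({ id: id, set: action });
}

// Decode an incoming datagram payload back into an object.
function decodeMessage(buf) {
  return JSON.parse(buf.toString());
}

// Sending would use Node's dgram module, for example:
//   var dgram = require('dgram');
//   var sock = dgram.createSocket('udp4');
//   var msg = Buffer.from(controlMessage('light/kitchen', 'on'));
//   sock.send(msg, 0, msg.length, 5000, '239.255.0.1');
```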

            On the other hand, MQTT has a very handy last-will feature, supports SSL and passwords (looking at recent news I will want that rather sooner than later).

            #3 would be MQTT carrying JSON objects as MQTT message content (in this case less the id field which would move to the MQTT topic).

            I’m not sure yet which one I’ll go with. Guess at the moment #3 is my favorite.

            Frank

  2. Mark says:

    I’m wondering what your reason was to choose MQTT instead of ZeroMQ. I read you were looking into ZeroMQ a year ago.

    Are you using a broker, or do you send direct messages?

    • I think the primary reason is that MQTT has all the things I need and it (the protocol) is easy to work with and lightweight. And the code base is small – you can even implement it on an Arduino if you want to. Windows source code for 0MQ is a >2300 KB zip file, while MQTT source code stays below 40 KB (because it’s just the client code of course). And since we’re talking about a broker that handles about 8 outbound and 6 inbound messages/second, I don’t think I’ll have to worry about the cons of MQTT compared to 0MQ. When I discovered MQTT, 0MQ looked like a messaging system of which the size just didn’t match with the system I wanted to use it for…
      I use a broker, named Mosquitto.
