ESP8266 in deep sleep

With the ESP-12 modules on a breadboard adapter I was finally ready for some tinkering. The plan for today was very simple: flash NodeMcu firmware, start programming the ESP8266 in Lua and try deep sleep mode.

I put an ESP-12 on a breadboard, used an FTDI-ish thing to connect it to my Windows PC and powered it with a 3.3/5V breadboard power supply (set to 3.3V).

ESP-12 in deep sleep

Flash!

Time to flash the thing! I knew that sometime in January MQTT was added to the NodeMCU firmware, so I searched for a recent firmware version that contained the MQTT code. There were rumors that MQTT was broken in the latest NodeMCU firmware releases – on the ESP8266 forum I read that v20150127 was the latest release in which MQTT still worked; yesterday I learned that the breakage was due to the addition of MQTT v3.1.1 support.

Tools and other things I downloaded to get started with the ESP-12 were:
NodeMcu firmware
NodeMcu flasher
LuaUploader
LuaLoader

The latter 2 have some overlap in functionality – it looks like LuaLoader will be my favorite. OK; now that I have a flash tool and the v20150127 firmware – what’s next? After some trial and error I found out that I had to change something on the ‘Config’ page of the flash tool:

NodeMCU Flasher

I unchecked items 2, 3 and 4 and pointed the first item to the firmware image I wanted to use. Figuring this out took me longer than soldering the breadboard adapter … A wire from GPIO0 to GND followed by a cold boot put the ESP-12 in firmware upload mode; after that, clicking “Flash” on the Operation tab was enough to flash the firmware.

After some playing around with “Hello World”- and “Blink”-like Lua scripts it was time to do something that would be a bit more exciting – things like interrupts, deep sleep and some MQTT of course.
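For those who haven’t seen NodeMCU Lua yet: a “Blink” boils down to just a few lines. This is only a rough sketch of what I was playing with – I’m assuming NodeMCU pin index 4 here, which maps to GPIO2 on the ESP-12:

-- blink a LED on GPIO2 (NodeMCU pin index 4) every 500 ms
local pin = 4
gpio.mode(pin, gpio.OUTPUT)
local state = gpio.LOW
tmr.alarm(1, 500, 1, function()   -- timer 1, 500 ms interval, repeating
  state = (state == gpio.LOW) and gpio.HIGH or gpio.LOW
  gpio.write(pin, state)
end)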

First I wanted to know everything about deep sleep; I found this forum post and read about another mode the ESP8266 can end up in – zombie mode. I wanted to avoid that mode of course, so I took the suggested zombie countermeasures: pulling up GPIO0 & GPIO2 to VCC with ~5 kΩ resistors. For deep sleep mode, RST & GPIO16 have to be connected to each other and also pulled up to VCC; and of course CH_PD as usual.
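The Lua side of it is simple; a minimal sketch of what I’m after (the 60 seconds is just an example value) looks like this:

-- do something useful, then power down for 60 seconds
-- node.dsleep() takes microseconds; when the interval expires GPIO16 pulses RST and the module cold boots
print("Measuring / publishing goes here ...")
node.dsleep(60 * 1000000)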

Boot loop protection!

And of course, the first script I made with a node.dsleep() in it didn’t work … well, it did what it was supposed to do, but not what I meant it to do! Some error in the code caused a reboot within a few seconds and there was no way I could stop this boot loop; nothing helped. Only after re-flashing the firmware did I regain control over my ESP-12… So the first thing I did was search for a workaround/solution, and I found one here; my init.lua (the NodeMCU autoexec.bat 😉 ) now looks like this:

FileToExecute="printtext.lua"
l = file.list()
for k,v in pairs(l) do
  if k == FileToExecute then
    print("*** You've got 5 sec to stop timer 0 ***")
    tmr.alarm(0, 5000, 0, function()
      print("Executing ".. FileToExecute)
      dofile(FileToExecute)
    end)
  end
end
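So when I see the 5-second message on the serial console I can simply stop timer 0 by hand and keep the ESP-12 at the Lua prompt instead of letting it run the (possibly broken) script:

tmr.stop(0)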

Yeah I know, this adds an extra delay of 5 seconds after a restart, but that is much, much better than having to re-flash each time you make a mistake – and since my experience with Lua is like 1~2 hours, I think this will be my init.lua for a loong time.

Results for this evening: Deep sleep seems to be working… onwards!

New ESP8266 (ESP-12) modules ready

Today, while my son and I were visiting the NMM, the breadboard adapters for my new ESP8266 type ESP-12 modules arrived. Finally …

ESP-12 on a breadboard

I was a bit surprised by their size though; the adapter covered the whole middle area of the breadboard, leaving no room to plug in wires, as you can see. So instead of using the headers that came with the adapter, I used headers with a length of 18 mm. This way the adapter board is still held in place on the breadboard and I can use female wires to connect to the ESP-12.

Now let’s see what has happened in the ESP8266 scene during the time I was away 😉

WordPress on Banana Pi?

WordPress is one of those applications I was never sure a small couple-of-watts computer like a Raspberry Pi, Banana Pi or Odroid could handle. WordPress always felt a bit sluggish… Well, there’s only one way to find out, right? Just do it 😉

Banana Pi

So this afternoon I moved my WordPress site from a Hyper-V Fedora Linux VM to a Banana Pi that didn’t have that much to do yet – and right now, you’re looking at it! (WordPress on a Banana Pi, that is)

Voice control revisited: the Web Speech API

After exploring some things last April, it went quiet on the voice control front. I also played with the Web Speech API for a while but never finished it. Last weekend (while still waiting for my ESP8266 (ESP-12) modules to arrive) I decided to give it another try, even though I backed the Homey project on Kickstarter and probably won’t even need to spend time on this – this is just for fun.

Home screen / Voice control page

The Web Speech API documentation is not that hard to understand and there are dozens of good examples to be found – just search for “Web Speech API demo” or something similar.

I had already made a small ‘voice’ button on the Home page of our web app, so all I had to do was finish the page behind that button: a large start/stop button to control the voice recognition and an area in which the results of the speech recognition are displayed. Speech recognition works very well – impressive stuff!

The code is very short and simple actually:

<div data-role="page" id="pgstt" data-theme="a" data-content-theme="a">
  <div data-role="header"><h2>Spraak commando</h2></div><!-- /header -->
  <div data-role="header"><h2 id="sttstatus"></h2></div><!-- /header -->
  <div data-role="content">
  <button id="sttbutton" onclick="toggleStartStop()"><img id="sttbuttonimg" src="icons/micbut.png" /></button>
  <div style="border:dotted;padding:10px">
    <span id="interim_span" style="color:grey"></span>
  </div>

  <script type="text/javascript">
    var recognizing;
    var recognition = new webkitSpeechRecognition();
    recognition.lang = "nl-NL";
    recognition.continuous = true;
    recognition.interimResults = true;
    reset();
    recognition.onend = reset;

    recognition.onresult = function (event) {
      var interim = "";
      var last = event.results[event.results.length-1][0].transcript;
      interim_span.innerHTML += last;
      cmdPublish('speech', last);
    }

    function setLS(t) {
      sttstatus.innerHTML = t;
    }

    function reset() {
      console.log('Stopped');
      recognizing = false;
      $("#sttbuttonimg").attr("src","icons/micbut.png");
      setLS("Gestopt");
    }

    function toggleStartStop() {
      if (recognizing) {
        $("#sttbuttonimg").attr("src","icons/micbut.png");
        setLS("Gestopt");
        recognition.abort();
        reset();
      } else {
        recognition.start();
        recognizing = true;
        $("#sttbuttonimg").attr("src","icons/micbutl.png");
        setLS("Luisteren ...");
        interim_span.innerHTML = "";
      }
    }
  </script>
  </div><!-- /content -->
  <div data-role="footer" data-position="fixed">
  </div><!-- /footer -->
</div><!-- /page -->

That’s it. Primus takes care of delivering the text to the server-side NodeJS script, which passes it on to the Nools rules engine that I use to automate things. I can now make rules like this:

//---------------------------------------------------------
rule hobbytestopen {
    when {
      or(
        m1: Message m1.t == 'sensor/value' && m1.changedTo('open'),
        m1: Message m1.t == 'speech' && m1.contains('test licht aan')
        );
    }
    then {
        unchange(m1);
        log('Execute rule Office test open');
        publish('command/plcbus','{"address":"B02", "command":"ON"}');
    }
}

Now this rule can be triggered by either a sensor or a speech command which contains the words ‘test’, ‘licht’ and ‘aan’ (for the non-Dutch: “test light on”). The only restriction for now is that those words need to be in the order specified in the condition.

That’s not good enough of course, because not only would saying “test licht aan” trigger this rule, but saying “blaastest wijst uit dat ik lichtelijk ben aangeschoten” (roughly: “breathalyzer test shows I’m slightly tipsy”) would too … not really intelligent 😉 But those are just small issues that can easily be handled.