Time-based sampling vs. signal duration

Not being ‘hindered’ by any knowledge about Manchester decoding and signal processing, I came up with the following idea. Rereading the OpenTherm Protocol 2.2 documentation, I found the following about the bit rate and timing:

Bit rate                           : 1000 bits/sec
Period between mid-bit transitions : 900 .. 1150 μs (nominal 1 ms)

Furthermore, there’s a built-in time margin in which a transition can take place: when TØ is the start of a bit period of ≈1000 μs, the transition must take place between TØ+400 μs and TØ+650 μs. If not, the transition wouldn’t conform to the OT protocol specs. This in fact means that the time window in which the transition can take place is rather large, namely 250 μs (100 μs before and 150 μs after the nominal mid-bit transition at TØ+500 μs). That’s 25% of the total bit period, which sounds like a lot to me, actually.
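Just to make that window concrete, here’s a minimal sketch in C of what such a timing check could look like. The names and constants are mine, taken from the figures above; this is not code from any OpenTherm library:

```c
/* Sketch: check whether a mid-bit transition falls inside the OT window.
   t0_us is the start of the bit period, t_us the transition timestamp,
   both in microseconds. Names and constants are illustrative only. */
#include <stdbool.h>
#include <stdint.h>

#define MIDBIT_MIN_US 400u  /* earliest allowed mid-bit transition: T0+400 us */
#define MIDBIT_MAX_US 650u  /* latest allowed mid-bit transition:  T0+650 us */

static bool midbit_in_window(uint32_t t0_us, uint32_t t_us)
{
    uint32_t delta = t_us - t0_us;  /* wraps correctly for an unsigned us timer */
    return delta >= MIDBIT_MIN_US && delta <= MIDBIT_MAX_US;
}
```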

I want to try to code something that’s just as flexible – maybe even more flexible, considering that it’s not unusual for things to be out of spec. Wouldn’t it be better to look for other things that can be measured just as well but are less dependent on exact timing? Maybe determining long vs. short periods is enough? So let’s forget about the timing and have a look at the highs and lows for a change. One thing that could be useful is the fact that the signal must always be stable for a period of at least 250 μs, because if not, it would be out of OT protocol specs.
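Something like this is what I have in mind for measuring those stable periods – a rough sketch in C, where read_input() and micros() are placeholders for whatever GPIO read and microsecond timer the actual platform provides:

```c
/* Sketch: measure how long the input stays at one level and flag anything
   shorter than 250 us as out of spec. read_input() and micros() are
   hypothetical stand-ins for the platform's GPIO read and us tick counter. */
#include <stdbool.h>
#include <stdint.h>

extern int read_input(void);    /* hypothetical: returns current line level */
extern uint32_t micros(void);   /* hypothetical: microsecond tick counter */

#define MIN_STABLE_US 250u

/* Blocks until the level changes; returns the duration of the stable period. */
static uint32_t stable_period_us(bool *out_of_spec)
{
    int level = read_input();
    uint32_t start = micros();
    while (read_input() == level)
        ;                       /* busy-wait for the next transition */
    uint32_t dur = micros() - start;
    *out_of_spec = (dur < MIN_STABLE_US);
    return dur;
}
```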

You could also say that a short period should be between 400 and 650 μs and a long period between 750 and 1250 μs. That leaves a gap of 100 μs between the longest ‘short’ period and the shortest ‘long’ period, with everything still within specs. I should be able to determine whether the signal has been stable for a short period or a long one… right?
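Classifying a measured period could then be as simple as this – again just a sketch, with the windows taken from the paragraph above and the enum names my own:

```c
/* Sketch: classify a measured stable period as short, long, or invalid,
   using the windows from the text (400-650 us short, 750-1250 us long).
   Anything in the 100 us gap or outside both windows is rejected. */
#include <stdint.h>

enum period_class { PERIOD_SHORT, PERIOD_LONG, PERIOD_INVALID };

static enum period_class classify_period(uint32_t dur_us)
{
    if (dur_us >= 400u && dur_us <= 650u)
        return PERIOD_SHORT;
    if (dur_us >= 750u && dur_us <= 1250u)
        return PERIOD_LONG;
    return PERIOD_INVALID;      /* in the gap, or out of spec altogether */
}
```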

Update:

The rest of this post has been deleted, because it was totally rubbish and incorrect – what was I thinking?? Too much beer perhaps… 😉

This will soon be fixed…


3 Responses to Time-based sampling vs. signal duration

  1. Pingback: Oops and yeah!

  2. Lemoi says:

    Why sample the signal levels on a timer instead of using edge detection? Implementing Manchester coding using edge detection, with edges occurring within timing limits, makes the code much simpler. Also, the whole difference between Manchester as per 802.3 and as per G. E. Thomas becomes irrelevant. It does save a lot of processing power as well.
