
  • Topic Author
  • The good news is that this version performs better than the old one - even with 1 mailbox. The reason is of course that the consumer is more aggressive than the old one, which only read 4 events/s. This one will read as fast as possible when events come in and then back off to polling 4 times/s when idle (to keep the reaction speed to incoming events).

    With only 3 mailboxes it seems to be able to sustain a throughput of 100 events at 10 events/s - which also seems reasonable.

    I'll run some more performance tests (CPU load) before I include it.

    The general rule of thumb seems to be that the heavier the work the consumer is doing (lots of CPU-heavy rules), the more advantage there is in adding more mailboxes. For the average user, however, 1 mailbox seems to be enough.


    @jgab

     

    Thanks for your detailed analysis :-) Any improvements are welcome!

     

    Could you send me your latest version of experimental code?

    I will do my own test - different results will probably be due to my consumer load...

    This evening I will send you my scene via PM.

  • Topic Author
  • 2 hours ago, petrkl12 said:

    Could you send me your latest version of the experimental code?

    Here it is

    Please login or register to see this attachment.

  • Topic Author
  • Here is the latest experimental version with the best throughput so far.

    Please login or register to see this attachment.

     

    So, normally the ER framework can sustain 9 simultaneous triggers (arriving in the same millisecond). We always have one instance reserved for the main loop; otherwise it would be 10, like any normal scene. However, an ER scene instance started by an incoming trigger tries to quickly send the trigger to the main loop via a "mailbox" and then terminate, giving an opportunity for a new trigger to start a new instance. We can think of this as having a buffer of size 9 for incoming triggers. Or the "bandwidth" being 9... (a sketch of the hand-off follows below)
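    A minimal sketch of that hand-off, assuming the mailbox is an ordinary HC2 global variable; the names and the omitted busy-handling are simplifications, not the actual ER internals:

      -- Sketch only: assumes the mailbox global variable already exists.
      local mailbox = "MAILBOX" .. __fibaroSceneId

      local trigger = fibaro:getSourceTrigger()
      if trigger.type ~= 'other' then
        -- Started by an incoming trigger: post it to the mailbox for the
        -- main loop to pick up, then terminate to free the instance slot.
        fibaro:setGlobal(mailbox, json.encode(trigger))
        fibaro:abort()
      end
      -- Only the reserved main-loop instance continues past this point.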

     

    The main loop then polled the mailbox at a rate of 4 events/triggers per second. That meant we could have a throughput of 4 events/s and take bursts of 9 simultaneous triggers. Longer periods of triggers arriving faster than every 250ms would build up a backlog and eventually exceed the maximum 10 instances allowed.

     

    This worked out well for most normal users... including myself.

     

    @petrkl12 has a lot of devices giving off a lot of events, and he hit the dreaded max-10-instances limit in his scenes. His suggestion was to use multiple mailboxes, so that incoming triggers have a better chance of being off-loaded from their instances, letting the instances terminate quicker. The other optimisation was to let the main loop poll more often than 4 times/s when events arrive and then back off to 250ms when no events have been seen for a while - roughly as in the sketch below.
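    A sketch of the two ideas combined (multiple mailboxes plus adaptive polling); the interval values, the mailbox count, and handleEvent are assumptions, not the real ER code:

      -- Sketch of the adaptive poll; the real ER loop differs in detail,
      -- and handleEvent is a hypothetical stand-in for the rule dispatcher.
      local _NUMBEROFBOXES = 3              -- number of mailboxes (assumed value)
      local BUSY_MS, IDLE_MS = 10, 250      -- poll intervals in ms (assumed values)

      local function handleEvent(ev)        -- stand-in for ER's rule engine
        fibaro:debug("event: " .. tostring(ev.type))
      end

      while true do
        local found = false
        for boxID = 1, _NUMBEROFBOXES do    -- scan every mailbox
          local box = "MAILBOX" .. __fibaroSceneId .. "_" .. boxID
          local msg = fibaro:getGlobalValue(box)
          if msg ~= nil and msg ~= "" then
            fibaro:setGlobal(box, "")       -- empty the box so producers can reuse it
            handleEvent(json.decode(msg))   -- hand the trigger over to the rules
            found = true
          end
        end
        -- Keep polling fast while events arrive; back off to 250ms when idle.
        fibaro:sleep(found and BUSY_MS or IDLE_MS)
      end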

    All this, together with some discussion back and forth, has improved the throughput significantly in the experimental code included.

    It can still only take 9 exactly simultaneous triggers; that's what the HC2 limits us to. However, the throughput is much better, with sustainable trigger streams of 1 trigger every ~45ms (roughly 22 triggers/s) when using 30 mailboxes. Bursts of 10-20 triggers at a rate of 15-20ms/trigger are also handled, which probably has more practical importance.

     

    Already at 3 mailboxes, much better performance is seen.

    This improvement (thanks to @petrkl12) will be included in the next release, and the default will be 3 (autogenerated) mailboxes. In the past the mailbox name was "MAILBOX"..__fibaroSceneId, but now it will be "MAILBOX"..__fibaroSceneId.."_"..boxID, e.g. "MAILBOX14_1".
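    In other words, the names could be generated like this (a sketch based on the scheme above; scene 14 used as an example):

      -- Autogenerated mailbox names, following the scheme in the post.
      local _NUMBEROFBOXES = 3   -- the new default
      local mailboxes = {}
      for boxID = 1, _NUMBEROFBOXES do
        mailboxes[boxID] = "MAILBOX" .. __fibaroSceneId .. "_" .. boxID
      end
      -- For scene 14 this yields "MAILBOX14_1", "MAILBOX14_2", "MAILBOX14_3".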

     

    I'm surprised how powerful the HC2 really is :-) 


    Thanks!

    Your framework is really the best one, and my recommendation is that other HC2 users should use it!

     


    @jgab it's not every day I get this error - can you see what is causing it?

     

    Please login or register to see this code.

    rule

    Please login or register to see this code.

     

  • Topic Author
  • 24 minutes ago, jompa68 said:

    @jgab it's not every day I get this error - can you see what is causing it?

    Nothing obviously strange with that rule. Could you PM me with the src lines around 1308 and 1235 in your scene?

  • Topic Author
  • New version v.1.15. Update EventRunner.lua and EventRunnerDebug.lua.

    New trigger handler as discussed in previous posts. If the scene exceeds 10 instances, you may increase the variable _NUMBEROFBOXES at the beginning of the scene to possibly improve performance. The default is 1 mailbox, and with that it behaves more or less as before. At 3 you get somewhat better performance for shorter trigger bursts, and we have tested it up to 30 mailboxes, which gives quite good throughput.

    The trade-off is that more mailboxes require a bit more CPU load when "idle", because there are more mailboxes to look through. At 30 mailboxes I get around 10% CPU at idle. At 1 mailbox it behaves as before, with very low idle CPU.

    The other behaviour that is different is that when triggers are coming in, the CPU will spike a bit more, as the new algorithm is more aggressive about reading triggers - however, it throttles back when no triggers are available. This is just to better avoid hitting the "max 10 instances" limit for the scene and is in general a sensible trade-off.

    So, rule of thumb: keep _NUMBEROFBOXES at 1, and if you run into "max 10 instances" errors, increase it in steps of 3 - as in the sketch below.
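    A minimal sketch of the setting (the variable name is from the post; the step values are just examples):

      -- At the top of EventRunner.lua; 1 is the shipped default.
      _NUMBEROFBOXES = 1   -- try 3, then 6, 9, ... if "max 10 instances" persists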

     

    The other way is also to not have too many triggers per scene, but instead to distribute them over many scenes - like one scene per room, etc.

    Share on other sites
  • Topic Author
  • Tried node-red and the Telegram Bot functionality. It uses the

    Please login or register to see this link.

    node.

    The total flow I use is shown below, the new nodes are the telegram receiver and the telegram sender.

    Please login or register to see this image.


    You need to set up the bot and get a token according to the instructions that come with the node. You also need to get the chatID that is created when you connect to the bot.

    The Telegram receiver will send the message to the HC2/ZBS with the format

    Please login or register to see this code.

    The 'msg' field is the Telegram message as received.
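    Purely as an illustration - the actual format is in the hidden code above, and the field names here are assumptions:

      -- Hypothetical illustration only; not the real event format.
      local receivedMessage = { text = "lights on?", chatId = 123456 } -- made-up payload
      local event = {
        type = 'Telegram',       -- assumed event type tag
        msg  = receivedMessage   -- the Telegram message as received
      }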

    To send a message back to Telegram use

    Please login or register to see this code.

    It's easiest to define some helper functions that copy the chatId, type, and messageId from the incoming message (a hypothetical sketch follows the hidden code below)

    Please login or register to see this code.
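    For illustration only - the function and field names below are assumptions, not the actual helpers:

      -- Hypothetical helper: copies the routing fields from the incoming
      -- Telegram event so the reply reaches the same chat.
      local function makeReply(inEvent, text)
        return {
          chatId    = inEvent.msg.chatId,    -- answer goes back to the same chat
          type      = inEvent.msg.type,      -- message type, copied as received
          messageId = inEvent.msg.messageId, -- lets Telegram thread the reply
          msg       = { text = text }        -- payload forwarded to Telegram
        }
      end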

    Everything in the 'msg' field is copied over to the payload for the Telegram message, so options like buttons, pictures, or whatever else Telegram supports can be included.

     

    So, now it is really easy to chat with your home :-)

     

    Flow.

    Please login or register to see this spoiler.

     

