Recommended Posts

16 minutes ago, jgab said:

Please login or register to see this code.

 

OK, Thanks

 

The code below currently doesn't work - now there is an error. This is your code for setting the level of lights based on a long press of a key.

Btw, it never worked in my case and I don't know why ..

Please login or register to see this code.

 


 

Link to comment
Share on other sites

Are you sure long press is working for you? I saw that some devices will be shown in Fibaro as long press/hold down and then, after a second or so, as released, even if the device never sent released.

Link to comment
Share on other sites

It could be (it's a Z-Wave Remotec Scene Master) - I can try a different remote control to check it, e.g. the KeyFob from Fibaro.

 

@jgab

In the newest version on GitHub there has to be some error -

API error if any CentralSceneEvent is triggered ...

 

 

Edited by petrkl12
Link to comment
Share on other sites

  • Topic Author
  • 2 hours ago, petrkl12 said:

    it could be (Z-Wave Remotec Scene Master) - I can try different remote control to check it ie. KeyFob from Fibaro

     

    @jgab

    in newest version in Github has to be some error -

    API error if any CentralSceneEvent is triggered ...

     

     

    Sorry, the latest push was not tested on the HC2. I think I fixed the API error and pushed a new version.

    I'm a bit unsure why the Z-Wave Remotec Scene Master example doesn't work. I only have a Fibaro key fob to test with.

    Let me have another look at your example tomorrow morning. Meanwhile, do some simple logging of the CentralSceneEvent and see if you get the release like @tinman refers to.

    At least I hope the data structures from the different CentralSceneEvents are consistent. Otherwise we need to keep track of the device type too.

     

    Link to comment
    Share on other sites

    @tinman

    You're right - the Z-Wave Remotec Scene Master generates a lot of HeldDown and Released events during ONE long press of a button ....

     

    Is it possible to solve it?

     

    Btw, the corrected rules with :central look like:

    Please login or register to see this code.

     

    @jgab

    Your framework on GitHub is now OK.

    Edited by petrkl12
    Link to comment
    Share on other sites

  • Topic Author
  • 15 hours ago, petrkl12 said:

    @tinman

    You're right - Z-Wave Remotec Scene Master generates a lot of HeldDown and Released events during my ONE long press of button ....

     

    Is it possible to solve it?

     

    btw corrected rules with :central looks like:

    Please login or register to see this code.

     

    @jgab

    your framework on github is now OK

     

    Ok, I haven't been able to test this, but the principle is to delay the 'Released' event (here 2 sec) ... the d.heldDown flag is to make sure we don't do two 'HeldDown' in a row. Maybe it works with 1 sec.

    I'm not sure if the solution is acceptable. Otherwise do up/down on one key and stop on another. 

    Please login or register to see this code.

    This version has a chance to work. There were some syntax errors in the first attempt. One tricky one that I missed myself was the "|| >>".

    The syntax for the "|| <test> >> <expr> || <test> ..." is

    Please login or register to see this code.

    This will log 'b' if 55 is on. It's important that one doesn't end the <expr> with a semicolon unless one wants to terminate the construct. E.g.

    Please login or register to see this code.

    This will not log 'b', as the second statement becomes a substatement of the first (log 'a'), and because that is false it will skip ahead to the log 'always...'.

    Another thing is that a statement like "key = foo:central & key.keyId=='1'" needs to have parentheses, as '&' has priority over '='.

    E.g. "(key = foo:central) & key.keyId=='1'"

    Otherwise the code would interpret it as "key = (foo:central & key.keyId=='1')" which is most likely not the intention.

    Edited by jgab
    Link to comment
    Share on other sites

  • Topic Author
  • This post is long and contains thoughts and implementation notes for the EventRunner framework. This part will discuss the basic EventRunner framework and how it deals with asynchronicity (to the best of my knowledge). A second post will talk about implementation aspects of the EventScript language.  These posts will be updated over time. 

    Writing things down serves as a way for me to learn what I have been doing...

     

    Concurrency and the perils of fibaro globals

    When implementing scenes in Lua on the HC2, there comes a point when one starts to deploy helper scenes; maybe a scene responsible for doing notifications on behalf of other scenes, or a log scene aggregating log messages from the other scenes and mailing them out once a day. In short, there is a need to communicate between scenes.

    Sometimes it can be indirect; a VD sets a fibaro global to ‘night’ and another scene triggers on that and starts turning off lights.

     

    However, the HC2 runs VDs and instances of started scenes in parallel, like applications typically run on a PC.

     

    The problem with that is that when parallel processes compete for the same resource, there is a risk of really strange and difficult-to-find errors - errors that only happen maybe once a week or month, highly unpredictable - errors that make developers blame the hardware...

    On a PC, to continue the comparison, the application developer gets a lot of support for sharing resources in a safe way; on the HC2, on the other hand, we don't have that much help...

     

    Example.

    Assume we have a simple scene that updates a fibaro global ‘Test’ with the values 1 to 10. Let’s call it the “Producer scene”, as it produces numbers…

    Please login or register to see this code.
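
    For illustration only (this is not the original code), a minimal producer along these lines, assuming the fibaro global 'Test' already exists in the variables panel, could look like:

    ```lua
    -- Illustrative sketch of the "Producer scene":
    -- write the numbers 1..10 to the fibaro global 'Test'.
    for i = 1, 10 do
      fibaro:setGlobal("Test", tostring(i))
    end
    ```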

    Then we have another scene that triggers on the global ‘Test’ changing value, and prints that value. Let's call it the “Consumer scene” as it consumes the incoming number. Btw, I have set “maximum allowed instances” to max (i.e. 10) for this scene.

    Please login or register to see this code.
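
    Again only as an illustration (not the original code), a minimal consumer triggered by the global could look like:

    ```lua
    --[[
    %% globals
    Test
    --]]
    -- Illustrative sketch of the "Consumer scene": triggered whenever 'Test' changes value.
    local value = fibaro:getGlobalValue("Test")
    fibaro:debug("Test = " .. tostring(value))
    ```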

    When we run the producer scene, in the best of worlds, we would expect the consumer to print 1,2,3,4,5,6,7,8,9,10
    but in reality we get (it varies slightly every time we run it):

    Please login or register to see this code.

    First note that we got 10 values, but not the values we expected. This means that the consumer got notified 10 times, every time ‘Test’ changed value. That’s great.

    However, what happens is that the producer sets ‘Test’ to ‘1’, the consumer gets triggered, and the producer unfortunately (because it's running in parallel with the consumer) manages to set ‘Test’ to ‘2’ before the consumer's new instance is up and running and has read the value of ‘Test’.

    So it continues; the producer manages to update the variable quicker than the consumer manages to read it. We get the strange number sequence in the log above, but now we know why.

     

    So, can’t we solve it with a small delay in the loop where the producer updates ‘Test’, to allow the consumer to get time to read the value?

    Please login or register to see this code.

    No.

    In general you can’t. This example will do better and in most runs produce all the numbers.

    But we can't guarantee it. There could be other things going on in the box that get all the CPU for a while, making that delay effectively zero for the consumer and producer. It's not a real-time operating system, and how much time instances get depends on load and other resource constraints. Anyway, fixing things by throttling performance is not that satisfactory…

     

    Assume now that we have 2 different producer scenes. No matter how much they sleep, there is a chance that they will wake up at the same time and set the global ‘Test’ with their values. One of them will lose its value before the consumer can read it… It may also be that the producers are implicitly synced because they react to the same triggers, like other globals, making it likely that they go to sleep at almost the same time and wake up at almost the same time. So, if you do add sleeps, at least make them random.

     

    There is a construct that partly allows us to solve this: fibaro:startScene(sceneID, {args}).

    This is a way to give arguments to another scene without relying on setting a fibaro global to trigger the other scene. Instead, the consumer scene gets started with a sourceTrigger of ‘other’ and the arguments available to the consumer in fibaro:args(). The HC2 makes sure to store the arguments in each started consumer instance, so there is no risk that the producers starting the consumer will overwrite any argument.

    Please login or register to see this code.

    Please login or register to see this code.
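
    As an illustration of the principle (the scene id and names are assumptions, not the original code):

    ```lua
    -- Producer sketch: start the consumer scene directly, passing the value as an argument.
    local CONSUMER_SCENE_ID = 42              -- hypothetical scene id of the consumer
    for i = 1, 10 do
      fibaro:startScene(CONSUMER_SCENE_ID, {tostring(i)})
    end

    -- Consumer sketch: started with sourceTrigger type 'other'; the value is in fibaro:args().
    local args = fibaro:args()
    if args and args[1] then
      fibaro:debug("Got value " .. tostring(args[1]))
    end
    ```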

    This works much better. Close to 10 consumer instances will be started (depending on the speed of the producer and consumer), and they will run in parallel and each print the value they were given; all the numbers 1 to 10 will be printed, with none missing or duplicated…

    Please login or register to see this code.

    …but not in the right order.

     

    That's because the instances of the consumer are started in parallel, and which instance gets the first chance to run may vary a bit…

     

    Ok, this may not be a big problem. A log scene (consumer) should get a timestamp with the messages from the logging scenes (producers) so it can sort the messages in time order before storing or sending them elsewhere - it can’t deduce the order of the log messages only based on the order they arrive.

     

    However, assume the log scene would like to store the messages in a fibaro global for persistence. Here our consumer appends the numbers it gets and stores them in a global named ‘Log’.

    Please login or register to see this code.
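
    A sketch of such a logger (illustrative only) shows the read-modify-write that makes the race possible:

    ```lua
    -- Logger consumer sketch: append the incoming number to the global 'Log'.
    local args = fibaro:args()
    local msg  = args and tostring(args[1]) or ""
    local log  = fibaro:getGlobalValue("Log") or ""
    -- another instance may read and write 'Log' between these two lines, losing one of the updates
    fibaro:setGlobal("Log", log .. "," .. msg)
    ```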

    Well, besides the fact that the numbers may be out of order (we don't care in this case), we have another potential problem. Assume a started Logger instance gets the global Log value and appends its argument. However, before it is able to write that Log value back (fibaro:setGlobal), another parallel instance of the Log scene also reads the Log value and is quick enough to append its argument and write it back to the global before the first instance. The first instance will then write its version of the Log value back to the global, in effect losing the value that the second instance stored there.

     

    So, we have the same possible errors within a scene and its instances if we somewhere have multiple instances writing to fibaro globals…

     

    Maybe we could solve this, so that scene instances write to fibaro globals in an orderly fashion? Some way to synchronise the producers and consumers?

     

    Solving concurrency with the mailbox model

    Many modern languages have support for multithreading (parallel processes), or the operating system has libraries dealing with synchronisation of parallel processes. In the Lua we have on the HC2, we have neither.

     

    However, a reasonably good approach is to use a shared mailbox and a write token.

     

    Think about the fibaro global as a mailbox that can store one message.

    1. The consumer looks in the mailbox and, if there is a message, it takes the message and sets the mailbox to empty.
    2. Producers don't want to put a message in the mailbox that will immediately be overwritten by another producer. Instead, a producer will wait until the mailbox is empty and then throw its own marker message into the mailbox. Then it looks in the mailbox to see if the marker message is still there. If it's there, it assumes it got the right to the mailbox and puts its real message there. If some other producer's marker message is there instead, it assumes the other producer got the right to the mailbox and waits until the mailbox is empty before trying again to acquire the right to post a message.

     

    A reasonably good approach, because if one picks this algorithm apart there is a chance that messages get overwritten here too (we would really need a test-and-set primitive). But here theory meets actual implementation. I have run extensive stress tests with producers and consumers exchanging tens of thousands of messages using this model and never had a single overwritten/lost message. At this point I consider it good enough to be a practical mailbox implementation.

     

    A bit simplified, the producer's code looks like this:

    Please login or register to see this code.
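
    A simplified sketch of the idea (not the original code), with 'Mailbox' as an assumed fibaro global where "" means empty:

    ```lua
    -- Producer sketch: claim the mailbox with a unique marker, then post the real message.
    local function postToMailbox(event)
      local marker = "<marker:" .. tostring(event) .. ">"  -- tostring of a table includes an address, unique per instance
      while true do
        while fibaro:getGlobalValue("Mailbox") ~= "" do fibaro:sleep(100) end  -- wait until the mailbox is empty
        fibaro:setGlobal("Mailbox", marker)                                    -- try to claim it
        fibaro:sleep(100)
        if fibaro:getGlobalValue("Mailbox") == marker then                     -- still our marker? we won
          fibaro:setGlobal("Mailbox", json.encode(event))                      -- post the real message
          return
        end
        -- another producer claimed the mailbox; wait and try again
      end
    end
    ```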

    There are a lot of subtleties to the algorithm; e.g. the marker written has to be unique for each producer, otherwise a producer may think it got the right to the mailbox when in fact another producer got it. We can't use time or random values, as these are the same for spawned scenes. If the marker is not unique, empirical tests show that we get an overwritten message every 20 to 30 messages on average. The marker includes the tostring value of the event, which includes a "memory address" that tends to always vary between scene instances.

     

    One could think that the producer polling the mailbox every 100 ms to see if it's empty would drain resources. Well, it turns out that the actual check takes ~5 ms, so it's sleeping 95% of the time - and it only happens when the scene gets a trigger, and it usually doesn't need to wait as the mailbox is mostly empty.

     

    The (simplified) consumer looks like this

    Please login or register to see this code.
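
    Again only an illustrative sketch of the idea, using the same assumed 'Mailbox' global and the setTimeout loop described further down:

    ```lua
    -- Consumer sketch: poll the mailbox, empty it, and hand each event to handleEvent.
    local function handleEvent(event)
      fibaro:debug("Got event: " .. json.encode(event))   -- automation logic goes here
    end

    local function consumerLoop()
      local msg = fibaro:getGlobalValue("Mailbox")
      if msg ~= "" then
        fibaro:setGlobal("Mailbox", "")                    -- mark the mailbox as empty
        local event = json.decode(msg)
        setTimeout(function() handleEvent(event) end, 0)   -- run the handler in its own "process"
      end
      setTimeout(consumerLoop, 250)                        -- check again in 250 ms
    end

    consumerLoop()
    ```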

    The consumer, in the loop waiting for a new message to arrive, pauses 250 ms every iteration. Like for the producer, it turns out that actually checking whether there is a message takes less than 10 ms, so it's usually working less than 4% and sleeping more than 96% of the time. Running the framework "empty" takes very little CPU.

     

    In the consumer, loops are handled by setTimeout and not fibaro:sleep. The reason for that is that the consumer code is typically doing things with the events coming in. This means that we let the consumer loop run in its own "process" with setTimeout. Also, when we call our 'handleEvent' function, which should carry out whatever should be done when a trigger arrives, we call that function in its own "process" with setTimeout too. This allows our single consumer scene to juggle the tasks of polling the mailbox and carrying out actions associated with triggers/events without them blocking each other.

     

    So, it seems that we have a good enough model to create a synchronised mailbox to be used between producers and consumers that won’t drop messages in practice. That could be used for something cool…

     

    A side note.

    Many have a line in the beginning of their scenes that reads

    Please login or register to see this code.
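
    That line is typically some variant of (shown here only as an illustration):

    ```lua
    -- Abort every instance except the first one.
    if fibaro:countScenes() > 1 then fibaro:abort() end
    ```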

    The idea is to limit the number of simultaneously running scene instances to one. That in itself can be problematic, because you are actually ignoring triggers. However, some have noticed that if they add this to the scene and set "maximum allowed instances" to 2, hoping to be safe… they still get "too many instances" warnings. Well, we have now learned that 3 almost simultaneous triggers will start up 3 instances, and the HC2 will issue a warning because 3 instances are more than the 2 allowed. The HC2 has no idea that you have code in the scene that will kill the last two and reduce it to 1 instance… at least not before they have had a chance to run….

     

    A single scene instance model using a mailbox

    Assume a simple scene that turns on a light when a sensor is breached and turns off the light when the sensor has been safe for 5 minutes.

    Please login or register to see this code.
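
    A sketch of such a traditional scene (device 77 as a hypothetical motion sensor and 88 as a hypothetical light; not the original code):

    ```lua
    --[[
    %% properties
    77 value
    --]]
    if fibaro:countScenes() > 1 then fibaro:abort() end   -- keep only the first instance

    fibaro:call(88, "turnOn")
    while true do
      fibaro:sleep(30 * 1000)                             -- poll every 30 s
      local value, lastChange = fibaro:get(77, "value")
      if tonumber(value) == 0 and os.time() - lastChange >= 5 * 60 then
        fibaro:call(88, "turnOff")                        -- safe for 5 minutes
        break                                             -- terminate the loop and the scene
      end
    end
    ```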

    After being started, the scene continues in a loop where it polls a motion sensor, waiting for it to become safe, and then turns off the light and terminates the loop and the scene. While we loop, we don't want new scene instances to start and interfere with the currently running instance - for example if the motion sensor is breached again while the first instance's motion loop is running.

    That's why we test if the current instance is not the first (i.e. the instance count is higher than 1) and in that case terminate that instance.

     

    Here is a thought. What if additional triggered instances of the scene could post whatever sourceTrigger started them back to the first instance using a shared mailbox, and then terminate? The first instance could then, in a loop, look at the mailbox and see if there is any trigger it should react to. The first scene instance starts the "consumer loop", and the next scene instances, triggered by the sensor being breached or becoming safe, post their triggers to the mailbox where the first instance picks them up and acts on them.

    Please login or register to see this code.
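
    A self-contained sketch of the pattern (the marker handshake from the mailbox section is omitted here for brevity; 'Mailbox' is the same assumed fibaro global):

    ```lua
    --[[
    %% properties
    77 value
    --]]
    local MAILBOX = "Mailbox"                              -- assumed fibaro global, "" means empty

    local function handleEvent(event)
      fibaro:debug("trigger: " .. json.encode(event))      -- automation rules go here
    end

    if fibaro:countScenes() > 1 then
      -- later instances act as producers: post the trigger and terminate
      while fibaro:getGlobalValue(MAILBOX) ~= "" do fibaro:sleep(100) end
      fibaro:setGlobal(MAILBOX, json.encode(fibaro:getSourceTrigger()))
      fibaro:abort()
    end

    -- the first instance runs the consumer loop forever
    local function loop()
      local msg = fibaro:getGlobalValue(MAILBOX)
      if msg ~= "" then
        fibaro:setGlobal(MAILBOX, "")
        setTimeout(function() handleEvent(json.decode(msg)) end, 0)
      end
      setTimeout(loop, 250)
    end
    loop()
    ```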

    The consumer loop is exactly like the previous mailbox example. The first instance of the scene starts the consumer loop and sends all incoming events to the 'handleEvent(sourceTrigger)' function, and all the 'sibling' instances of the scene, started because of new incoming triggers, act like producers and post those sourceTriggers back to the consumer…

    What have we gained with this setup? The traditional example with a fibaro:sleep loop even has less code…

     

    Quite a lot it turns out.

     

    This ‘handleEvent’ function will be called, in the same scene instance, with all arriving triggers.

    Want to count the number of triggers that have arrived?

    Please login or register to see this code.
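
    For example (illustrative only):

    ```lua
    -- The counter is just a local variable, since the same instance handles every trigger.
    local triggerCount = 0
    local function handleEvent(event)
      triggerCount = triggerCount + 1
      fibaro:debug("Triggers so far: " .. triggerCount)
    end
    ```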

    In a traditional scene this can't be done, as every scene instance terminates after it has finished dealing with the incoming trigger. To count the number of triggers a scene has received, we would have to update a fibaro global variable every time.

    More specifically, our scene can now preserve state between incoming triggers without having to rely on storing it away in a fibaro global. This turns out to be mighty important for some tricks we will do later.

     

    The other advantage is that we get the triggers as soon as they happen; we don't have to wait in a loop that maybe polls a device every 20 or 30 seconds.

     

    The third advantage is that we can run many different automation rules in the same scene at the same time, welcome to event based programming…

     

    Events as the solution for dealing with an inherently asynchronous home environment

    A common approach when designing software for highly asynchronous systems is some kind of message based, event based programming model. Trying to explicitly synchronise processes and threads is very demanding, and even the most skilled sometimes get hit by that case that could never happen. Quite often it comes down to implementing some tried and true pattern like the producer-consumer pattern, with a mailbox or queue, i.e. a message or event model.

     

    Programming languages like Erlang are built around processes and asynchronous message passing between them. Traditional GUI frameworks have some type of GUI event loop where programmers take care of mouse clicks or other user interaction events; you find it in MS Windows or macOS. Imagine if, when programming a Windows GUI app, you had to store everything you wanted to remember in the filesystem between each user interface event, because the program was terminated between events. That's what is happening with scenes and instances on the HC2.

     

    Home automation is highly asynchronous. By that we mean that sensors and devices can send events at any time, in any order, and sometimes in parallel. Add to that that we often want to call out to other systems, like web services or MQTT services, which are also asynchronous in nature. Many implement their APIs so that we get callbacks when the result arrives, i.e. events (a synchronous HTTP call just means that your program hangs while waiting for the result; it may make your life a bit easier but it's not very productive).

     

    Writing (smart) home automation rules can be thought of as detecting patterns in a set of events happening over a defined time span and carry out the associated action(s).

     

    Another popular mental model is to treat it as a stream of events arriving at your scene, where you write "filters" that transform single events or sequences of events into new events that create new streams that you write "filters" for, etc…

     

    In the HC2 you will get sourceTriggers for door and window sensors that look like

    {type='property', deviceID=66, propertyName='value'}

    66 is the id of the device in this example. The HC2 sends you this event to tell you that device 66 has changed value. You, who know that 66 is the front door, can read this event and repost it as

    {type='doorOpen', name='front door'} and resend it to your own program.

    There you trigger on events of type ‘doorOpen’ and carry out whatever action should be done. You have taken a generic HC2 event and transformed it to a scene specific event making your coding clearer.

    Another example: you have many light sensors on a floor. Whenever a sensor triggers because its lux value changes, you read the lux value of that sensor, save it, and calculate the mean lux value of all the sensors you have seen so far. Maybe you get a value of 200 in this case, and repost it as a new event

    {type='lux', where='downstairs', value=200}

    Then you can have an event handler or rule that triggers on ‘lux’ events and turns on and off light appropriately. You have aggregated events and created a “higher level” event. See, you are already starting to think in “events”…

     

    EventRunner's event dispatching model

    How are event handlers implemented in the EventRunner framework?

    In a previous example, the consumer part of the framework read incoming triggers/events in the mailbox and called the function ‘handleEvent(sourceTrigger)’, where we wrote code to inspect the sourceTrigger and carry out actions.

    This style means that we typically write long chains of

    “if eventA then actionA else if eventB then actionB else if eventC then actionC …”

    which can be a bit tedious…

     

    Another approach is to define a dispatch table for the types of events we can get:

    Please login or register to see this code.
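
    A sketch of such a dispatch table (handler bodies are just placeholders, not the original code):

    ```lua
    -- One handler function per event type.
    local dispatch = {
      property = function(event) fibaro:debug("Device " .. tostring(event.deviceID) .. " changed") end,
      global   = function(event) fibaro:debug("Global " .. tostring(event.name) .. " changed") end,
      doorOpen = function(event) fibaro:debug("Door open: " .. tostring(event.name)) end,
    }

    local function handleEvent(event)
      local handler = dispatch[event.type]
      if handler then handler(event) end
    end
    ```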

    A little bit more structured but we still get quite large and complex handlers, as we typically will have many ‘property’ events if we have many devices.

     

    A side note.

    We typically get events from scene triggers, but how do we send events to ourselves, like the ‘doorOpen’ event? That is quite simple;

    Please login or register to see this code.

    We define an event posting function that uses setTimeout to call our handleEvent function with the event, asynchronously. In other words, it will arrive at our handler like any other event from the main consumer loop that reads "external" triggers/events.
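
    A sketch of such a post function (the handleEvent stub here is just for illustration):

    ```lua
    -- Deliver an event to our own handler asynchronously via setTimeout.
    local function handleEvent(event)
      fibaro:debug("event: " .. json.encode(event))
    end

    local function post(event, delayMs)
      setTimeout(function() handleEvent(event) end, delayMs or 0)
    end

    -- e.g. turn a generic HC2 trigger into a scene-specific event
    post({type = "doorOpen", name = "front door"})
    ```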

     

    Back to the dispatch table. We can do better.

     

    We can register event patterns with associated dispatch functions.

    E.g. if the event matches {type='property', deviceID=66, propertyName='value'}, then call our handler function for device 66.

    Please login or register to see this code.

    Here we assume that we have a function ‘equal(table1, table2)’ that returns true if two tables are equal - in our case, if two events are equal.
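
    Put together, registration with a strict 'equal' comparison could be sketched like this (illustrative only):

    ```lua
    -- Register {pattern, handler} pairs and call every handler whose pattern equals the event.
    local handlers = {}

    local function equal(t1, t2)                  -- naive deep equality of two tables
      if t1 == t2 then return true end
      if type(t1) ~= "table" or type(t2) ~= "table" then return false end
      for k, v in pairs(t1) do if not equal(v, t2[k]) then return false end end
      for k in pairs(t2) do if t1[k] == nil then return false end end
      return true
    end

    local function defineHandler(pattern, fun)
      handlers[#handlers + 1] = {pattern = pattern, fun = fun}
    end

    local function handleEvent(event)
      for _, h in ipairs(handlers) do
        if equal(h.pattern, event) then h.fun(event) end
      end
    end

    defineHandler({type = "property", deviceID = 66, propertyName = "value"},
      function(event) fibaro:debug("Device 66 (front door) changed") end)
    ```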

    Now we can write all our code by defining handlers for types of events. And we can post new events that handlers can react on. We are doing event based programming.

     

    It turns out that we would like to define handlers for partially specified events. Maybe we want to define a handler that matches all ‘property’ events, no matter what the deviceID is?

    We can replace the ‘equal’ function in the example above with a ‘match’ function. The match function treats the first argument as a ‘pattern’ and returns true if the second argument matches it (a sketch follows the list below):

    • Pattern {type='property'} matches {type='property', deviceID=66, propertyName='value'}, because the second argument has the type='property' field and the extra fields don't matter.
    • Pattern {type='property', foo=42} doesn't match {type='property', deviceID=66, propertyName='value'}, because the second argument is missing the foo=42 field.
    • Pattern {type='property', deviceID=67} doesn't match {type='property', deviceID=66, propertyName='value'}, because the deviceID fields do not match.
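
    A minimal sketch of such a match function (constraint strings like '$>67' are not handled in this sketch):

    ```lua
    -- Every field in the pattern must exist in the event with an equal value;
    -- extra fields in the event are ignored.
    local function match(pattern, event)
      for key, pval in pairs(pattern) do
        local eval = event[key]
        if type(pval) == "table" and type(eval) == "table" then
          if not match(pval, eval) then return false end
        elseif pval ~= eval then
          return false
        end
      end
      return true
    end

    print(match({type = "property"}, {type = "property", deviceID = 66}))                 --> true
    print(match({type = "property", deviceID = 67}, {type = "property", deviceID = 66}))  --> false
    ```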

    This means that we can write event patterns, and that helps us to define appropriately abstract event handlers.

     

    The match function in the EventRunner framework also allows us to add constraints to the event pattern.

    {type='property', deviceID='$>67'} only matches 'property' events where the deviceID field is bigger than 67.

     

    And finally, an incoming 'property' or 'global' event from the HC2 is complemented by the framework with the value of the property or global before it's sent to the event handler. This means that we can do more interesting matches.

    {type='property', deviceID=66, propertyName='value', value='$>0'} will only match a property changing value to something larger than 0, like a dimmer being turned on.

     

    We arrive at a model that on an abstract level could be depicted like this:

    Please login or register to see this attachment.

     

    In the EventRunner framework we have the following basic functions

     

    Event.event(pattern, fun).

    This is the defineHandler function described above. It tries to be smart and preprocesses the pattern to make it faster to match against incoming events. It also hashes the patterns on ‘type’ and uses other tricks to avoid having to search through all patterns when matching an incoming event. There are still improvements to be made here.

     

    Event.post(event, optional time).

    This is the post function described above, but it also allows specifying a time in the future when the event should be delivered. It also returns a reference to this future post, which is handy. That means we can use it as a timer: posting an event that in 5 min will turn off a lamp, but then reacting to a motion sensor being breached, changing our mind, and cancelling the "turn off" post using the reference.

     

    Event.postRemote(sceneID, event, optional time).

    Like Event.post, but sends the event to another scene running the EventRunner framework. The event will be posted internally in the receiving scene and look like any other event. This makes it very easy to distribute events across many scenes and delegate functionality (shared functions and libraries). After a while, designing scenes like this makes you treat the HC2 as a service oriented architecture (SOA), or a micro services platform...

     

    More details on how to write EventRunner Lua handlers are available in <

    Please login or register to see this link.

    >

    Edited by jgab
    Link to comment
    Share on other sites

    @jgab

     

    Is it possible to start a rule more often than every 1 s, e.g. every 500 ms?

     

      rule("@@00:00:01 => log('Start function')")

     

     

    Link to comment
    Share on other sites

  • Topic Author
  • Why on earth would you want to do that? :) Loops like that may start to stress the system....

    You have to make the loop in Lua.

    Please login or register to see this code.
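
    A sketch of such a loop (illustrative; it uses setTimeout so the rest of the scene isn't blocked):

    ```lua
    -- Run a function every 500 ms.
    local function loop()
      fibaro:debug("Start function")
      setTimeout(loop, 500)
    end
    loop()
    ```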

     

    Link to comment
    Share on other sites

    :) To update my Hue sensors so they react quickly to Hue button presses ... btw it's currently running, but via a refresh in my Hue VD :)

     

    Link to comment
    Share on other sites

  • Topic Author
  • I don't have a Hue sensor, but I guess that you have a VD to control it? How often does the VD poll the sensor? Or, does it do it every time you press a button on the VD?

    I would move the Hue sensor polling code (HTTP, I guess?) to the EventRunner and write it as a Lua Event.event handler (not EventScript), and make the polling intervals adaptive so it polls more often when something is happening in the house and less otherwise. The Lua Event.event handler would then post an #hue{status=breached/safe} event when the actual state changes for the Hue - that could be picked up by other EventScript rules. And maybe put the polling loop in another scene, like how the IOSLocator scene operates.

    Edited by jgab
    Link to comment
    Share on other sites

     

    14 hours ago, jompa68 said:

    @petrkl12 how did you setup Hue in Eventrunner? 

     

    I'm working on it. I want to develop a scene that will:
    1. update all my VDs for Hue sensors, buttons and light groups (I only use Hue groups, not individual lights) from all my Hue bridges (currently 4)
    2. generate events for other EventRunner scenes (movements, changes in light level, buttons, temperature, etc. - all information from Hue sensors, buttons and lights)

     

    @jgab

    I want to develop the scene described above.
    The polling interval is currently 1 s per bridge :) and I have 4 Hue bridges.

     

    Currently I have a problem with parallel HTTP requests in a function, with the error message "Bad file descriptor" ...

     

     

    Link to comment
    Share on other sites

  • Topic Author
  • 1 hour ago, petrkl12 said:

     

     

    I'm working on it. I want to develop scene that will:
    1. update all my VDs for Hue sensors, buttons and lights groups (I use only Hue groups not lights separately) from all my hue bridges (currently 4)
    2. generate events for other EventRunner scenes (movements, change level of lights, buttons, temperature etc. - all information from hue sensors, buttons and lights

     

    @jgab

    I want to develop scene described above.
    Polling interval is currently 1s per bridge :) and I have 4 hue bridges

     

    Currently I have problem with parallel calling of http request in function with error message "Bad file descriptor" ...

     

     

    Ok,

    I don't know the Hue sensor but I read this article 

    Please login or register to see this link.

     

    ...and my thinking was to do something like the code below, to avoid the overhead of simulating button presses on a VD. Then you can update the presence state on the VD from the EventRunner scene.

    There are probably a million small errors in the code, but I think it shows the principle. There are ways to get the timing to exactly 1 s; right now it's 1 s + the time of the HTTP call.

    Please login or register to see this code.
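
    A rough sketch of the idea (the IP address, API key, sensor id and JSON fields are all assumptions; Event.post is the EventRunner function described earlier):

    ```lua
    -- Poll a Hue motion sensor about once a second and post an event only when its state changes.
    local hueIP, hueKey, sensorId = "192.168.1.10", "HUE_API_KEY", 5   -- hypothetical values
    local lastPresence = nil

    local function pollHue()
      net.HTTPClient():request("http://" .. hueIP .. "/api/" .. hueKey .. "/sensors/" .. sensorId, {
        options = {method = "GET"},
        success = function(resp)
          local data = json.decode(resp.data)
          local presence = data.state and data.state.presence
          if presence ~= lastPresence then
            lastPresence = presence
            Event.post({type = "hue", status = presence and "breached" or "safe"})
          end
          setTimeout(pollHue, 1000)          -- ~1 s + the time of the HTTP call
        end,
        error = function(err)
          fibaro:debug("Hue poll error: " .. tostring(err))
          setTimeout(pollHue, 1000)
        end
      })
    end
    pollHue()
    ```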

     

    6 hours ago, jompa68 said:

    Can I somehow track errors in the main loop of a VD and somehow restart it with EventRunner?

    I have to admit that VDs are not my strong point. I used to have issues with the VD main loop in the past and moved over to running all logic in scenes, only using VDs for GUI (and I also think that home automation should work automatically behind the scenes and not require a lot of user interaction). I guess it's possible to get hold of the VD log, but is there a way to "restart" VDs? Does startScene work with VDs?

    Link to comment
    Share on other sites

    5 hours ago, jgab said:

    I have to admit that VDs are not my strong point. I use to have an issues with the VD main loop in the past and moved over to run all logic in scenes and only use VDs for GUI (and I also think that home automation should work automatically behind the scene and not require a lot of user interaction). I guess it's possible to get hold of the VD log but is there a way to "restart" VDs? Does startScene work with VDs?

    I have set up a watchdog scene created by Lazer on

    Please login or register to see this link.

     and it seems to do the job.

    Link to comment
    Share on other sites

  • Topic Author
  • 2 hours ago, jompa68 said:

    I have setup a watchdog scene created by Lazer on

    Please login or register to see this link.

     and it seems to do the job.

    Ok, it seems like one can restart a VD by doing a GET/PUT on its definition. Quickly browsing the code, there seem to be several corner cases to consider. If the watchdog scene works, why would you like to reimplement it? Is there something that could be improved?

    Link to comment
    Share on other sites

    Hi,

    @jgab - what can I say; this is professional work, thanks. I guess this could be committed to the official FW release :)

    Single scene - this is what I want to try on my HCL, because it has so little RAM for running multiple scenes (LUA interpreter processes).

    The thing I want to ask: how big is the event handling delay? On the original system I sometimes get up to a few seconds of delay from the motion-detected blink to the lights going ON.

    Thanks

    Link to comment
    Share on other sites

  • Topic Author
    Delays of seconds that people report often seem to depend on the Z-Wave network clogging up due to faulty devices or other issues. When events finally arrive at the box there should be no reason to have delays of more than tens of milliseconds.

    To send events back from newly spawned instances to my main scene instance I have to jump through some hoops. In theory I should be able to process 4 events/s, so if they arrive quicker they may start to queue up. Then I run an event matching algorithm to find out which event matches which event handlers, but that takes at most a few tens of ms and is done in a separate 'setTimeout' thread (I haven't actually timed that part). It can actually be coded even smarter, but that will require some deep thinking....

    However, Fibaro could have chosen to implement this kind of model (i.e. a traditional event loop model) much more effectively.

    Having said that, I have some sympathy with Fibaro for why they chose the model they did. It's kind of a serverless/lambda model, with scenes being spawned and terminated as soon as they have handled the trigger, which makes for easy resource management... and developers were probably not expected to do complex things. The drawback is that Fibaro have to spawn new instances (and many parallel instances if events arrive at the same time), requiring memory, instead of just queuing up the event structure (with a much smaller memory requirement) - events that would then be consumed by a single running instance polling the queue in a loop.

    The other drawback is that it makes it challenging for developers creating complex scenes to coordinate tasks spanning many triggers. It's like an operating system that gives you processes and shared variables but no way to (safely) synchronise tasks...

    It's always a trade-off where to draw the line when designing these kinds of systems, but I would have drawn the line elsewhere to allow developers to create really interesting scenes. To have a product that is going to be really competitive this will be needed (attracting really good and creative developers, and also allowing people/companies to make money/business from developing sw for the box); otherwise the intelligence will end up in Siri, Google Home etc. and the HC2/HCL becomes just another Z-Wave gateway... and it's much better to be the gateway preferred by developers than just another gateway.

    Link to comment
    Share on other sites
