
Mongoose OS Forum


Use rpc-gatts to send data from ESP32 to client


I use the rpc-gatts service to send data via Bluetooth to the ESP32. This works fine using the attributes _mOS_RPC_data___ and _mOS_RPC_tx_ctl_.
Now I would like the client to receive notifications from the ESP32 when data is available for reading. According to the README this seems possible via the _mOS_RPC_rx_ctl_ attribute.
But how do I trigger the sending of an RPC frame from the application?

I tried to install the rpc-loopback service and then use

struct mg_rpc_call_opts opts;
opts.dst = mg_mk_str(MGOS_RPC_LOOPBACK_ADDR);
mg_rpc_callf(mgos_rpc_get_global(), mg_mk_str("My.Func"), NULL, NULL, &opts,
            "{param1: %Q, param2: %d}", "jaja", 1234);

But when rpc-gatts and rpc-loopback are installed the system crashes when I connect to the BLE service.
And I'm not sure at all, if this is even the right way to do it.
Can someone help please?



  • frsc, Germany

    Actually, the crash happens when I call mg_rpc_callf.

  • rojer, Dublin, Ireland

    can you post console output with debug.level = 3?

  • frsc, Germany
    edited January 2018

    Attached is the log of the crash including the core dump.

    Is what I am trying to do even possible? Can I send something through an RPC channel to the client without the client having sent a request?

  • frsc, Germany

    I extracted a backtrace with gdb.
    The crash seems to happen in json_escape:

    Remote debugging using /dev/ttyUSB0
    0x400f5400 in json_escape (out=0x3ffdb320, p=0x800fc368 '\377' <repeats 200 times>..., len=1073591552) at /mongoose-os/frozen/frozen.c:432
    432     /mongoose-os/frozen/frozen.c: No such file or directory.
    (gdb) bt
    #0  0x400f5400 in json_escape (out=0x3ffdb320, p=0x800fc368 '\377' <repeats 200 times>..., len=1073591552) at /mongoose-os/frozen/frozen.c:432
    #1  0x400f59e5 in json_vprintf (out=0x3ffdb320, fmt=0x3f40cb87 "%.*Q", xap=...) at /mongoose-os/frozen/frozen.c:609
    #2  0x400f5c34 in json_printf (out=0x3ffdb320, fmt=0x3f40cb83 "src:%.*Q") at /mongoose-os/frozen/frozen.c:732
    #3  0x400ff6b4 in mg_rpc_dispatch_frame (c=0x3ffb7ab4, src=..., dst=..., id=1270216262, tag=..., key=..., ci=0x3ffb7dbc, enqueue=false, payload_prefix_json=..., payload_jsonf=payload_jsonf@entry=0x3f40b695 "{}", ap=...)
        at /fwbuild-volumes/1.23/apps/ColorGrid_esp32/esp32/build_contexts/build_ctx_223341374/libs/rpc-common/src/mg_rpc/mg_rpc.c:617
    #4  0x400ffba9 in mg_rpc_vcallf (c=0x3ffb7ab4, method=..., cb=<optimized out>, cb_arg=0x0, opts=0x3ffdb4d0, args_jsonf=args_jsonf@entry=0x3f40b695 "{}", ap=...)
        at /fwbuild-volumes/1.23/apps/ColorGrid_esp32/esp32/build_contexts/build_ctx_223341374/libs/rpc-common/src/mg_rpc/mg_rpc.c:686
    #5  0x400ffc91 in mg_rpc_callf (c=0x3ffb7ab4, method=..., cb=0x0, cb_arg=0x0, opts=0x3ffdb4d0, args_jsonf=args_jsonf@entry=0x3f40b695 "{}") at /fwbuild-volumes/1.23/apps/ColorGrid_esp32/esp32/build_contexts/build_ctx_223341374/libs/rpc-common/src/mg_rpc/mg_rpc.c:718
    #6  0x400fc585 in Control::sendEvent (this=<optimized out>) at /fwbuild-volumes/1.23/apps/ColorGrid_esp32/esp32/build_contexts/build_ctx_223341374/src/Control.cpp:45
    #7  0x400fc38d in AppSnake::run (this=<optimized out>) at /fwbuild-volumes/1.23/apps/ColorGrid_esp32/esp32/build_contexts/build_ctx_223341374/src/AppSnake.cpp:38                                                                                                              
    #8  0x400fc5f9 in main_task (pvParameters=<optimized out>) at /fwbuild-volumes/1.23/apps/ColorGrid_esp32/esp32/build_contexts/build_ctx_223341374/src/main.cpp:25    
  • frsc, Germany

    Did you have time to look at this?

  • rojer, Dublin, Ireland

    sorry for the delayed response. i think what happened is that the src field got added recently, and you don't initialize it. so in Control.cpp you have something like this:

    struct mg_rpc_call_opts opts;
    opts.dst = ...;
    mg_rpc_callf(c, &opts, ...);

    stack-allocated structures are not initialized by default, and when a new field is added, its value is undefined. thus, opts.src ends up with garbage.
    make sure you memset opts (and any other struct) to 0, or use designated initializers (which also zero the fields that are not mentioned).
    so, you should have something like this:

    struct mg_rpc_call_opts opts;
    memset(&opts, 0, sizeof(opts));
    opts.dst = ...;
    mg_rpc_callf(c, &opts, ...);

    or, equivalently, with a designated initializer:

    struct mg_rpc_call_opts opts = { .no_queue = false }; /* per the standard, the rest of the fields are zeroed out */
    opts.dst = ...;
    mg_rpc_callf(c, &opts, ...);

    this way, src gets a defined value (NULL, 0) and everything works fine.
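
    to see why this matters outside of mongoose-os, here is a minimal stand-alone C sketch (the struct and field names are made up for illustration) showing that both memset and a designated initializer leave the un-set src field NULL, whereas a plain stack declaration would leave it undefined:

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for struct mg_rpc_call_opts: a struct that
     * gained a new field (src) after the calling code was written.
     * All names here are hypothetical. */
    struct call_opts {
      const char *dst;
      const char *src; /* the newly added field */
      int no_queue;
    };

    /* Zero the whole struct explicitly before setting fields. */
    struct call_opts make_opts_memset(void) {
      struct call_opts o;
      memset(&o, 0, sizeof(o));
      o.dst = "RPC.Loopback";
      return o;
    }

    /* A designated initializer zeroes every field not mentioned
     * (C99 6.7.8p21), so src is guaranteed to be NULL. */
    struct call_opts make_opts_designated(void) {
      struct call_opts o = {.dst = "RPC.Loopback"};
      return o;
    }

    int main(void) {
      struct call_opts a = make_opts_memset();
      struct call_opts b = make_opts_designated();
      assert(a.src == NULL && b.src == NULL); /* no garbage pointer */
      printf("src is NULL in both cases\n");
      return 0;
    }
    ```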

  • frsc, Germany

    Thank you @rojer for investigating and pointing out the problem. The local RPC calls work fine now.
    But I still wonder whether I can use RPC for what I want to do; the local RPC calls do not seem to be what I need.
    Can I send data to my GATTS client using rpc-gatts without polling for it on the client?

    Example: I have a switch connected to the ESP32, and whenever it is turned on or off, a notification should be sent via rpc-gatts to a connected client. The client can then read the state of the switch from the _mOS_RPC_rx_ctl_ attribute.
    Currently I can only create an RPC handler that returns the switch state and call it from the client.
    So what about the other way round? Something like a remote RPC handler on the GATTS client that can be called from mOS?

  • rojer, Dublin, Ireland

    no, local rpc is not what you need. but yes, you can notify your GATTS clients without writing a special service, just by using RPC.
    your GATT client should subscribe to notifications on rx_ctl attribute and then it will get notifications when there is a message awaiting on the GATTS RPC channel.
    long story short, probably the easiest way to achieve what you want is to use the recently added RPC broadcast mechanism, which publishes the RPC to all channels, for all clients that are currently connected: UART, all the websocket connections and all the GATTS connections.
    just set the broadcast option in the opts struct.
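
    a minimal sketch of such a broadcast notification, assuming a broadcast option in struct mg_rpc_call_opts as described above (the method name Switch.StateChanged and its payload are made up for this example):

    ```c
    #include <stdbool.h>
    #include <string.h>

    #include "mgos_rpc.h"

    /* Broadcast a notification to every connected RPC channel
     * (UART, WebSocket, GATTS). */
    static void notify_switch_state(bool on) {
      struct mg_rpc_call_opts opts;
      memset(&opts, 0, sizeof(opts)); /* avoid the uninitialized-field crash above */
      opts.broadcast = true;          /* send to all connected channels */
      mg_rpc_callf(mgos_rpc_get_global(), mg_mk_str("Switch.StateChanged"),
                   NULL /* cb */, NULL /* cb_arg */, &opts,
                   "{state: %B}", on);
    }
    ```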

  • frsc, Germany

    Great! Thanks a lot!
    It seems to work quite nicely using the RPC broadcast.
    Is it also possible to publish to a specific RPC channel?

  • rojer, Dublin, Ireland
    edited January 2018

    i'm glad you asked :) here's the long story, then.
    when choosing which channel to use for sending, mg_rpc matches the destination against the list of available channels and picks the one with a matching destination, or uses the default, if available.
    associating a destination with a channel happens when we receive the first frame from that channel: we remember which source is on the other side (the value of the src key, if present).
    so, in order for the server to send unicast frames to a specific channel, the client on the other side of the channel must first send a "hello" RPC to introduce itself and make the server remember the src.
    then the server can send frames to that particular dst and they will use that channel (if the channel is not available and there is no default, the frame will be queued, and when the queue is full, subsequent frames will be dropped).
    RPC.Ping can be used as a "hello" request, so you don't need to create a method just for that.
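
    the device side of that flow, sketched in C (the client id "my-phone" and the method name are made up; this assumes the client has already sent an RPC such as RPC.Ping with src set to "my-phone"):

    ```c
    #include <stdbool.h>
    #include <string.h>

    #include "mgos_rpc.h"

    /* Unicast a notification to one specific client, addressed by the
     * src it announced in its "hello" request. */
    static void notify_one_client(bool on) {
      struct mg_rpc_call_opts opts;
      memset(&opts, 0, sizeof(opts));
      opts.dst = mg_mk_str("my-phone"); /* must match the client's src */
      mg_rpc_callf(mgos_rpc_get_global(), mg_mk_str("Switch.StateChanged"),
                   NULL, NULL, &opts, "{state: %B}", on);
    }
    ```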

    Thanked by 1: frsc