Lag when holding cursor movement in input window #1246

Open
320x200 opened this issue Dec 31, 2019 · 10 comments

@320x200 commented Dec 31, 2019

Expected Behavior

After holding a cursor-movement key (such as the left or right arrow key), there should not be any delay once the key is released.

Current Behavior

After holding a cursor-movement key, the key action (cursor movement) keeps repeating for a duration proportional to how long the key was held, as if profanity cannot keep up with the key input rate and an event queue builds up, which would explain why the action keeps repeating once the key is released.

Steps to Reproduce

Write a long line of text in the input window, then move back and forth by holding the arrow keys, and notice the lag/delay after releasing them.

Context

I noticed it while moving back towards the beginning of a long sentence to edit it: after releasing the arrow key, the cursor kept moving for some time.

Environment

  • Version and build information (output of profanity -v):
Profanity, version 0.7.1dev.master.5d7f2d15
Copyright (C) 2012 - 2019 James Booth <[email protected]>.
Copyright (C) 2019 Michael Vetter <[email protected]>.
License GPLv3+: GNU GPL version 3 or later <https://www.gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Build information:
XMPP library: libstrophe
Desktop notification support: Enabled
OTR support: Disabled
PGP support: Enabled (libgpgme 1.13.1)
OMEMO support: Enabled
C plugins: Enabled
Python plugins: Disabled
GTK icons: Enabled
  • Operating System/Distribution
    FreeBSD 12.1-STABLE r354337 GENERIC amd64
  • glib version
    2.56.3_6,1
@jubalh (Member) commented Jan 1, 2020

Right, happens to me too. Don't know yet why.
In the meantime just use your Pos1 key ;)

@pasis do you see why this happens?

@rodarima (Contributor) commented Jan 5, 2020

I can reproduce the issue too with release 0.7.1. It is very noticeable when the keyboard repeat rate is high, e.g. 60 Hz as set with xset r rate 200 60. This is what perf record sees when holding an arrow key; it only happens after login:

# To display the perf.data header info, please use --header/--header-only options.
#
#
# Total Lost Samples: 0
#
# Samples: 3K of event 'cycles:u'
# Event count (approx.): 737337445
#
# Overhead  Command    Shared Object            Symbol                             
# ........  .........  .......................  ...................................
#
    13.02%  profanity  libncursesw.so.6.1       [.] pnoutrefresh
     4.52%  profanity  libglib-2.0.so.0.6200.4  [.] g_hash_table_lookup
     4.45%  profanity  libc-2.30.so             [.] __GI___strcmp_ssse3
     2.69%  profanity  libglib-2.0.so.0.6200.4  [.] g_str_hash
     2.20%  profanity  libncursesw.so.6.1       [.] werase
     1.96%  profanity  libc-2.30.so             [.] malloc
     1.92%  profanity  libc-2.30.so             [.] __vfprintf_internal
     1.89%  profanity  libc-2.30.so             [.] _int_free
     1.87%  profanity  libncursesw.so.6.1       [.] _nc_hash_map_sp
     1.81%  profanity  libc-2.30.so             [.] _int_malloc
     1.43%  profanity  libc-2.30.so             [.] __dcigettext
     1.36%  profanity  libc-2.30.so             [.] wcwidth
     1.30%  profanity  libglib-2.0.so.0.6200.4  [.] g_slice_free1
     1.28%  profanity  libc-2.30.so             [.] __strchrnul_sse2
     1.22%  profanity  profanity                [.] autocomplete_reset
     1.22%  profanity  libc-2.30.so             [.] __memcpy_ssse3
     1.19%  profanity  libncursesw.so.6.1       [.] _nc_waddch_nosync
     1.14%  profanity  libc-2.30.so             [.] cfree@GLIBC_2.2.5
     1.00%  profanity  libc-2.30.so             [.] __tfind

@rodarima (Contributor) commented Jan 3, 2022

Hi, sorry for the long delay. This problem is only really noticeable on a machine that I use just a few days a year.

It seems to be caused by the select() that libstrophe is doing. This is the strace when pressing one arrow key, which sends the bytes ^[OD. The trace shows what happens between the bytes O and D:

17:38:49.863918 read(0, "O", 1)         = 1
17:38:49.864053 rt_sigaction(SIGTERM, {sa_handler=0x7fa5eae3cc10, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fa5eade1870}, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.864162 rt_sigaction(SIGHUP, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.864260 rt_sigaction(SIGQUIT, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.864359 rt_sigaction(SIGALRM, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.864458 rt_sigaction(SIGTTOU, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.864555 rt_sigaction(SIGTTIN, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.864652 rt_sigaction(SIGWINCH, {sa_handler=0x560afea3bfa2, sa_mask=[WINCH], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fa5eade1870}, {sa_handler=0x7fa5eaffb430, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fa>
17:38:49.864857 pselect6(15, [14], [], NULL, {tv_sec=0, tv_nsec=10000000}, NULL) = 0 (Timeout)
           ^
           look at the time here: it is wasting 10 ms waiting for data on the xmpp connection
17:38:49.875385 recvmsg(10, {msg_namelen=0}, 0) = -1 EAGAIN (Resource temporarily unavailable)
17:38:49.875552 poll([{fd=10, events=POLLIN}, {fd=11, events=POLLIN}, {fd=12, events=POLLIN}], 3, 0) = 0 (Timeout)
17:38:49.875663 read(4, 0x560b002ab7c0, 4000) = -1 EAGAIN (Resource temporarily unavailable)
17:38:49.875813 poll([{fd=7, events=POLLIN|POLLOUT}], 1, -1) = 1 ([{fd=7, revents=POLLOUT}])
17:38:49.875929 writev(7, [{iov_base="\220\1\2\0$\4\0\0", iov_len=8}, {iov_base=NULL, iov_len=0}, {iov_base="", iov_len=0}], 3) = 8
17:38:49.876087 poll([{fd=7, events=POLLIN}], 1, -1) = 1 ([{fd=7, revents=POLLIN}])
17:38:49.876226 recvmsg(7, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\1\0v3\0\0\0\0#\4\0\0\242'\t\0\36\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", iov_len=4096}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 32
17:38:49.876358 recvmsg(7, {msg_namelen=0}, 0) = -1 EAGAIN (Resource temporarily unavailable)
17:38:49.876458 recvmsg(7, {msg_namelen=0}, 0) = -1 EAGAIN (Resource temporarily unavailable)
17:38:49.876556 pselect6(1, [0], NULL, NULL, {tv_sec=0, tv_nsec=0}, NULL) = 1 (in [0], left {tv_sec=0, tv_nsec=0})
17:38:49.876691 rt_sigprocmask(SIG_BLOCK, [HUP INT QUIT ALRM TERM TSTP TTIN TTOU], [], 8) = 0
17:38:49.876793 rt_sigaction(SIGINT, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=SIG_IGN, sa_mask=[INT], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.876897 rt_sigaction(SIGINT, {sa_handler=SIG_IGN, sa_mask=[INT], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fa5eade1870}, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.876997 rt_sigaction(SIGTERM, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=0x7fa5eae3cc10, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.877096 rt_sigaction(SIGHUP, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.877305 rt_sigaction(SIGQUIT, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.877448 rt_sigaction(SIGALRM, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.877576 rt_sigaction(SIGTSTP, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=SIG_IGN, sa_mask=[TSTP], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.877701 rt_sigaction(SIGTSTP, {sa_handler=SIG_IGN, sa_mask=[TSTP], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fa5eade1870}, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.877829 rt_sigaction(SIGTTOU, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.877954 rt_sigaction(SIGTTIN, {sa_handler=0x7fa5eaffb420, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fa5eade1870}, 8) = 0
17:38:49.878074 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
17:38:49.878182 rt_sigaction(SIGWINCH, {sa_handler=0x7fa5eaffb430, sa_mask=[], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fa5eade1870}, {sa_handler=0x560afea3bfa2, sa_mask=[WINCH], sa_flags=SA_RESTORER|SA_RESTART, sa_restorer=0x7fa>
17:38:49.878333 pselect6(1, [0], NULL, NULL, NULL, {sigmask=[], sigsetsize=8}) = 1 (in [0])
17:38:49.878468 read(0, "D", 1)         = 1

This is the backtrace of that select:

#0  0x00007fe93c5e6140 in select () at /usr/lib/libc.so.6
#1  0x00007fe93c6ef68a in xmpp_run_once () at /usr/lib/libstrophe.so.0
#2  0x000055f6443f5843 in connection_check_events () at src/xmpp/connection.c:120
#3  0x000055f6443f4b45 in session_process_events () at src/xmpp/session.c:267
#4  0x000055f6443ede1a in prof_run
    (log_level=0x55f6444c6afc "WARN", account_name=0x0, config_file=0x0, log_file=0x0, theme_name=0x0)
    at src/profanity.c:131
#5  0x000055f6444966fc in main (argc=1, argv=0x7fffba9e9b08) at src/main.c:180

To test my hypothesis I set the timeout of the select to 1 ms, using:

xmpp_run_once(conn.xmpp_ctx, 1);

After that, it runs nicely when I keep the arrow key pressed. However, I believe blocking the main event thread in select() to check the network for new events is bad for performance. I think it can be avoided by using another thread or a poll-based approach, as sketched below.
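
To illustrate the poll-based approach, here is a minimal sketch (not profanity's actual code; xmpp_fd, handle_keyboard_byte() and handle_xmpp_events() are hypothetical names): instead of letting libstrophe block in select() with a 10 ms timeout, the main loop waits on stdin and the XMPP socket at the same time and only services whichever is ready.

/* Minimal sketch of a poll-based main loop iteration; helpers are hypothetical. */
#include <poll.h>
#include <unistd.h>

void handle_keyboard_byte(char c);   /* hypothetical: feed the byte to the input code */
void handle_xmpp_events(void);       /* hypothetical: run libstrophe without its select() */

void event_loop_iteration(int xmpp_fd)
{
    struct pollfd fds[2] = {
        { .fd = 0,       .events = POLLIN },   /* stdin / keyboard */
        { .fd = xmpp_fd, .events = POLLIN },   /* XMPP socket */
    };

    /* Block until either source has data; no fixed 10 ms penalty per key press. */
    if (poll(fds, 2, -1) > 0) {
        if (fds[0].revents & POLLIN) {
            char c;
            if (read(0, &c, 1) == 1) {
                handle_keyboard_byte(c);
            }
        }
        if (fds[1].revents & POLLIN) {
            handle_xmpp_events();
        }
    }
}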

I'll try to take a look and find a proper fix, hopefully soon :)

@jubalh (Member) commented Jan 3, 2022

@rodarima thanks for looking into this!

There is also #225. Maybe take a look.
And I think mcabber is using GMainLoop, but I think rewriting would be a lot of work.

@rodarima (Contributor) commented Jan 4, 2022

Hi again,

I would like to expand a bit on my answer. There are two problems I discovered while trying to find out why holding the arrow keys (or pasting a lot of text) takes so long to process, and also why this issue is not immediately obvious to debug.

To find out what is happening I first used sudo perf record -g -p $(pgrep profanity) (same as 2 years ago), which samples the program at roughly regular intervals and records the current function (and the call stack), so you can see which parts of the code use the most CPU time (and the aggregated time too). However, this technique doesn't show the whole picture.

In my experiments I held one arrow key (which key probably doesn't matter, as long as you generate input) while the profanity process was being recorded by perf (the arrow keys amplify any slowdown by 3x, as one keypress sends 3 bytes). I kept the arrow key pressed for about 5 seconds. It is important to be connected to an XMPP server that doesn't generate a lot of traffic, otherwise you may not be able to reproduce it.

Here is what perf stat reports while I hold the arrow key, first connected to an XMPP server and then disconnected. I started the measurement with a 1 second delay to give me time to press and hold the arrow key; it stops by itself after 5 seconds:

# profanity connected
$ sleep 1; perf stat -e task-clock --timeout 5000 -p $(pgrep profanity)

 Performance counter stats for process id '548714':

            382,64 msec task-clock:u              #    0,076 CPUs utilized          

       5,002861790 seconds time elapsed

# profanity disconnected
$ sleep 1; perf stat -e task-clock --timeout 5000 -p $(pgrep profanity)

 Performance counter stats for process id '548714':

          2.152,10 msec task-clock:u              #    0,430 CPUs utilized          

       5,003918817 seconds time elapsed

High CPU when disconnected

On this machine I have configured the keyboard to repeat at 83 Hz while a key is held, so holding an arrow key inputs three bytes (one arrow key emits three bytes) every ~12 ms. However, you can see that when profanity is disconnected, it needs almost half the CPU time (43%) to cope with this input rate. This means that handling one input byte and updating the screen takes about 2 ms of CPU time on average (2152 ms / (5 s × 83 keys/s × 3 bytes) ≈ 1.7 ms), which is a lot. Notice that the only thing that should be happening is receiving one key press and updating the cursor position.

If you take a look with perf record you can see that it is doing a lot of extra work that I believe should not be happening. This shows only the functions that use more than 1% of the CPU (the total should add up to roughly 43%):

$ perf report --percent-limit 1
...
# Overhead  Command    Shared Object            Symbol                                 
# ........  .........  .......................  .......................................
#
    26.43%  profanity  libncursesw.so.6.3       [.] pnoutrefresh
     6.99%  profanity  libc-2.33.so             [.] __GI___strcmp_ssse3
     6.96%  profanity  [unknown]                [k] 0xffffffffa0000158
     2.55%  profanity  libc-2.33.so             [.] getenv
     2.51%  profanity  libc-2.33.so             [.] __dcigettext
     2.49%  profanity  libglib-2.0.so.0.7000.1  [.] g_hash_table_lookup
     2.40%  profanity  libpthread-2.33.so       [.] __pthread_rwlock_rdlock
     2.16%  profanity  libpthread-2.33.so       [.] __pthread_rwlock_unlock
     2.16%  profanity  libc-2.33.so             [.] _nl_make_l10nflist.localalias
     1.90%  profanity  libc-2.33.so             [.] __vfprintf_internal
     1.77%  profanity  libc-2.33.so             [.] __GI___strlen_sse2
     1.64%  profanity  libc-2.33.so             [.] malloc
     1.44%  profanity  libc-2.33.so             [.] _IO_default_xsputn
     1.42%  profanity  libc-2.33.so             [.] _int_free
     1.24%  profanity  libncursesw.so.6.3       [.] werase
     1.11%  profanity  [unknown]                [k] 0xffffffffa00010a7
     1.03%  profanity  libc-2.33.so             [.] wcwidth

This information shows that profanity uses quite a lot of CPU to update the screen with pnoutrefresh(), but it is still capable of digesting the keypresses at 83 Hz (otherwise CPU usage would be at 100%). It should not be noticeable when typing, but it can introduce quite a large delay when pasting a large input. This could easily be avoided by only updating the screen if the last refresh happened more than, say, 1/60 s ago; see the sketch below.
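
A minimal sketch of that throttling idea, assuming a hypothetical ui_refresh() wrapper around the expensive pnoutrefresh()/doupdate() path (this is not profanity's actual code):

/* Sketch only: skip a redraw if the previous one was less than 1/60 s ago. */
#include <glib.h>

void ui_refresh(void);   /* hypothetical wrapper around the expensive redraw path */

void ui_refresh_throttled(void)
{
    static gint64 last_refresh_us = 0;
    gint64 now = g_get_monotonic_time();            /* monotonic clock, microseconds */

    if (now - last_refresh_us < G_USEC_PER_SEC / 60) {
        return;                                     /* too soon, skip this redraw */
    }

    last_refresh_us = now;
    ui_refresh();
}

A real implementation would also need one final redraw once input goes idle, otherwise the screen state after the last key press might never be drawn.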

This problem is not noticeable in general, and I believe it is not what @320x200 was experiencing. The other problem happens only when profanity is connected to a server.

Very slow when connected but low CPU usage

When you connect profanity to an XMPP server, the slowdown is very noticeable. To see what is going on, I used the newer perf timechart, which gives a timeline plot of what the process is doing on my machine:

[perf timechart screenshot: "select"]

You can see that most of the time is spent sleeping (gray), and the process only works at regular intervals (the thin blue lines). Every blue line is one key being processed.

The problem with perf record is that it only shows what the process is doing while it is not sleeping. To obtain more information I used strace, as I mentioned previously. The strace above reveals that these sleeping intervals are caused by the select() in the libstrophe library, which halts the main loop for 10 ms waiting for new XMPP messages. Until the timeout expires or a new event arrives, it won't return, wasting time that could be used to process keyboard events.

I believe issue #225 is also the select() problem; otherwise you would see 100% CPU usage:

you can note how long it takes to process each character [...]
Now, if you will look in top/htop, you will see that it constantly takes 1-1.5% of a CPU

You only reach that point with profanity disconnected (no select()) after pasting about 10^6 bytes:

$ yes | head -1000000 | tr -d '\n' | xclip
$ sleep 1; perf stat -e task-clock --timeout 3000 -p $(pgrep profanity)

 Performance counter stats for process id '548714':

          2.998,67 msec task-clock:u              #    0,999 CPUs utilized          

       3,003052293 seconds time elapsed

So I think the only noticeable problem now is the 10 ms select() timeout.

This problem can be solved by modifying the libstrophe library to support a polling mechanism (like the one provided by GMainLoop, as you suggested) and then adopting it in profanity too. Using a polling event loop is the easiest implementation, as you won't need to bother with thread safety (only one thread calls the library). Additionally, you get priority scheduling for free, so you can give user key events a higher priority and always process keyboard input first.

I believe such a change doesn't require a lot of work, as we only need to replace the two select() calls with the event loop mechanism. Additionally, libstrophe also uses glib, so the event loop support is already there.

I think I will first try to modify libstrophe and profanity to be used with GMainLoop and see how that goes.

Notice that this will only fix one of the problems; the high CPU usage should be fixed later.
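
As a rough illustration of the priority idea (a sketch with a hypothetical callback, not an actual patch), the keyboard could be attached to the GMainLoop as a watch on stdin with a higher priority than the network source:

/* Sketch only: give keyboard input a higher priority than the XMPP socket source. */
#include <glib.h>

static gboolean on_keyboard_ready(GIOChannel *chan, GIOCondition cond, gpointer data)
{
    /* hypothetical: read and process the pending input byte(s) here */
    return TRUE;                                 /* keep the watch installed */
}

void attach_keyboard_source(GMainContext *ctx)
{
    GIOChannel *stdin_chan = g_io_channel_unix_new(0);    /* fd 0 = keyboard */
    GSource *kb = g_io_create_watch(stdin_chan, G_IO_IN);

    g_source_set_callback(kb, (GSourceFunc) on_keyboard_ready, NULL, NULL);
    g_source_set_priority(kb, G_PRIORITY_HIGH);           /* above the network source */
    g_source_attach(kb, ctx);
    g_source_unref(kb);
    g_io_channel_unref(stdin_chan);
}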

@jubalh (Member) commented Jan 5, 2022

Thanks a lot for this detailed report!

libstrophe also uses the glib

It doesn't :-)

I think I will first try to modify libstrophe and profanity to be used with GMainLoop and see how that goes.

Looking forward to it! :-)

@rodarima (Contributor) commented Jan 8, 2022

Hi,

It doesn't :-)

Oops, sorry, I mixed things up when reading the source of libstrophe and loudmouth, the XMPP library of mcabber (which does use it). Then it would be nice if I could avoid adding a new dependency :)

Looking forward to it! :-)

I've come up with what looks like a plan that I can test in small increments.

  • First I would replace the while loop in prof_run() with a GMainLoop that only runs the body every 1/60 s. This would limit the UI update rate to 60 fps.

  • Then I would replace the select in inp_readline() with a polled GSource. This would make the main loop call cmd_process_input() as soon as a new byte is received from the keyboard.

  • And then I would need to modify the libstrophe library to support polling. I've taken a look at the source code, and to allow a polling mechanism I think the easiest option would be to let the user (profanity) register a callback that is called whenever a connection is created or closed, so we can keep an up-to-date list of file descriptors in profanity. We then forward the created/closed calls to g_source_add_poll() and g_source_remove_poll() to keep the fds in sync inside the GSource, which will be polled along with the other event sources (such as the keyboard) in the GMainLoop (see the sketch after this list).

Then, as soon as new activity is detected on the libstrophe fds, we call a modified version of xmpp_run_once() that doesn't perform the select(), only the part after it that processes the events.
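
Here is a rough sketch of that custom GSource (the way the socket is obtained and the dispatch callback are placeholders, not a finished patch):

/* Sketch of a custom GSource that polls the libstrophe socket inside the GMainLoop. */
#include <glib.h>

typedef struct {
    GSource source;
    GPollFD pollfd;
} XmppSource;

static gboolean xmpp_prepare(GSource *source, gint *timeout)
{
    *timeout = -1;                       /* no timeout of our own; just poll the fd */
    return FALSE;
}

static gboolean xmpp_check(GSource *source)
{
    XmppSource *xs = (XmppSource *) source;
    return (xs->pollfd.revents & G_IO_IN) != 0;   /* incoming XMPP data is waiting */
}

static gboolean xmpp_dispatch(GSource *source, GSourceFunc callback, gpointer data)
{
    /* the callback would run the select-less variant of xmpp_run_once() */
    return callback ? callback(data) : G_SOURCE_CONTINUE;
}

static GSourceFuncs xmpp_source_funcs = { xmpp_prepare, xmpp_check, xmpp_dispatch, NULL };

GSource *xmpp_source_new(int sock)
{
    GSource *source = g_source_new(&xmpp_source_funcs, sizeof(XmppSource));
    XmppSource *xs = (XmppSource *) source;

    xs->pollfd.fd = sock;                /* fd reported by the created-connection callback */
    xs->pollfd.events = G_IO_IN;
    g_source_add_poll(source, &xs->pollfd);   /* removed with g_source_remove_poll() on close */
    return source;
}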

I believe this would solve both problems, but let's see :-)

@jubalh (Member) commented Jan 9, 2022

I like the first two points.

About libstrophe, it would maybe be good to get an opinion from @pasis and @sjaeckel.

@rodarima (Contributor) commented Jan 9, 2022

I got the first two points working, but adding polling support in libstrophe may be a bit tricky, as it also has timeouts in its own main loop.

Adding a callback for each connection is not really needed, as profanity handles all XMPP connections (only two, connect and register, apparently), so we already have a pointer to each xmpp_conn and can get the socket. Furthermore, TLS connections currently use the same socket in conn->sock, so watching that socket with the polling mechanism would work fine.

I did a dirty workaround where I extracted the socket from the offset inside the opaque xmpp_conn typedef, and I was able to build a GSource so that the GMainLoop can detect incoming activity on the main connection. With this hack I can determine when it is safe to call xmpp_run_once() knowing that the select() will return immediately, as there is always new incoming data.

The problem is that I also need to make progress on queued data waiting to be sent, and handle the timeouts. I could call a stripped-down version of xmpp_run_once() from time to time that doesn't perform the select(), only the sending of data and the timeouts, while the incoming data would be processed by a callback invoked from the GMainLoop.

However, that would waste CPU, and I believe it can be avoided by determining when the progress function actually needs to be called. We can add an idle function to the main loop that is activated only when the user has caused some XMPP data to be queued. Also, we can handle the timers by first asking libstrophe for the deadline of the earliest timer and then adding a timeout callback to our main loop which in turn re-arms itself for the next deadline (a sketch follows below).
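
A sketch of the timer part under that design; xmpp_run_timers() and xmpp_next_timeout_ms() are hypothetical helpers that would only exist after splitting xmpp_run_once(), they are not part of libstrophe today:

/* Sketch only: forward libstrophe's timers to the GMainLoop. */
#include <glib.h>
#include <strophe.h>

/* hypothetical prototypes that would result from splitting xmpp_run_once() */
void  xmpp_run_timers(xmpp_ctx_t *ctx);          /* run only the handlers that are due */
guint xmpp_next_timeout_ms(xmpp_ctx_t *ctx);     /* ms until the earliest timer fires */

static gboolean fire_xmpp_timers(gpointer data)
{
    xmpp_ctx_t *ctx = data;

    xmpp_run_timers(ctx);

    /* Re-arm for the next deadline libstrophe reports. */
    g_timeout_add(xmpp_next_timeout_ms(ctx), fire_xmpp_timers, ctx);
    return G_SOURCE_REMOVE;                      /* one-shot; we re-armed above */
}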

I believe this design would be the most efficient.

From libstrophe I would need a way to retrieve the conn->sock of an xmpp_conn (maybe xmpp_conn_get_socket()), and to split xmpp_run_once() into three new progress functions: handling the timers, sending queued data and handling incoming data. Maybe it would be better if I open an issue in libstrophe to continue the discussion, as I believe it belongs there.

@jubalh (Member) commented Jan 10, 2022

I got the two first points working, but adding polling support in libstrophe may be a bit tricky, as they also have timeouts in their own main loop.

You could create a (draft) pull request for this already. That makes it easier for people to see what is being worked on :)
