Consider debouncing ement-room-list-auto-update #197
I guess I don't know whether multiple concurrent sync requests for the same session are getting stacked up here for some reason. If there isn't already protection against issuing a new sync while one is already in progress (or otherwise queued up), there probably should be.

I expect that some kind of customizable throttling would be a good idea. I'm imagining something like: poll every X seconds; but when a message is received (or sent!), temporarily reduce the polling interval to Y seconds so that active conversations are processed in a timely fashion, and maintain the smaller value until there have been no new messages for Z seconds, at which point revert the interval back to X. (With X, Y, and Z being user options; see the sketch below.) In the meantime I've disabled `ement-room-list-auto-update`.
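A minimal sketch of that adaptive-polling idea; all of the names here are hypothetical, and `my/issue-sync` is a stand-in for whatever would actually issue the sync request, not Ement's real API:

```elisp
;; Adaptive polling: X = normal interval, Y = active interval,
;; Z = how long to stay "active" after the last message.
(defvar my/poll-interval-normal 30
  "X: polling interval in seconds while conversations are quiet.")
(defvar my/poll-interval-active 2
  "Y: polling interval while a conversation is active.")
(defvar my/active-window 60
  "Z: seconds without messages before reverting to the normal interval.")
(defvar my/last-message-time 0
  "Time of the most recent sent or received message, as a float.")

(defun my/note-message ()
  "Record that a message was just sent or received."
  (setq my/last-message-time (float-time)))

(defun my/current-poll-interval ()
  "Return Y while within Z seconds of the last message, else X."
  (if (< (- (float-time) my/last-message-time) my/active-window)
      my/poll-interval-active
    my/poll-interval-normal))

(defun my/issue-sync ()
  "Stand-in for issuing a real sync request."
  (message "sync at %s" (format-time-string "%T")))

(defun my/poll-loop ()
  "Issue a sync, then schedule the next one at the adaptive interval."
  (my/issue-sync)
  (run-at-time (my/current-poll-interval) nil #'my/poll-loop))
```

Calling `(my/note-message)` from the message send/receive paths is what shortens the interval; once Z seconds pass without a message, the loop naturally falls back to X.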
Matrix is implemented via an HTTP connection that remains open until the server has data to send to the client. That data might be a large chunk of events for multiple rooms, or it might be a tiny response indicating that a user is typing in a room. As soon as such a response is received, the client makes another request to listen for the next update. If the user is in several busy rooms, it's entirely possible, if not very likely, that multiple such requests and responses will be made in short periods of time. Ement already limits to one outstanding sync per session (see ement.el, lines 492-499 at commit 60e9a50).
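For illustration, a guard of that general shape could look like the following sketch; the names are hypothetical and the actual code at those lines may differ:

```elisp
;; Sketch only: refuse to start a new sync while one is outstanding for
;; the session, and re-issue the long poll as soon as a response arrives.
(defvar my/syncs-in-flight (make-hash-table :test #'eq)
  "Sessions that currently have an outstanding sync request.")

(defun my/do-sync-request (session callback)
  "Stand-in for the real HTTP long poll; calls CALLBACK after a delay."
  (ignore session)
  (run-at-time 30 nil callback 'fake-response))

(defun my/sync-maybe (session)
  "Start a sync for SESSION unless one is already in flight."
  (unless (gethash session my/syncs-in-flight)
    (puthash session t my/syncs-in-flight)
    (my/do-sync-request
     session
     (lambda (_response)
       (remhash session my/syncs-in-flight)
       ;; Long polling: immediately listen for the next update.
       (my/sync-maybe session)))))
```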
However, requests other than sync requests could be outstanding, e.g. sending messages, uploading files, retrieving images, etc. All of those may run in parallel.
I don't know of any reason such a thing would be needed. AFAIK incoming events are processed correctly and quickly already. This has worked fine ever since the first iteration of this client. What is the problem you're reporting? That it seems like there are too many HTTP requests being made? If so, I'd suggest making a test account that's only in one room, where you're the only member of it, and see if the problem persists. I think you'll see that only one connection is made, and it doesn't receive any data unless the room receives an event, and the connection will time out and be recreated every 30-40 seconds as programmed.
You can check for yourself which curl process the sentinel is being called for, and see whether it's being called multiple times for the same process or once per process. But how often a sentinel is called is up to Emacs, and accounting for that in plz.el has been difficult; only in the latest version has it been fully handled correctly, after years of looking for solutions.
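One low-risk way to observe this is a bit of advice on `plz--respond` (a debugging sketch; it deliberately assumes nothing about that function's signature):

```elisp
;; Debugging sketch: log every call to `plz--respond' with its raw
;; arguments and a timestamp, so calls can be correlated per process.
(defun my/log-plz-respond (&rest args)
  (message "[%s] plz--respond: %S" (format-time-string "%T.%3N") args))

(advice-add 'plz--respond :before #'my/log-plz-respond)
;; When done: (advice-remove 'plz--respond #'my/log-plz-respond)
```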
Ah, good, that makes things less alarming!
My real problem was Emacs continually maxing out a CPU core, and when I profiled that I got this: [profiler report elided]
Disabling `ement-room-list-auto-update` stopped that. I am joined to a lot of rooms on my primary Matrix server (~180), so that's a huge factor, but it would be good if the user had a way to help Ement/Emacs cope better with heavier loads like this.
I see, thanks. Well, I'm in 103 rooms, and I don't have any performance problems with the room list updating, so I'm surprised that you're seeing that. From looking at the profile, it doesn't look like … Anyway, if necessary, the room list auto-update function could be modified to run a timer of something like 200 ms, resetting it each time, so that if the function is called in rapid succession, the room list would only get updated once 200 ms had elapsed since the last call; that could serve to "debounce" the updates.
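A minimal sketch of that debounce approach, with hypothetical names (`my/room-list-refresh` stands in for the real refresh function):

```elisp
;; Debounce: each call cancels any pending timer and starts a new one,
;; so the refresh runs only after 200ms with no further calls.
(defvar my/room-list-update-timer nil
  "Pending timer for the debounced room list refresh.")

(defun my/room-list-refresh ()
  "Stand-in for the actual room list refresh."
  (message "room list refreshed"))

(defun my/room-list-auto-update (&rest _)
  "Debounced entry point; safe to call on every sync."
  (when (timerp my/room-list-update-timer)
    (cancel-timer my/room-list-update-timer))
  (setq my/room-list-update-timer
        (run-at-time 0.2 nil
                     (lambda ()
                       (setq my/room-list-update-timer nil)
                       (my/room-list-refresh)))))
```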
Ah, darn... I'd figured my case might be abnormal among ement.el users, but a factor of less than 2x isn't so huge -- not unless there are some exponential effects which are starting to really ramp up around the upper half of that range. I guess I'll need to try to learn more. That probably won't happen in a hurry, but leave it with me and I'll follow up when I have more info (and feel free to close this, I guess -- it can always be re-opened).
I've been alerted to a Matrix server bug, apparently presence-related, which was resulting in an enormous increase in Matrix traffic on our network. I strongly suspect what I was seeing yesterday was simply on account of this, as the timing fits. I'll re-test with defaults later on (as presence has now been disabled on our server to work around this issue).
I haven't gone right back to defaults, but I've reinstated `ement-room-list-auto-update`. Even though this case was a server problem, I think something like the proposed debouncing would be a good idea, if only to protect against any similar cases in the future.
Upgrading to priority B since I heard from another user who might be experiencing this issue.
As part of the testing/debugging for issue #298 I implemented the following. It provides a couple of knobs for users to customise the room list refresh:

```elisp
(defcustom ement-room-list-update-interval 5
  "Minimum seconds between `ement-room-list-auto-update' refreshes.
If nil, then refresh immediately upon each sync.
In either case this is further subject to `ement-room-list-update-idle-delay'."
  :type '(choice (const :tag "Refresh immediately upon sync" nil)
                 (number :tag "Minimum seconds between refreshes")))

(defcustom ement-room-list-update-idle-delay 1
  "Necessary idle time before any `ement-room-list-auto-update' refresh.
If nil, then refresh immediately."
  :type '(choice (const :tag "Refresh immediately" nil)
                 (number :tag "Wait until idle for this many seconds")))
```

- If both options are nil, the room list is refreshed immediately upon each sync.
- If `ement-room-list-update-interval` is non-nil, at least that many seconds must elapse between refreshes.
- If `ement-room-list-update-idle-delay` is non-nil, Emacs must additionally be idle for that many seconds before refreshing.
- If both options are non-nil, then we have the minimum interval and then, once that time has elapsed, we also wait for idle time to update (sketched below).

(N.b. that code is also set up for testing, with a non-nil ….)

Because #298 has resolved the known performance issues, it's not obvious that we need this; but it might still be a nice-to-have in case of future issues (especially server-side issues, such as the one which prompted me to raise this issue in the first place, which was absolutely bombarding clients with events).
I managed to mess up the option/behaviour descriptions. Now fixed.
If I run

(trace-function 'plz--respond)

I'm seeing about 5 calls per second, consistently. That really seems like a lot. (Especially if each of those is starting an independent curl process?) Is that level of activity in line with expectations?