Aurélie Jallut from https://app.paris.happy-dev.fr/ shared these GIFs with me to illustrate the poor performance she is experiencing on Hubl.
I believe @matthieu is investigating this problem, but I'm not sure where we're at. I'm creating this issue so everyone can see the problem, so we keep it on our radar and address it.
On Hubl
On RocketChat
We've been trying to schedule a call for the last two weeks but couldn't manage it because of the holidays. We need to call this week; no specific day is planned yet, but I'm sending her a message :)
Did we go forward on the lazy-loading of resources POC?
Did we go forward on reducing the number of HTTP requests using the depth?
I'm having difficulty following what we've tried in production so far. Are we still waiting for validation and deployment of @jblemee's and @calummackervoy's work?
I also notice that the call to alien.svg is executed once per user and always blocked by the browser on my side.
Yes, the browser itself handles that; only the first one is actually loaded.
It happens on every circle, every time you switch. Is that normal? It feels like it's becoming particularly slow again today.
I don't think so; we don't reload any resources unless you save something.
On my side, I only get a call to /circles/X/members/, then a call for each of those users' avatars. (I even get it twice for some users? Widget reloading?)
Filtering: djangoldp-packages/djangoldp!175 (merged) (+djangoldp-circle, +djangoldp-project, +djangoldp-notification). Last time I tested, some permissions were messed up; there have been some commits there since, and I haven't tried again. Awaiting the green light of "not WIP" plus the upgrade for djangoldp-communities.
I'm wondering whether it's possible that two versions of the core are running at the same time, which would mean your cache is filled by one version and then filled again by the other.
Actually, only the helpers from core@0.11 are included, so it's not related. With @balessan we observed heavy RAM consumption for the community tab (at least 500 MB, going up to 1.3 GB) and high, constant CPU usage on Firefox (~70%).
> Did we go forward on the lazy-loading of resources POC?

I forgot this one. On the core side, I have some code stashed on my dead computer :( If we want to integrate this in 0.13, I can either redo it or wait for the computer to be fixed.
> Did we go forward on reducing the number of HTTP requests using the depth?

This is a good question. I think we concluded that the number of requests was satisfactory, but the time needed to generate the big response was too long. That's why @jblemee and @calummackervoy were working on optimization.
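To make the trade-off concrete: in Django REST Framework (which djangoldp builds on), a serializer's `Meta.depth` option controls whether nested relations are inlined or returned as references. The toy sketch below uses hypothetical data and a stand-in `serialize` function, not the real djangoldp serializer; it only illustrates how a deeper response removes client follow-up requests at the cost of a bigger payload:

```python
# Toy model of the requests-vs-depth trade-off. With a shallow response the
# client must issue one follow-up HTTP request per nested resource; a "deep"
# response inlines them server-side instead.

def serialize(users, depth):
    """Return (payload, number_of_follow_up_requests_the_client_must_make)."""
    if depth == 0:
        # Nested projects come back as URLs only.
        payload = [{"name": u["name"],
                    "projects": [p["@id"] for p in u["projects"]]}
                   for u in users]
        extra = sum(len(u["projects"]) for u in users)
    else:
        # Nested projects are inlined: bigger payload, zero follow-ups,
        # but the server spends longer generating the one big response.
        payload = [{"name": u["name"], "projects": u["projects"]}
                   for u in users]
        extra = 0
    return payload, extra

# 100 users with 5 nested projects each (hypothetical data).
users = [{"name": f"user{i}",
          "projects": [{"@id": f"/projects/{i}-{j}/", "title": f"p{j}"}
                       for j in range(5)]}
         for i in range(100)]

_, shallow_extra = serialize(users, depth=0)
_, deep_extra = serialize(users, depth=1)
print(shallow_extra, deep_extra)  # 500 follow-up requests vs 0
```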
> I forgot this one. On the core side, I have some code stashed on my dead computer :( If we want to integrate this in 0.13, I can either redo it or wait for the computer to be fixed.

We'll need to redo that quickly, at least to test whether it helps.
> This is a good question. I think we concluded that the number of requests was satisfactory, but the time needed to generate the big response was too long. That's why @jblemee and @calummackervoy were working on optimization.

Most of my testing has been with the serialization of ~9000 resources. I recorded Django's ModelSerializer rendering this in 11 seconds (Figure 9). In my (work-in-progress) prefetch tests we're approaching that, but we're still short.
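For context, here is a sketch of the kind of timing harness such a measurement could come from. The serializer below is a stand-in, not djangoldp's; only the resource count follows the test description:

```python
# Minimal timing harness for serialization benchmarks: time several runs
# with a monotonic clock and report the median to smooth out noise.
import json
import statistics
import time

def serialize_stub(resources):
    # Stand-in for the serializer under test (e.g. a DRF ModelSerializer).
    return json.dumps(resources)

# ~9000 resources, matching the scale of the test above.
resources = [{"@id": f"/users/{i}/", "name": f"user{i}"} for i in range(9000)]

timings = []
for _ in range(5):
    start = time.perf_counter()
    serialize_stub(resources)
    timings.append(time.perf_counter() - start)

print(f"median {statistics.median(timings):.4f}s over {len(timings)} runs")
```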
Currently the overall definition of done seems to span front-end and back-end as "when the app doesn't feel slow for users"? I'm not saying I disagree with this.
As a parallel advancement, an issue has been isolated on Solid XMPP Chat; it triggers when the user opens a lot of different chats. components/solid-xmpp-chat#211 (closed)
Starting a thread about the user feedback so we can keep track of it:
**Maud**

Environment:
- Windows 10
- 4 GB RAM, Core i3
- Firefox

Details:
- Loading time of the app is acceptable
- Sometimes a bit slow when switching circles and showing history; not often. OK during the call
- Chrome is slower than Firefox to load everything
- No history on conversations in Chrome. With the "Privacy Badger" extension on Chrome, nothing loads at all
- A lot of RAM is used on Firefox with a community open (~1.3 GB)
- Long loading time (~10 s), about average for what we know
- Long time to load a chat (especially the message history)
- She has a lot of tabs open in her browser, and a lot of web apps (GDoc, ...)
- Her computer reports 92% of RAM used and the CPU running at 110% of its regular capacity
When clicking on a circle, we can see in the network tab:
I ran a performance profile in the browser (Firefox v82), and what I notice is that 85% of the time is spent on "CC Graph Reduction" and other garbage-collection and cycle-collection operations.
I'm not sure it's something we can optimize directly, but it's probably a symptom of other issues.
Could it be Converse-related? Router-related? Any ideas on how to diagnose the problem more precisely?
It would be interesting to test without solid-xmpp-chat (Converse) loaded, for sure.
In any case, we may still want to investigate where this garbage collection happens. There are a bunch of possible optimizations, mostly relying on a clean code base: reuse variables instead of creating new ones, avoid useless function evaluations, and maximize the use of built-ins. In heavy apps like ours, with a lot of data processing, it's easy to end up with memory leaks or poor memory management by the browser's garbage collector.
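The code in question is JavaScript in the browser, but the allocation-pressure principle is language-agnostic. Here is a minimal Python sketch of the "reuse variables instead of creating new ones" advice (names are illustrative; in the browser the analogous win is avoiding per-update object and array churn that feeds the garbage collector):

```python
def render_fresh(items, frames):
    result = None
    for _ in range(frames):
        # Allocates a brand-new list every frame; each discarded list is
        # extra work for the garbage collector.
        result = [item.upper() for item in items]
    return result

def render_reused(items, frames):
    buffer = [None] * len(items)  # one list allocated up front
    for _ in range(frames):
        for i, item in enumerate(items):
            # Overwrites slots in place; the list object itself is reused
            # across frames (the strings are still created each time).
            buffer[i] = item.upper()
    return buffer

items = ["alice", "bob", "carol"]
assert render_fresh(items, 10) == render_reused(items, 10)
```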
Besides the framework, Converse itself may also perform poorly.
I know JC recently pushed some changes that fix a memory leak when fetching messages, among other things.
The changes I'm referring to are the ones introduced on October 29th here: https://github.com/conversejs/converse.js/commits/master
It looks like Converse is the origin of all the events being triggered that are slowing down my tab. I'm not sure of my diagnosis, though; if you know how to confirm it, I'm interested.
Used test1 /users/ with ~9000 resources (1,000 users, with 5,000 project members randomly distributed across 1,000 projects). Tested with Chrome.
Serialization time - first response
I looked into an idea of prefetching nested fields in the ViewSet, after initial tests indicated that this could reduce response time for the first request by several seconds and cut database hits from > 9,000 to < 50.
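The idea can be illustrated without Django (in Django itself this is what `QuerySet.prefetch_related` does): instead of resolving each object's nested field with its own query, collect the ids, issue one batched query per relation, and join in memory. A toy sketch with a dict standing in for the database (hypothetical data; the real ViewSet change is more involved):

```python
# Toy illustration of N+1 queries versus prefetching. The hit counts mirror
# the shape of the > 9,000 -> < 50 reduction described above.

db_hits = 0

def query(table, ids):
    """Simulated database query: every call counts as one hit."""
    global db_hits
    db_hits += 1
    return [table[i] for i in ids]

projects = {i: {"id": i, "title": f"p{i}"} for i in range(100)}
users = [{"id": i, "project_ids": [i % 100]} for i in range(1000)]

# Naive serialization: one query per user's nested field -> 1000 hits.
db_hits = 0
naive = [{"id": u["id"], "projects": query(projects, u["project_ids"])}
         for u in users]
naive_hits = db_hits

# Prefetch: gather all ids, one batched query, then join in memory -> 1 hit.
db_hits = 0
all_ids = sorted({pid for u in users for pid in u["project_ids"]})
fetched = {p["id"]: p for p in query(projects, all_ids)}
prefetched = [{"id": u["id"],
               "projects": [fetched[pid] for pid in u["project_ids"]]}
              for u in users]
prefetch_hits = db_hits

print(naive_hits, prefetch_hits)  # 1000 vs 1
```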
Updated some results above. Included results from Django's ModelViewSet as a control.
### Summary
- Both are improved by the prefetch by similar factors.
- ModelViewSet performs 3-5 times faster in all conditions. This indicates that LDPViewSet has a lot of room for improvement. @jblemee was working on improving the LDPPermissions checks in list, which will help.
- The growth caused by depth is similar in all tests (between 1.3 and 2 times longer).
- The performance of LDPViewSet is highly unpredictable. The prefetch improves predictability a lot (for ModelViewSet too); the most likely cause is the reduction in database hits. With the prefetch, LDPViewSet is still less predictable than ModelViewSet, but this is likely because of the database hits involved in permission checks.
- It's worth stressing that the ModelViewSet performance recorded here is an unachievable goal for LDPViewSet: it lacks permission checks completely and is unsuitable for a production environment - it just gets a queryset and serializes it. It would be possible to add some permissions to make the comparison more valid, but this would obviously take time.
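To illustrate the last two points: per-object permission checks in a list view add one database hit per resource, while batching the check collapses them to a single hit. A toy sketch with a hypothetical policy, much simpler than the real LDPPermissions logic:

```python
# Toy model of list-view permission checking: per-object checks versus one
# batched lookup of the user's allowed ids.

db_hits = 0

def user_can_view(user, resource_id):
    """Naive check: one simulated database hit per resource."""
    global db_hits
    db_hits += 1
    return resource_id % 2 == 0  # stand-in policy

def allowed_ids(user):
    """Batched check: one simulated database hit for the whole list."""
    global db_hits
    db_hits += 1
    return {i for i in range(1000) if i % 2 == 0}

resources = list(range(1000))

db_hits = 0
per_object = [r for r in resources if user_can_view("maud", r)]
per_object_hits = db_hits  # one hit per resource

db_hits = 0
allowed = allowed_ids("maud")
batched = [r for r in resources if r in allowed]
batched_hits = db_hits  # a single hit

print(per_object_hits, batched_hits, per_object == batched)  # 1000 1 True
```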