[Performance] Onyx.get batching #84
I like the idea of batching everything in one tick, but I'm not a fan of batching arbitrarily based upon time frames. "Magic number" solutions where we pick some arbitrary number out of a hat (why 2ms vs 20ms or 200ms?) require constant maintenance to find the right numbers, and the effectiveness of those numbers can change unpredictably. I like the idea of batching until the JS engine is idle (which, if I understand correctly, is what you mean by one tick), but I'm not sure why we would wait longer than that.
> why 2ms vs 20ms or 200ms?

Let's say an average read from disk takes anywhere between 50ms and 500ms; by delaying execution by up to 10% (2-5ms) we can save the roundtrip for potential requests that might happen in this window of time. This is very likely to happen during init, where more data can be captured this way:

- a tick might be enough to batch 2 requests, where a "forgivable" delay would capture 20
- it's a tradeoff: hard reads would happen 2-5ms slower, but every time a 2nd request is batched you save 50-500ms

The idea here is that the first get triggers the 2-5ms batch window, then the batch flushes along with any other gets that arrived by that time.

> I like the idea of batching until the JS engine is idle (which if I understand correctly, is what you mean by one tick)

We can't reliably detect that, and it might be too long to wait for idle; usually reads are triggered by components that need data "now" and can't afford to wait for idle. Going this direction would require complicated logic "to feed the most hungry first". "Next tick" is the next iteration of the event loop; it's very similar to a setTimeout() with a 0ms delay.
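For illustration, here is a minimal sketch of what such a delayed batch could look like, assuming the @react-native-async-storage/async-storage API. The names `batchedGet`/`flushBatch` and the 2ms constant are illustrative, not the actual Onyx code:

```js
import AsyncStorage from '@react-native-async-storage/async-storage';

const BATCH_WINDOW_MS = 2; // hypothetical "forgivable" delay
let pendingKeys = [];
let pendingResolvers = new Map();
let flushTimer = null;

// Send everything captured during the window as a single multiGet roundtrip
function flushBatch() {
    const keys = pendingKeys;
    const resolvers = pendingResolvers;
    pendingKeys = [];
    pendingResolvers = new Map();
    flushTimer = null;

    AsyncStorage.multiGet(keys)
        .then((pairs) => pairs.forEach(([key, value]) => {
            resolvers.get(key).forEach(({resolve}) => resolve(value));
        }))
        .catch((error) => resolvers.forEach((list) => list.forEach(({reject}) => reject(error))));
}

// The first get() opens the window; later get() calls join the same batch
function batchedGet(key) {
    return new Promise((resolve, reject) => {
        const callbacks = pendingResolvers.get(key) || [];
        callbacks.push({resolve, reject});
        pendingResolvers.set(key, callbacks);
        if (callbacks.length === 1) {
            pendingKeys.push(key);
        }
        if (!flushTimer) {
            flushTimer = setTimeout(flushBatch, BATCH_WINDOW_MS);
        }
    });
}
```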
Just to confirm, reads are triggered by user or network action, right? So the only difference between "batching one tick" or "batching one tick + 20ms" is the possibility that additional user/network activity happened in that tiny 20ms window, right? It feels like the odds of that helping are extremely low.
Rendering a screen does not happen all at once, and it does not trigger the needed Onyx connections at once either - it's more like an avalanche effect. A tree is rendered from the children to the parent. Implementing batching with a delay/debounce can capture more of these calls. We did something similar by hooking into existing reads - when 2 components want the same thing but it's not in cache yet. This is the theory.

We won't be able to capture all of this in a single batch, as some items are codependent - you need to resolve Auth to continue with the rest of the app - but if we manage to group larger batches we should see a big improvement in time to resolve data from storage.
Great, thank you for that! However, there still isn't actually any "delay" between the avalanching cycles -- if one cycle of Onyx.gets() triggers a second cycle of Onyx.gets(), it's not like it waits 2-20ms between "ticks". Rather, immediately after one tick completes, the next one starts. So aren't we really trying to wait for all the cycles to stop avalanching before doing any Onyx.gets()? In other words, we aren't actively looking to wait any amount of time at all. Rather, we just want to wait until the avalanche is "done" before sending all the gets over in a large batch. If that's correct, are you saying that it's in practice very difficult to detect when the avalanche is done, and thus there's no better option than to wait a bit? Why wouldn't setTimeout(0) work -- why does the delay need to be nonzero?
> So aren't we really trying to wait for all the cycles to stop avalanching, before doing any Onyx.gets()? In other words, we aren't actively looking to wait any amount of time at all. Rather, we just want to wait until the avalanche is "done" before sending all the gets over in a large batch.

Yes, ideally we would hook to some event instead of waiting an arbitrary time, but I'm not sure if that's possible at all.

> Why wouldn't setTimeout(0) work -- why does the delay need to be nonzero?

setTimeout(0) would be about the same as the next tick, and is just not enough.

Hey, remember the xlsx that I did originally - they contain start time information for each Onyx.get call. Check the start times of the get calls here: #70 (comment)

2ms window:
- 1350859.00ms
- 1350861.00ms

5ms window:
- 1350935.00ms
- 1350937.00ms
- 1350939.00ms

Here's a full xls: https://docs.google.com/spreadsheets/d/15GkUEUOyVLk02y8_-PFAJFF4Ej7OsyXfdbyJgFoC_4Q/edit?usp=sharing

The only time I see get calls on the same millisecond is for calls that happen in a foreach cycle, but that is handled in #78, where instead of for-eaching we can directly call multiGet with all the keys.

So in reality there are calls that happen in very close proximity; it's just how React rendering works, and Onyx relies on that. Batching with a delay can serve as a good POC of whether we'll gain something out of this, and then we can try to improve by hooking into something exposed by React to sync with its rendering process.
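The #78 change mentioned above is essentially this pattern. A sketch (not the actual PR code; `cache` and the function names are illustrative):

```js
import AsyncStorage from '@react-native-async-storage/async-storage';

// Before: one bridge roundtrip per key
function loadKeysOneByOne(keys, cache) {
    return Promise.all(keys.map((key) => AsyncStorage.getItem(key)
        .then((value) => cache.set(key, value))));
}

// After: a single multiGet roundtrip for the whole set of keys
function loadKeysInOneBatch(keys, cache) {
    return AsyncStorage.multiGet(keys)
        .then((pairs) => pairs.forEach(([key, value]) => cache.set(key, value)));
}
```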
Got it, if we can't accurately "hook" when the thread is otherwise idle, then a short timeout might be the best we can do. Thanks for walking me through it!
Some more timing information, this time on iOS:

iOS Onyx.get timings durring init.xlsx

Calls are ordered by start time.
I'm having trouble following some of this as it's a bit over my head. But it sounds like we are proposing that batching calls to AsyncStorage (using multiGet) will speed up reads. Does that sound accurate? Can we easily test this?
I've tested this while working on #88.

Looks like this won't be needed. Even though retrieval became about twice as fast, the app seemed to become slower, as many updates now land at the same time.
Cool, thanks for the summary! Lemme know if I've got the general idea correct...

And this is not normally observable because...
The "hang" effect happens when switching chats and only on Android.
Since I already have the change in git history, I can put it in a separate branch to test again after improvements in E.cash |
Sounds good. Let's put this on HOLD for now.
Gonna take this back off hold now so we can keep discussing this...

@kidroca Seems like this is happening for sure. For good measure, I did some logging from the native layer and can see about 400 separate getItem calls. Feels like we have a few options to explore, but I'm not sure which ones are worth pursuing next...

For both of these, I'm curious how the new version of Onyx would handle it.
But I'm surprised that's not used internally for getItem. I think we can implement both proposals - they both need a way to capture and batch the incoming get calls.
Here's how that is handled: when a get for a key is already in flight, new calls for the same key attach to the pending task instead of triggering another storage read.

One thing to note is that there won't be much use for the capture/pending-task logic if we fetch all finite items with one multiGet during init.
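A rough sketch of that pending-task idea (illustrative names, not the actual Onyx code):

```js
import AsyncStorage from '@react-native-async-storage/async-storage';

// Map of storage key -> promise for a read that is already in flight
const pendingGets = new Map();

function getWithPendingTask(key) {
    // Reuse the in-flight read if one exists for this key
    if (pendingGets.has(key)) {
        return pendingGets.get(key);
    }

    const task = AsyncStorage.getItem(key)
        .finally(() => pendingGets.delete(key)); // clear once the read settles

    pendingGets.set(key, task);
    return task;
}
```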
I was thinking of opening a separate ticket, but here seems a good place to discuss as well.

Proposal

Onyx keeps track of keys with infinite size - the "Safe Eviction Keys". The data stored under such keys can grow as long as there is storage for it. Using the "safe eviction keys" and "all keys" we can derive the "finite keys".

Problem

Currently the entire "finite" information is requested and read during init; it just happens in chunks as each component subscribes to different data.

Solution

Identify all "finite" keys and load them into cache with one multiGet request.

Additional consideration

Since general report information (typically needed to display reports in the LHN) is stored in separate keys and each one is about ...
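A sketch of what that could look like. The `isSafeEvictionKey` helper, the cache interface, and the prefix-matching are assumptions for illustration, not the Onyx implementation:

```js
import AsyncStorage from '@react-native-async-storage/async-storage';

// Hypothetical helper: a key is "infinite" if it starts with one of the
// configured safe-eviction key prefixes (e.g. a collection like 'reportActions_')
function isSafeEvictionKey(key, safeEvictionKeys) {
    return safeEvictionKeys.some((prefix) => key.startsWith(prefix));
}

// Warm the cache in a single multiGet instead of hundreds of getItem calls
async function warmCacheWithFiniteKeys(cache, safeEvictionKeys) {
    const allKeys = await AsyncStorage.getAllKeys();
    const finiteKeys = allKeys.filter((key) => !isSafeEvictionKey(key, safeEvictionKeys));

    const pairs = await AsyncStorage.multiGet(finiteKeys);
    pairs.forEach(([key, value]) => cache.set(key, value == null ? null : JSON.parse(value)));
}
```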
I think we should create a new issue, if that's OK. I have some thoughts and would like to reply, but it feels like a different problem from the one of making many unoptimized data retrievals across the bridge.
This is a good question. I'd be curious to hear what the maintainers think.
Oh I see, so we will still use multiGet under the hood. Btw, it also looks like this code avoids fetching duplicates, so perhaps the pending task is not needed:

```js
// avoid fetching duplicates
keys.forEach((key) => {
    if (this._getKeys.indexOf(key) === -1) {
        this._getKeys.push(key);
    }
});
```
FWIW, just swapping out getItem for multiGet didn't change much. I logged this from the native layer:

That is, about 300 or so keys are normally fetched one at a time over the bridge. I kind of expected to maybe see some big reduction in start up time since the bridge flooding issue isn't happening anymore, but things are more or less the same. Which makes me wonder how much of a problem we have with the "bridge", or if the problem is somewhere else.

Here's a quick side by side comparison of release builds:

getItem.v.multiGet.mp4

I think these changes are maybe still worth doing, but the lack of a difference just makes me think that we are "pushing the limits" of something other than the bridge.
It avoids duplicates, but only for the current batch, while the pending tasks cover retrievals that are halfway through.

Yep, tried that as well some time ago; the effect was noticeable, but the app was slower back then.

This way of making get calls still has the overhead of creating 300+ promises, since the calls still happen one by one even though they are batched in the end. This will also capture 300 more promises for the capture/pending-task optimisation. I've suggested stripping everything from the app and putting it back together to identify what might be causing this - could it be the navigation setup, with all the different Navigators that mount after auth? A colleague Android developer told me once that having many view stacks hurts performance.
@marcaaron
I think this can be closed since we don't use AsyncStorage anymore?
Plus, we are now working on this PR:
In the benchmarks performed here: Expensify/App#2667 (comment), I've noticed that sequential reads happen very fast (8-9ms), much faster than what they take in E.cash.

I think the reason is that Onyx will blast AsyncStorage with as many calls as it needs. Browsing the AsyncStorage code, I've found this optimization: AsyncStorage#multiGet. When multiGet is called multiple times in a very short span of time (1 tick), it batches all of those calls and invokes flushGetRequests on the next tick. From the docs of flushGetRequests, I think that's exactly what happens when E.cash starts - a lot of operations at once.
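If I'm reading the linked AsyncStorage code correctly, that batching looks roughly like this from the caller's perspective (the key names here are placeholders, and merging behaviour is per the source linked above):

```js
import AsyncStorage from '@react-native-async-storage/async-storage';

async function readSomeKeys() {
    // These two multiGet calls start in the same tick, so AsyncStorage can merge
    // them into a single native request
    const reportsPromise = AsyncStorage.multiGet(['report_1', 'report_2']);
    const sessionPromise = AsyncStorage.multiGet(['session']);

    // Normally the merged batch is sent on the next tick; this flushes it immediately
    AsyncStorage.flushGetRequests();

    const [reports, session] = await Promise.all([reportsPromise, sessionPromise]);
    return {reports, session};
}
```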
Proposal

Test the following theory:

Update Onyx.get to use multiGet internally when it fetches data from AsyncStorage; this way the operations that happen at the same time would be batched.

We can try implementing our own get batching that waits a bit longer (2-3ms) to capture more get calls and then makes a multiGet request.
requestSample benchmarks
Reading a random key and
~1kb
text valueBased on 50+ tests of each
AsyncStorage.getItem
in parallel:AsyncStorage.multiGet
:AsyncStorage.getItem
in parallel:AsyncStorage.multiGet
:I've observed that there are gains even for
10
parallel calls made toAsyncStorage.getItem
, while as we get higher the performance degrades exponentially,multiGet
is not affected as much by the 10x increase of throughputAdditional notes
Write operations take about twice as much and can be optimized the same way
For my sample chat there are around 80-100 calls to
Onyx.set
during init with the following timings:The text was updated successfully, but these errors were encountered: