
Optimize LineCache #15

Open
2 tasks
patrickisgreene opened this issue Jul 23, 2018 · 10 comments

Comments

@patrickisgreene
Contributor

patrickisgreene commented Jul 23, 2018

Currently the LineCache struct doesn't do much beyond applying an update when we receive one from xi. It was copied here from xi-term as-is, but I think we can do better.

I won't have time to work on this for a while; I'm just noting some things here so I can remember them whenever I get back to it. But if anyone who stumbles by wants to take a crack at it, be my guest: PRs are always welcome.

  • Track changed lines (lines that need to be re-rendered)
    This is harder than it sounds (or I'm an idiot 😉): xi updates are expressed as a function from the old cache state to the new one, so figuring out exactly when a given line has changed is not as easy as it first seems.
  • Add a method to retrieve only the changed lines

As I think of more ways to improve the line cache over the next couple of months while I mess with xi-term, I'll note down any other optimizations that could be made here.
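A rough sketch of the two bullet points above, with simplified stand-in types (hypothetical names, not xrl's actual Line or LineCache API):

```rust
// Hypothetical sketch: a cache that records which line indices changed
// during an update, so a frontend can re-render only those.
#[derive(Clone, PartialEq, Debug)]
struct Line {
    text: String,
}

#[derive(Default)]
struct LineCache {
    lines: Vec<Line>,
    dirty: Vec<usize>, // indices of lines changed since the last render
}

impl LineCache {
    // Replace the line at `idx`, marking it dirty only if it actually changed.
    fn set_line(&mut self, idx: usize, line: Line) {
        if self.lines.get(idx) != Some(&line) {
            if idx >= self.lines.len() {
                self.lines.resize(idx + 1, Line { text: String::new() });
            }
            self.lines[idx] = line;
            self.dirty.push(idx);
        }
    }

    // Hand the changed indices to the renderer and reset the dirty set.
    fn take_dirty(&mut self) -> Vec<usize> {
        std::mem::take(&mut self.dirty)
    }
}
```

The tricky part the bullet alludes to (mapping an update function to concrete changed indices) is glossed over here; this only shows the bookkeeping side.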

@Cogitri
Collaborator

Cogitri commented Jun 7, 2019

It'd also be nice if LineCache::update were async. Right now I'm having problems with styling in big documents (e.g. in gxi's edit_view.rs with Rust syntax highlighting enabled): when I put a " somewhere, xi resends all the lines whose style has changed (read: all following lines), which can block my UI for a few seconds :c. I'm honestly not quite sure how to handle this, since I'd have to be very careful when making it async not to cause inconsistencies (e.g. the user inserts a ", but due to the styling updates rushing in that character isn't registered immediately, then the user presses backspace: what happens?).

@little-dude
Collaborator

Updating the cache is a CPU-bound task, so we cannot run it on the tokio event loop directly: it would block the loop at each update. If we want to make it asynchronous we could, for instance, update the cache in a background thread, send an event when it's ready, and redraw when we see that event. I'm not sure this is something xrl should provide, to be honest.

The best thing we could do at the xrl level, I think, is to optimize the updates.

I guess that I'd have to be very careful at making it async to not cause inconsistencies (e.g. user inserts a ", but due to the styling updates rushing in that character isn't registered immediately, then the user presses backspace - what happens?)

Assuming the updates sent by the core are queued correctly, there's no way to apply an update partially or to apply multiple updates at the same time, thanks to the language's safety guarantees: the compiler prevents you from mutating the cache in two places simultaneously (well, unless you use a cell, but then you'd get a panic at runtime).
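The background-thread idea described above could be sketched like this with std primitives (the Update and Cache types here are stand-ins for illustration, not xrl's types):

```rust
use std::sync::mpsc;
use std::thread;

// Stand-ins for xrl's real types, for illustration only.
struct Update {
    lines: Vec<String>,
}
struct Cache {
    lines: Vec<String>,
}

// Apply the CPU-bound update off the event loop, then notify the UI
// thread over a channel so it can redraw once the new cache is ready.
fn update_in_background(mut cache: Cache, update: Update) -> mpsc::Receiver<Cache> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        cache.lines = update.lines; // the expensive merge would happen here
        let _ = tx.send(cache);     // "cache ready" event for the UI loop
    });
    rx
}
```

Because ownership of the cache moves into the thread, the event loop cannot observe a half-applied update, which is the safety property mentioned above.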

@little-dude
Collaborator

For reference, here is how the cache update is done in xi-mac. It might be interesting to see if they have optimizations we don't: https://github.com/xi-editor/xi-mac/blob/d4ecbcad77928de8e60c7dd33dbb42da9a72a6f8/Sources/XiEditor/LineCache.swift#L103-L208

@Cogitri
Collaborator

Cogitri commented Jun 7, 2019

it would block the loop at each update

Oh, alright. I wasn't sure how Tokio's threadpool handles this, or how expensive spawning a new thread is, but it seems pretty cheap, so I guess I'll go with that.

there's no way to apply an update partially or multiple updates at the same time thanks to the language's safety

Yup, I was talking about this more from a user's perspective: if the user inputs a ", then presses backspace while the linecache is still updating, the " will be deleted even though only the character before it is being displayed. Since the update shouldn't ever take so long that the user wouldn't expect the " to be deleted, this shouldn't be a problem, I guess.

I'll take a look at how xi-mac does it, I suppose.

I also don't think xrl should provide this. Although it's a very common thing to do, every frontend needs to do different things to redraw, and spawning a thread for the update isn't too hard.

@Cogitri
Collaborator

Cogitri commented Jun 7, 2019

FWIW this was easy to implement in gxi, see Cogitri/Tau@9315fbe. Thanks for the pointer :)

@Cogitri
Collaborator

Cogitri commented Jun 7, 2019

For reference, here is how the cache update is done in xi-mac. It might be interesting to see if they have optimizations we don't: https://github.com/xi-editor/xi-mac/blob/d4ecbcad77928de8e60c7dd33dbb42da9a72a6f8/Sources/XiEditor/LineCache.swift#L103-L208

FWIW gxi's previous linecache seems to be very similar to xi-mac's. I'll spin up some benchmarks of xrl's current implementation against gxi's previous linecache in a bit. See:
https://github.com/Cogitri/gxi/blob/v0.8.1/src/gxi-linecache/src/linecache.rs

@msirringhaus

This is probably a very stupid question, as I haven't really dug deep into xrl's code base, but currently the LineCache consists of a Vec<Lines> structure.
Wouldn't it make more sense to have a HashMap/IndexMap with the line number as key and the Line as value?
I'm thinking of situations where one jumps through the file (searches, goto line x, ...), skipping huge chunks of it.
E.g. starting at the beginning of the file and immediately jumping to line 130, I get

self.cache.lines()
    .iter()
    .map(|x| format!("{}", x.line_num.unwrap_or(0)))
    .collect::<Vec<_>>()
    .join(" ")
-----------
current lines 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133

There I'm missing a lot of lines in between, so the Vec index doesn't help me at all and I have to look at each Line individually to check its associated line number.
Or am I missing something?
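To illustrate the map-based idea, here is a sketch of a sparse cache keyed by absolute line number (hypothetical types, not a proposal for xrl's exact API):

```rust
use std::collections::BTreeMap;

// Hypothetical sparse cache keyed by absolute line number, as suggested
// above. With u64 keys, "goto line 130" is a direct lookup, with no need
// to scan each entry's line_num field.
struct SparseCache {
    lines: BTreeMap<u64, String>,
}

impl SparseCache {
    // Direct lookup by line number; a Vec index can't do this once the
    // cache has gaps from jumping around the file.
    fn get(&self, line_num: u64) -> Option<&String> {
        self.lines.get(&line_num)
    }

    // All cached line numbers inside a viewport; gaps are skipped for free.
    fn visible(&self, first: u64, last: u64) -> Vec<u64> {
        self.lines.range(first..=last).map(|(k, _)| *k).collect()
    }
}
```

A BTreeMap rather than a HashMap keeps the keys ordered, which matters for drawing a viewport in line order; the soft-broken-line objection raised in the reply below still applies, since those rows have no number to key on.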

@Cogitri
Collaborator

Cogitri commented Jun 28, 2019

This is probably a very stupid question

No worries, please do ask these kinds of questions; it's very nice to hear an outsider's questions and get a different view of one's own work :)

Wouldn't it make more sense to have a HashMap/IndexMap with the line number as key and the Line as value?

Not all Lines have a line number, though; soft-broken lines don't have one.

Or am I missing something?

I'm fairly sure the Vec index should do the trick (and sort of automagically works around soft-broken lines), but you're encountering a bug in xrl's linecache (see #15). Could you maybe try again with Tau's linecache, where this should work? When I have a bit more time I'll try to fix xrl's linecache and switch Tau over to it. In Tau I do the following during drawing:

  1. Figure out the first and last line to draw
  2. Draw the line count
    2a) For that, fetch the first to last line and draw line_num if it is Some (so that soft-broken lines don't get a line number)
  3. Draw line.text on the screen. As noted previously, this automagically works with soft-broken lines: if first_line is 30 and last_line is 70 we fetch 40 lines, and some of those may be soft-broken lines (without a line_num), but we don't mind, because we still only draw 40 lines (i.e. everything that fits in the view).
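The steps above could be sketched roughly as follows (made-up types; Tau's actual drawing code differs):

```rust
// Sketch of the drawing steps above with stand-in types.
struct Line {
    text: String,
    line_num: Option<u64>, // None for soft-broken continuation lines
}

// Render rows first..last as gutter + text. Soft-broken rows get an
// empty gutter (step 2a), but their text is still drawn (step 3).
fn draw(lines: &[Line], first: usize, last: usize) -> Vec<String> {
    lines[first..last.min(lines.len())]
        .iter()
        .map(|l| {
            let gutter = l.line_num.map_or("  ".to_string(), |n| n.to_string());
            format!("{} {}", gutter, l.text)
        })
        .collect()
}
```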

I'm thinking of situations where one jumps through the file (searches, goto line x, ...) and skipping huge chunks of the file.

As noted previously, the Vec index should work for that, but you'll need a functioning index during normal operation too, otherwise scrolling can get veeeery weird (when lines aren't sorted correctly).

@msirringhaus

Could you maybe try again with Tau's linecache, where this should work

As it's not a drop-in replacement, I would need to adjust a bunch of stuff, for which I unfortunately have no time at the moment.
But I read through that linecache quickly. It seems to me like you're basically using the Vec, but with an Option<Line> instead of the Line directly, and then filling the gaps with Nones, if there are any (due to jumping around in the file).

If I understood that correctly, I'm somewhat afraid of the performance impact for very large files and big jumps (e.g. open a 200 MB file and jump directly to its end: tens of thousands of Nones would have to be added to the Vec).
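The gap-filling scheme being described might look like this in miniature (my own sketch, not Tau's actual code):

```rust
// Sketch of the concern above: a Vec<Option<Line>>-style cache must pad
// with None for every line skipped by a big jump, so jumping to the end
// of a huge file allocates one slot per skipped line.
fn pad_to(cache: &mut Vec<Option<String>>, idx: usize) {
    if idx >= cache.len() {
        cache.resize(idx + 1, None);
    }
}
```

Each None is small (here one Option<String> per slot), but the allocation is still proportional to the jump distance rather than to the number of lines actually fetched.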

@Cogitri
Collaborator

Cogitri commented Jul 2, 2019

As it's not a drop-in replacement, I would need to adjust a bunch of stuff,

Ah, I think it's just a matter of replacing linecache.lines.get() with linecache.get_line().

Then tens of thousands of Nones would have to be added to the Vec

Yup, it's not optimal, but it works, whereas xrl currently seems to bug out. I'll try to look into xrl's linecache before I go on holidays, but I'm not quite sure where to get the line_num from for lines which are soft-broken, since those don't have a line number. And what number should they have? They can't all have the same number, and they can't just have previous_number + 1, since you'd get wrong results then:

line 238
line 238 (broken)
line 238 (broken)
line 239

We can't make that 238, 239 (actually 238), 240 (actually 238), 241 (actually 239), because then trying to work with line 239 wouldn't give correct results (e.g. when trying to measure its width).
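One way to sidestep the question, sketched below, is to never assign soft-broken rows a number at all and instead resolve a row's logical line by scanning back to the nearest numbered row (my own sketch, not xrl's or Tau's code):

```rust
// Soft-broken rows keep None; a row's logical line number is that of
// the nearest preceding row that carries Some. Working with "line 239"
// then means collecting every row whose resolved number is 239, so no
// fake numbers like 240-actually-238 are ever invented.
fn logical_line(nums: &[Option<u64>], row: usize) -> Option<u64> {
    nums[..=row].iter().rev().find_map(|n| *n)
}
```

The backward scan is O(rows per wrapped line), which is small in practice; a real implementation might cache the resolved number per row instead.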
