Cache get decoder class from string #117

Merged: 6 commits into polkascan:master on May 30, 2024
Conversation

@thewhaleking (Contributor)

Moved the logic for obtaining a decoder class from a string into its own method. This allows caching, which greatly improves performance.

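For illustration, the general pattern under discussion (hoisting the string-to-class lookup into its own function so a cache decorator can wrap it) might look like the sketch below. The names and the registry are hypothetical stand-ins, not the actual scalecodec internals.

from functools import lru_cache

# Hypothetical stand-in for the real type-string -> decoder-class registry.
_REGISTRY = {'u32': int, 'bool': bool}

@lru_cache(maxsize=None)
def get_decoder_class_from_string(type_string: str):
    # The class is resolved once per distinct string; repeat lookups hit the cache.
    return _REGISTRY.get(type_string.lower())
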
@arjanz (Member) commented May 4, 2024

Hi @thewhaleking, thanks for the PR. I agree that if this function could be cached, it would be a great performance booster. I think I looked at this somewhere in the past; I'm not sure why it's failing at the moment.

Related to this topic: I have recently been working on a complete refactor and V2 of the scalecodec, which has a drastic performance boost as well. The only caveat is that it doesn't work with runtimes < MetadataV14, but I think by now no Substrate networks are running those anymore anyway. Here I let go of the concept of a type_string altogether; basically, every type definition is an object instance.

An example would be:

# Build an enum type whose variants are themselves type definitions
scale_obj = Enum(
    Bool=Bool(),
    Number=U32,
    Complex=Struct(data=String(), version=Compact(U8)),
    None_=None
).new()
value = {'Bool': True}  # select the Bool variant
data = scale_obj.encode(value)

It is still a work in progress but could be alpha-released soon. It would be great if you could have a look as well and maybe give some feedback:

https://github.com/polkascan/py-scale-codec/tree/az-v2
https://github.com/polkascan/py-substrate-interface/tree/az-v2

@thewhaleking (Author) commented May 5, 2024

The failures were coming from the type_registry changes. Without a significant refactor, we can't really cache this method. I did discover that we can still get some significant speedups by caching convert_type_string and compiling the regex that's used within get_decoder_class (sketched below).
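
For illustration, those two changes might look roughly like this sketch; the function bodies and the regex are simplified stand-ins for the real scalecodec helpers.

import re
from functools import lru_cache

# Compiled once at import time instead of on every get_decoder_class call
# (the pattern here is a simplified stand-in).
_VEC_RE = re.compile(r'^Vec<(.+)>$')

@lru_cache(maxsize=None)
def convert_type_string(name: str) -> str:
    # Hypothetical normalisation: strip generic path prefixes such as "T::".
    return re.sub(r'\w+::', '', name)

def get_decoder_class(type_string: str):
    type_string = convert_type_string(type_string)  # cached after first call
    match = _VEC_RE.match(type_string)
    if match:
        # Placeholder for the real composite-class lookup.
        return ('Vec', match.group(1))
    return type_string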

Definitely not at the same level as the initial cache idea, but this still offers a temporary speedup until you're able to fully implement and ship V2.

@thewhaleking (Author)

@arjanz I haven't had a chance to review the V2 stuff yet, but I plan to this week.

@arjanz (Member) commented May 8, 2024

> @arjanz I haven't had a chance to review the V2 stuff yet, but I plan to this week.

@thewhaleking That would be appreciated! I'm out of the office at the moment; when I have the chance I'll get back to you on the PR.

@thewhaleking (Author)

@arjanz Checked out the V2 stuff. There are some really good ideas in there, including things we had actually been discussing implementing on our own. I haven't gone through every line, but the async stuff is great from what I've seen.

@arjanz (Member) commented May 30, 2024

I'm back in the office; I reviewed and tested your PR, and it looks good. It seems it will only impact pre-MetadataV14 runtimes, by the way. I'm not very familiar with Opentensor, but is it still using older runtime metadata?

I will merge the PR and schedule it for the next release. Perhaps we can connect on Matrix to discuss V2 further.

@arjanz merged commit f57f65b into polkascan:master on May 30, 2024 (5 checks passed).
@thewhaleking (Author)

I'm unfamiliar with Matrix, but let me know how to connect. I'm definitely interested in discussing V2 further, and I can share where we're at with the async stuff as it relates to Substrate.

@arjanz (Member) commented May 31, 2024

@thewhaleking you can join our channel at https://matrix.to/#/#polkascan:matrix.org
