This repository has been archived by the owner on Aug 9, 2021. It is now read-only.

Cache Requests + Speed #8

Open
zachferland opened this issue Dec 17, 2018 · 0 comments

@zachferland
Contributor

We are already caching profile requests in the profile API, which will make this pretty fast. But what if we connected directly to that cache as well? Just connect a Redis client and only make an HTTP request to the server on a cache miss. Also related: is there any benefit to being in the same VPC as the pinning/profile API?
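The cache-aside pattern described above could look roughly like this (a sketch: the `profile:<address>` key scheme, the TTL, and the injected `cache`/`fetchFromApi` interfaces are assumptions, not the current service's API):

```javascript
// Cache-aside lookup: try Redis first, only hit the profile API on a miss.
// `cache` is any Redis-like client with async get/set; `fetchFromApi` is the
// HTTP fallback. Both are injected so this sketch stays storage-agnostic.
async function getProfile(address, { cache, fetchFromApi, ttlSeconds = 300 }) {
  const key = `profile:${address}`; // hypothetical key scheme

  const cached = await cache.get(key); // hit: skip the HTTP round trip
  if (cached !== null && cached !== undefined) {
    return JSON.parse(cached);
  }

  const profile = await fetchFromApi(address); // miss: fall back to the server
  await cache.set(key, JSON.stringify(profile), 'EX', ttlSeconds);
  return profile;
}
```

With this shape, sharing the same Redis instance as the pinning/profile API just means pointing `cache` at it; the VPC question is then about network latency to that instance.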

Then at the next layer, the Apollo platform comes with a lot of tooling and support for caching GraphQL requests. What are the options here? Do they make sense, or does it make more sense to just do the above at the lower data level, where we have more control over keeping cache entries valid? It may be hard at higher levels to determine when to invalidate an entry (since we are not doing writes through GraphQL).
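For reference, Apollo's server-side option is full-response caching with cache hints on the schema. A hedged config sketch (package names are Apollo's own, but exact availability depends on the apollo-server version; `typeDefs`/`resolvers` are placeholders):

```javascript
// Sketch: Apollo Server with whole-response caching backed by Redis.
// Responses are cached per-query based on @cacheControl maxAge hints,
// so invalidation is time-based — which is exactly the concern above,
// since writes don't go through GraphQL and can't bust these entries.
import { ApolloServer } from 'apollo-server';
import { RedisCache } from 'apollo-server-cache-redis';
import responseCachePlugin from 'apollo-server-plugin-response-cache';

const server = new ApolloServer({
  typeDefs,   // schema, annotated with @cacheControl(maxAge: ...) hints
  resolvers,
  cache: new RedisCache({ host: 'localhost' }), // could reuse the same Redis
  plugins: [responseCachePlugin()],
});
```

The trade-off is as stated: this layer can only expire entries by TTL, whereas caching at the profile-data level lets us invalidate precisely when the underlying data changes.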

Also related, Apollo has GraphQL clients that help cache requests on the client side. Would this make any sense in the lib? We are just using a very light client right now, which does not do this. Or is this obsolete if we had some hybrid orbitdb and GraphQL/profileAPI setup in the future (basically some form of fast sync using the profileAPI, with orbit-db loading in the background, and a GraphQL client running against the local orbitdb instance)?
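Short of pulling in apollo-client, the light client could get most of the benefit from a small request memo layer. A minimal sketch (the `graphqlFetch` signature and TTL policy are assumptions, not the current lib's API):

```javascript
// Wrap a bare GraphQL fetch function with an in-memory, TTL-based cache,
// keyed on the query string plus its variables. A sketch of client-side
// caching, not a replacement for apollo-client's normalized cache.
function createCachedClient(graphqlFetch, ttlMs = 60000) {
  const cache = new Map(); // key -> { value, expires }

  return async function query(queryString, variables = {}) {
    const key = queryString + JSON.stringify(variables);
    const hit = cache.get(key);
    if (hit && hit.expires > Date.now()) return hit.value; // fresh: reuse

    const value = await graphqlFetch(queryString, variables);
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
```

This would also fit the hybrid future sketched above: the same wrapper could front a GraphQL resolver running against a local orbitdb instance instead of an HTTP endpoint.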
