This repository has been archived by the owner on Aug 9, 2021. It is now read-only.
We are already caching profile requests to the profile API, which will make this pretty fast. But what if we connected directly to this cache as well? Just connect a Redis client and only make an HTTP request to the server on a cache miss. Also related: is there any benefit to being in the same VPC as the pinning/profile API?
Then at the next layer, the Apollo platform comes with a lot of tooling and support for caching GraphQL requests. What are the options here? Do they make sense, or does it make more sense to just do the above with lower-level data, where we have more control over keeping cache entries valid? It may be hard at higher levels to determine when to invalidate an entry (since we are not doing writes through GraphQL).
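The invalidation concern above can be made concrete. A cached GraphQL response may aggregate many entities, so a write to one profile would have to invalidate every cached response that embeds it, and the server cannot easily enumerate those. Caching at the entity level keeps invalidation a single delete. A trivial sketch (all names hypothetical):

```typescript
// Entity-level cache keyed per profile, so one write maps to one key.
const entityCache = new Map<string, string>();

function cacheProfile(id: string, json: string) {
  entityCache.set(`profile:${id}`, json);
}

function onProfileWrite(id: string) {
  // Entity-level keying: exactly one key to drop on a write.
  // With response-level keys, every cached query result containing
  // this profile would be stale, with no cheap way to find them all.
  entityCache.delete(`profile:${id}`);
}

cacheProfile("a", '{"name":"alice"}');
cacheProfile("b", '{"name":"bob"}');
onProfileWrite("a");
console.log(entityCache.has("profile:a"), entityCache.has("profile:b")); // false true
```

This is the trade-off in the paragraph above: Apollo's response-level caching is convenient, but lower-level keys make correctness easier when writes bypass GraphQL.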
Also related, Apollo has GraphQL clients that help cache requests on the client side. Would this make any sense in the lib? We are just using a very light client right now which does not do this. Or is this obsolete if we had some hybrid of orbitdb and GraphQL/profileAPI in the future (basically some form of fast sync using the profileAPI, with orbit-db loading in the background, and a client-side GraphQL layer running against the orbitdb instance)?
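For context on what the client-side option would buy us: Apollo Client's `InMemoryCache` normalizes responses into records keyed by `__typename` and `id`, so different queries that return the same profile share one cached record. A pure sketch of that normalization idea, independent of Apollo itself (class and field names here are hypothetical):

```typescript
type Entity = { __typename: string; id: string; [field: string]: unknown };

// Sketch of a normalized client-side cache: responses are flattened
// into records keyed by `__typename:id` instead of being cached per query.
class NormalizedCache {
  private records = new Map<string, Entity>();

  write(entity: Entity) {
    const key = `${entity.__typename}:${entity.id}`;
    // Merge with any existing record so partial results accumulate.
    this.records.set(key, { ...this.records.get(key), ...entity });
  }

  read(typename: string, id: string): Entity | undefined {
    return this.records.get(`${typename}:${id}`);
  }
}

const clientCache = new NormalizedCache();
clientCache.write({ __typename: "Profile", id: "1", name: "alice" });
clientCache.write({ __typename: "Profile", id: "1", avatar: "a.png" });
console.log(clientCache.read("Profile", "1")); // one merged record: name and avatar
```

If the hybrid orbitdb setup happened, this same normalization could sit in front of the orbitdb instance instead of the HTTP API, which is why the two questions are related.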