spec: cache v3 exploration #9518

runspired opened this issue Aug 1, 2024 · 0 comments
Note

This document serves as a PRE-RFC gathering of requirements to inform the next iteration of the Cache interface.

As we continue to develop and ship more advanced features built over the core interfaces, we've periodically noticed missing capabilities that either limited what we could ship or forced us to write more complex code to work around the limitation.

We discuss these below:

Multiplexed Responses / Multi-entry Graphs

Some API formats allow multi-entry graphs. Some APIs optimize pre-fetch by producing response bodies that contain the responses to multiple requests at once. These are roughly similar problems that could share a single solution: a way to signal that a document is actually a collection of documents.
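
As a purely illustrative sketch (none of these names exist today), the signal might look like a discriminated payload type:

```ts
// Purely illustrative: a discriminator that lets a payload signal it is a
// collection of documents rather than a single document.
interface SingleDocument<T> {
  lid: string;
  data: T;
}

interface MultiplexedDocument<T> {
  kind: 'multiplexed';
  // the individual documents the cache should unpack and store separately
  documents: SingleDocument<T>[];
}

type CachePayload<T> = SingleDocument<T> | MultiplexedDocument<T>;

// narrow a payload to the multiplexed form before unpacking
function isMultiplexed<T>(doc: CachePayload<T>): doc is MultiplexedDocument<T> {
  return 'kind' in doc && doc.kind === 'multiplexed';
}
```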

Peek Remote State

When working on the experiments for PersistedCache and DataWorker, we found that we often want the ability to peek remote state or local state individually. Today, we can effectively only peek local state.
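
For illustration only, the split might look roughly like this (the identifier shape and method names below are hypothetical):

```ts
// Hypothetical sketch of splitting peek into explicit local vs remote variants.
interface StableRecordIdentifier {
  lid: string;
}

interface PeekExploration {
  /** the merged/local view, which is effectively what peeking returns today */
  peekLocalState(identifier: StableRecordIdentifier): unknown | null;
  /** the last state received from the server, untouched by local mutations */
  peekRemoteState(identifier: StableRecordIdentifier): unknown | null;
}
```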

Peek Change

When working on an experiment around RPC-style actions, we determined that we sometimes want to retrieve, or roll back, the change for a specific field. Currently, we do not have this level of granularity. Adding it would help both with complex form handling and with robustly constructing specialized PATCH operations.
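
A hypothetical sketch of what field-granular change APIs could look like (names are illustrative, not proposals):

```ts
// Hypothetical sketch of field-granular change retrieval and rollback.
interface StableRecordIdentifier {
  lid: string;
}

/** the original and current values for a single field, or null when unchanged */
type FieldChange<T = unknown> = { original: T; current: T } | null;

interface FieldChangeExploration {
  /** retrieve the pending change for one field */
  peekChange(identifier: StableRecordIdentifier, field: string): FieldChange;
  /** revert one field without touching other dirty fields */
  rollbackField(identifier: StableRecordIdentifier, field: string): void;
}
```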

Document Removal

While documents may be invalidated, there is currently no method to tell the cache to release a document; this was simply an oversight. We haven't actually needed this yet, but we would like to make it a cornerstone of a rework of the unload APIs so that we can do weak/strong-ref based GC.
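
For discussion purposes, a release API might be sketched like this (entirely hypothetical):

```ts
// Entirely hypothetical sketch of a document release API and a retention hint
// that a weak/strong-ref based GC could build on.
interface DocumentRemovalExploration {
  /** drop the document and its associated request metadata from the cache */
  removeDocument(lid: string): void;
  /** mark how strongly a document is held, so a GC pass can collect weakly-held ones */
  setDocumentRetention(lid: string, retention: 'strong' | 'weak'): void;
}
```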

Persisted Cache - sync vs async

When working on the PersistedCache experiment, we found that we needed a tight integration between sync Cache APIs and async PersistedCache APIs. It's unclear whether something is missing here, but it feels as though there is, because we needed a private, intimate handshake between several carefully coordinated primitives to work around this limitation.

Persisted Cache - query

When working on the PersistedCache experiment, we found that being able to query the in-memory cache would be useful to avoid needing to maintain in-memory objects in two separate places (a PersistedCache instance and a Cache instance).
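
A minimal sketch of what such a query capability might look like, assuming a predicate-based API (names are illustrative):

```ts
// Minimal illustrative sketch of a predicate-based query over the in-memory cache,
// so a PersistedCache would not need to maintain its own parallel index.
interface QueryableCacheExploration {
  /** return the lids of cached resources of `type` matching the predicate */
  queryResources(
    type: string,
    predicate: (resource: Record<string, unknown>) => boolean
  ): string[];
}
```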

Persisted Cache - Shared Memory

When working on the PersistedCache experiment, we wondered whether it might be possible for the in-memory representations of the persisted resources to automatically stay in sync with the in-memory Cache instance by having them be one and the same.

Automated GC

When considering the design for an automated GC, we noted that the Cache would need to be asked to perform the GC, and that a new capability would need to be added for asking whether a given request is active and for listing all active requests. Potentially this would extend to active resources as well, as an upgrade of the current isRecordInUse semantics.
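
An illustrative sketch of the capabilities an automated GC might need from the cache (all names hypothetical):

```ts
// Illustrative sketch only. `isResourceActive` is imagined as an upgrade over
// the current isRecordInUse semantics mentioned above.
interface GCExploration {
  /** ask the cache to release anything no longer reachable from an active request */
  collect(): void;
  /** is this request still held by something (a subscription, a component, etc.)? */
  isRequestActive(lid: string): boolean;
  /** list every currently active request, e.g. for a mark phase */
  activeRequests(): string[];
  /** resource-level liveness query */
  isResourceActive(lid: string): boolean;
}
```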

Explicit Partials

When considering the design for supporting explicit partials, we've wondered whether a new schema or capability API would be needed to answer whether a given type is a partial of another type, or whether this should be left entirely to the cache. We've also wondered whether any of the existing APIs might need peek-partial equivalents.
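
One hypothetical shape for such a schema/capability API (illustrative only; the alternative remains leaving this entirely to the cache):

```ts
// Hypothetical sketch of a schema-level capability for explicit partials.
interface PartialSchemaExploration {
  /** true when `maybePartial` only projects a subset of `fullType`'s fields */
  isPartialOf(maybePartial: string, fullType: string): boolean;
  /** the full type a partial projects from, if any */
  baseTypeFor(maybePartial: string): string | null;
}
```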
