A teammate and I have been working on adding caching to a HAL API built on Rails and Oat. We are dealing with large sets of data in our app and are hoping to cache as much as we can; ideally we could use Russian Doll Caching (RDC). We have pretty much figured out how we want to do this, but for a variety of reasons it won't work. We currently have full-collection caching figured out. I'm hoping to address a few of the roadblocks we've come across here and start a discussion about how to solve them.
We used Advanced Caching: Part 6 - Fast JSON APIs as a guide to get things started, and ended up with something like this:

```ruby
# config/initializers/activerecord/base.rb
module ActiveRecord
  class Base
    def self.cache_key
      Digest::MD5.hexdigest "#{all.order(:id).pluck(:id).join('-')}-#{all.maximum(:updated_at).try(:to_i)}-#{all.count}"
    end
  end
end

# app/serializers/application_serializer.rb
class ApplicationSerializer < Oat::Serializer
  delegate :cache_key, to: :item

  # Cache entire JSON string
  def to_json(*args)
    Rails.cache.fetch expand_cache_key(self.class.to_s.underscore, cache_key, 'to-json') do
      super
    end
  end

  # Cache individual Hash objects before serialization
  # This also makes them available to associated serializers
  def to_hash
    Rails.cache.fetch expand_cache_key(self.class.to_s.underscore, cache_key, 'to-hash') do
      super
    end
  end

  private

  def expand_cache_key(*args)
    ActiveSupport::Cache.expand_cache_key args
  end
end
```
If we leave out `#to_hash`, we get caching of the full object/collection with no problem. However, RDC doesn't work, because we only cache the full JSON-serialized string.

Problems occur when we define `#to_hash`. In particular, `Marshal.dump` cannot handle a `Hash` with a `default_proc`, which is currently how we initialize the `@data` object on `Oat::Serializer`. This raises a few questions:
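For reference, here's a minimal stand-alone reproduction of that failure (plain Ruby, no Oat involved):

```ruby
# A Hash built with a block carries a default_proc, and Marshal refuses to
# serialize it -- which is what breaks Rails.cache for these hashes.
data = Hash.new { |hash, key| hash[key] = {} }
data[:entities] # exercise the default_proc

begin
  Marshal.dump(data)
rescue TypeError => e
  puts e.message # e.g. "can't dump hash with default proc"
end
```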
- Should the base serializer really be in charge of defining the default value of an empty key? Maybe.
- Should Adapters be in charge of figuring out which keys should exist by default and what their default values should be? Absolutely.
dc4abe in #33 addresses this problem (though not for `Adapters::JsonAPI`).
Doing this at least lets us use `#to_hash` and will cache the full object and sub-objects (as hashes) without raising an error. The keys for each sub-object show up within the cache store.
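For anyone hitting the same wall before that lands, one workaround sketch (my own assumption, not necessarily what dc4abe does) is to clear the `default_proc` before the hash reaches the cache:

```ruby
# Clearing the default_proc makes an otherwise-normal Hash Marshal-safe;
# the data already written to the hash is untouched.
def marshal_safe(hash)
  hash.default_proc = nil if hash.default_proc
  hash
end
```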
However, RDC still does not work because, as we quickly realized, all Oat does in the end is build a giant hash that gets fed to `to_json`. There is currently no way to cache only parts of this hash.
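To make the goal concrete, here is roughly the shape of what we're after, sketched with a plain Hash standing in for `Rails.cache` (every name below is hypothetical, not Oat's API):

```ruby
# Stand-in for Rails.cache with the same #fetch contract: return the cached
# value, or run the block and store its result.
class TinyCache
  def initialize
    @store = {}
  end

  def fetch(key)
    @store.key?(key) ? @store[key] : (@store[key] = yield)
  end
end

# Per-member (Russian doll) caching: each record's hash is cached under its
# own key, so rebuilding the collection only re-serializes records whose
# cache_key changed. The real key would also include the serializer class,
# as in the expand_cache_key calls above.
def collection_to_hash(records, cache:)
  records.map do |record|
    cache.fetch("#{record.cache_key}/to-hash") { yield(record) }
  end
end
```

With this shape, re-serializing a collection only pays for the members whose `cache_key` changed; everything else is a cache hit.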
3ac2cb4 in #33 is my first attempt at letting us embed objects into our hash that can be converted to JSON. We haven't made this work yet, though I have a few tricks I'm going to attempt tomorrow. It would be great if `to_json` could be called at a more granular level within a serializer, which, I believe, would give us the ability to fully implement RDC.
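One of the tricks I want to try, in sketch form: a wrapper whose `#to_json` simply returns an already-serialized (and potentially cached) string, so a parent hash can embed fragments without re-serializing them. `CachedFragment` is a hypothetical name, not part of Oat:

```ruby
require 'json'

# Wraps a pre-serialized JSON string. The json generator calls #to_json on
# objects it doesn't recognize, so the fragment is spliced in verbatim
# instead of being re-serialized or escaped as a string.
class CachedFragment
  def initialize(json_string)
    @json = json_string
  end

  def to_json(*_args)
    @json
  end
end
```

Then something like `{ 'user' => CachedFragment.new(cached_user_json) }.to_json` would embed the cached fragment as a nested JSON object rather than an escaped string.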
Going through all this, I've also realized that we should be testing that calling `to_json` on the serializer gives us exactly what we expect. We should also split up each test to use isolated expectations, so we can more easily see what's going wrong on a single test run.
This is a pretty big post, but I'm hoping we can seriously start to address the idea of granular caching, or at least make it easier to do so. Let me know if you have any input.