Add log message when hash power is too low #29

Open
brayniac opened this issue Jan 29, 2023 · 3 comments
Labels
enhancement New feature or request

Comments

@brayniac
Collaborator

Add a log message that helps users determine when the hash power is too low. It's easy to have a configuration issue there, and it would be nice to suggest that they increase the hash power. We might want to log such a message only once per run, so we'd need to think about that a bit.

Another related configuration issue might be that the segment size is too small (items not fitting into segments).
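
A minimal sketch of what the once-per-run warning could look like, assuming the insert path can cheaply read the current item count and the total number of item slots from the hashtable. The hook name, the counters, and the 90% threshold are illustrative only, and the `log` crate macros stand in for whatever logging backend Pelikan actually uses:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Emit the warning at most once per run.
static HASH_POWER_WARNED: AtomicBool = AtomicBool::new(false);

/// Hypothetical hook called on the insert path. `items` is the number of
/// occupied item slots and `item_slots` the total capacity, both assumed
/// to be readable from the hashtable.
fn maybe_warn_hash_power(items: u64, item_slots: u64, hash_power: u8) {
    // Illustrative threshold: warn once when the table is over 90% full.
    if items * 10 > item_slots * 9
        && !HASH_POWER_WARNED.swap(true, Ordering::Relaxed)
    {
        log::warn!(
            "hashtable is over 90% full ({items}/{item_slots} item slots); \
             consider increasing hash_power (currently {hash_power})"
        );
    }
}
```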

brayniac added the enhancement label Jan 29, 2023
@thinkingfish
Member

thinkingfish commented Feb 19, 2023

I wonder if the grander idea is to slowly build out a "config diagnostics console" that eventually can evolve into an autotuner.

E.g. hash table load factor is the metric behind the problem in the issue title. For segment size, we would want the ratio of max object size to segment size, or the internal fragmentation percentage.

There are generally two ways to approach this objective. One is self-contained: codify some intelligence in Pelikan that runs as a little state machine (or ML agent, if we want to sound trendy) and "scores" the main configuration values based on the metrics they are responsible for, like the few mentioned here. The other is to outsource that work to an external entity and simply curate a stream of events/logs to provide as data. In both cases, though, I suspect the trigger and frequency of the internal action(s) will be somewhat independent of debug logging.

Given we have a very generic and flexible logging backend, we can potentially create a new log type to support this functionality, and gate the logging differently too (e.g. evaluate and/or log hash table load factor only when we have to allocate a new hash bucket as overflow; only calculate internal fragmentation when the most recent write wasted more than X% of segment space) to keep it very lightweight.
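
To illustrate the gating idea, here is a sketch only: the `Diagnostics` type, the hook names, and the thresholds are hypothetical, and the real integration points would be wherever the hashtable allocates an overflow bucket and wherever a segment write completes. The `log` macros again stand in for the actual backend:

```rust
/// Hypothetical diagnostics helper; thresholds are illustrative.
struct Diagnostics {
    /// Fraction of a segment a single write may waste before we log.
    max_wasted_fraction: f64,
}

impl Diagnostics {
    /// Called only when the hashtable has to allocate an overflow bucket,
    /// so the load factor check costs nothing on the normal path.
    fn on_overflow_bucket_alloc(&self, items: u64, item_slots: u64) {
        let load_factor = items as f64 / item_slots as f64;
        log::warn!(
            "allocated overflow hash bucket; load factor is {load_factor:.2}, \
             consider increasing hash_power"
        );
    }

    /// Called after a write; only reports fragmentation when the most
    /// recent write wasted more than the configured share of segment space.
    fn on_segment_write(&self, item_size: u64, wasted: u64, segment_size: u64) {
        if (wasted as f64) > self.max_wasted_fraction * segment_size as f64 {
            log::warn!(
                "item of {item_size} bytes wasted {wasted} of {segment_size} \
                 segment bytes; consider increasing segment_size"
            );
        }
    }
}
```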

@brayniac
Collaborator Author

That'd be a big improvement. I wonder if in the interim we should just adjust some of our default values. Maybe make those values match what we currently have in the example config? The current default is hash_power = 16 with an overflow factor of 1.0, so effectively we have only 114688 item slots if launched without a config file.

I guess as an alternative, we could make the config file a mandatory argument.

My biggest immediate concern is that the "no config provided" defaults are so conservative that it's easy to run into problems.

@thinkingfish
Member

thinkingfish commented Feb 20, 2023

Agreed on improving the current defaults. Lacking a config file, I would probably base default values on a presumed average key-value size, e.g. 1KB (we can add an internal constant called TARGET_OBJECT_SIZE). So if people ask for 4GB of data memory, we assume we will have 4 million objects.

Related (but no action needed now): we also have target sizes for the read and write buffers. The current value (16KB) agrees with the 1KB object size under moderate pipelining. Eventually we could provide a calculator and config generator that sets multiple parameters based on a few key assumptions, such as object size, object life cycle (creation rate and desirable TTL), concurrency level, etc., which map closer to users' mental model of caching.
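
As a back-of-envelope sketch of that derivation: TARGET_OBJECT_SIZE exists only as the constant proposed above, and deriving hash_power as roughly ceil(log2(expected objects)) is an assumption about how the default could be computed, not current behavior:

```rust
/// Proposed constant from the comment above: presumed average key-value size.
const TARGET_OBJECT_SIZE: u64 = 1024; // 1KB

/// Derive a default hash_power from the requested data memory, assuming
/// roughly one expected object per TARGET_OBJECT_SIZE bytes of heap and
/// (as a simplification) about 2^hash_power item slots.
fn default_hash_power(heap_size: u64) -> u8 {
    let expected_objects = (heap_size / TARGET_OBJECT_SIZE).max(1);
    // next_power_of_two().trailing_zeros() computes ceil(log2(n))
    expected_objects.next_power_of_two().trailing_zeros() as u8
}

fn main() {
    // 4GB of data memory / 1KB objects => ~4 million objects => hash_power 22
    let hp = default_hash_power(4 * 1024 * 1024 * 1024);
    println!("derived hash_power: {hp}");
}
```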
