Kafka topic patterns should be per-schema #1205
Comments
This would be resolved by the multi-configuration plugin support. As a reminder, what is proposed is as follows:
We need this for streams subscription too. We are developing a generic "stream sampler" that consumes JSON and produces LDMS metric sets that can then be decomposed for storage. Consider that this sampler may want to consume from different streams and may need a different configuration for each stream, as you suggest.
If creating a stream-to-set transform plugin, please see also: Also, why are we creating a new keyword/usage "as=foo" when, for plugins and objects in general, we already had/have "name=foo" for object pointer resolution and "plugin=bar" for the object type/instantiator (both shorthanded in the v2/v3/v4 load implementations)? Finally, it is not at all clear to me why I would want full-blown sets made out of streams (with the implication that downstream aggregators will see them and move them, with the timing and sizing issues that entails for user-derived streams). Are you proposing pseudo-set objects just for the purpose of the store plugin->store command and storage-policy logic, or full-on set instances with all the inter-daemon work that entails?
@baallan are you quibbling with the syntax? If so, the reason is that using as="" allows us to be backward compatible with the existing load command syntax. I suppose we could create a whole new command:
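The example that followed was lost in the issue formatting; purely as an illustration (this is not actual ldmsd syntax, and the command and names below are made up), such a new command might look like:

```
# Hypothetical alternative: a dedicated command that creates a named
# configuration object for a plugin (illustrative syntax only).
load_config name=kafka_cfg_a plugin=store_avro_kafka

# Versus the proposed extension of the existing command:
load plugin=store_avro_kafka as=kafka_cfg_a
```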
Personally, I think that would be more confusing than adding as= to the existing command. We could certainly change "as" to another keyword, maybe "config=" or something. I think you are confused about load; it does not create a configuration object. That is what the proposed change would do. I couldn't parse the remainder of your comment(s).
@morrone I'm wrong. The multi-config change doesn't fix this at all. I think you need additional syntax to bind schemas to topic formats, something like: `topic="":"","":""`. You would need this kind of binding with or without multiple configurations per plugin.
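The placeholders in the binding above were eaten by the issue's markdown; a hypothetical reading, with made-up placeholder names (not confirmed syntax), would be:

```
# Hypothetical: bind each schema name to a topic format string.
# <schema-N> and <topic-fmt-N> are illustrative placeholders.
topic="<schema-1>":"<topic-fmt-1>","<schema-2>":"<topic-fmt-2>"
```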
@morrone Markdown ate my syntax
@tom95858 re "I couldn't parse the remainder of your comments." The remainder of my comments (syntax quibbling aside; you win there) could be:
Are you saying the comments in #1217 are unparseable, or just the 'Finally' paragraph? If just the 'Finally' paragraph didn't parse, then more directly about a "stream to set" plugin:
@baallan Are you talking about the generic JSON storage plugin? What does that have to do with the avro-kafka store? |
@tom95858 yes, I am referring to
@baallan, please have that discussion in another ticket.
Multi-config could almost solve it, because we could use different strgps with different regexes to map to the different store configs. But a strgp's regex only matches LDMS schemas, not decomposed schemas, and we might want a different topic pattern for each decomposed schema. In my ideal world, there would be a way in the decomposition configuration file to pass store-specific configuration options along to the store. For instance, under a row's mapping we might have an additional mapping named "store_options".
If we did it that way, we would have to come up with rules for adding store-specific options so that they don't conflict. Stores would just have to ignore options that they don't recognize, which means we basically can't catch typos, since we can't throw errors on unrecognized options. So we could also get more verbose:
We could also move "indices" under the new "store_options".
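A hedged sketch of what that could look like in a decomposition configuration file; the "store_options" mapping, the "topic" key, and the "%c" cluster-name token are all hypothetical, not an existing feature:

```json
{
  "type": "static",
  "rows": [
    {
      "schema": "meminfo_flat",
      "cols": [ { "src": "job_id", "dst": "job_id", "type": "u64" } ],
      "indices": [ { "name": "time_job", "cols": [ "timestamp", "job_id" ] } ],
      "store_options": {
        "store_avro_kafka": { "topic": "%c-meminfo" }
      }
    }
  ]
}
```

Namespacing the options by store name, as sketched here, is one way to keep store-specific keys from colliding, at the cost of the extra verbosity mentioned above.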
Right now we can only set the Kafka topic name pattern once, when store_avro_kafka is configured. It would be much better to allow us to configure different topic patterns per-schema, per-decomposition, or maybe per-strgp.
A particular use case we have is that for some schemas we want to store data in topic names that are independent across clusters, while for other schemas we want all of the clusters to feed into topics of the same name.
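Purely as an illustration of the desired behavior (none of these names or tokens come from store_avro_kafka), the use case can be sketched as a per-schema pattern table with a global fallback:

```python
# Illustrative sketch only: hypothetical per-schema topic patterns with a
# global default. "%c" = cluster name, "%s" = schema name (made-up tokens).
def resolve_topic(schema: str, cluster: str, patterns: dict,
                  default: str = "%c-%s") -> str:
    """Pick the pattern bound to this schema, else the default, and expand it."""
    pattern = patterns.get(schema, default)
    return pattern.replace("%c", cluster).replace("%s", schema)

# Per-schema bindings: meminfo topics stay cluster-local, while slurm data
# from every cluster feeds one shared topic.
patterns = {
    "meminfo": "%c-%s",
    "slurm_jobs": "%s",
}

print(resolve_topic("meminfo", "clusterA", patterns))     # clusterA-meminfo
print(resolve_topic("slurm_jobs", "clusterB", patterns))  # slurm_jobs
```

An unlisted schema would fall back to the default cluster-local pattern, which matches today's single-pattern behavior.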