Your one-stop tool for generating accurate and well-tested models for representing message payloads. Use it as a tool in your development workflow, or as a library in larger integrations, entirely in your control.
- Installing Modelina
- AsyncAPI CLI
- Features
- Roadmap
- Requirements
- Documentation
- Examples
- Versioning and maintenance
- Development
- Contributing
- Contributors
Run this command to install Modelina in your project:
npm install @asyncapi/modelina
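To get a quick feel for the library API, here is a minimal sketch of generating TypeScript models from an inline AsyncAPI document. The document contents and the chosen generator are illustrative assumptions; in practice you would load your own AsyncAPI file.

```ts
import { TypeScriptGenerator } from '@asyncapi/modelina';

const generator = new TypeScriptGenerator();

// A tiny, assumed AsyncAPI document used only for illustration
const asyncapiDocument = {
  asyncapi: '2.6.0',
  info: { title: 'example', version: '1.0.0' },
  channels: {
    'user/signedup': {
      subscribe: {
        message: {
          payload: {
            type: 'object',
            properties: {
              email: { type: 'string' }
            }
          }
        }
      }
    }
  }
};

const models = await generator.generate(asyncapiDocument);
for (const model of models) {
  // `result` contains the generated source code for the model
  console.log(model.result);
}
```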
If you have the AsyncAPI CLI installed (it ONLY supports AsyncAPI inputs), you can run the following command to use Modelina:
asyncapi generate models <language> ./asyncapi.json
Modelina puts YOU in control of your data models. Here is how...
Modelina lets you generate data models from many types of inputs:

```ts
const asyncapi = ...
const jsonschema = ...
const openapi = ...
const metamodel = ...
...
const models = await generator.generate(
  asyncapi | jsonschema | openapi | metamodel
);
```
Use the same inputs across a range of different generators:

```ts
const generator = new TypeScriptGenerator();
const generator = new CsharpGenerator();
const generator = new JavaGenerator();
const generator = new RustGenerator();
...
const models = await generator.generate(input);
```
Easily interact with the generated models; whatever interaction you need, you can create:

```ts
const models = await generator.generate(input);
for (const generatedModel of models) {
  const generatedCode = generatedModel.result;
  const dependencies = generatedModel.dependencies;
  const modelType = generatedModel.model.type;
  const modelName = generatedModel.modelName;
  ...
}
```
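As one concrete example of such interaction, here is a small sketch that writes each generated model to its own file. The output directory and file extension are assumptions for illustration; `modelName` and `result` are the properties shown above.

```ts
import * as fs from 'fs';
import * as path from 'path';
import { TypeScriptGenerator } from '@asyncapi/modelina';

const generator = new TypeScriptGenerator();
const models = await generator.generate(input); // `input` is any of the supported inputs shown above

// Assumed output location; adjust to your project layout
const outputDirectory = './src/generated-models';
fs.mkdirSync(outputDirectory, { recursive: true });

for (const generatedModel of models) {
  const filePath = path.join(outputDirectory, `${generatedModel.modelName}.ts`);
  fs.writeFileSync(filePath, generatedModel.result);
}
```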
Easily modify how models are constrained into the output:

```ts
const generator = new TypeScriptGenerator({
  constraints: {
    modelName: ({ modelName }) => {
      // Implement your own constraining logic
      return modelName;
    }
  }
});
```
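For instance, here is a sketch of a concrete constraint that prefixes every model name. The prefix and the sanitizing regex are purely illustrative choices; once you override a constraint, producing a valid identifier is up to you.

```ts
const generator = new TypeScriptGenerator({
  constraints: {
    modelName: ({ modelName }) => {
      // Drop characters that are not valid in a TypeScript identifier and add an assumed prefix
      return `Api${modelName.replace(/[^A-Za-z0-9_$]/g, '')}`;
    }
  }
});
```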
Seamlessly layer additional or replacement code on top of each other to customize the models to your use-case:

```ts
const generator = new TypeScriptGenerator({
  presets: [
    {
      class: {
        additionalContent({ content }) {
          return `${content}
public myCustomFunction(): string {
  return 'A custom function for each class';
}`;
        },
      }
    }
  ]
});
const models = await generator.generate(input);
```
Seamlessly lets you combine multiple layers of additional or replacement code:

```ts
const myCustomFunction1 = {
  class: {
    additionalContent({ content }) {
      return `${content}
public myCustomFunction(): string {
  return 'A custom function for each class';
}`;
    },
  }
};
const myCustomFunction2 = {...};
const generator = new TypeScriptGenerator({
  presets: [
    myCustomFunction1,
    myCustomFunction2
  ]
});
const models = await generator.generate(input);
```
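To illustrate how the layering composes, here is a sketch with two concrete presets that both use the class `additionalContent` hook shown above. The function bodies are assumptions, and to our understanding each preset receives the content produced by the layers before it, so both additions end up in the final class.

```ts
const addLogHelper = {
  class: {
    additionalContent({ content }) {
      return `${content}
public toLogString(): string {
  return JSON.stringify(this);
}`;
    }
  }
};

const addGeneratedMarker = {
  class: {
    additionalContent({ content }) {
      return `${content}
public readonly generatedBy: string = 'Modelina';`;
    }
  }
};

const generator = new TypeScriptGenerator({
  presets: [addLogHelper, addGeneratedMarker]
});
const models = await generator.generate(input);
```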
The following table provides a short summary of available features for supported output languages. To see the complete feature list for each language, please click the individual links for each language.
| Supported inputs | |
|---|---|
| AsyncAPI | We support the following AsyncAPI versions: 2.0.0 -> 2.6.0, which generates models for all the defined message payloads. The following schemaFormats are supported: AsyncAPI Schema object, JSON Schema draft 7, AVRO 1.9, RAML 1.0 data type, and OpenAPI 3.0 Schema. |
| JSON Schema | We support the following JSON Schema versions: Draft-4, Draft-6 and Draft-7 (a minimal usage sketch follows this table). |
| OpenAPI | We support the following OpenAPI versions: Swagger 2.0 and OpenAPI 3.0, which generates models for all the defined request and response payloads. |
| TypeScript | We currently support TypeScript types as file input for model generation. |
| Meta model | This is the internal representation of a model for Modelina; it is what inputs get converted to, and what generators are provided to generate code. Instead of relying on an input processor, you can create your own models from scratch and still take advantage of the generators and their features. |
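As mentioned in the table above, here is a minimal sketch of generating models directly from a JSON Schema draft 7 input; the schema itself is an assumed example.

```ts
import { TypeScriptGenerator } from '@asyncapi/modelina';

const generator = new TypeScriptGenerator();

// An assumed JSON Schema draft 7 document used only for illustration
const jsonSchemaDocument = {
  $schema: 'http://json-schema.org/draft-07/schema#',
  type: 'object',
  additionalProperties: false,
  properties: {
    email: { type: 'string', format: 'email' }
  }
};

const models = await generator.generate(jsonSchemaDocument);
```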
| Supported outputs | |
|---|---|
| Java | Class and enum generation: generation of equals, hashCode, toString, Jackson annotation, custom indentation type and size, etc. |
| TypeScript | Class, interface and enum generation: generation of example code, un/marshal functions, custom indentation type and size, etc. |
| C# | Class and enum generation: generation of example code, serializer and deserializer functions, custom indentation type and size, etc. |
| Go | Struct and enum generation: custom indentation type and size, etc. |
| JavaScript | Class generation: custom indentation type and size, etc. |
| Dart | Class and enum generation: json_annotation |
| Rust | Struct/tuple and enum generation: generation of `implement Default`, generate serde macros, custom indentation type and size, etc. |
| Python | Class and enum generation: custom indentation type and size, etc. |
| Kotlin | Class and enum generation: use of data classes where appropriate, custom indentation type and size, etc. |
| C++ | Class and enum generation: custom indentation type and size, etc. |
| PHP | Class and enum generation: custom indentation type and size, descriptions, etc. |
This is the roadmap currently in focus for the CODEOWNERS.
The following are requirements for using Modelina:
- NodeJS >= 14
A feature in Modelina cannot exist without an example and documentation for it. You can find all the documentation here.
Do you need to know how to use the library in certain scenarios?
We have gathered all of them under the examples folder.
As of version 1, Modelina has a very strict set of changes we are allowed to make before a major version change is required. In short, any change that alters the generated outcome is not allowed, as it's a breaking change for the consumer of the generated models.
Here is a list of changes we are allowed to make without requiring a major version change:
- Adding new features (that do not change existing output), such as generators, presets, input processors, etc.
- Change existing features by providing options that default to the current behavior. This could be a preset that adapts the output based on options, as long as neither the API of Modelina nor the API of the generated models has any breaking changes.
- Bug fixes where the generated code is otherwise unusable (syntax errors, etc).
Breaking changes are allowed and expected at a frequent rate; where it makes sense, we will of course try to bundle multiple changes together.
We of course will do our best to uphold this, but mistakes can happen, and if you notice any breaking changes please let us know!
Because of the limited number of champions, only the most recent major version will be maintained.
Major versions currently happen at a 3-month cadence (in a similar fashion to the AsyncAPI specification); this will happen in January, April, June, and September.
We try to make it as easy for you as possible to set up your development environment to contribute to Modelina. You can find the development documentation here.
Without contributions, Modelina would not exist; it's a community project we build together to create the best possible building blocks, and we do this through champions.
We have made quite a comprehensive contribution guide to give you a helping hand in how different features and changes are introduced.
If no documentation helps you, here is how you can reach out to get help:
- On the official AsyncAPI Slack under the #04_tooling channel
- Tag a specific CODEOWNER in your PR
- Generally, it's a good idea to do everything in public, but in some cases that might not be possible. In those circumstances, you can contact the following:
- jonaslagoni (on AsyncAPI Slack, Twitter, Email, LinkedIn)
Thanks go out to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind are welcome!