
Test our python scripts with multilevel profiles #5

Open
aaronlippold opened this issue Jan 27, 2021 · 7 comments
Labels: enhancement (New feature or request)


Given that organizations build dependent profiles on top of the raw STIG or CIS benchmark profiles, our parsing solutions must be 'smart' and understand how this composition works.

You can use https://github.com/mitre/inspecjs as a guide, along with https://github.com/mitre/inspecjs/tree/master/parse_testbed, which has both double- and triple-level overlay examples in it.
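
To make the expectation concrete, here is a rough sketch of the kind of traversal the Python scripts would need to handle for a multilevel run, assuming the standard InSpec exec JSON layout (a top-level profiles array where each overlaid profile points back at its wrapper via parent_profile). The field names and the sample file name are assumptions to verify against the parse_testbed examples, not a finished implementation:

```python
import json
from collections import defaultdict

def print_profile_chain(exec_json_path):
    """Print the overlay chain of a multilevel InSpec exec JSON.

    Assumes the exec JSON layout: a top-level "profiles" list where each
    overlaid (dependent) profile carries a "parent_profile" naming the
    wrapper profile that pulled it in. Verify these field names against
    the parse_testbed samples before relying on them.
    """
    with open(exec_json_path) as f:
        run = json.load(f)

    # Group profiles by the wrapper that pulled them in.
    children = defaultdict(list)
    for profile in run.get("profiles", []):
        children[profile.get("parent_profile")].append(profile["name"])

    def walk(name, depth=0):
        for child in sorted(children.get(name, [])):
            print("  " * depth + child)
            walk(child, depth + 1)

    # Profiles with no parent_profile are the top-level (wrapper) profiles.
    walk(None)

if __name__ == "__main__":
    print_profile_chain("triple_overlay_example.json")  # hypothetical sample name
```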

aaronlippold added the enhancement label on Jan 27, 2021

tohch4 (Collaborator) commented Jan 27, 2021

Ha, I see you beat me to it, @aaronlippold. Nice!

aaronlippold (Author) commented:

Not that I mind the Python; we just put a lot of work into the TypeScript/JavaScript API to ensure it works with any level of complexity in HDF data, mapping our Passed, Failed, NA, Manual, Profile Error, etc. data types. If we are going to write a Python library, I would like to make sure we keep feature parity between the libraries.
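
As a starting point for that parity discussion, a minimal sketch of how those status categories might look in Python. The exact label strings and the roll-up precedence are assumptions to confirm against inspecjs, not its documented behavior:

```python
from enum import Enum

class ControlStatus(Enum):
    # Labels mirroring the categories named above; confirm the exact
    # strings against inspecjs before treating them as canonical.
    PASSED = "Passed"
    FAILED = "Failed"
    NOT_APPLICABLE = "Not Applicable"   # "NA" above
    NOT_REVIEWED = "Not Reviewed"       # "Manual" above
    PROFILE_ERROR = "Profile Error"

def rollup_status(control):
    """Collapse a control's individual results into one status.

    `control` is a dict from the exec JSON with "impact" and "results"
    (each result has a "status" of "passed", "failed", or "skipped").
    The precedence used here is an assumption, not inspecjs's rule.
    """
    results = control.get("results", [])
    statuses = {r.get("status") for r in results}
    if not results:
        return ControlStatus.PROFILE_ERROR
    if "failed" in statuses:
        return ControlStatus.FAILED
    if control.get("impact", 0) == 0:
        return ControlStatus.NOT_APPLICABLE
    if statuses == {"skipped"}:
        return ControlStatus.NOT_REVIEWED
    return ControlStatus.PASSED
```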

tohch4 (Collaborator) commented Jan 27, 2021

Totally makes sense to me. Are you going to be on the call tonight? If so, two things:

  1. I will admit my ignorance on the multi-level profile scenario; I will try to read up before tonight's meeting, but will likely have questions.
  2. If we are talking about library-level functionality and not just some utility scripts, let's discuss how we match expectations on "any level of complexity in HDF data" and the Passed, Failed, NA, Manual, Profile Error, etc. data types across different libraries in different languages. It would be a pleasant learning experience for me to understand the models better.

Re 2: having quickly reviewed InspecJS last night, @aaronlippold, can I presume from what I read that you all are generating the JS/TS classes directly from the JSON Schemas? Perhaps it is time I give that a try for OSCAL as well to see if the mapping approach will work.

FYI @gregelin, re 2 from your end: this is exactly how compliance-trestle is doing it on the Python side for OSCAL and JSON, though not without many headaches and setbacks judging from the repo. Maybe this is not a concern for now, but it could be a viable approach for the medium-to-long term?

I bring this up in both of these examples because you all have a good notion of your model (and maybe the OSCAL model too), but we have had to rebuild the OSCAL model out of band multiple times for different languages as utility libraries. To go beyond basic utility scripts, I think we ought to think about reducing that lead-up time. :-)
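
To ground what "generating classes from the JSON Schemas" could look like on the Python side, here is a hand-written stand-in for the sort of classes a schema-to-code generator would emit from the HDF exec JSON Schema. The class and field names are assumptions drawn from the exec JSON layout, not output from any particular generator:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

# Hand-written stand-ins for what a schema-to-code generator (the approach
# inspecjs takes for TS, and compliance-trestle takes for OSCAL in Python)
# might emit from the exec JSON Schema. Field names are assumptions drawn
# from the exec JSON layout, not from an actual generated module.

@dataclass
class ExecResult:
    status: str                     # "passed" | "failed" | "skipped"
    code_desc: str = ""
    message: Optional[str] = None

@dataclass
class Control:
    id: str
    impact: float = 0.0
    tags: Dict[str, Any] = field(default_factory=dict)
    results: List[ExecResult] = field(default_factory=list)

@dataclass
class Profile:
    name: str
    controls: List[Control] = field(default_factory=list)
    parent_profile: Optional[str] = None   # set for overlaid profiles
```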


tohch4 (Collaborator) commented Jan 28, 2021

Did some reading after the call tonight and it is making a little more sense; comparing a basic profile and the overlay examples side by side helped.

aaronlippold (Author) commented Jan 28, 2021 via email
