Add security assessment via {oysteR} #272
base: master
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #272      +/-   ##
==========================================
+ Coverage   59.23%   61.90%    +2.66%
==========================================
  Files          64       69        +5
  Lines         991     1063       +72
==========================================
+ Hits          587      658       +71
- Misses        404      405        +1
```
... and 2 files with indirect coverage changes
Notes from 2/8 sprint meeting:
R/assess_security.R (outdated)
```r
#' @export
assess_security <- function(x, ...) {
  # TODO: discuss preferred approach for handling packages within Suggests
  if (!requireNamespace("oysteR", quietly = TRUE)) {
```
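For context on the thread below, a minimal sketch of what the prompt-to-install behavior discussed in this PR could look like; the messages and function body are illustrative, not the PR's actual code:

```r
# Illustrative sketch of the prompt-to-install flow; not the PR's actual code.
assess_security <- function(x, ...) {
  if (!requireNamespace("oysteR", quietly = TRUE)) {
    install_ok <- interactive() &&
      isTRUE(utils::askYesNo("'oysteR' is needed for the security assessment. Install it now?"))
    if (install_ok) {
      utils::install.packages("oysteR")
    } else {
      stop("assess_security() requires the suggested package 'oysteR'.")
    }
  }
  UseMethod("assess_security")
}
```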
Does the version of oysteR they have installed matter?
Also, a more general question: do we need to create tests for each new metric?
> Does the version of oysteR they have installed matter?

Good catch. Addressed in 1a7e7da.
> Also, a more general question: do we need to create tests for each new metric?

A good discussion to have generally. I did plan on adding them for this metric once the `Suggests` implementation details were sorted out.
Updates based on feedback from 2/8 sprint meeting:
- Handled via a package-level option that will suppress further attempts to install the package (see 07478e7). A rough sketch of such an option check follows this list.
- Only the `Security` metric is now included in
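A minimal sketch of what that package-level option check could look like; the option name `riskmetric.suppress_suggests_prompt` is hypothetical and not necessarily what commit 07478e7 introduces:

```r
# Hypothetical option name shown for illustration; commit 07478e7 may use a
# different mechanism.
if (!requireNamespace("oysteR", quietly = TRUE)) {
  if (isTRUE(getOption("riskmetric.suppress_suggests_prompt", FALSE))) {
    stop("'oysteR' is not installed and install prompts are suppressed.")
  }
  # ...otherwise fall back to the interactive install prompt
}
```

Under this scheme a user could opt out once per session with `options(riskmetric.suppress_suggests_prompt = TRUE)`.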
If you need anything from {oysteR}, please let me know.
Testing strategy:
Ran into this error after installing the branch.

```r
library(riskmetric)
dplyr_pref <- pkg_ref("dplyr")
dplyr_pref$security
#> Error in `dplyr::bind_rows()`:
#> ! Can't combine `version` <character> and `version` <package_version>.
```

Created on 2023-03-22 by the reprex package (v2.0.1)
@parmsam-pfizer -- thanks for reporting. This is resolved now.
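For readers who hit the same error, a generic illustration of this class of type mismatch and one way to avoid it; the data frames are invented for the example and this is not necessarily the fix applied in this branch:

```r
library(dplyr)

# Two hypothetical dependency tables: one stores `version` as character, the
# other as a package_version, reproducing the bind_rows() clash reported above.
a <- data.frame(package = "dplyr", version = "1.0.8")
b <- data.frame(package = "rlang", version = "1.0.6")
b$version <- package_version(b$version)

# bind_rows(a, b) errors; normalising both columns to character avoids it.
b$version <- as.character(b$version)
combined <- bind_rows(a, b)
combined
```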
@emilliman5 (and any others) -- this should be ready to review now. I believe I have checked off everything we discussed to add as functionality and have covered the desired testing scenarios. Note, the

Also, as a reminder, there is a wiki page that describes the implementation strategy that may be helpful when reviewing: https://github.com/pharmaR/riskmetric/wiki/Adding-Metrics-from-External-Packages
```r
#' @return \code{NA} if no vulnerabilities are found, otherwise \code{0}
#'
#' @export
metric_score.pkg_metric_security <- function(x, ...) {
```
If `assess_security` == 0, wouldn't that mean no vulnerabilities were found, and thus the metric should be 1? I also seem to remember discussing what 0 vulnerabilities found might mean, and that just because none were found doesn't mean none exist... I think I am backtracking on that line of thought: the metric should be either 1 or 0, and we reserve NA for cases when the assessment cannot be performed (for some reason).
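A sketch of the scoring scheme this comment suggests, assuming the assessment value carries the number of vulnerabilities found; this is not the PR's current implementation:

```r
metric_score.pkg_metric_security <- function(x, ...) {
  # Sketch of the scheme suggested above, not the PR's current code. Assumes
  # `x` carries the number of vulnerabilities found, or NA when the audit
  # could not be performed (e.g. the package is not in the Sonatype database).
  n_vulns <- suppressWarnings(as.numeric(x))
  if (is.na(n_vulns)) return(NA_real_)
  if (n_vulns == 0) 1 else 0
}
```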
If we can capture the vulnerability overview, then we could set the metric to NA if the package is not found in the Sonatype database.
I think for the ref_cache we should capture the oysteR output, if only the tibble to start, but the entire message would be best (especially the overview).
oysteR::audit("dplyr", version = "1.0.8", type = "cran")
ℹ Using cached results for 1 package── Vulnerability overview ──
ℹ 1 package was scanned
ℹ 1 package was found in the Sonatype database
ℹ 0 packages had known vulnerabilities
ℹ A total of 0 known vulnerabilities were identified
ℹ See https://github.com/sonatype-nexus-community/oysteR/ for details.
# A tibble: 1 × 8
package version type oss_package description reference vulnerabilities no_of_vulnerabi…
1 dplyr 1.0.8 cran pkg:cran/dplyr@… "dplyr: A… https://… <list [0]> 0
I think the assessment should be the tibble, or possibly the list of vulnerabilities found, and then the metric can be binary based on presence/absence of vulnerabilities.
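A rough sketch of that suggestion, with riskmetric's constructor plumbing omitted and the helper name invented for illustration:

```r
# Rough sketch only: keep the full oysteR audit tibble as the assessment value
# so the overview information is preserved for downstream scoring/reporting.
assess_security_sketch <- function(pkg_name, pkg_version) {
  audit <- oysteR::audit(pkg_name, version = as.character(pkg_version), type = "cran")
  # A presence/absence metric can then be derived from the `vulnerabilities`
  # list column, e.g. sum(lengths(audit$vulnerabilities)) > 0.
  audit
}

# assess_security_sketch("dplyr", "1.0.8")
```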
This is a draft of what the integration with `oysteR` could look like (#210). I am looking for feedback on some design decisions for following the `Suggests` integration approach for metrics from other packages (Option 2 in #209). Hopefully this draft PR is a good place to raise these questions and gather feedback. Please advise if another forum is preferred.

1) Handling calls to `assess_*` functions when the package from `Suggests` is not present

I added some logic to the generic function to detect whether the `Suggests` dependency is available and prompt the user to install the dependency if it is not. The package is installed and method dispatch continues as expected if the user chooses to install. An error is raised if the installation is not performed.

This seemed like reasonable behavior to me but I welcome feedback on what the end-user experience should look like here. Also, I noticed custom exceptions/conditions being used throughout the package. Is this a situation where we would want to raise something other than the default from `stop`?
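A sketch of what a classed condition could look like here, in case that route is preferred over a plain `stop()`; the condition class and helper names are illustrative:

```r
# Build a classed error condition so callers can handle the
# "suggested package missing" case specifically.
suggests_missing_error <- function(pkg) {
  structure(
    list(
      message = sprintf("Package '%s' is required for this assessment.", pkg),
      call = sys.call(-1)
    ),
    class = c("riskmetric_suggests_missing", "error", "condition")
  )
}

# Usage: stop(suggests_missing_error("oysteR")); callers can then catch it with
# tryCatch(..., riskmetric_suggests_missing = function(e) ...).
```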
2) Hooks to `all_assessments`-based workflows

Another area to address when extending the package in this manner is the behavior of the `all_assessments` function. This currently works by `grep`-ing the package's exported functions to find those starting with `assess_*`.

I have modified this mechanism to allow `assess_*` functions with dependencies present in `Suggests` to be excluded from `all_assessments()` based on the presence of an attribute. The full list of `assess_*` objects can be returned by passing `include_suggests = TRUE`.

This seemed in line with the desire to not impose additional dependencies on the end-user, but I think there may be many ways to accomplish this (e.g. different naming conventions, etc.). I am not sure what is best. What are others' thoughts on what the behavior should look like here?
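To make the attribute mechanism concrete, a standalone sketch of the tagging and filtering pattern described above; the attribute name `suggested_pkg` and these helpers are illustrative, not the PR's actual code:

```r
# Tag an assess_* generic with the suggested package it depends on.
assess_security <- structure(
  function(x, ...) UseMethod("assess_security"),
  suggested_pkg = "oysteR"
)

# Collect assess_* functions from an environment, dropping tagged ones
# unless include_suggests = TRUE.
all_assessments_sketch <- function(env = globalenv(), include_suggests = FALSE) {
  fn_names <- grep("^assess_", ls(env), value = TRUE)
  fns <- mget(fn_names, envir = env)
  if (!include_suggests) {
    fns <- Filter(function(f) is.null(attr(f, "suggested_pkg")), fns)
  }
  fns
}

# all_assessments_sketch()                          # excludes assess_security
# all_assessments_sketch(include_suggests = TRUE)   # includes it
```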
3) `pkg_ref_cache.security.default` Implementation

This is less integration-focused and more just a question on implementing this metric. The current implementation does the following:
- Gets the dependencies via `assess_dependencies()`.
- Builds `pkg_refs` including the original package and its dependencies.
- Passes these to `oysteR::audit`.

This seemed like a good way to get the package dependencies and versions that were relevant to the "assessment context". It also made it easy to cover a number of `pkg_ref` sources (source, cran, bioconductor) in a single `default` method.

Admittedly, I am still building my mental model of all the package internals. Does this seem like a good way to accomplish this? Anything to look out for?
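To make that data flow concrete, a schematic of auditing a package together with its dependencies in one `oysteR::audit()` call; the hard-coded package list stands in for what `assess_dependencies()` and the derived pkg_refs would supply:

```r
# Schematic only: in the PR, pkgs/vers would come from the pkg_refs built for
# the original package and its dependencies rather than being hard-coded.
pkgs <- c("dplyr", "rlang", "vctrs")
vers <- vapply(pkgs, function(p) as.character(utils::packageVersion(p)), character(1))

# One audit call covers the original package and its dependencies.
audits <- oysteR::audit(pkgs, version = vers, type = "cran")
audits[, c("package", "version", "vulnerabilities")]
```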
There is still work to be done to clean this up and build out supporting tests, but I was hoping to get some feedback at this early stage. @elimillera @emilliman5, looking forward to your feedback on the above points. And are there others to include in the design discussion?