Based on the presentations during yesterday's meeting, and in particular the one shown by GSK: how do people ensure that the right technical expertise is available when assessing R package function outputs, or when deep-diving into the source code itself?
At NVS, we typically assume that the governance team does not possess the skillset necessary to meaningfully assess all types of packages, especially those implementing novel or niche statistical methods or cutting-edge ML. We therefore shift that responsibility to the requestor during PQ (performance qualification) test writing, which we only do for high-risk packages.
At Roche, we have embraced an approach similar to the one you describe at NVS. We assume that the validation process applies only to the package as software, i.e. the R code itself. We do our best to validate and assess the quality of that code: whether it adheres to best practices, is free of errors, and is covered by unit tests. However, we don't rule on what the code actually does; we cede that obligation to the researchers performing their analyses. They have to make sure that the results they obtain are scientifically appropriate; validation only says whether the software is reliable.
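For the software-quality side of this, a minimal sketch of what an automated assessment could look like, assuming the rcmdcheck and covr packages and a local package source tree ("path/to/pkg" is a placeholder):

```r
# A minimal sketch of automated code-quality checks, assuming the
# rcmdcheck and covr packages; "path/to/pkg" is a placeholder path.
library(rcmdcheck)
library(covr)

# R CMD check flags software-level problems (errors, warnings, notes)
# without saying anything about whether the statistics are correct.
chk <- rcmdcheck::rcmdcheck("path/to/pkg", quiet = TRUE)
stopifnot(length(chk$errors) == 0, length(chk$warnings) == 0)

# Unit-test coverage as one proxy for "covered by unit tests".
cov <- covr::package_coverage("path/to/pkg")
covr::percent_coverage(cov)  # e.g. gate acceptance on a minimum threshold
```

Note that checks like these deliberately stop at the software layer: they say nothing about whether the implemented method is the right one for a given analysis.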
This narrow scoping of validation activities is probably in line with the general definition of CSV (computer system validation). Still, (a) 'validation' as a term has different meanings depending on the context (cf. "validation" in ML/model development), and (b) our user base is relatively unfamiliar with CSV. We therefore expect that users will take the scientific accuracy of a package for granted, because we serve it to them as a 'validated package on a validated system'. Should they do that? No. Will they do that? We believe yes.
Hence, I'd argue our philosophy is that for high-risk packages we do at least some checks on scientific accuracy, namely in the form of PQs/UATs, which is where such checks make the most sense.
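To make the PQ/UAT idea concrete, here is a minimal sketch of what a requestor-written accuracy check could look like, assuming testthat; the function under test, the data, and the reference value are illustrative, with the reference statistic derived independently by hand rather than taken from the package itself:

```r
# Hypothetical PQ-style test: compare a function's output against a
# reference value derived independently of the code under test.
library(testthat)

test_that("t.test reproduces an independently derived statistic", {
  x <- c(5.1, 4.9, 6.2, 5.8, 5.5)
  y <- c(4.2, 4.8, 4.4, 5.0, 4.6)

  res <- t.test(x, y, var.equal = TRUE)

  # Reference value computed by hand from the pooled two-sample
  # t formula and documented in the PQ protocol by the requestor.
  expect_equal(unname(res$statistic), 3.28634, tolerance = 1e-4)
  expect_equal(res$parameter[["df"]], 8)
})
```

The key design choice is that the expected values come from outside the package (hand calculation, a second tool, or published results), so the domain expert who requested the package, not the governance team, supplies the scientific oracle.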