Planning, coordination, and development over long distances at different locations
How to manage the project efficiently?
How to track bugs and feature requests?
What is the best integrated development environment (IDE)? \(\rightarrow\) RStudio, Eclipse, VS Code, or another tool?
What clean code rules to use for R packages? \(\rightarrow\) Generally accepted guidelines were missing
3. Development of a sustainable user concept
Usability:
The software must be easy to learn and use
Goal: A very high user acceptance
Consistency:
How to name functions and arguments?
What is the best output format?
A list() is not enough…
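A minimal sketch of the idea behind this point, assuming a simplified design object (the names `getExampleDesign` and `ExampleDesign` are illustrative, not rpact's actual classes): instead of returning a bare list(), return a classed object with its own print method, so users always see consistently formatted output.

```r
# Illustrative sketch, not rpact's actual implementation:
# a classed result object with a dedicated print method.
getExampleDesign <- function(kMax = 3, alpha = 0.025) {
    result <- list(kMax = kMax, alpha = alpha)
    class(result) <- "ExampleDesign"
    result
}

print.ExampleDesign <- function(x, ...) {
    cat("Design parameters\n")
    cat("  Maximum number of stages :", x$kMax, "\n")
    cat("  Significance level       :", x$alpha, "\n")
    invisible(x)
}

getExampleDesign()  # prints formatted parameters instead of a raw list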
2024: What Our Users Say About RPACT
“One of the best software and team in the field of adaptive design!”
(Senior Director of Statistics)
“rpact is by far the easiest to use.”
(Professor, Human-Technology Interaction Group)
“RPACT is just amazing.” (Biostatistician)
“We are impressed by the high quality of the package and the excellent support by rpact.” (Biostatistics director of a pharmaceutical company)
“[We] exclusively use rpact, complemented with a huge internal webportal of supporting code, documentation, internal case studies, repository of health authority questions, etc. for all clinical trial design purposes” (see DOI)
“Excellent package! Many thanks.” (Biostatistician)
4. Development of a reliable and sustainable validation concept
High download rates are a common quality criterion for open-source software
Our expectation: rpact remains a niche software with low download rates
How to apply good software engineering practices to R packages was unclear
Challenge 1: Funding
Crowdfunding \(\rightarrow\) 10 pharma companies and CROs agreed to sponsor the project
Service Level Agreement \(\rightarrow\) Together we developed a simple contract: software support and training; no software development!
RPACT was founded as a GbR (German civil-law partnership) \(\rightarrow\) Easiest and fastest solution for freelancers
Development of clean code rules for R packages \(\rightarrow\) We developed our own guidelines, inspired by Java and widely accepted clean code rules (see Robert C. Martin (2009), Clean Code)
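A hypothetical illustration of such a guideline (the actual rpact rules are more extensive): verb-first camelCase function names and spelled-out, intention-revealing argument names, in the spirit of Java-style clean code. The function below is a stub invented for illustration, not part of rpact.

```r
# Avoid: terse, ambiguous names
# calc <- function(k, a) NULL

# Prefer: verb-first camelCase with descriptive arguments
# (illustrative stub; validates inputs early, as the guidelines recommend)
getRejectionProbabilities <- function(design, informationRates) {
    stopifnot(is.numeric(informationRates))
    # ... actual calculation omitted ...
    invisible(NULL)
}
```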
Automatic creation of test plans and references to function specifications
Automatic creation of test protocols directly linking to individual test cases
Risk Assessment
Levels:
High Risk
Medium Risk
Low Risk
Risk Assessment Level “High Risk”
High Risk: Functions and calculations that directly or indirectly affect the planning and analysis of clinical trials. These involve decisions regarding the safety and effectiveness of drugs and treatments. Errors or inaccuracies in these functions could lead to incorrect conclusions about the efficacy or safety of a clinical intervention, potentially causing harm to patients or misguiding regulatory decisions.
Risk Assessment Level “Medium Risk”
Medium Risk: Functions and operations that support the main analytical procedures but do not directly influence the critical outcomes of clinical trial planning and analysis. These could include data preparation, intermediate statistical methods that inform but do not determine the final analysis, or other support functions that facilitate the primary objectives of the package without being directly tied to decision-making about treatments.
Risk Assessment Level “Low Risk”
Low Risk: Utility functions for specific output formats (e.g., R Markdown) or user-specific customizations that do not fall into the other risk categories. These functions assist in the presentation and documentation of results but do not impact the core analytical procedures or outcomes of clinical trial analysis.
Risk Assessment - Risk of Dependencies
Check the nature of the imported and suggested packages
Given the roles of these packages, it is assumed that any malfunctioning behavior would be detected during the testing of the rpact functions themselves, since these packages support technical rather than methodological aspects of the package
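As a first step of such a dependency check, the imported and suggested packages can be listed programmatically with base R's tools package (note: `available.packages()` queries a configured CRAN mirror, so this needs internet access):

```r
# List the dependency surface of rpact as input for the risk assessment.
# tools is part of base R; available.packages() requires a CRAN mirror.
deps <- tools::package_dependencies(
    "rpact",
    db = available.packages(),
    which = c("Imports", "Suggests")
)
deps[["rpact"]]  # character vector of imported and suggested packages
```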
Formal validation
Documentation structure inspired by GAMP 5
User requirements specification (URS)
Functional specification (FS)
Software design specification (SDS)
Verification
Test plan (TP)
Test protocol (TL)
Appendix
Validation documentation of rpact 4.0.0: 7,470 pages
Automation of recurring validation processes/activities
Validation Utility Package rpact.validator
User requirements specification (URS) \(\rightarrow\) Manual work
\(\rightarrow\) Efficient unit test case generation
Basic Idea of Template-Based Unit Testing
Step 1: Compare software results manually, e.g., with simulation results and results from the literature and/or other programs \(\rightarrow\) Reference point is correct and trustworthy
Step 2: Fix the validated state, i.e., generate unit tests which test the software systematically and reproducibly \(\rightarrow\) Further development and refactoring do not cause undetected side effects
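The two steps above can be sketched with testthat. The function and numbers below are illustrative placeholders, not validated rpact references: once values have been checked manually against the literature (Step 1), they are frozen as reference values (Step 2), so any refactoring that changes them is detected immediately.

```r
library(testthat)

# Hypothetical stand-in for an rpact computation under test
calculateCriticalValues <- function() c(3.4711, 2.4544, 2.0040)

# Step 2 in miniature: the values were compared manually with the
# literature (Step 1) and are now fixed as the validated state.
test_that("design reproduces the validated critical values", {
    expect_equal(
        calculateCriticalValues(),
        c(3.4711, 2.4544, 2.0040),  # frozen, manually validated reference
        tolerance = 1e-6
    )
})
```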
Advantages of the Template-Based Approach
Automation and Consistency:
Uniform Test Structure: Test templates ensure a uniform structure for the tests, improving maintainability and readability.
Automated Test Generation: Automatically generating tests from templates reduces manual effort and minimizes errors that can occur with manual test script creation.
Flexibility and Extensibility:
Easy Switch of Test Packages: Since the tests are defined in templates, these templates can be easily adapted to work with other test packages. This facilitates the transition to new or improved test frameworks without significant effort.
Modularity: Using templates allows tests to be modular, making it easier to add, remove, or modify specific tests.
Traceability and Documentation:
Granular and Traceable Tests: Each test case is clearly defined and traceable, making debugging easier. This avoids creating “black-box tests” and improves understanding of the functionality being tested.
Documentation: References to functional specifications and software design specifications can be defined in the templates. This promotes better documentation and traceability of the tests, which is particularly advantageous for audits and reviews.
Efficiency Improvement:
Time Savings: The one-time creation of test templates saves time as they can be reused to generate tests automatically.
Scalability: With test templates, tests can easily be scaled to new functions and modules, increasing test coverage and improving software quality.
Example: Test Template
test_template_f_design_group_sequential.R:

#' @exit Do not create the unit tests again
#' @context Testing the Group Sequential and Inverse Normal Design Functionality
#' @test_that 'getDesignInverseNormal' with default parameters:
#' parameters and results are as expected
#' @refFS[Tab.]{fs:tab:output:getDesignInverseNormal}
#' @refFS[Formula]{fs:criticalValuesOBrienFleming}
x0 <- getDesignInverseNormal()
getUnitTestObject(x0, "x0")

Elements of the template:
@exit: regeneration of the unit tests is disabled
@context: section title in the document (context for testthat version < 3)
@test_that: unit test title/description
@refFS[Tab.]: reference to a table in the Functional Specification
@refFS[Formula]: reference to a formula in the Functional Specification
getDesignInverseNormal(): the function call to be tested
getUnitTestObject(x0, "x0"): creates a unit test for each field of the object x0
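A hedged sketch of what the generator might expand such a template into: one testthat expectation per field of the validated object. The field names and values below are illustrative placeholders, not actual generated rpact output.

```r
library(testthat)

# Illustrative stand-in for the validated object x0 produced by the template
x0 <- list(
    kMax = 3,
    alpha = 0.025,
    criticalValues = c(3.4711, 2.4544, 2.0040)
)

# Sketch of a generated test: one expectation per field of x0
test_that("'getDesignInverseNormal' with default parameters: parameters and results are as expected", {
    expect_equal(x0$kMax, 3)
    expect_equal(x0$alpha, 0.025)
    expect_equal(x0$criticalValues, c(3.4711, 2.4544, 2.0040), tolerance = 1e-4)
})
```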
Validation of an R package is challenging, time-consuming, and expensive
The template-based unit testing approach offers a structured, flexible, and efficient method for software quality assurance.
By automating test generation, using traceable test cases, and enabling comprehensive documentation, the approach not only accelerates software development but also significantly enhances the quality of the final product.
In particular, the combination of manual validation and automated verification ensures that the software remains stable and reliable, even as it is further developed or refactored.