Once a biomarker has been identified and quality data from the discovery tools indicate that it may answer the clinical question posed (in the targeted context of clinical use), its successful translation will require an assay that meets stringent requirements. This step represents a critical point of failure for biomarkers, as the development of a robust and reproducible assay, with appropriate sensitivity, specificity, and precision, depends on a standards-based discovery process. For assays to be submitted for FDA qualification or clinical validation, the test must be analytically validated and, depending on the targeted clinical application, evaluated in one or more clinical trials. Finally, as the assay moves into later stages of development, it is important to define how the assay will affect clinical decision making, and increasingly careful consideration must be given to the economics of manufacturing, competing technologies, and scale.
Development of an assay must be undertaken only with a clear understanding of the context of use, as this decision will drive the types of data needed for regulatory submission. Biomarker assays can be developed for a number of context-of-use categories, but three stand out: diagnostic biomarkers (to measure the presence or absence of a defined health state in an individual); prognostic biomarkers (to predict the likely future course of a disease, generally in the presence of a therapeutic intervention); and predictive biomarkers (generally used to predict the outcome of a specific treatment, also referred to as "surrogate" endpoints). The FDA further classifies biomarkers on the basis of risk (The Assay Dilemma: Selecting the Right Biomarker Path), and in almost all cases predictive biomarkers will be classified as Class III (high risk) and require a Premarket Approval Application (PMA) rather than the 510(k) route (in which, unlike PMAs, submissions can be based on the prior approval of similar devices).
Selection of the analyte to be tested will depend to a major extent on the availability of high-quality clinical samples. Although today's biomarkers are being developed for a range of analytes (nucleic acids, proteins, expression arrays, and complex combinations of markers), the ease of sample acquisition is a critical issue. For example, molecular biomarkers from tissues require invasive surgical procedures, which may produce samples of variable quality and may not be easily implemented in the clinic. The technology platform to be used for the assay will be largely driven by the nature of the analyte (protein, image, nucleic acid, etc.). Whatever the technology platform selected, each step from sample preparation to performance requires the use of broadly accepted standards.
To accurately measure molecularly based analytes, the selection of the reagents, which will drive the specificity and sensitivity of the assay, is also a critical aspect of test development. For example, selecting reagents with lower-than-optimal affinity for the analyte, or with kinetics that do not support the needed reaction times, can result in false readings in the target clinical population.
The assay must be evaluated in depth to ensure that it has sufficient sensitivity (correctly identifies the intended health state), which in clinical use is measured as true positives/(true positives + false negatives). In addition, the assay must be optimized for specificity: the capacity of the test to correctly identify, in the presence of all potential confounding factors, the patients who do not exhibit the biomarker of interest as reflected in the health state. Specificity is measured as true negatives/(true negatives + false positives). These parameters can all be affected by comorbidities and pre-analytical variables in samples, as well as by the accuracy and efficiency of the test itself.
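The sensitivity and specificity calculations above can be sketched in a few lines of code; the confusion-matrix counts used in the example are hypothetical and not drawn from any real study.

```python
# Sensitivity and specificity from confusion-matrix counts, as defined
# in the text. All counts below are illustrative placeholders.

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: fraction of patients with the health state
    that the test correctly identifies."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: fraction of patients without the health state
    that the test correctly identifies."""
    return tn / (tn + fp)

# Hypothetical example: 90 of 100 affected patients test positive,
# and 80 of 100 unaffected patients test negative.
print(sensitivity(tp=90, fn=10))  # 0.9
print(specificity(tn=80, fp=20))  # 0.8
```

Note that sensitivity is computed only over patients who truly have the health state, and specificity only over those who do not; false positives therefore lower specificity, while false negatives lower sensitivity.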