Workshop I - Report

Introduction and Background


Approximately 90 percent of oncology drugs in development fail, many in Phase III trials.  Analytically validated biomarkers, defined by the Food and Drug Administration (FDA) as “fit-for-purpose,” offer hope of improving this situation, both by enabling more targeted drug discovery efforts and through better patient selection for clinical trials.  To date, however, an organized effort to establish the evidentiary standards needed to advance biomarkers in a robust and predictable manner has not emerged.  As a result, although protein-based biomarkers represent a key strategic element for the development of molecularly based medicines, biomarkers have been approved at the meager rate of approximately 1.3 per year since the early 1990s, despite the submission of well over 1600 potential candidates for regulatory review.

Exploration and planning for the National Biomarkers Development Alliance (NBDA) concept was initiated on November 1, 2012.  Led by Arizona State University (ASU) through its Complex Adaptive Systems (CAS) Initiative, the NBDA planning phase integrated input and counsel from a number of biomedically focused Arizona organizations, including the Critical Path Institute (C-Path), the International Genomics Consortium (IGC), collaborating partners Mayo Clinic and the Translational Genomics Research Institute (TGen), and other clinical partners.  The overall goal of this early phase was to understand and ultimately create the knowledge base needed for a standards-based, end-to-end pipeline for the express purpose of bringing rigor and predictability to biomarker development.  Ultimately, the NBDA was organized as a 501(c)(3) through the ASU Foundation.

The overall goal of the NBDA is to facilitate biomarker development by creating/assembling and evaluating level-of-evidence standards (“standards” here includes best practices, guidelines, and standard operating procedures) at each modular stage of the biomarker pipeline (see Figure 1).  This is a daunting challenge, but one that must be met; otherwise, biomarkers will continue to fail, targeted therapeutics development will continue to be fraught with uncertainty, and the molecular diagnostics industry will remain an unattractive target for investors.  It is a challenge that requires breaking down the intellectual silos that dominate biomarker discovery and development.  A major task for the NBDA will be to create an environment that encourages all constituencies in the biomarker development enterprise to come together as a cooperative community.

Although certain biomarkers and/or biomarker classes will be utilized to develop the NBDA’s processes and workflows, when fully developed, the system will be agnostic in terms of the targeted disease or class of biomarkers.  Demonstration projects will be employed to determine the compliance levels needed for biomarkers, both established and experimental, with a goal of assembling/developing the needed evidentiary standards.  The NBDA’s data will be public and accessible through its website, as will key workflows and procedures.

Through its extensive networks of partners, the NBDA will integrate knowledge from the biomarker field; contributions from key individual investigators and organizations, both private and public; ASU’s strengths in biomarker discovery and diagnostics technologies in the Biodesign Institute; expertise and counsel from Arizona organizations that participate in the biomarker space (e.g., IGC, C-Path, TGen, and Mayo Clinic, AZ); and innovative clinical trial models from a number of clinical partners.  These unprecedented knowledge networks will provide what has heretofore been missing in the biomarker development arena: the convergence of the knowledge, organizations, and mechanisms required to identify and remove the barriers to advancing successful biomarkers.  NBDA’s partners will range from local biomarker-focused efforts such as IGC, with its unparalleled expertise in collecting and managing high-quality biospecimens, and C-Path, a standards organization, to pharmaceutical, diagnostics, and biotechnology companies, advocacy groups, payers, and investors.  The NBDA is seeking national and international partners who bring in-kind and/or financial resources to enable the organization’s mission.  It will also welcome pre-competitive consortia and advocacy groups.

To kick off its efforts, the NBDA held a workshop to explore the status of the biomarker pipeline from discovery through regulatory review.  The workshop was designed as a facilitated conversation rather than a presentation-heavy meeting, one that would focus on identifying specific action items needed to establish evidentiary standards for a robust biomarker pipeline.  The goal of this workshop specifically, and of the NBDA overall, is the development of a rational, standards-based, end-to-end pipeline approach that will remove much of the uncertainty from the biomarker development process.

Thursday, December 13, 2012

Dr. Anna Barker welcomed the participants and gave a brief overview of the mission and status of the NBDA.  She highlighted the unique array of skills and expertise already assembled to meet the challenges of the NBDA and welcomed other partners to join the effort by contributing in-kind technical support and expertise, data, and financial support.  She emphasized the need to assemble information to address certain barriers and to perform the research needed to inform the development of approaches to deal with others.  Based on progress to date in defining the status of the biomarker field overall, she introduced a graphic representation of the potential modules of an end-to-end approach for biomarker development.

Keynote Presentation: A Status Report on Biomarkers

George Poste, D.V.M., Ph.D., Chief Scientist of the Complex Adaptive Systems Initiative and the Del E. Webb Chair in Health Innovation at ASU, laid out some of the key questions and issues that the biomarker development enterprise must address if healthcare is going to shift over the next few decades from the current economically unsustainable “do more, bill more” system to a sustainable one that manages individual risk in a way that improves health outcomes and controls costs.  Biomarkers, or perhaps more accurately biosignatures, represent an intellectual foundation for the rational diagnosis and selection of appropriate treatments for the most vexing human diseases, but building this foundation requires two major changes in mindset.  First, the biomarker discovery community must embrace the entire ‘omics domain, not just genomics, if it is to find the signatures of dysregulation that characterize disease and quantify the risk of disease.  Second, the field must adopt the attitude that diagnostics should be at the forefront of biomedical research efforts, not the forgotten stepchild.

Some of the questions that the field must answer if it is serious about the business of biomarker development include:

  • What is the best biospecimen for an intended use?
  • How will the biospecimen be profiled and when?
  • Who will conduct the profiling?
  • What will be reported?
  • How will profiling information be used to support a clinical decision and/or clinical decision support system?
  • How will physicians and other healthcare providers be trained and supported in interpretation of complex profiling data?
  • How will profiling results be incorporated into electronic health records and be aggregated into larger databanks?
  • How will profiling assays be regulated and reimbursed?

A key question that needs to be addressed, one that sits at the core of the NBDA’s mission, is how to create a systems-based strategy for biomarker discovery and clinical development that is fully integrated from end to end, from discovery research to clinical validation.  In a sense, this is much the same question that researchers now face in thinking about complex diseases in terms of dysregulation of entire biological networks rather than simply as individual pathways gone awry.  Though each step in the biomarker discovery to clinical utility process must be tackled individually, it must also be done in the context of the larger system.  In other words, when searching for a biosignature for a given disease, it is important not to do so blindly but in the context of biological relevance, therapeutic development, and eventual clinical utility.  Though this is certainly not a simple task, it is feasible, as evidenced by a few successes in the cancer field that include HER2+ and Herceptin, EML4-ALK and Xalkori, KRAS and Erbitux and Vectibix, and BRAF-V600 and Zelboraf.

A pathway to data-intensive healthcare.

One possible trajectory for data-intensive healthcare focuses on phenomes rather than clinical phenotypes.  In such a trajectory, profiling will integrate information on an individual’s genome, phenomes, “exposomes,” and lifestyle into an integrative personal ‘omics profile, or iPOP.  Given the enormous datasets that are already being generated, data analysis will be critically important, making it imperative that the NBDA engage experts in computing, engineering, and decision making in its efforts.  Skills in these areas will be critical for identifying an iPOP and for developing the risk-identification, risk-mitigation, and decision-support systems that will be needed to produce actionable information and knowledge-driven decisions based on that iPOP.  Success will result in earlier detection of disease, the ability to subtype disease and select appropriate therapeutic regimens, the opportunity to monitor health status and identify predisposition to and risk of developing disease, and the chance to allocate resources more efficiently.

Many of the problems that plague biomarker discovery, as evidenced by the disturbingly low reproducibility of biomarker publications, are of the community’s own making and have to do with the “garbage in/garbage out” phenomenon.  Most researchers, for example, have little or no access to rigorously annotated biospecimens from stringently phenotyped sources.  Moreover, too many biospecimens used in discovery research have been collected with insufficient control of pre-analytical parameters and with variable analytical standards.  Researchers use idiosyncratic, lab-specific analytical methods that fail to meet industry and regulatory standards, and they conduct “small N” studies that lack statistical power and are prone to over-fitting, reporting the results in chaotic, incompatible data formats deposited into databases with poor interoperability.  These issues are magnified by the pressure to publish and by poor compliance with funding agency and journal policies on open data sharing.

Reforming the biomarker discovery system.

Another issue the biomarker community faces is that it is not treating biomarker discovery as the large-scale production process that it is, and large-scale production processes require sophisticated project management systems and automated workflows to achieve optimum productivity.  In other words, quality happens only when someone is responsible for it.  The dismal productivity of biomarker research and development is a legacy of the failure to embrace multidisciplinary expertise, such as that possessed by clinical instrument makers, and to adopt and enforce the same stringent quality assurance and quality control processes that are standard in the drug and vaccine development world.  This is unacceptable given that the complexity of biomarker discovery, validation, and clinical adoption is comparable to that of biopharmaceutical research and development.

The first step needed to create a new legacy, one that the NBDA can catalyze, is to build large-scale, standardized resources for biomarker research.  These resources must include biobanks that can provide rigorously phenotyped, matched, and consented specimens of both normal and diseased tissue, which in turn will require establishing standard operating procedures for pre-analytical and analytical methods.  In parallel, it is critical to establish standardized data ontologies and formats for large-scale datasets and federated databanks and to create a broad, future-proof patient-consent process suitable for an era of ‘omics profiling and DNA banking.  This consent process should be flexible enough to address personal preferences yet dynamic enough to anticipate discoveries made in the future based on profiles identified today.

While the appeal of developing blood-based biomarkers is obvious, there are many challenges in applying myriad new analytical technologies to discovering biomarkers and biosignatures using blood as a biospecimen.  For example, these new ‘omics technologies are generating data with rapidly escalating volume, velocity, and variety, and data analysis usually requires complex deconvolution of low signal-to-noise signatures.  In addition, the large-scale molecular datasets that these new technologies generate differ fundamentally from traditional biological and clinical datasets.  Traditional clinical and epidemiological datasets comprise a small number of variables tracked across a proportionately larger number of samples, while today’s ‘omics technologies measure numbers of variables per sample that far exceed the typical number of samples.  Together, these factors create a need for new data analytics and infrastructure that are only now being developed and that today’s institutional review boards (IRBs) and study sections lack the competency to understand or address.

In addition, the ever-growing scope and scale of ‘omics data will have profound consequences affecting individuals, enterprises, infrastructure, investment, education, and public policy.  Current approaches for data access and downloading will not be sustainable in an era of petabyte datasets.  Already, the data pipeline is struggling to handle today’s datasets, and moving forward, only a few institutions will have the bandwidth or processing power to handle even larger datasets.  Given the looming impact of austerity budgets on infrastructure and personnel, there is the prospect of serious performance degradation and system breakdown.

There are also challenges to address regarding the adoption of personal ‘omics profiling data as a clinical decision-making tool.  While whole genome and exome sequencing will certainly become a part of modern medicine, taking a genome-sequencing-centric perspective is overly simplistic if the goal is to reveal the full etiology of disease pathogenesis.  For example, genomic data alone cannot identify the network interactions between causal and modifying genes that define expressivity, penetrance, and ultimate phenotypic impact, producing a continuum of clinical phenotypes.  The diversity of the human genome, which was not accounted for in the reference sequence generated by the Human Genome Project, has huge implications for cohort selection and profiling studies and for genome profiling in general.  Indeed, what is needed to serve as a baseline is deep sequencing of 20,000 or more individuals to link variants and combinations of variants to disease phenotypes.  Cancer, in particular, poses a unique challenge in this regard because of the extravagant scale of causal somatic mutations plus the rapid progression of genomic heterogeneity in advanced disease.

A new future is possible.

Another concern is the need to deal with the ongoing miniaturization of analytical technologies.  This trend means that researchers will have to think about biomarkers in terms of remote, real-time, on-body or in-body sensing that will enable remote health monitoring and chronic disease management.  The commensal microbiome adds another dimension of complexity to biomarker profiling that the community must face.

The field must also start proactively engaging survivor networks and patient communities in order to increase the number of patients enrolled in clinical trials and observational outcome studies.  There is an opportunity to engage the FDA and Centers for Medicare and Medicaid Services (CMS) today to help inform and shape policies toward biomarker approval, clinical use, and reimbursement.  Another important factor going forward will be the dramatic shift in where primary care is provided and where tests will be run, moving from large centralized laboratories to in-clinic diagnostics.  There are also a number of intellectual property issues, some coming before the courts, that need to be addressed.

Assuming that the NBDA is successful in its goal to establish an end-to-end development path, and that standards exist and biomarkers are being validated in a predictable manner, then comes the hard part: implementation and integration into the rest of the medical enterprise.  Today, physicians receive minimal training in diagnostic laboratory medicine, and little time is devoted to educating them on the proper use of ‘omics-based biomarker test results.  What will be needed are tools that reduce multi-dimensional complexity and diverse data sources to provide end-user simplicity and support for facile decisions.  Physicians want yes, no, or maybe answers.

In the end, if the above issues are addressed, it will be possible to employ the wealth of ‘omics data to identify biomarkers or biosignatures that can be used to reduce the cost, time, and high failure rates that today plague the study of investigational drugs.  These biomarkers and biosignatures will enable patient stratification and drive adaptive clinical trial designs that reduce the cost and time required for drug development and give rise to greater regulatory clarity.  The pathway that the field must take will involve generating data-driven knowledge and intelligence that leads to actionable decisions, which will in turn require breaking down intellectual silos in a way that changes the nature of discovery from one driven by hypothesis testing to one of unbiased analytics of large datasets.  The path to success will be one that changes the cultural process of knowledge acquisition, moving from individual investigator-driven research to large-scale collaboration networks with open systems and social media.  This revolution will require changing the education and training of both researchers and clinicians.

The potential economic and health benefits from biomarkers for molecular diagnostic profiling, rational treatment selection, and continuous health monitoring transcend any other current category of healthcare innovation. However, realizing this potential will not be straightforward and will require

  • improved technical standards for biomarker research and development;
  • sophisticated integration of complex, multidisciplinary expertise;
  • proactive national and international leadership to establish comprehensive resources for biobanks, cyber-infrastructure, and interoperability and health information exchange (HIE);
  • new clinical trial designs for therapeutic and molecular diagnostic combinations; and
  • streamlined updating of standard-of-care guidelines to reflect disease subtypes, patient heterogeneity, and predisposition risk.

The NBDA can be the driving force that enables these changes.  Charting a new path in medicine will require building a systems-based, end-to-end approach, together with the acceleration of transformative technical, institutional, and cultural change on a large scale.

Discussion Session: The One Thing that Could Make the Difference in Biomarker Development

Robert Mittman, M.P.P., of Facilitation|Foresight|Strategy, served as the workshop facilitator.  He initiated the conversation by asking each workshop participant to name one idea that would enable the biomarker development enterprise to realize the NBDA’s vision of creating a standards-based, end-to-end pipeline for the express purpose of bringing rigor and predictability to biomarker development.  Based on participant feedback, the following list of objectives was generated:

  • Create systems to handle multi-dimensional data.
  • Build broad collaborations that extend across the nation.
  • Create standards as the National Institute of Standards and Technology (NIST) does for various industries.
  • Integrate disciplines to work in team settings.
  • Use the consortium approach pioneered at C-Path to bring together all of the stakeholders in biomarker development under one umbrella.
  • Improve collaboration between industry and academia and thereby develop a better understanding of the economics of biomarker development.
  • Adopt complex biomarker scores to establish clinical utility.
  • Stop blaming technology for failure, and instead, standardize analytical approaches to ensure that all technologies generate high-quality data.
  • Improve reporting of discovery data to include more qualifications of data parameters.
  • Develop better preclinical models for validating biomarker hypotheses.
  • Ensure that biomarker discovery research has clinical relevance in mind.
  • Build a system to handle the quantity of data that will be generated.
  • Plan for a clinical endgame early in the process of biomarker development.
  • Improve the rigor of exploratory work.
  • Increase the utility of scarce tissue by developing technology platforms that enable multiple tests to be conducted on each tissue sample.
  • Let clinicians' needs drive biomarker discovery efforts.
  • Share data more effectively.
  • Increase access to biospecimens and reduce costs.
  • Establish large biorepositories with common access protocols and standardized analyses.
  • Develop new statistical techniques and clinical trial designs appropriate for “small N” diseases and disease subtypes, where biomarker-based stratification may create what are essentially ultra-orphan diseases.
  • Focus on the social, cultural, and organizational issues that need to be solved by tying funding to properly utilized standards.
  • Change the academic and funding reward system to encourage sharing instead of hoarding of tissue specimens.
  • Recognize the connection between life and inorganic chemistry that extends beyond carbon, hydrogen, nitrogen, oxygen, and sulfur and broaden the search for biomarkers beyond traditional organic chemistry.
  • Develop clarity as to what data is needed for medical societies and insurance companies to accept a biomarker as clinically relevant and reimbursable.
  • Determine how to run the “N of 1” clinic.
  • Define the characteristics of an effective biomarker.
  • Foster the collaborative multidisciplinary science needed to understand how to address biology and information.
  • Believe in the ability to change—if the field comes together to apply all of the needed expertise to the challenge of developing clinically relevant biomarkers.

These items were grouped into seven themes:

  • Improve access to and efficient use of tissues that have been processed using standardized methods, carefully controlled, and thoroughly annotated.
  • Develop an information technology infrastructure that can transmit, process, store, and efficiently share the vast quantity of data that will be generated in the near future.
  • Create standards to define biomarkers that inform decisions to proceed with development.
  • Consider clinical utility, relevance, and value early in the biomarker discovery process.
  • Derive new models of clinical development that move away from hypothesis-driven clinical trials.
  • Establish funding incentives that support research but enforce standards, transparency, and sharing, rather than hoarding, of data and biospecimens.
  • Move the field from where individual researchers work in silos to one based on collaboration across disciplines and institutions and between industry and academia.

Panel Discussion: The Early and Late Stages of Translatable Biomarker Discovery

Carolyn Compton, M.D., Ph.D., C-Path’s President and Chief Executive Officer, highlighted issues at both ends of the biomarker development pathway.  At the very beginning of the process, starting with the wrong biospecimen, or one of poor or uncertain quality, makes it unlikely that research will identify a useful biomarker, since the biospecimen never reflected the real biological state that is relevant to disease.  At the end of the process, biomarker development encounters the FDA, and the biomarker community must be prepared to seize the rare opportunity to influence the FDA’s approach to biomarker validation.  The FDA has published a guidance document describing its plan to review evidence on a biomarker for a given context of use and qualify that biomarker as a standard, but the Agency is now looking for assistance in developing standards for qualifying biomarkers.

What makes the biomarker discovery process robust?


George Vasmatzis, Ph.D., Director of the Biomarker Discovery Program at Mayo Clinic’s Center for Individualized Medicine, discussed how to create a robust system for biomarker discovery.  He offered an example from his group’s results using microarray data on cells isolated from prostate cancer and normal prostate tissue.  They obtained high-dimensional gene array data from 107 tissue samples representing patients with diverse grades of prostate cancer and used those data to identify a small number of candidate biomarkers.  These candidates were then evaluated in an independent set of 157 samples with matched case controls and validated using an immunostain, rather than a reverse transcription-polymerase chain reaction (RT-PCR) assay, in biospecimens from another 57 patients with matched case controls.  The resulting assay was released in 2012 as a laboratory-developed test (LDT) for use by laboratories that meet Clinical Laboratory Improvement Amendments (CLIA) standards.

The success of this process depends on having access to large numbers of clinically annotated biospecimens, a resource that Mayo Clinic possesses.  Mayo Clinic is aided by an institutional culture that is built around team science and is driven by the desire to create useful products, not test hypotheses.  In addition, Mayo Clinic has the infrastructure to support such studies, including statisticians to help select samples that are suitable for a given study and a product development engine comprising a pathology and cell extraction group, a bioinformatics group, molecular biology and functional studies groups, and teams that specialize in clinical assay development, assay verification, and assay validation.  All of these groups are integrated and work together from the beginning of a project.

Mayo Clinic handles biospecimens in the operating room in real time, with pathologists creating frozen sections immediately after tissue is removed from the body and determining which specimens are suitable for which ongoing project.  This level of diligence sets Mayo Clinic apart from most other institutions.  This system secures specimens of high quality, and it resulted from the unexpectedly enthusiastic buy-in of pathologists, who are now essentially driving the biomarker discovery process.  One important point to note is that the leadership at Mayo Clinic has created a team-oriented environment staffed by people who want to engage in this type of product-directed research.  The key question is whether the culture created at Mayo Clinic can be translated to other institutions.  Without question, an understanding of that culture can help inform the work of the NBDA.

Can We Find Biomarkers in Big, Multi-Generational Data Sets?


Joe Vockley, Ph.D., Chief Scientific Officer and Chief Operations Officer at the Inova Translational Medicine Institute, discussed how to work with big data.  As a first step, it is imperative to rid public databases of garbage data.  While advances in technology improve data quality, they also change the inherent properties of the data, which can lead to inconsistent data sets.  For example, Affymetrix arrays have changed over time, yet data from early studies that are no longer valid remain in public databases.  In another example, The Cancer Genome Atlas (TCGA) project took five years to correct an error about the number of pathways in glioblastoma multiforme (GBM).  This delay meant that all of the biomarker development work that derived from those findings, conducted over those five years, was flawed.  Another thing to keep in mind is that research should never be based on diagnostic codes in electronic medical records, since those codes are chosen to obtain maximum reimbursement, not to represent an accurate diagnosis.

The growing trend of sequencing everything, generating huge datasets, and then using bioinformatics to sort out the data afterwards is not only generating untenable amounts of data but is an inherently flawed approach to biomarker discovery.  In addition, the reference genome that everyone uses today is faulty at the population level, which increases the difficulty of performing accurate whole genome sequencing and conducting biomarker discovery.  To remedy this situation, Inova is building multiple reference genomes based on populations in 71 countries in North and South America, Asia, northern and southern Africa, and northern and southern Europe.  Inova has also created a new data environment de novo that will not include any currently available datasets because of the challenge of going back and validating all of the existing data in those repositories.  The Inova dataset already includes 1500 whole genome sequences and will grow by another 30,000 over the next two years.  It is a pan-‘omic, multigenerational dataset that includes longitudinal data and is linked to clinical records and participant surveys.  One potential complication of this effort is that it may create an entirely new set of silos.

A View from Industry

Anahita Bhathena, Ph.D., Associate Director of Cancer Research at Abbott Laboratories, discussed the pharmaceutical industry’s belief that biomarker research is a path forward to improving the industry’s research productivity.  Industry’s approach today is to start by thinking of the end use for a biomarker and from that decide on the type of data needed and the metrics that any biomarker must meet to continue moving through the development path.  It is a product-oriented, as opposed to a hypothesis-testing, approach.  Some of the issues that industry struggles with as it develops a biomarker include:

  • Of the many biosignatures that are identified, which ones are worth moving forward?
  • What is the biological plausibility of a biosignature and is it important to understand the mechanism or pathway that ties a biomarker to a disease?
  • What is the best way to analytically validate a biomarker assay?
  • How does the biomarker help with patient selection?
  • Is the biomarker the drug target or is it just a signature that is a predictor of response?

There are additional issues specific to biomarkers that will be developed further as companion diagnostics:

  • What is the regulatory path to get approval for both the diagnostic and therapeutic—and is it clear?
  • What is the best approach for commercializing both the diagnostic and therapeutic?
  • If the companion diagnostic is approved as a standalone assay for use with a single piece of tissue, how can it be transitioned to a panel of assays that make multiple uses of the same tissue?

Industry’s focus is on building a biomarker development pipeline that considers context of use right from the start.  Industry is also beginning to think, particularly in the oncology area, about panels of biomarkers that include not just the primary target of a drug but also predictors of relapse.

Discussion

The ensuing discussion centered on a few key themes.  One theme dealt with reforming the biomarker discovery system from top to bottom so that it generates high-quality and reproducible data at every stage of a product-focused process.  One suggestion was that the NBDA may want to involve experts from industries, such as aircraft manufacturing, that have developed total quality management systems spanning an entire enterprise with many suppliers, a situation not that far removed from biomarker discovery.  It was noted that today there is little incentive for requiring this level of quality in a system that merges the world of research with those of clinical and commercial development.  Government can play a role in overseeing standards, as it has done with assays for influenza, but the FDA will not fix the quality problem and is looking to the field to bring it recommendations on metrics and standards around which it will write guidance documents.  Granting agencies and publishers have to be made aware of the need to improve data quality and be brought into this effort as enforcement agents.  The key is accountability, an idea that was seconded by all of the participants.

Regarding standards, one participant noted that the field needs to agree on certain standards in the way disease is interrogated with the new ‘omics technologies.  Another participant asked if the field could agree on some evidentiary standard for a fit-for-purpose biomarker and wondered if the NBDA could catalyze the activity needed to generate that standard.  In some fields, standards are developed to adjudicate between teams in what is called a constructive combat model.  One participant asked whether biomarker development might explore such a model, in which standards are an emergent property of the system, fostered by competition.

In the same vein of thinking about new models for the biomarker research enterprise that the NBDA could develop, one suggestion was that biomarker development be based on a tiered process that reflects the maturation of a biomarker through validation.  This is the approach that the target validation field is taking.  For example, a tier 1 biomarker would be one discovered in a single lab, while a tier 2 biomarker would be one for which another group found the same results or identified the biomarker using a completely different technology.  As a biomarker moves forward, the entire community gets involved in the validation effort, creating a path based on maturation that involves academia, industry, clinicians, and payers.  Such an effort then becomes a “coalition of the willing.”  Determining who pays for each step of this process might be a challenge, but a solution could involve networks of investigators who cross-validate each other’s work.

Finally, it was stressed that the field has to move past the paradigm of single biomarkers.  Diseases such as cancer and Alzheimer’s disease affect complex systems, and perhaps the field needs to take an engineering approach to dealing with them.  In other words, it may be more effective to stop trying to understand every piece of a system and instead identify the minimum number of parameters needed to define the system.

Panel Discussion: Assay Development and Assay Performance

Robert Penny, Ph.D., M.D., Chief Executive Officer and Chief Medical Officer of IGC, explored what it takes to develop a clinically useful, commercially viable diagnostic assay.  It was again noted that the biomarker field needs to seize the opportunity to create compelling evidence-based standards that the FDA can use in its guidance development efforts.  Ultimately, these standards must promote the development of the highest quality tests with clinical utility, and they must be unbiased, decrease approval time, and better define the performance needed to enable good clinical decision-making.  They also need to account for the fact that the Human Genome Project and TCGA have created a new world in biomedical research, one that regulations developed in the 1970s and 1980s could not have envisioned.

Lessons Learned from the HER2 Assay Story


Abigail McElhinny, Ph.D., a Senior Project Leader for Companion Diagnostic Assay Development at Ventana Medical Systems, explained that the company develops immunohistochemical (IHC) assays for corporate partners that can become companion diagnostics.  She discussed Ventana’s development of an assay for HER2.  A HER2 IHC assay was first approved in 1998, and a HER2 fluorescence in situ hybridization (FISH) assay was approved in 2002 for use on samples that produced equivocal results with the “gold standard” IHC assay.  In 2010, Ventana submitted a Premarket Approval (PMA) application for its INFORM HER2 dual in situ hybridization DNA probe cocktail assay, a bright-field, standard-microscopy, low-magnification assay that is fully automated.  This assay determines HER2 gene status in invasive breast carcinoma.  The company first approached the FDA suggesting that the results were so obvious that there was no need to manually count positive clusters; the FDA, however, required the company to compare INFORM to the gold standard.  Ventana performed the necessary clinical trial and received FDA approval for its PMA.

Of the challenges that new diagnostic tests face in getting approval, many stem from the transition that the field is going through as it moves to molecular-based diagnostics, and from the associated regulatory uncertainty.  The FDA has been upfront about the fact that this is an evolving era, but the agency is taking the position that until it has data to the contrary, it is not going to change its requirements for approval.  Where this position has the biggest impact is that the FDA appears to want all new companion diagnostics to go through the more stringent PMA process rather than the simpler 510(k) clearance process, for which an assay developer only has to show that a new assay is “substantially equivalent” to one that has already been approved.  Even in cases where a 510(k) submission is allowed for a stand-alone molecular diagnostic assay, discrepancy review can be a problem because the FDA wants to wait for clinical outcomes data, which can add two to three years to the approval process.

When Ventana is asked to convert an LDT into an approved assay, a particularly challenging problem is that assay review and approval require the original clinical sample blocks, which are often not available.  As a result, the company must often start from scratch to generate the necessary data for approval.  For this reason, investigators should partner with a pharmaceutical company rather than develop an assay on their own.  Next-generation sequencing is not likely to replace IHC assays, because companies want these assays available in community pathology labs.

Will Complex Assay Development Require New Levels of Evidence?

Kevin Halling, M.D., Ph.D., Vice Chair of Research and Development for the Department of Laboratory Medicine and Pathology at Mayo Clinic, described some of the requirements that a laboratory must meet to run a given assay.  For an FDA-cleared or -approved test, the laboratory must verify that it can meet the manufacturer’s performance specifications for accuracy, precision, reportable range, and reference interval.  For a modified FDA-cleared or -approved test, or for an LDT, a laboratory must establish the assay’s accuracy, precision, reportable range, reference interval, analytical specificity, analytical sensitivity, and any other relevant performance characteristics.  Test manufacturers must develop standards for how biospecimens are handled for a given assay.

Clinical utility is a separate matter from analytical validation.  Clinical utility refers to whether the test can provide information about diagnosis, treatment, management, or prevention of a disease that is helpful to patients.  CLIA requires analytical validation of an LDT but not clinical validation, while the College of American Pathologists (CAP) Molecular Pathology checklist states the need for clinical validation.  The FDA, however, is growing concerned about clinical validation, and so there may be an increased requirement for evidence of clinical validation and clinical utility from the FDA and other regulatory groups in the future.  The type of clinical validation that is required of a test will depend on the type of test.  If the test will be used to help establish a diagnosis, then determining the clinical sensitivity and specificity of the test would constitute clinical validation.  If the test will be used to help determine prognosis, then performing Kaplan-Meier curve analysis would be an example of clinical validation.  One issue going forward will be how to demonstrate clinical utility for rare diseases, where patient numbers are small enough to make clinical trials difficult.  Another concern will be educating physicians about the proper use of a given assay.
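To make the distinction concrete, the following is a minimal sketch (in Python) of the clinical validation described for a diagnostic test: clinical sensitivity and specificity computed against a reference diagnosis.  The counts are hypothetical and purely illustrative.

    # Hypothetical 2x2 counts against a reference ("gold standard") diagnosis.
    true_pos, false_neg = 90, 10    # diseased patients: test positive, test negative
    true_neg, false_pos = 85, 15    # disease-free patients: test negative, test positive

    sensitivity = true_pos / (true_pos + false_neg)   # 90/100 = 0.90
    specificity = true_neg / (true_neg + false_pos)   # 85/100 = 0.85
    print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")

A prognostic test would instead be validated against outcomes, for example by comparing Kaplan-Meier survival curves between marker-positive and marker-negative patients.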

Developing Assays in the Translational Medicine Space


Jannick Anderson, Ph.D., Senior Associate Director of Drug Discovery in the Institute of Applied Cancer Science at the MD Anderson Cancer Center, discussed whether biomarkers will be able to link drug response in preclinical models to human relevance.  Biomarkers are decision-making tools that inform go/no-go decisions.  For example, knowing that a target is expressed is a good starting point for drug development, as is a biomarker that shows that a substrate activates a targeted pathway.  Unfortunately, changes in biosignatures, as measured by gene expression, have proven more difficult to connect with clinical effect.

A parallel, integrated, and potentially more promising approach to target validation, drug discovery, and biomarker validation could involve both genetic and pharmacological target validation, biomarker discovery, and demonstration of relevance to clinical pathology.  Another promising approach for drug discovery efforts could focus on the identification of biomarkers for direct target engagement.  Unfortunately, although some work is ongoing in these areas, it is being done on a very small number of patients.  Finally, biomarker development in the academic sector would be well served by engaging a contract research organization to develop the requisite assay, thereby ensuring assay robustness.

Discussion

Several important points were made in the discussion following this panel.  In summary, to really progress, the biomarker field should learn from the venture capital and pharmaceutical sectors and require that any biomarker discovery be replicated by an independent group before the findings are published; develop standards for biospecimens in terms of their diagnostic accuracy (i.e., would pathologists agree on the initial classification of the samples); and focus the process so that the bottom line is clinical utility.  Developing a biomarker that can characterize a patient but that has no meaningful impact on that patient is a waste of resources.

The Biomarker Complexity Conundrum

The panel was introduced with a reminder that not all biomarkers are biomolecular entities, and that it is important to think of biomarkers in the context of health and wellness in addition to disease, which dominates most thinking.

Is Imaging the Ultimate Biomarker? Lessons from Alzheimer’s Disease

Eric Reiman, M.D., Executive Director of the Banner Alzheimer’s Institute, described the use of different imaging modalities as emerging biomarkers for the early detection of Alzheimer’s disease: fluorodeoxyglucose positron emission tomography (FDG-PET), magnetic resonance imaging (MRI), and novel PET ligands for identifying plaque formation in the brain.  Many of the changes visible in the images occurred years before symptoms appeared.  Thanks to these new imaging biomarkers, the Alzheimer’s field has now re-conceptualized this disease as a progression from pre-clinical disease through cognitive changes resulting from the disease to full-blown dementia.  The Banner Institute is now looking for biomarkers that can be used to accelerate the evaluation of preclinical Alzheimer’s disease treatments.  These biomarkers are essential for Alzheimer’s disease, since prevention trials would likely take longer than patent protection would last for any drug that could be developed for prevention.

There are currently ongoing clinical trials with an anti-amyloid therapy.  The first trial involves currently cognitively unimpaired individuals with a rare mutation that always leads to the development of Alzheimer’s disease when affected individuals are in their 40s.  These patients are located in Medellín, Colombia, and the study was made possible through a partnership with both philanthropy and industry.  As part of this clinical trial, investigators are examining every known imaging biomarker as a predictor of drug efficacy.  The second trial is looking at ApoE4-positive individuals who are nearing the typical age of onset.  There is a real need to identify non-imaging biomarkers in cerebrospinal fluid, with a goal of embedding them in clinical trials.  There is also a need to develop a financial model that will incentivize pre-competitive pharmaceutical development to address these types of critical prevention opportunities.

Are There Better, More Quantifiable Biomarkers Waiting to Be Discovered?

Ariel Anbar, Ph.D., a professor in both the School of Earth and Space Exploration and the Department of Chemistry and Biochemistry at ASU, gave a brief presentation that compared earth science and biomedical science.  Although the Earth and humans are different in terms of physical and temporal accessibility, they are both complex systems for which markers are key to understanding mechanisms.  In geoscience, elements and isotopes are the markers of interest, and because geoscientists have been interested in markers from “day one,” standards have grown up in parallel with markers.  In fact, three government agencies (the National Institute of Standards and Technology, the Department of Energy, and the Environmental Protection Agency) support standards development for the geosciences.

It was suggested that the biomarker discovery enterprise would be wise to add another ‘omics to its toolbox, metallomics, as inorganic biomarkers could prove very useful in biomedicine.  For example, recent work shows that changes in the ratio of two stable isotopes of calcium, 40Ca and 44Ca, in urine correlate with perturbations in the body’s calcium metabolism.  Moreover, new data have demonstrated that metallomic markers may be predictive of disease progression in multiple myeloma, metastatic breast cancer, and prostate cancer, as well as osteoporosis.  This is a relatively easy marker to study because of the body’s large calcium mass, and tools developed over the past 15 years are now making it feasible to measure other biologically important inorganic elements, such as isotopes of iron and zinc.
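For readers unfamiliar with isotope-ratio measurements, the following is a brief sketch of the standard “delta” notation that geochemists use to report such small ratio shifts; the function and the sample values are illustrative assumptions, not data from the work described above.

    # Standard per-mil delta notation from isotope geochemistry (a sketch;
    # the numeric values below are invented for illustration).
    def delta_per_mil(r_sample, r_standard):
        """Deviation of a sample isotope ratio (e.g., 44Ca/40Ca) from a
        reference standard, in parts per thousand (per mil)."""
        return (r_sample / r_standard - 1) * 1000

    print(delta_per_mil(0.021740, 0.021765))  # ~ -1.15 per mil depletion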

Big Data Problems and Biomarker Development


Andy Hospodor, Ph.D., Executive Director of the University of California, Santa Cruz, Storage Systems Research Center, provided some insights into the growing flood of data coming from ‘omics research, using lessons learned from the second Investigation of Serial Studies to Predict Your Therapeutic Response with Imaging and Molecular Analysis (I-SPY 2) clinical trial.  It is this tsunami of data that will be needed to identify biomarkers of interest for development from clinical trials such as I-SPY 2.  Each of the 800 patients in this trial is generating about 1.2 terabytes of data, for a total of about a petabyte of data over the course of the trial, which translates into about $100,000 for the disk drives to store that data and another $1 million a year to manage the data.  It turns out that moving and storing data in the cloud is more expensive than processing it.  Therefore, it was recommended that data be stored next to the supercomputer center where the data will be processed.  Although there are different views on the topic, centralization may be the right approach for a database of this size.
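The storage arithmetic behind these figures is straightforward; the following back-of-the-envelope check (in Python) uses only the numbers cited in the presentation.

    patients = 800
    tb_per_patient = 1.2                  # ~1.2 TB of data generated per patient
    total_tb = patients * tb_per_patient  # 960 TB, i.e., roughly one petabyte

    drive_cost = 100_000                  # one-time disk cost cited for the trial
    mgmt_cost_per_year = 1_000_000        # annual data-management cost cited
    print(f"{total_tb:.0f} TB total; ~${drive_cost / total_tb:.0f}/TB for drives")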

It will be important going forward to develop better methods for compressing data.  TCGA, for example, is generating huge, “dirty” datasets, and it may be possible to store only the differences between a reference genome and an individual’s genome.  The key to this type of approach will be developing the technology to improve the accuracy of whole genome sequencing.  Currently, there is not enough collaboration between computational and biological scientists aimed at understanding what data is essential to store.
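A minimal sketch of the reference-based storage idea mentioned above: keep only the positions where an individual’s sequence differs from the reference.  The toy sequences and the tuple representation are assumptions for illustration; production formats such as VCF are far richer.

    reference  = "ACGTACGTAC"
    individual = "ACGAACGTTC"

    # Store only (position, reference base, individual base) for mismatches.
    variants = [(i, r, s)
                for i, (r, s) in enumerate(zip(reference, individual)) if r != s]
    print(variants)  # [(3, 'T', 'A'), (8, 'A', 'T')], tiny next to a full genome

    def reconstruct(ref, diffs):
        seq = list(ref)
        for pos, _ref_base, alt in diffs:
            seq[pos] = alt
        return "".join(seq)

    assert reconstruct(reference, variants) == individual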

Discussion

In the discussion following this panel, several points were made: medical education has a long way to go in terms of educating physicians about how to manage and use biomarker data; an open question in the biomarker field is how to handle information that may come from biomarkers that do not inform treatment but may be relevant to other patient issues; and finally, it was suggested that the NBDA organize appropriate collaborations to bring the biomarker and information technology communities together for the purpose of identifying and pursuing opportunities to reduce the costs of data processing and storage, which impact the overall system.

Qualifying Versus Validating a Marker


Laura van ’t Veer, Ph.D., Associate Director of Applied Genomics at the University of California, San Francisco, discussed some of the technical, clinical, therapeutic, and computational challenges facing efforts to use personalized genomics in a clinical setting.  The challenge of turning data into knowledge hinges on ensuring that each patient receives the best treatment for his or her specific cancer.  In breast cancer, for example, the clinical and therapeutic challenge is to determine whom to treat and how to treat after surgery and local radiotherapy.  Today, only 25 percent of patients treated surgically and with radiation experience disease recurrence, yet 75 percent of patients receive chemotherapy.  As a result, half of all women with breast cancer receive toxic chemotherapy that will not benefit them and that may in fact harm them.

Two prognostic RNA gene expression profiling technologies, Oncotype and MammaPrint, are now being used to reduce over-treatment by up to 30 percent.  Dr. van ’t Veer worked on the latter and reviewed the steps that were taken to turn this assay from a research tool into an approved diagnostic test.  Currently, the test is CLIA registered, CAP accredited, and cleared as a 510(k) in the United States (U.S.) and seven additional countries, and it is used in Europe and Japan in clinical trials.  The test is reimbursed in the U.S. and several other countries, although there is a lack of harmonization among different countries’ regulatory agencies as to what they will accept for reimbursement of a diagnostic test compared to a drug.

A major challenge for the biomarker community is to develop tools that can evaluate new ideas and targets efficiently in settings such as the I-SPY 2 trial.  The goal of the I-SPY 2 trial is to enable dramatic improvements in knowledge turns and take real time off the clock for the development of therapies for high-risk breast cancer patients.  Currently, five drugs are being tested in parallel in I-SPY 2.  I-SPY 2 also incorporates biomarker validation.  The goal of this part of the trial is to qualify existing biomarkers for use as prognostic indicators and to identify new, exploratory biomarkers.  Through data from I-SPY 2, biomarker Investigational Device Exemption (IDE) filings completed as part of investigational new drug (IND) filings can facilitate the approval of a companion diagnostic via the PMA route.

Beyond the challenges of conducting this type of drug and biomarker trial, as previously noted, there are also numerous issues related to data analysis and storage.  Another challenge is to create a data system that can integrate clinical data and case reports with next-generation sequencing data.  The chief computational challenges for turning genomics information into clinically useful predictions of response include developing processing techniques that produce accurate datasets for storage and analysis, integrating these data into a patient management system, and providing usable access to physicians through health system data networks.

Closing Comments – Day 1


Before ending the day’s discussions, key messages that derived from the presentations, panels, and discussion session were summarized:

  • Since most diseases are complex adaptive systems, biomarkers are often emergent properties of the system, and all biomarkers should be viewed as measuring multidimensional parameters.
  • Viewed in this manner, it may be prudent to think more about discovering and developing biosignatures as opposed to individual biomarkers.
  • The reality is that biosignatures are much more of a challenge for both standards development and regulatory review by the FDA.  (It was posited that the term biosignature may prove to be a better descriptor for complex diseases, a fact the NBDA should consider in its development efforts.)
  • If a biomarker or biosignature is not clinically relevant, it should be carefully reviewed in the context of the NBDA’s mission and goals.
  • Today’s efforts are largely genome-centric, which is already producing a large number of new challenges.  Although it remains to be seen whether the genomics movement will be a rich source of new biomarkers, the NBDA should play a role in the development of standards and best practices for the field.

In closing, it was added that the NBDA’s aspirational goal is to build the organization over the next year to the point of launch, followed by operations to support programs that will have a significantly positive impact on the advancement of biomarkers and precision medicine for the benefit of patients.

Friday, December 14, 2012

Mind-Bending Presentation (and it really was)

Designing and Implementing Biomarker-Driven Clinical Trials: Reflections on I-SPY 2

Donald Berry, Ph.D., Professor of Biostatistics at the University of Texas MD Anderson Cancer Center and President of Berry Consultants, LLC, described the I-SPY 2 trial.  It took nearly two years for the IRB chairs from the various institutions that wanted to participate in I-SPY 2 to fully comprehend the concepts of the trial.  The notion of using a repeatedly measured imaging biomarker to inform an adaptive clinical trial was just too foreign for most of these clinicians to accept, even though adaptive clinical trial designs are not new.  In 2006, the FDA acknowledged that improved utilization of adaptive and Bayesian methods could help resolve the low success rate and expense of phase 3 clinical trials, and since then, numerous clinical trials have made use of Bayesian adaptive designs, including over 300 trials at the MD Anderson Cancer Center alone.  Device companies have filed over 25 PMAs based on Bayesian adaptive trials, and most of the top 40 pharmaceutical companies and many biotechnology companies now use Bayesian adaptive designs for their clinical trials.

There are several documents that support the concepts embedded in this trial.  The first was the FDA’s Critical Path Opportunities Report of 2006, which stated that there was “a consensus that the two most important areas for improving medical product development are biomarker development and streamlining clinical trials.”  The second was a 2011 FDA white paper on driving biomedical innovation, which stated that the agency would publish draft guidance on how pathological complete response, as measured by MRI, would serve as a surrogate endpoint for accelerated approval in primary high-risk breast cancer.  This white paper made specific mention of I-SPY 2 as the type of trial that could benefit from this draft guidance, which was issued in May 2012.  The value of pathological complete response, as measured by MRI, as a surrogate endpoint for drug response is illustrated by examples demonstrating the clear relationship between pathological complete response in five different subtypes of breast cancer and disease-free survival.  These data also show the importance of segmenting patients into subgroups.

The goal of I-SPY 2 is to use a factorial trial design that will identify specific patient-drug combinations to support a greater than 85 percent success rate in a subsequent small phase 3 trial.  These phase 3 trials will be performed in a population of patients who were clearly identified and shown to benefit from a specific therapy.  Patients in I-SPY 2 are stratified into fixed subsets based on hormone receptor status, HER2 status, and MammaPrint score.  Though there is a much larger number of possible signatures based on these measures, the trial restricts the subsets to 10 marketable signatures.  The sample size for each of the up to eight drugs being tested is 20 to 120 patients, with a minimum of 60 when warranted by the level of patient response.  Once patients are admitted to the trial, the investigators calculate the Bayesian probability that each drug will be more effective than the common control based on current results and use that probability, along with the biomarker status results, to assign patients to a particular study arm.
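A sketch of the adaptive-randomization step just described: each new patient is assigned to an arm with probability weighted by the current Bayesian probability that the arm beats the common control for that patient’s signature.  The posterior values below are invented for illustration; the trial derives the real ones from its longitudinal model of accumulating results.

    import random

    # Invented posteriors for one patient's biomarker signature:
    # P(arm is more effective than the common control | data so far).
    posterior_beats_control = {"drug_A": 0.62, "drug_B": 0.35, "drug_C": 0.48}

    def assign_arm(posteriors):
        """Adaptively randomize: weight each arm by its current posterior."""
        arms = list(posteriors)
        weights = [posteriors[arm] for arm in arms]
        return random.choices(arms, weights=weights, k=1)[0]

    print(assign_arm(posterior_beats_control))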

Drugs are dropped from the study or graduated to a potential phase 3 trial according to a set of criteria.  For each possible biomarker signature, the Bayesian probability that a test drug will be more effective than the control is calculated.  If the Bayesian predictive probability of success in a 300-patient phase 3 trial is less than 10 percent for all signatures, the drug is dropped.  If the probability of success is greater than 85 percent for some signatures, the drug graduates, with a six-month delay before the results are announced.  At graduation, the trial organizers provide the drug sponsors with a predictive probability of phase 3 success for each signature.  In reality, I-SPY 2 is a never-ending screening process that drops and adds drugs based on ever-accumulating data.  I-SPY 2 is sponsored by the Foundation for the National Institutes of Health, the Quantum Leap Foundation, and the FDA and is supported by the Safeway Foundation and various industry and academic partners.  The trial is coordinated with the FDA, includes 20 centers, and has treated 300 patients so far.  Today, seven drugs from multiple companies are in the trial.
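
The drop/graduate rule can be sketched in the same style: future 300-patient phase 3 trials are simulated from the current posterior, and the fraction that succeed is the Bayesian predictive probability compared against the 10 percent and 85 percent thresholds.  The Beta-Binomial model, the interim counts, and the success criterion (a one-sided pooled two-proportion z-test) are illustrative assumptions, not I-SPY 2's actual algorithm.

    # Minimal sketch (hypothetical, not the I-SPY 2 algorithm): Bayesian
    # predictive probability of success in a future 300-patient phase 3 trial,
    # computed by simulating that trial from the current Beta(1, 1) posteriors.
    # Requires numpy and scipy.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)

    def predictive_prob_phase3(r_drug, n_drug, r_ctrl, n_ctrl,
                               n_phase3=300, alpha=0.025, n_sims=5_000):
        wins = 0
        per_arm = n_phase3 // 2  # assume 1:1 randomization in phase 3
        z_crit = norm.ppf(1 - alpha)
        for _ in range(n_sims):
            # Draw plausible true response rates from the current posteriors...
            p_d = rng.beta(1 + r_drug, 1 + n_drug - r_drug)
            p_c = rng.beta(1 + r_ctrl, 1 + n_ctrl - r_ctrl)
            # ...then simulate the phase 3 outcome and apply a one-sided
            # pooled two-proportion z-test as the (assumed) success criterion.
            x_d = rng.binomial(per_arm, p_d)
            x_c = rng.binomial(per_arm, p_c)
            p_pool = (x_d + x_c) / n_phase3
            se = np.sqrt(2 * p_pool * (1 - p_pool) / per_arm)
            if se > 0 and (x_d - x_c) / per_arm / se > z_crit:
                wins += 1
        return wins / n_sims

    # Hypothetical interim counts for one drug/signature pair.
    pp = predictive_prob_phase3(r_drug=18, n_drug=40, r_ctrl=10, n_ctrl=40)
    if pp < 0.10:
        print(f"{pp:.2f}: drop for this signature")
    elif pp > 0.85:
        print(f"{pp:.2f}: graduation candidate for this signature")
    else:
        print(f"{pp:.2f}: continue accruing")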

In addition to moving better therapies through trials faster and generating the savings that accrue from using one common control arm, the I-SPY approach produces matched sets of drugs and biomarker signatures.  Successful drug/biomarker pairs graduate together to small, focused, more successful phase 3 trials based on Bayesian predictive probabilities.  Currently, I-SPY-type trials are being planned for lymphoma, melanoma, colorectal cancer, Alzheimer’s disease, HIV, and acute heart failure, and it is likely that the I-SPY approach would also work for rapidly testing influenza drugs.

For lymphoma, there are four billion possible signatures, so the trial design team used modeling and covariate analysis to select five that represent significant population sizes.  These signatures will be adjusted and constructed dynamically based on Bayesian analysis of interim results.  The same approach could be used to power a tumor-agnostic trial, and it would also provide information on the heterogeneity of tumors based on how the signatures are compiled.  This Bayesian approach may also be useful for capturing tumor adaptation, and if so, evolutionary theory could add power to the models used to assemble the signatures.  It would be a significant statistical challenge, but this could be a fruitful avenue to investigate.

Panel Discussion - Biomarker-Driven Clinical Trials: The Last Stop on the Way to Regulatory Submission

Karen Anderson, M.D., Ph.D., Associate Professor in the ASU Biodesign Institute, chaired the panel discussion (which included four short presentations), asking the panelists to consider two questions: How do we get biomarkers into clinical trials, and how do we ensure that they impact clinical care?  One advantage of the never-ending adaptive clinical trial design is that it eliminates the startup time of follow-on clinical trials; however, while this approach works with predictive biomarkers, it may not work with diagnostic and other types of biomarkers.  One impediment to clinical trial design for these other types of biomarkers is the lack of established metrics and required endpoints, which still must be developed.  There is also no clear picture of the economic value of the various types of biomarkers, so it remains unclear what the financial constraints are for a successful biomarker.

Biomarkers for Colorectal Cancer

Raymond DuBois, M.D., Ph.D., Executive Director of the Biodesign Institute, suggested that colorectal cancer could be ripe for the application of diagnostic biomarkers given that the range of ages at which people are affected is well-defined, which limits the number of individuals who need to be screened.  Today, colonoscopy is the technology of choice for detecting colorectal cancer, but a number of molecular candidates are in clinical validation or preclinical development as well.  There are also well-defined genetic changes associated with the various stages of colon cancer development, and a clinical trial is underway looking at these genes as predictors of drug response.  Colorectal cancer would be an ideal case for using an adaptive Bayesian trial to test the predictive value of these biosignatures in combination with the multiple drugs that are available or being developed for this disease.  There are huge potential savings that could result from using biomarkers wisely.  Using KRAS status, for example, to direct therapeutic decisions could save over $400 million per year by not treating patients who will not respond to anti-EGFR therapy.  Though the benefit of colonoscopy is undeniable, the impact of identifying a biomarker that could detect early adenomas would be even greater.

Biomarkers for Multiple Myeloma

Rafael Fonseca, M.D., Professor of Medicine at Mayo Clinic, observed that clinicians do not need certainty but do want clues.  For multiple myeloma, for example, there are biomarkers that are absolutely associated with disease and clonal progression, as well as proteomic markers that clearly indicate the presence of multiple myeloma cells.  Multiple myeloma proceeds from a presymptomatic condition, and research is ongoing to determine how various markers correlate with the progression of this presymptomatic state to full-blown disease.

For prognostic markers, a number of candidates are moving into the clinic.  One marker appears to have value in identifying patients with high-risk disease who have a low probability of survival.  Genome sequencing has shown that multiple myeloma is a disease of many subtypes.  Given that phase 3 trial methodology is not working for this disease, multiple myeloma may benefit greatly from adaptive trial design.  One aspect of the disease that makes it challenging to treat is that treatment itself creates a new disease by differentially killing the many different clones present in each patient.  Accounting for the rapidly changing identity of multiple myeloma represents a significant challenge to any biomarker discovery or drug therapy trial design.

Biomarkers for GBM

Michael Berens, Ph.D., Professor and Director of the Cancer Cell Biology Division at TGen, discussed the status quo for glioblastoma (GBM): the disease is characterized by a variety of molecular subtypes, identified by methylation, gene expression, and gene mutations, that can be binned into three subgroups.  Each subgroup is defined by a particular median survival, and failure to account for the three subtypes creates confusing clinical trial results.

It should be possible to use these subtypes to design better clinical trials that are driven by biomarkers.  Two promising approaches would be to use “futility” biomarkers to provide evidence that a therapy is having no therapeutic effect, and to use subtype-classifying biomarkers to inform clinical trial enrollment and enable an adaptive clinical trial design for GBM.  Dr. Berens’s group is also developing a phase 0 trial to determine whether potential GBM agents can cross the blood-brain barrier in individual patients.

Biomarkers for Diabetes

Randy Nelson, Ph.D., Research Professor and Director of the Molecular Biomarkers Center at the Biodesign Institute, discussed whether the field can move from a tumor-agnostic model to a disease-agnostic model for biomarker development.  This is a key issue in diabetes because FDA guidance says that any drug for diabetes must be evaluated in the context not only of diabetes but also of other conditions, such as cardiovascular risk.  As a result of this requirement, his group is attempting to identify “sick” proteins that signal the presence of disease.  As more people are examined and more proteins identified, it is becoming possible to paint a picture of biosignatures that reflect movement from health to diabetes and then to comorbid conditions.  Once such signatures are identified, the question will be how to move them into industry to enable drug development and improve clinical trial designs.

One of the biggest barriers to enabling better clinical trials is the large gap between ad hoc academic studies and strategic industrial deployment.  Earlier collaboration between academia and industry could close this gap, which in turn could reduce development time, ease the use of biomarkers in drug development, and decrease the failure rate of biomarkers.

Recommendations for the NBDA and the Field

In the workshop’s final activity, the participants broke into three working groups.  The groups were each asked to address three questions in the context of the early, middle, and late stages of biomarker development.  Early stage development encompasses early discovery in which biology is verified in patient samples and translatable discovery that establishes clinical measures.  The middle stage of development includes robust assay development and analytical validation of assay performance.  Late stage development includes “fit for clinical purpose” biomarker qualification and clinical biomarker validation.

The three questions were:

  1. What biomarker and/or biomarker class and/or disease(s) would be ideal for the NBDA to demonstrate proof of concept with a level of evidence that could inform decision making to advance a biomarker from one stage of development to the next?
  2. Who (sectors/partners) should the NBDA engage to accomplish #1?
  3. What other resources—infrastructure, data sets, access to patients, FDA guidance and others—will the NBDA need to achieve #1 and #2?

Early-stage development

For cancer, this group thought that DNA would be a good analyte because of its stability and its ability to provide a clear readout.  The group members also thought that imaging should be part of any biomarker development program in cancer.  It was also suggested that it may be possible to tap into the large number of DNA banks, such as those being established by Kaiser Permanente and Vanderbilt University, as a resource.  In particular, the group recommended multiple myeloma as a good candidate to study because there is a defined population that is not responding to treatment.

This group also suggested that Alzheimer’s disease would be a good area of focus for two reasons: (1) the pressing need for biomarkers for all aspects of this disease, and (2) the extensive collection of tissues available for study thanks to the efforts of the Banner Alzheimer’s Institute, the Arizona Alzheimer’s Consortium, and a number of national organizations.  The NBDA could have a major impact by developing standard operating procedures (SOPs) for biospecimen harvesting and storage to reduce preanalytical variability.  In particular, the group suggested starting with SOPs developed by Mayo Clinic.

In terms of whom the NBDA should engage, the group cited active clinicians, radiologists, and both basic and translational scientists.  It will be important to engage the manufacturers of products used to collect and store samples, as well as industrial partners for specific diseases.  It would probably be useful to engage NIST and its new biologics division to help develop metrics and standards.  It would also be important to collaborate with Roche Diagnostics.  Finally, it was suggested that the FDA would be open to having a group such as the NBDA organize activities in support of standards development, which could in turn guide biomarker development.

Additional thoughts/comments:

  • The group was agnostic about what was selected as a biomarker for proof of concept, but whatever the biomarker, it must be clinically relevant.
  • The biomarker discovery process must start with quality biospecimens and derivative analytes.  The NBDA can take steps to solve this problem by assembling/developing standards/best practices/etc., for biospecimen collection, processing, and storage.
  • The NBDA should examine what is already working in the field, e.g., I-SPY 2, to provide a firm foundation for reproducible biomarker discovery.

Mid-Stage Development

This group stated that any effort to define biosignatures needs to start with a clinical question and define the specific endpoints that need to be addressed.  It suggested that the NBDA become a partner in the I-SPY 2 trial in order to feed exploratory biomarkers into the pipeline.  It also recommended taking a tumor-agnostic approach that would look exhaustively at perturbed pathways across tumor types, particularly the Wnt/β-catenin and PI3 kinase pathways.  This group also said that the NBDA should work with the FDA to define mechanisms for biomarker qualification and validation.  The key will be to define levels of evidence that include biological relevance and functional validation using patient-derived xenografts and other approaches.

The group felt that partners for the NBDA should include pathologists, innovative statisticians, the FDA, and representatives of the pharmaceutical industry.  It was also suggested that payers must be included so that, when the pipeline starts producing clinically validated biomarkers and assays, payers are already informed about assay performance characteristics and standards.

In terms of other resources, this group said that the NBDA should establish a precompetitive process for access to tissues and data across different tissue banks.  This process should be more interactive than access to TCGA data currently is, and the databank should be populated with patient data, including data on patients who fail therapy.  Doing so will require a strong information technology infrastructure.

Additional thoughts/comments:

  • Define the clinical question that a biomarker should answer; that question will in turn define the metrics the biomarker must meet for eventual use.
  • Start thinking about intellectual property now and get those policies in place.
  • Before developing an assay, demonstrate an understanding of the biology involved.
  • Define standards of performance.
  • Look for biomarkers in pathways that are tumor agnostic.

Late-Stage Development

The group was agnostic regarding the class of biomarker and the disease as long as a biomarker or biosignature that reaches this stage meets defined criteria for feasibility of adoption, clinical relevance and utility, and technical validity.  The group recommended that the NBDA take a parallel approach that includes partnering in an I-SPY 2-type trial and conducting a trial in the unmet-medical-need space, noting that the latter is a place where the NBDA could make a big impact.  In terms of criteria for development, the group said that candidate biomarkers for late-stage development should meet metrics for feasibility of adoption (including a low analytical cost), clinical relevance and utility (including fit-for-purpose clinical utility), and technical validity.

This group further discussed the resources that would be needed to move biomarkers through late-stage development.  In particular, high-quality biospecimen banks, data storage and analytics, metrics of success, and standard operating procedures for the entire late-stage process were viewed as critical.  It was also emphasized that the NBDA should work with patient groups to maximize patient recruitment and ensure that clinical trials are appropriately powered.  Clinical trials should also include pharmacoeconomics, particularly for biomarkers that address an unmet medical need.  The group also recommended that one valuable activity for the NBDA would be to define a glossary for “biomarker,” “biosignature,” “validation,” “verification,” and other terms.

Additional thoughts/comments:

  • Be agnostic in terms of biomarker class and disease, but ensure that the strategy addresses an unmet medical need—and choose an initiative where the NBDA can have a significant impact.
  • Make sure that all participants are equal partners in precompetitive areas of investigation.
  • Engage advocates to speed the process.
  • Develop assays with minimum complexity to increase the feasibility of adoption.
  • Focus on the term “biomarker” rather than “biosignature” because of regulatory concerns about the complexity of the latter.

Discussion

The workshop participants engaged in a lively discussion following the three group reports.  During this discussion, it was noted that a potential strength of the NBDA is that it can catalyze both the technical and the equally important organizational change needed to have a major impact on biomarker development.  The participants agreed that the NBDA should highlight this point as it launches its efforts, since doing so will help recruit the individuals and organizations most likely to help change the world of biomarker development.  They also stressed that, at the same time, the NBDA has a tremendous opportunity to influence the back end of biomarker development by working with the FDA, CMS, and international regulatory agencies, all of which are still learning how to regulate biomarkers.  One participant noted that the NBDA should start a discussion about remote biomarkers and real-time sensors.

Reiterating an earlier point, it was suggested that it might be interesting to have groups compete in a constructive manner as a means of establishing specific standards.  Another idea was that working in the precompetitive space could be advantageous: a biomarker could be approved once and then used in many products, rather than having assays approved one at a time for a single product and then seeking approval again for the next product.

Closing Comments

Dr. Barker closed the workshop by stating what the NBDA will not become: the NBDA will not be a pharmaceutical or diagnostics company, an academic organization, or an advocacy group, nor will it be part of any such organization.  Rather, the NBDA will serve as a convener, building networks of stakeholders who can analyze and identify critical barriers, issues, and problems at all phases of the biomarker development pipeline and work together to seek solutions.  The NBDA will integrate expertise, data, knowledge, resources, and capabilities to both develop standards and enable decision making at key transition points in the process.  The overall goal of the NBDA’s efforts is to enable the paths to achieving the much-hyped precision medicine that has become our shared vision for the future.  The NBDA will engage industry, the FDA, payers, investors, and patient groups to become partners and participate in the early stages of the NBDA’s development and subsequently in its operations.

To move from where we are now in biomarker science to where we must ultimately be will require predictable, standards-based processes for the development of biomarkers and, ultimately, more complex biosignatures.  “If we don’t unite to achieve these goals, then the NBDA will not achieve its mission,” she reiterated.  The NBDA will serve as the enabling construct for a new movement that unites organizations and people to create evidence-based, end-to-end models for biomarker development.  Where high-quality information already exists, the NBDA will not reinvent wheels.  To achieve its goals, the NBDA will have an infrastructure, staff, and resources and will engage trans-sector knowledge networks.  She also affirmed that the NBDA will work with the FDA to share findings and will include representatives from the FDA in future workshops.

In closing, Dr. Barker reminded everyone that when Francis Collins initiated the Human Genome Project, and when she and Dr. Collins, who was then Director of the National Human Genome Research Institute, started TCGA, many opined that it was impossible, that it was too costly, that nobody would share data or resources or follow standardized procedures, and that the necessary culture changes were just too big.  “We will hear the same arguments for the NBDA, but those arguments were not acceptable then, nor are they now.  We can change the world, and in doing so, we will hopefully enable game-changing progress in the development of biomarkers, with all that means.  The NBDA is a natural solution to a mind-numbing problem that will truly benefit patients and impact the quality and cost of healthcare.”  She thanked the attendees on behalf of the Organizing Committee and reiterated that the participants in this and future NBDA workshops would achieve the seemingly impossible task of collaboratively defining the real barriers and problems in biomarker development and exploring what the NBDA must do to create solutions.

It was agreed that the next workshop would be held early in 2013 to begin the process of barrier identification and assembly of information and knowledge around the affected “modules” of the pipeline. The overall analysis will begin with early and translatable discovery.

Links to Speakers

A Status Report on Biomarkers
Dr. George Poste, D.V.M., Ph.D.

Panel: Translatable Biomarker Discovery
Dr. Carolyn Compton, M.D., Ph.D.

Robust Biomarker Discovery
Dr. George Vasmatzis, Ph.D.

Multi-Generational Data Sets
Dr. Joe Vockley, Ph.D.

A View from Industry
Dr. Anahita Bhathena, Ph.D.

Panel: Assay Development and Performance
Dr. Robert Penny, M.D., Ph.D.

Lessons Learned from HER-2
Dr. Abigail McElhinny, Ph.D.

Complex Assay Development
Dr. Kevin Halling, M.D., Ph.D.

Developing Assays in Translational Medicine
Dr. Jannik Andersen, Ph.D.

Lessons from Alzheimer's
Dr. Eric Reiman, M.D.

Are There Better Biomarkers?
Dr. A. D. Anbar, Ph.D.

Big Data Problems
Dr. Andy Hospodor, Ph.D.

Qualifying vs. Validating
Dr. Laura van't Veer, Ph.D.

Reflections on I-SPY 2
Dr. Donald Berry, Ph.D.

Biomarker-Driven Clinical Trials
Dr. Karen Anderson, M.D., Ph.D.

Biomarkers for Colorectal Cancer
Dr. Raymond DuBois, M.D., Ph.D.

Biomarkers for Multiple Myeloma
Dr. Rafael Fonseca, M.D.

Biomarkers for GBM
Dr. Michael Berens, Ph.D.

Biomarkers for Diabetes
Dr. Randall Nelson, Ph.D.

Working Groups:
Early-stage Development
Mid-stage Development
Late-stage Development
