Real-world evidence (RWE) offers huge potential to change how clinical trials are run. Today's advanced technologies, such as artificial intelligence platforms, commoditized data collection tools, and large integrated healthcare IT infrastructures, allow us to go one step further in collecting and utilizing data. However, there are clearly still significant hurdles to the full implementation of RWE in clinical trials. Here we look at three case studies in which real-world evidence was successfully used in clinical trials, as well as what data can be used in RWE studies.
Case Study 1: Salford Lung Studies
The innovative Salford Lung Studies are often showcased as a great example of an RWE-based study. The GSK-sponsored study examined the safety and effectiveness of a new treatment for chronic obstructive pulmonary disease. It involved 2,802 patients treated by their own GPs in everyday clinical practice. The study had minimal exclusion criteria: 90% of screened patients were included in the study to replicate a representative sample of everyday practice. The goal of the study was to collect data that reflected medicine-taking behaviors, and to collect valid data with minimal disruption to normal care, while enrolling a large, generalizable proportion of the eligible population. The study was deemed a success: 93% of participants remained for its duration, and a number of conclusive clinical results were attained.
Case Study 2: Avelumab approval
Real-world evidence also played a key role in the recent approval of avelumab, a monoclonal antibody used to treat metastatic Merkel cell carcinoma (mMCC), a rare, aggressive skin cancer. Since there is no standard of care for mMCC, investigators used data generated from electronic health records (EHRs) on observed clinical outcomes in a patient population that received chemotherapy to establish a benchmark for chemotherapy efficacy in a real-world setting. Researchers identified patients who responded to avelumab and documented the benefits by comparing them against the benchmark data. Last year, the FDA granted accelerated approval to avelumab.
Case Study 3: PatientsLikeMe and amyotrophic lateral sclerosis (ALS)
RWE data can also be used to save costs and concentrate clinical studies on more promising candidates. The case of PatientsLikeMe and amyotrophic lateral sclerosis (ALS) is an excellent example. PatientsLikeMe is an important personalized healthcare network with a large ALS community. Using its database, researchers found that 9% (348) of patients with ALS in the PatientsLikeMe community reported using lithium carbonate, a drug which had shown promise in a small study (16 treated patients, 28 controls) but which did not have regulatory approval. This created an opportunity to run a larger observational study, using data gathered from these patients, which was matched to multiple control groups. In the end, no difference in disease progression was observed after 12 months between the overall study group and those patients in the lithium carbonate treatment group (78 patients). Subsequent randomized studies reached the same conclusion: there was no clinical effect in the overall population.
Which data can be used in RWE studies?
Not all data is acceptable for RWE studies. Hence, it is the submitter's responsibility to show that the data is of sufficient quality. One way to do that is to follow the Kahn framework, which consists of three components:
- Conformance – does the data comply with specified standards and formats?
- Completeness – are the expected data attributes actually present in the data set, and how often?
- Plausibility – are the recorded data values believable and truthful?
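To make the three components concrete, here is a minimal sketch in Python of how such checks might be applied to patient records before they enter an RWE analysis. The field names, the simplified ICD-10 pattern, and the range thresholds are all hypothetical, chosen purely for illustration; a real pipeline would use the applicable coding scheme and clinically validated rules.

```python
# Illustrative sketch (hypothetical fields and rules): applying the three
# Kahn-style data-quality checks -- conformance, completeness, plausibility --
# to a small batch of patient records.
import re

RECORDS = [
    {"patient_id": "P001", "icd10": "J44.9", "age": 67, "fev1_pct": 48},
    {"patient_id": "P002", "icd10": "J449", "age": 71, "fev1_pct": None},
    {"patient_id": "P003", "icd10": "J44.1", "age": 203, "fev1_pct": 55},
]

# Deliberately simplified stand-in for a real ICD-10 code validator.
ICD10_PATTERN = re.compile(r"^[A-Z]\d{2}\.\d$")

def conformance(record):
    """Conformance: does the diagnosis code match the expected coding scheme?"""
    return bool(ICD10_PATTERN.match(record["icd10"] or ""))

def completeness(records, field):
    """Completeness: fraction of records where the field is present (non-missing)."""
    present = sum(1 for r in records if r.get(field) is not None)
    return present / len(records)

def plausibility(record):
    """Plausibility: are the values believable? Simple range checks as an example."""
    return (0 <= record["age"] <= 120
            and (record["fev1_pct"] is None or 0 < record["fev1_pct"] <= 100))

for r in RECORDS:
    print(r["patient_id"], conformance(r), plausibility(r))
print("fev1_pct completeness:", completeness(RECORDS, "fev1_pct"))
```

Running this sketch flags P002 for a non-conformant code, P003 for an implausible age, and reports a completeness rate of two-thirds for the FEV1 field, which is the kind of summary a submitter could use to demonstrate data quality.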
The issues around conformance are crucial, as data submitted to regulatory agencies needs to follow the appropriate data standards. For an organization to be able to work with RWD across multiple sources, data may need to be put into a common format, with common representation (terminologies, vocabularies, coding schemes). The FDA recognizes the importance of developing data standards to maximize the utility of RWD and is working on identifying relevant standards and methodologies for the collection and analysis of RWD. Currently, the FDA and EMA both already have a number of guidance documents on the use, sharing, and storing of electronic data. They provide recommendations on how to capture and store data, as well as on the authenticity and reliability of data storage. This can include implementing audit trails for electronic records and archiving records that are pertinent to clinical investigations.
Completeness has long been seen as a roadblock to RWE adoption: critics have long believed that consumers left to themselves are likely to submit incomplete data, or to forget or omit information from their records. This is why the FDA requires that patient registries used in RWE studies have sufficient processes, such as those to gather follow-up information when needed, to ensure data quality and to minimize missing or incomplete data.
Plausibility focuses on the believability of data. Where the first two criteria focus on the structure of data (conformance) and its presence (completeness), plausibility is uniquely concerned with the actual values being shared. This is another long-standing roadblock to the adoption of RWE, as critics fear that patients would fudge values out of self-interest rather than reporting truthfully. The automation of data collection (through wearable devices, for example) alleviates this concern, as long as proper guidelines to validate the source and reliability of the data are supplied.
About the Author: Jean-Francois Denault, MBA has been working with innovators and entrepreneurs in life sciences as a professional consultant for over fifteen years. He has worked with over 50 different clients in life sciences located all over the world. He has written a dozen articles for various publications and is the author of two life sciences marketing handbooks.