57 AI-Powered Opportunities in Clinical Trials
The only consumer vertical where the user really doesn’t want to be a user.
The story of a client and his consultant
Clinical trials are run by professional service organizations called CROs (contract research organizations).
There are two kinds of consultants in professional services—good ones and bad ones.
Ask a good consultant what time it is, she’ll glance at your watch and say, “It’s 5 PM, Danny,” bill you $1,000, and leave.
Ask a bad consultant what time it is, and she’ll do the same—except she’ll steal your watch, take the $1,000, and run.
That’s a story from the CEO of one of my clients who built a multi-million-dollar construction project management business in Iran during the Shah’s time. He sold a lot of professional services to the Iranians.
It stuck with me because it reveals a fundamental truth about consulting: value is often misaligned with need.
And nowhere is this misalignment more evident than in clinical trials.
Introduction
Most consultants steal your watch to tell you what time it is.
Most management books teach systems so you don't screw up.
I have plenty of experience at that, and so does every other entrepreneur.
The only problem is that no one tells you what to do once you screw up.
In this week's edition of "Turn expertise to freedom", I'll talk about 57 clinical trial screw-ups.
And how each one is a business opportunity.
In 2020 We All Became Clinical Trial Followers
Starting in February 2020, with the outbreak of the COVID-19 pandemic, the importance and relevance of clinical trials to our daily lives suddenly became clear. Billions of people hung on every shred of news about Pfizer, AstraZeneca, and Moderna's progress toward a safe and effective vaccine. Discussion, online and in person, peaked.
When the AstraZeneca vaccine trial fumbled by mis-dosing 20% of its patient population, there were those who rejoiced and those who were dismayed.
How could such a well-run and sophisticated pharmaceutical company with great information systems get something as basic as dosing wrong? I’m not an insider and I don’t know.
The bottom line is that AstraZeneca did not make it to the FDA finish line with Moderna and Pfizer.
Clinical trials are complex scientific experiments with thousands of moving parts.
Vaccine trials happen to be relatively simple compared with oncology trials, but they are still complex operations.
A lot of things can go wrong.
Perhaps what happened to AZ is on my list of 57 ways clinical trials can go wrong.
Perhaps what happened to AZ was a Black Swan.
One man's problem is another man's business opportunity
My name is Danny Lieberman and I’m unretired.
I analyzed the problems encountered by our customers in 70+ clinical trials in Europe, Israel, the US, India, and Brazil over a five-year span. We collected over 10M signals on our platform, which gives me some perspective on the issues in clinical trial operations.
I found 57.
These issues are process- and project-management-related, and they are good fits for expert humans in clinical research to solve with AI assistance.
Here goes.
57 opportunities to turn your expertise into an AI-driven one-person venture
Before I get started on my list of 57 ways to screw up a clinical trial (including trials sponsored by major drug companies like AZ and run by major CROs like IQVIA), I'd like to start with an observation:
Clinical trials are the only consumer vertical where the user really doesn’t want to be a user.
The patient is the only stakeholder in the mix who really doesn't want to be there.
This is unlike mobile phones, computers, and airline flights, where end users really want the products and competing vendors fight to get the user the best product at the best price.
The first problem, and the elephant in the room, is the primary objective of the study:
Clinical research should ultimately improve patient care.
For this to be possible, trials must evaluate outcomes that genuinely reflect real-world settings and concerns.
However, many trials continue to measure and report outcomes that fall short of this obvious requirement.
A clinical trial is a scientific experiment designed to test the safety and efficacy of a treatment on humans.
If the drug company doesn't nail the planned safety and efficacy outcomes, its mission fails.
For more on this, see "Why clinical trial outcomes fail to translate into benefits for patients".
2. Uncertainty regarding choice of CRO and vendors. Even though trials are years in the planning, vendors are usually selected a month before the first patient is recruited.
3. Uncertainty regarding the regulatory pathway of the treatment; i.e., how do you intend to get the FDA/EMA to approve your treatment, and what data do you actually need? What data do you need for reimbursement, and how do you collect it during the trial?
4. Uncertainty regarding the SAP (statistical analysis plan). You'd be surprised how many sponsors gloss over this.
5. Uncertainty regarding the data management plan.
6. Uncertainty regarding the study design/protocol (see outcomes above). In the 70+ studies I was involved in, all had at least one protocol amendment, often in the first 3 months of the trial.
7. Uncertainty regarding the case report forms/clinical data model. You're a medical scientist, not a computer scientist. The case report forms used for data collection rarely match the precise data requirements in the clinical protocol, because the protocol itself is a Word document, not a data-modeling document.
8. Uncertainty about when the study will start and end. Yes, the protocol has something to say about this, but it can be fairly fluid.
9. Uncertainty about the probability of success. The major decisions about a $100M study are made by a small group of very senior executives in the drug company. Decisions are affected by cognitive biases: confirmation bias, self-serving bias, availability bias, timing bias.
10. Uncertainty about when IRB (ethics committee) approval will be received.
11. Uncertainty regarding site selection. Surprising things can happen, like wanting to use sites in the country of Georgia and then getting stuck for 3 months because national elections shut down the government.
12. Uncertainty regarding patient recruiting.
13. Waiting until the last month or two before IRB submission to get quotes from vendors.
14. Vendors working with external clinical consultants who are not the decision makers (see above), which creates problems and conflicts of interest.
15. Uncertainty regarding the total cost of ownership of the project.
16. Being unsure about the impact of changes.
17. Trying to get the lowest price, and getting the lowest quality as well.
18. A quote process that takes time and management attention.
19. Delays agreeing on contracts with the eClinical vendors (there are usually 4-5).
20. Delays signing contracts due to lawyers.
21. Lack of attention to complex and lengthy vendor-assessment processes; they're hard, and executive management is busy running the company. Not every project is COVID-19 with money no object.
22. The sponsor changes the protocol/case report forms after receiving the vendor quotes.
23. The sponsor's clinical team decides to change document/protocol names.
24. This creates mismatches between the protocol and the case report forms.
25. And it generates lots of email traffic, which creates more confusion and delays.
26. Uncertainty regarding the risk assessment of the study protocol, ranging from "What is a risk assessment?" to "Who will do it? Who will fund it? How long will it delay the project, and the drug to market?"
27. The sponsor, surprisingly, may lack clarity regarding GCP (Good Clinical Practice) risk and how to mitigate it.
28. The best practice in the industry is looking at pieces of paper and updating Excel. What kinds of alerts on GCP violations does the sponsor need during the clinical trial?
29. What data management reports are needed to run the study?
30. Is there an issue of accountability because you are working with an external consultant who may not be available for timely resolution of questions?
31. Is there vagueness regarding patient randomization and the drug supply chain?
32. Is there vagueness regarding data validity checks in the database?
33. Is the patient-reported outcomes app not validated with patients before onboarding?
34. Are the case report forms not validated with site coordinators/investigators before being onboarded?
35. Does the drug company have a clear picture of the cycle time for designing the clinical database model?
36. What about additional cycle times for developing case report forms, data management reports, and data extracts for the biostatisticians?
37. How long do you have to wait for the vendor to develop and QC data validation checks?
38. And what about internal EDC vendor team delays?
39. Do you have a perception that the vendor EDC team is not updating you on a regular basis? You are probably right in assuming they have staffing issues.
40. Do you think your clinical trial is delayed because 30% of your CRAs left during the trial? Ya think?
41. Are you running delayed user acceptance testing of the eClinical platform because your team is blocked with other clinical operations issues?
42. Are you unclear about the delivery-time impact of changes to the clinical protocol and case report forms during development?
43. Did you just now think about multi-lingual support for non-English-speaking patients (Spanish, Russian, Arabic, Amharic…)?
44. You never asked what additional data reporting tools the vendor could provide that would save you time and money downstream.
45. Did you ever define roles and responsibilities for a clinical data management team?
46. You know you need training on the alphabet soup of the clinical trial platform supplied by 6 different eClinical vendors: EDC, ePRO, CTMS, eCOA, CTMF, RTSM. How and when will it all happen?
47. Are you waiting until the last minute for training and then going to your research sites with a baseball bat?
48. Did you get insufficient training on user management, slowing you down when you need to onboard new users?
49. Did some of your sites and investigators miss the training sessions and the makeups?
50. Your clinical research sites don't have single sign-on, so they have to deal with Post-its and multiple logins, slowing them down in caring for patients.
51. Your site coordinators don't know how to use the patient mobile app themselves.
52. At some point you will have delays due to technical issues, with the statistician unable to parse data extract files.
53. You might discover at the end of the clinical trial that you did not collect the primary efficacy endpoint, safety endpoints, or Quality of Life data you need for reimbursement approval. Argh.
54. The clinical protocol amendments were not properly documented and implemented in the EDC. Now you have issues reconciling the misconfigurations and squaring away the statistical analysis, since the data structure at the beginning of the study does not match the data structure at the end.
55. Your investigators and study monitors are not able to log in because accounts were locked, or they forgot how to use the system, because it's 6 months after training.
56. You need reports on patient compliance, and you need someone else to extract the data, process it, and prepare the report. If you think you are being held hostage by your CRO, you are right.
57. The data manager assigned by the CRO to manage your data and produce reports left 2 weeks before the end of the study. Talk about being up the creek without a paddle.
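A recurring theme in this list is data that gets checked by eyeballs and Excel instead of by software. As a minimal sketch of the kind of automated edit check a single expert could build with AI assistance, here is a hypothetical example in Python. The field names, the 7-day visit window, and the required endpoint fields are all invented for illustration; in a real study they would come from the protocol and the data management plan.

```python
from datetime import date

# Hypothetical edit checks over an EDC export, represented here as a list of
# dicts (one per visit record). All field names are invented for illustration.
PROTOCOL_VISIT_WINDOW_DAYS = 7  # assumed allowed deviation from the visit schedule
REQUIRED_ENDPOINT_FIELDS = ["primary_efficacy_score", "adverse_events_reviewed"]

def check_record(rec):
    """Return a list of human-readable findings for one visit record."""
    findings = []
    # Visit happened outside the protocol-defined window: a possible deviation.
    planned, actual = rec["planned_date"], rec["actual_date"]
    if abs((actual - planned).days) > PROTOCOL_VISIT_WINDOW_DAYS:
        findings.append(
            f"{rec['subject_id']}: visit outside {PROTOCOL_VISIT_WINDOW_DAYS}-day window"
        )
    # Required endpoint data is missing: the screw-up nobody notices until lock.
    for field in REQUIRED_ENDPOINT_FIELDS:
        if rec.get(field) in (None, ""):
            findings.append(f"{rec['subject_id']}: missing required field '{field}'")
    return findings

def run_edit_checks(records):
    """Flatten findings across all records, ready to email or post to a dashboard."""
    return [f for rec in records for f in check_record(rec)]

# Demo data: one out-of-window visit, one missing primary endpoint.
demo = [
    {"subject_id": "S001", "planned_date": date(2024, 3, 1),
     "actual_date": date(2024, 3, 12), "primary_efficacy_score": 42,
     "adverse_events_reviewed": "yes"},
    {"subject_id": "S002", "planned_date": date(2024, 3, 1),
     "actual_date": date(2024, 3, 3), "primary_efficacy_score": None,
     "adverse_events_reviewed": "yes"},
]

for finding in run_edit_checks(demo):
    print(finding)
```

The point is not the thirty lines of code; it is that each rule encodes a piece of clinical operations expertise, and running the rules nightly against the EDC export turns "looking at pieces of paper" into an alert in someone's inbox.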
Conclusion
Each of these 57 failures represents a problem that a person with expertise can solve.
If you’ve worked in drug development, you’ve probably seen most of these issues firsthand.
You already know what’s broken.
With your expertise and AI tools, you can fix issues at scale.
This is an opportunity for you to build a one-person, AI-driven venture that solves a real problem — without requiring massive teams or investment.
Want to see how to turn one of these 57 failures into a business?
Apply here to my program: Turn your expertise into freedom