IMAGING TRENDS, STRATEGIES, AND BEST PRACTICES
ERT’s Brett Hoover and Amit Vasanji examine the massive growth trends in demand for imaging, and discuss how sponsors and CROs can develop strategies and incorporate technologies that enable them to achieve better, more thorough, and less costly imaging data.
Brett Hoover, Product Management team lead for ERT’s imaging product line, is joined by Amit Vasanji, Chief Technology Officer of imaging at ERT, for a look at imaging trends, strategies, and best practices in the clinical trial industry. How can using imaging effectively reduce cost? What impact will innovative imaging approaches have on a trial? What do sponsors need to know in order to maximize the benefits of imaging?
The Growth of Imaging in Clinical Trials
Over the last ten years, more advanced technologies have enabled imaging to shift from a qualitative to quantitative practice. Automated imaging analysis can reduce costs and provide consistent, objective data, improving patient safety and treatment efficacy. These developments, among others, have made imaging data more valuable to regulators and other audiences, including patients, payers and providers. The expectation for high-quality imaging data as part of a trial is more prevalent than ever before.
Best Practices for Using Imaging in Clinical Trials
It isn’t possible to maximize the value of imaging in a clinical trial by waiting until the last minute. Case studies that demonstrate successful use of imaging make apparent the importance of two things: planning and equipment auditing. Waiting to address imaging requirements and failing to audit the imaging equipment available at study sites are two common missteps.
Finding the Right Imaging Partner
In order to ensure they’re using the most innovative, effective imaging techniques, sponsors and CROs should partner with an organization that offers a deep understanding of how to work with imaging technologies to develop custom protocols. Flexibility and creativity lead to effective imaging partnerships that reduce costs and provide accurate data on time.
[intro] Welcome to the Trial Better podcast series. This week we’ll discuss imaging trends, strategies, and best practices with your host, Brett Hoover and featured guest Amit Vasanji. Stay tuned to Trial Better.
Brett: Hi everyone and welcome to this installment of the Trial Better podcast. My name is Brett Hoover and I lead the product management team here for ERT Imaging. This week I’ll be your podcast host, but far more exciting than that is our guest speaker today, Dr. Amit Vasanji. Now, before we get to Amit, which is the exciting part, let’s talk about the three topics we’re going to focus on. Topic number one is the evolution and growth of imaging in clinical trials. Number two, we’re going to talk about how you leverage imaging in clinical trials. And last but not least, we’ll finish with how to improve imaging data in your clinical trial through the use of innovative technology.
Now with that said, let’s get to the meat and taters. Let’s welcome Amit Vasanji. Amit, welcome to the Trial Better podcast. If you don’t mind, take a moment or two to introduce yourself so our audience knows who they’re listening to.
Amit: Sure. So I have about 17 years of experience running clinical trials and doing basic research. I also develop algorithms, specifically for basic science and clinical trials. And I oversee a group of scientists, mostly PhDs and MDs, that set up the protocols for the clinical trials, basically the charter documents for these trials.
Brett: OK. So Amit, I think I’ve heard you have both a preclinical and clinical background, meaning you kind of see both sides of the coin and you’ve got an interesting translational perspective on all of the studies that you work on?
Amit: Yeah, I mean, in terms of research, clinical trials and basic science aren’t really that different, it’s just who gives you the data. Most of the basic science is from animal studies, and in those trials we write algorithms, because it’s really hard to do qualitative analysis when most of that work is peer reviewed. On the clinical trial side it’s more grading and qualitative analysis, and we’re trying to change that in our approach. So they’re not that different.
Brett: Gotcha. OK. Interesting. So let’s get started. Now, technology innovation continues to reshape how clinical trials are performed, and this is particularly evident when we talk about imaging in drug and device studies. Over the past 10 years, imaging has moved from mostly qualitative assessments to true quantitative measurements. I think this is driven by the increasing need to visually demonstrate the safety and efficacy of new treatments to a range of stakeholders: folks like the FDA, patients, providers, and so on. Now, Amit, from where you sit, what impact do you think technology and innovation are having on our industry?
Amit: So there’s been a rise in the amount of quantitative analysis in imaging for clinical trials. In general, imaging has increased about 700% in clinical trials since around 2000.
Brett: Oh wow.
Amit: So there’s a massive push by the FDA to get imaging into a lot of these trials. And the reason is that it’s more visual, tangible evidence of what’s happening in these patients. Rather than looking at lab data, you can actually see these things happening in real time with these subjects. So there’s a drive to do it in a more quantitative way. In terms of qualitative, which is the traditional approach, versus quantitative: you’ve got these images, and you’ve heard people say that an image is worth a thousand words; I’d say an image is worth a thousand endpoints. There’s a ton of data within these images that you can gather. So why not use the information that’s in the images, rather than doing a subjective assessment? Because you can get continuous variables from these images, rather than saying, yeah, I think it’s good or bad, thumbs up, thumbs down.
And if you were a subject in one of these trials, what would you want? Right, you would want someone to tell you: how much better am I getting? Or how much worse am I getting? Not simply, yeah, I think you’re OK and you’ll be OK for the next three or four years. And as a doctor, I would want to tell you how much better you’re getting.
Brett: So Amit, it sounds like what we’re doing is transitioning from opinion to more data driven evidence?
Amit: Right, right.
Brett: Alright. Well, you know, another driver of growth in the industry that I think we’re seeing is imaging’s unique ability to help sponsors differentiate their new treatment from the standard of care. Now, when we add purpose-built image analysis software into the mix, imaging provides the means to justify a novel endpoint or endpoints which support a therapy’s efficacy claim for regulatory approval.
Now I would say this is particularly important when gaining buy-in from payers, the insurance companies who want to know the stuff’s working, and from providers, the doctors and nurses who want to know they can have confidence in the treatment they’re recommending for their patients. I like to think imaging plays a big role in this. But in addition to that, Amit, what trends are you seeing in imaging for clinical trials?
Amit: I think it’s just the quantitative aspect of it. If you’re taking these images and you’re dosing patients with radiation, you want to actually get some value for doing that. There’s a risk to these subjects in getting these scans. And then on the sponsor side, they’re paying a ton of money for these acquisitions. In fact, I’d argue that the acquisition cost, the amount they pay the sites for those images, plus the amount they pay the centralized service, a CRO like us, for analyzing those images, is pretty high; it’s the majority of what we charge our sponsors. So you want to get the most value out of that.
And as I mentioned before, you want to be able to tell what’s happening with that subject in a continuous way, rather than asking, on a scale of zero to 10, is the patient getting better or worse? For example, if you had a lesion in a patient and you could actually measure how big that lesion is getting over multiple time points, I could tell you it’s grown by 10% or 20%, rather than just saying on a graded scale that it’s stable disease or progressive disease or partial response or complete response, which are some of the categories you have in some of these cancer trials. So I think there’s more of a shift now towards that sort of paradigm, rather than bucketed categories.
Brett: Right. So I’m guessing it’s pretty hard to show a longitudinal improvement in a drug if you’re just going by opinion, bigger, bigger, smaller, bigger, what does it really get you?
Amit: Right, right. And in doing that, you’re adding this adjudication, where if you bucket things together… Let’s say one reader said a subject had progressive disease, which means that the subject’s cancer is getting worse. And another reader says, oh, I think they have stable disease. So you’ve already bucketed it in those two categories. Now you’re going to send it to a third reader who’s going to agree with one or the other. All he has to choose from is one of those categories. Why not just say it got 10% worse or 10% better? You’ve got a definitive answer. You don’t need to take it to another reader.
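To make the contrast concrete, here is a minimal sketch of bucketed categories versus a continuous percent-change endpoint. This is illustrative only, not ERT’s software, and a deliberate simplification of real RECIST 1.1, which also uses the nadir, absolute-millimeter thresholds, and new-lesion rules:

```python
def percent_change(baseline_mm, followup_mm):
    """Continuous endpoint: percent change in the sum of lesion diameters."""
    return 100.0 * (followup_mm - baseline_mm) / baseline_mm

def recist_bucket(baseline_mm, followup_mm):
    """Bucketed endpoint, loosely modeled on RECIST 1.1 categories.
    Simplified: ignores the nadir, absolute-mm rules, and new lesions."""
    if followup_mm == 0:
        return "CR"  # complete response: lesions gone
    change = percent_change(baseline_mm, followup_mm)
    if change <= -30:
        return "PR"  # partial response
    if change >= 20:
        return "PD"  # progressive disease
    return "SD"      # stable disease
```

Note how a 10% growth and a 19% growth both land in "SD" under the bucketed scheme, while the continuous endpoint distinguishes them, which is exactly Amit’s point.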
Brett: So the idea there is that if we can be more quantitative and take some of the subjectivity out of the read, there’s a possibility to reduce the need for maybe a two plus one reader paradigm?
Amit: Right. And the other way you can do that, and this is our approach, is to use image analysis to improve the process. Right now you’ve got two readers, or two readers and an adjudicator, going in and making manual assessments of where these lesions are. And the categories that exist right now are there mainly because we don’t want to put too much burden on these readers; it takes too long to make these assessments. We have some trials with 17,000 time points in them. These readers have to go in and manually annotate every single image, and it takes them a really long time. What if you had an algorithm that could auto-detect all of those lesions? Then you give it to the reader, and the reader still goes in and does an over-read: oh, OK, I agree with the algorithm here, or I’m going to make an adjustment to the algorithm here, or it missed something here. Because nothing’s going to be perfect, right? But at least you’ve improved the efficiency of that reader. You’ve also reduced the variability, because you’re not relying solely on the reader’s experience, which can be fraught with bias.
Brett: You’re effectively focusing the reader’s clinical experience on the one part or one step within the image evaluation workflow where it’s truly important, but not where it’s going to add a lot of bias or variability.
Brett: Alright. Well, moving on from that, and I think this is quite related, let’s talk about money. You know, money is a big driver of a lot of the behaviors we have, in many aspects of life. But in clinical trials there’s a huge push to reduce the overall cost of research and, ultimately, the cost to get a drug or device through regulatory approval. Tacking on to what you just said there, how do you think imaging helps reduce costs in clinical trials?
Amit: Well, specifically, image analysis can, like I said, improve that efficiency. If all a reader has to do is go into a case and look at everything an algorithm has already segmented, so the algorithm has already defined where all the lesions are and the reader just has to go in and say, “oh yeah, I agree or don’t agree,” you’ve drastically improved the reading time. You can also do that by customizing the software so that it walks the reader through the workflow. We’ve got, I don’t know, hundreds of different criteria that you can use for oncology, for example. With device trials there are probably another hundred, depending on the type of indication you have. And a reader, an expert radiologist who’s going in to make these assessments across multiple trials, can get confused, because they don’t know which trial they’re reading for or which criteria they should be reading against, and there’s a lot of math involved in some of these categories, defining whether a patient fits in one or another.
Brett: Like partial response, complete response?
Amit: Right. And so if you customize the workflow so they can only go through that workflow, and it has edit checks as they go along, they can’t make mistakes. Then you let the software do a lot of the automated analysis, the math, so the reader doesn’t have to. On top of that, if you add image analysis, you not only reduce the variability you would have between readers, you improve their efficiency. They don’t have to do things manually, annotate, or do a lot of time-intensive tasks.
Brett: So in terms of how that rolls up into cost, it sounds like if you can take that approach, you’re going to get faster reads, right? You’re going to reduce the errors that are driven by the fact that the readers are just human; you know, humans are inconsistently inconsistent, we’re good at that, it’s part of how we think. And last but not least, I’m guessing if you get fewer errors, you’re probably going to have fewer queries related to the reads, and that all rolls up to less time, less effort, and quite frankly less hassle near the end of the study.
Amit: So one of the biggest issues with these trials is data management, and it’s basically resolving those queries you were talking about. If a reader makes a mistake, it has a large impact on how quickly you can deliver the data back to the sponsor, because data management has to go in and fix all the queries. And if you think about it, let’s say a subject had 10 different time points and time point number five, or worse, time point number nine, had an issue in it; you actually have to reset all the other time points.
Oh sorry, the other way around. Let’s say time point two had an issue: all of the follow-ups that were already read would have to be reset.
Brett: So you have to reset back to time point two and if there were 10 time points, they have to reread those eight or so time points.
Amit: Right, because they’re all dependent on the previous time point. Or can be. And so it has a large impact. If you did that for 20% of your cases and you had 17,000 time points, that’s a lot of…
Brett: That’s a lot of work and effort. That’s a lot of cost.
Amit: So if you do a lot of that work upfront, and you do these edit checks and you customize the workflows for these criteria, you won’t have those issues.
Brett: So the approach you’ve taken, your team has taken is basically rather than getting really good at fixing problems, just prevent them.
Amit: Right, right. And that’s why having scientific input in these trials upfront, instead of waiting and kicking the can down the road, is critical. It might have a little bit of impact on how quickly those trials get started initially, but you set them up to work correctly and not fail during the data runs.
Brett: I’ve got a question later about the importance of science in these studies, so I think you’re totally queuing me up here. Let’s move on to the next question. Amit, there are clearly a lot of positives to implementing imaging in a clinical trial; you pointed out a handful there. I think a key benefit that comes to mind is how imaging gives us a more comprehensive look at the safety and efficacy of a treatment. For example, advanced imaging and analysis protocols can be useful across a wide range of therapeutic areas and indications, especially if we take an artificial intelligence approach, or if we tailor the image analysis software to the actual needs of that study; don’t assume one imaging study is just like another.
So this is particularly the case for rare diseases and medical device studies, where study execution requires a more innovative approach. My question to you is: can you think of a specific example where you’ve taken a more innovative, creative approach to imaging, and how that’s benefited the outcomes of a study?
Amit: Yes. We’ve actually done it quite a few times now. One specific example is with a subset of a disease called interstitial cystitis, where you have lesions in the bladder. They’re extremely painful for the subject, and the only way to currently treat them is to basically cauterize the lesions.
Brett: With heat?
Amit: Correct. And in order for you to do that, you have to distend the bladder so much that these lesions stretch, and they’re all attached to nerves, so it’s really painful for the subjects. Mostly, it’s in women. And the disease actually has a high rate of suicide as well, just because of the amount of pain…
Brett: The pain, the lifestyle they have to endure.
Amit: Right. So we worked with a company that had a treatment where they inserted a device, and the device eluted a drug that basically healed these lesions; an early phase one trial showed that these lesions disappeared.
Brett: So this is a combination product, drug and device.
Amit: Right, right.
Brett: Ah OK.
Amit: But we realized that even the KOLs, the key opinion leaders, that have a lot of experience in this disease don’t really know what these lesions look like on video. The traditional way to do these scans is to insert a cystoscope into the bladder and then sweep through the bladder and take images. So it’s like a movie of the…
Brett: Like the video of an inside of a basketball.
Amit: Right, exactly. So we have an algorithm that basically takes those frames and stitches them together, sort of like an iPhone when you do a panorama. It stitches them all together and you get one big image that’s sort of a locator map of the inside of the bladder.
Brett: Ah locator map. I remember those back in elementary school.
Amit: So we gave five KOLs about a hundred maps from a hundred different subjects, multiple time points, and we allowed them to just manually annotate where they saw the lesions. We only got about 40% agreement on where these lesions were. So it was terrible; they couldn’t even agree on where those lesions were.
Brett: These are the top five KOLs for that disease on the planet?
Brett: And they couldn’t agree with themselves or agree with each other?
Amit: Both. We actually had them re-read some of the same cases, and they couldn’t agree with the same lesion calls they had made before.
Brett: So how do you convince the FDA…. As a sponsor, how do you convince the FDA to allow a study like that to happen if it’s clear that the traditional way of approaching the reads is just not going to work?
Amit: So what we did was write an algorithm, because our assessment was that an algorithm could actually find these lesions more consistently than the readers could. We took all of those five KOLs and all of their measurements and we overlaid them on top of each other, and we made a consensus map. Basically a “four out of five dentists agree” sort of thing; that’s exactly what we did.
Brett: You assume the fifth guy or gal is wrong and the four agree.
Amit: Right. So we then trained the algorithm to find the four-out-of-five agreements. Then we wrote a validation report showing that, in terms of agreement with the four out of five, the algorithm performed just as well; we found it agreed with those four out of five about 80% of the time. We presented that in a report and provided it to the sponsor, who then submitted it to the FDA. So in that particular trial, the algorithm was actually used as the endpoint, not the readers.
Amit: Yes, the FDA agreed that the readers had too much variability.
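The consensus-map idea Amit describes can be sketched in a few lines. This is an illustrative reconstruction, not ERT’s code; `consensus_mask` and `agreement_rate` are hypothetical names, and a real validation would report richer scores (e.g. Dice overlap or lesion-level sensitivity):

```python
import numpy as np

def consensus_mask(reader_masks, min_votes=4):
    """Pixel-wise consensus over several readers' lesion annotations:
    keep the pixels that at least `min_votes` readers marked as lesion."""
    votes = np.stack(reader_masks).astype(int).sum(axis=0)
    return votes >= min_votes

def agreement_rate(algo_mask, consensus):
    """Fraction of consensus-lesion pixels the algorithm also found."""
    if consensus.sum() == 0:
        return 1.0
    return float((algo_mask & consensus).sum()) / float(consensus.sum())
```

The consensus mask then serves as the training and validation target, which is how a "four out of five agree" standard can be turned into a number like the 80% agreement Amit cites.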
Brett: So that seems like evidence that the FDA isn’t all about saying no. The FDA they’re humans and they’re scientists just like us, what they want you to do is to convince them that you’re taking a safe and effective approach to gain the data that they need to feel comfortable telling you, yes.
Amit: Right. Basically, you need to show them that the approach you’re taking is validated, and not just for one or two images. You have to show across a number of subjects that these algorithms are working as you expect. But also, in most cases, you don’t want the algorithm to give you the final output; it’s just like getting a second opinion in diagnostic medicine, right? There’s an algorithm for breast CAD, for example, which is computer-aided diagnosis. You wouldn’t want software to tell you whether you had cancer or not. You want somebody who’s an expert to look at the images and what the software generated and say, OK, based on my opinion and the software, I believe that you have cancer.
Brett: So let the software do all the legwork? And at the end of the day, standing between the software and the final judgment is the clinician, the clinical expert, where the biased and sometimes convoluted thinking of a human actually adds the most value.
Amit: Right. So in this particular study with the bladder lesions, we not only developed the acquisition, meaning how they scan these images; we took the normal 10-minute assay, which again is painful for the subject, and reduced it down to three minutes.
Brett: Three minutes down from?
Amit: Ten minutes. Basically we told them exactly how to sweep the bladder, we then had an algorithm that stitched the frames together, and then the software automatically found all the lesions in the bladder.
Brett: So you guys weren’t necessarily trying to reduce the image acquisition protocol time; you just wanted to get a better image. But along the way to getting a better image, you also created a better protocol, which was less harmful to the patients.
Amit: And the algorithm gives additional outputs, not just whether it found a lesion or not. It tells you, for every given lesion, how big it is and how red it is, so whether it’s aggressive or active, whether it’s bleeding or not, whether it’s regularly shaped. All of those metrics come out of the algorithm automatically. For a reader, trying to make those assessments would be almost impossible.
Brett: I was just going to say the amount of time it would take for the reader to pull out all that data, assuming they even could, would be either cost prohibitive or time prohibitive.
Amit: Yeah, and that’s where we find the use of algorithms to be the most important: where it’s almost impossible for a reader to define these endpoints, or it would take way too much time. We have a study looking at microaneurysms in the eye. There are literally, on a given image, 500 to a thousand of them. So for them to manually annotate them… [crosstalk] Yes. The algorithm can find them automatically.
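The per-lesion outputs Amit mentions (size, redness, shape regularity) could in principle be computed straight from a lesion mask. Here is a toy single-lesion sketch, with bounding-box "extent" used as a crude stand-in for shape regularity; the names and metrics are assumptions for illustration, not ERT’s actual outputs:

```python
import numpy as np

def lesion_metrics(mask, red_channel):
    """Toy metrics for a single lesion mask: size in pixels, mean redness
    over the lesion, and 'extent' (how fully the lesion fills its bounding
    box; 1.0 means a perfect rectangle) as a crude shape-regularity proxy."""
    area = int(mask.sum())
    redness = float(red_channel[mask].mean())
    ys, xs = np.nonzero(mask)
    bbox = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return {"area_px": area, "mean_redness": redness,
            "extent": area / float(bbox)}
```

Extracting even these three numbers by hand, per lesion, across hundreds of lesions per image, is exactly the kind of task Amit says is impractical for a human reader.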
Brett: Gotcha, interesting. Alright, well, I’m thinking of another example of how folks have gotten creative and tailored their approach to imaging, and how that’s had a profound impact on data quality and even operational throughput. I’m thinking of oncology trials, specifically immunotherapy studies, where a sponsor will come to you and say: hey, I don’t want to just do one tumor assessment criterion, I want to do two.
So for example, RECIST and irRECIST in this case. I’ve seen situations where we’ve used image processing and analysis software tailored to both the reader workflow paradigm and the dual-criteria approach. And quite frankly, as a result, we’ve been able to improve data quality and lower study costs by reducing reader error rates, adjudication rates, and sometimes even the need for a two-plus-one paradigm. That’s rare, but it is possible.
My question for you, Amit, is this: a case like that doesn’t happen when someone doesn’t take imaging planning seriously. You know, a lot of times when folks come to a clinical trial CRO, they’ll figure imaging in at the very end of the planning phase. So what advice would you give to a sponsor thinking about incorporating imaging into their study, such that they can maximize the benefit?
Amit: So every study starts with a study protocol; it’s what they’re basically submitting to the FDA to say, this is exactly what we want to do for this trial. The imaging section usually isn’t that large, and it’s pretty much open-ended. In fact, a lot of them just say: refer to the imaging charter from your CRO.
Brett: So basically an appendix with one sentence.
Amit: Right. So what we have to do is basically dissect the protocol and determine what the mechanism of action is, so how that drug is actually working. Then we make suggestions on what the final endpoint should be. Sometimes endpoints are already laid out, but they’re not described fully and they don’t always account for some of the workflow issues you might have. So we give the sponsor our own input on what we think the output and the workflow should be in the trial, and they sign off on that. That’s all included in the imaging charter. So it’s critical that you do that upfront in any of these trials.
And as ERT, we don’t claim that we know everything either. We’ll take scientific input from outside groups as well. If we know there’s a specific expert in this particular type of cancer, we’ll bring them in. We contract with them, they help us design some of these protocols, and then we also provide that input to the sponsor.
Brett: So you and your team take a more collaborative approach. And it sounds like you’re not afraid to be firm when you know you’re right, and to be open and collaborative when you know there are other possibilities.
Amit: Right. And like you mentioned before, there are a lot of studies that do multiple criteria. Currently the FDA will only accept RECIST 1.1 as the criteria for endpoints.
But they’ve realized, and a lot of the sponsors have realized, that with these drugs you’ll get an initial response where the lesions, these tumors, actually grow, because they are essentially dying but fill with water or with byproducts of the drug’s toxicity as it treats the cancer. So they have a phase where they grow and then shrink back down; it’s called “pseudoprogression.”
Brett: Pseudoprogression. OK. I’ve heard of that.
Amit: So there are a lot of criteria that take that into account, but RECIST 1.1 doesn’t. So you end up with progression, and then the subject is basically taken off the trial, based on whether the site recommends that or not as part of the patient care at the site.
What we’ve done is allowed our system to do multiple criteria at the same time, so the reader doesn’t have to do two different criteria and read twice. We take the commonalities between each of those criteria and incorporate them into the same read.
Brett: Interesting. So by doing one read and overlapping the criteria where they’re common, it’s only the differential between the two that’s actually extra work. I imagine that goes back to our concept of reducing study costs and also improving the turnaround time for the reads. So basically you can do two criteria without being completely punished with the extra effort, cost, or time that would otherwise come along with that. Interesting. OK.
You know, a common theme I’m hearing in a lot of your responses, Amit, seems to be science. With my background as a former scientist and stem cell researcher, I have a fair appreciation for how science should be the foundation for pretty much everything we do in clinical trials. I personally think it behooves a sponsor to seek out partners who are likewise committed to scientific rigor in how they implement imaging.
Now I know you’ve got a strong scientific background and a long track record of successful clinical trial and clinical research support.
My question for you though, Amit, is, in light of this, what do you think a sponsor should look for when trying to find the right imaging partner — science or otherwise?
Amit: I think someone who actually values the design of the trial up front: basically, the input from the science, making sure that the charter documents for the design of the trial are in line with the study protocol, and selecting the appropriate readers for that particular indication. So if you’ve got renal cell carcinoma, finding a reader that’s an expert in renal cell carcinoma is critical, because there’s a very defined visual assessment in these patients’ CT scans.
Also qualifying sites, making sure that they’re set up correctly and have the right equipment. Because you’ve heard the term “garbage in, garbage out…”
Brett: Oh, yeah.
Amit: And it definitely applies to imaging. If you’ve got a really badly acquired image, there’s no way a reader, no matter how good or how experienced they are, is going to be able to make an assessment.
So the worst thing you could do is have all of these scans done and the outcome be “not evaluable,” which basically means: I can’t read it.
Amit: So getting all of that set up front, before the sites even start acquiring images, is important. What we do is have a qualification process where the sites submit example images to us. They could be historical images from other subjects, if they’re allowed to send those to us, or they could be scans they do on the first subjects. On acquisition of the first subject’s first time point, we’ll do assessments to make sure their equipment is adequate for the type of endpoint.
In some cases, depending on the modality, we also qualify the technician who’s acquiring the images. For a CT or an MRI, the technician’s not that important, because you’re basically setting up the parameters in the scanner and the console, and then the patient just lies down flat. They’re usually motionless.
But for techniques like x-ray and ultrasound, where the positioning of the subject is critical…
Brett: Ultrasound, yeah.
Amit: Ultrasound particularly, where it’s really down to the technician; if you’ve ever had a pregnant wife, you’ll have seen the technician move the ultrasound probe around. If you’re not acquiring the right region, or you don’t have the right orientation of the probe, you get a completely different image, and you won’t be able to assess it if you miss something.
Brett: So training will be particularly important for those modalities that are more art than science. Because I’ve seen folks do ultrasounds before and I swear they’re doing the same thing in different ways.
Amit: Yeah. And they have to change it based on the subject’s anatomy. So it’s years of experience that these sonographers have in acquiring these images, and you want to make sure you don’t have sonographers with limited experience doing some of these trials, because you’re not going to get the images that you need.
Brett: Right. Or at the very least you have to standardize the training so you take sort of that experience burden out of the equation.
Amit: And then there are certain modalities, like MRI, that are really hard to set up. CT scans are pretty similar across the board, across multiple hospitals. MRIs, depending on how old the machine is, can’t always do certain things that you want them to do. These are called “sequences,” which are basically the types of acquisitions you do for these trials. In some trials we ask for very specific sequences, but not all pieces of equipment can do them.
So luckily we have a physicist on board that will actually tailor the acquisition protocol for that particular machine. But it requires a lot of upfront setup to do that.
Brett: So that can be helpful in one of those situations where you’ve got either a small number of sites that can actually enroll those patients, or one particularly high-enrolling site with lots of patients that may not have the right imaging equipment. So we can get creative: how do we do the best we can within the confines of what we’re given, so we can help that sponsor get access to that patient population?
Brett: OK. Alright. Well, Amit, if you can believe this, I only had six questions for you today, and that’s actually all of them. I thoroughly enjoyed the conversation, and I want to say thank you for your time and your insight. And to the folks out there in the audience, the three things we covered today were the growth of imaging, how it’s grown as a modality and as an assessment in clinical trials over time; how to improve imaging data through the use of technology; and, last but not least, best practices for implementing imaging in clinical trials.
This is a reminder: I’m Brett Hoover, this week’s podcast host. I ask all of you to stay tuned for our next installment of the Trial Better podcast series. Thank you.
Amit Vasanji, Ph.D., is Chief Technology Officer, Imaging at ERT. Amit has over 17 years of experience in basic and clinical research image acquisition, processing, analysis, visualization, and biomedical software engineering. His research has been published in a broad array of peer-reviewed journals and he is frequently invited to speak at international events on effective processes for optimizing imaging results in clinical development. In his current role at ERT, Amit oversees the scientific feasibility and design of ERT’s image management solution and is responsible for the development and integration of customized image analysis algorithms into clinical trial workflows.