Edwin Lindsay 0:05
Ed, thank you very much. I'm Edwin Lindsay from CS Life Sciences. We are a quality, regulatory and clinical consultancy, and we're going to talk about borderless data in clinical trials. Angela, one of the panellists, will introduce herself.
Angela Paterson 0:22
I'm Angela Paterson. I also work for CS Life Sciences, and I look after all of our clinical trials, clinical data and post-market data collection globally.
David Vu 0:36
Thank you, Edwin and Angela, and hi everyone. My name is David Vu. I'm the director of the Biomedical Data Hub hosted by A*STAR Singapore. I very much appreciate the chance to be here with LSI Asia for the inaugural symposium, and thank you very much to CS Life Sciences for sponsoring as well. Thank you.
Edwin Lindsay 0:59
So we're going to talk about borderless data. In the current climate, when we're collecting clinical data, people are wanting more and more data for regulatory approvals and for evidence, and not just pre-market but post-market as well. In Europe, for example, you need to collect a lot of clinical data. But people are always asking: can we collect it in different countries? Is all data equal? You've got data in Asia, Japan, the UK, the US, but sometimes it comes across that you all need the same data, or that it's all different data. Where do you see the best way to start collecting data so that everybody's data can be used?
David Vu 1:41
Yeah, maybe I'll jump in with that. From the Singapore ecosystem perspective, from the Singapore government perspective, we see data in two broad categories, and it really depends on who's generating the data. First, you have data coming out of our healthcare systems. This is typically what you think about in terms of medical records data; it's patient-identified and it's very high-volume data. The other category is research data. These are typically smaller data sets, cohorts; they're usually de-identified, and they're consented and curated as well. Each of these types of data has different uses. In terms of evidence generation, sometimes you need the larger, high-volume data sets and you're willing, or able, to sacrifice something from a quality perspective. On the other hand, sometimes you need consented data, sometimes you need very highly curated data. That's why it's important to understand the type of data that you need, because there will be different regulatory and governance considerations that you need to take into account in order to access it. And then, of course, Edwin, you alluded to data coming from different countries and regions, and in terms of validation that is also very much a consideration, especially now that we're getting into areas like AI, for example using data sets to do trial selection, either for devices or therapeutics. Data that was used to train a model in, say, San Francisco may or may not be applicable to clinical practice in Singapore, Vietnam or other Asian markets. So, minimally, regulators will require revalidation of the models, if not outright retraining.
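To make the revalidation step David describes concrete, here is a minimal Python sketch of checking a model built elsewhere against a locally collected, labelled cohort before deployment. The scikit-learn-style classifier, the AUC metric and the 0.85 threshold are illustrative assumptions, not regulatory guidance.

```python
# Minimal sketch: does a model trained elsewhere still perform acceptably on a
# local validation cohort? Assumes a scikit-learn-style binary classifier and a
# labelled local dataset; the AUC threshold is illustrative only.
from sklearn.metrics import roc_auc_score

def revalidate_locally(model, X_local, y_local, min_auc=0.85):
    """Return (auc, decision) for a model evaluated on a local cohort."""
    scores = model.predict_proba(X_local)[:, 1]   # predicted probability of positive class
    auc = roc_auc_score(y_local, scores)          # discrimination on the local population
    if auc >= min_auc:
        return auc, "accept: local revalidation passed"
    return auc, "retrain or recalibrate on local data before deployment"
```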
Angela Paterson 4:08
I think the biggest thing in clinical data, when we're asked about it, is that one size doesn't fit all. I always say that all data is not created equal, and the reason for that is that in the US, when you go to FDA with European clinical data, they say, where's your American data? If you go to Japan with American or European data, they say, where's the Japanese data? So there are countries where you could have done a 200-patient study in Europe, and it's not going to get you through FDA, and it's definitely not going to get you through the PMDA in Japan. There are ways around that in what we call global studies, which are slightly more expensive because you're talking to different agencies in different countries to get regulatory clearance for your study. But you can talk to FDA, you can talk to PMDA, and you can work out a patient cohort which can be majority US and a minority in Japan, or you can even have a third arm, which can be a European arm. So you can design your study in such a way that you can make the data one size fits all. But when it comes to startups, they tend to have a very limited budget, so they need to pick what they can do to get the most likely regulatory clearance first, and that is nine times out of ten US data, because that US data can then be used going forward to get you clearance in Europe. So there are ways to use data like that. From a post-market perspective, it's difficult collecting data, because a lot of people are not interested in filling in forms and questionnaires and providing information about their patients. What you tend to find is that countries like Singapore, Thailand and Malaysia are really good at collecting clinical data, because the clinicians want to be able to write papers, get on podiums at international conferences and talk about their practice, and if you can join up with someone, a key opinion leader in a big Japanese or Singapore hospital or something like that, they will publish about your product, and that is good-quality data. There is a hierarchy of clinical data, where data coming out of some countries is really good quality but is not internationally accepted, especially in the US, and that would be places like Thailand or Turkey; you need to work really hard to get FDA and US clinicians to accept that data. So it's a very difficult arena, and if you have all the money in the world, then a global trial is the way to do it. However, most startups need to start somewhere, and it's usually US studies.
Edwin Lindsay 7:31
So following on from that, you're talking about global studies and the collection of data. David, you mentioned clinical flow, clinical processes, and then you throw in demographics. There are different ways of working in different countries with devices and diagnostics, so how do you see those clinical trials being set up, and how do you get the best out of them? If you're setting up a global study and you have a lot of finance, it's very easy, you can start to talk. But when you're a startup, or you're just trying to collect enough data to get your product to market, what kind of strategies do you look at with regard to governments and understanding what data you can hopefully use across a number of countries, maybe not all of them? Especially, as I say, process flows, clinical flows, different ways of working, and then obviously demographics, because it's not one size fits all when it comes to demographics. You've got different devices: some where demographics don't really have an effect, but then you've got wearables where skin color, size and so on matter as well. So how do you find the best strategy for managing that?
David Vu 8:39
Yeah, Edwin, it's a great question. I'll do my best to answer it from a Singapore perspective, and I'd love to get both of your perspectives on how other markets are tackling this issue. Certainly here in Singapore, we see and recognize the value of biomedical data, both for research and for innovation, and we have also reached a point of maturity in our ecosystem where we are very good at generating data. Our healthcare IT stack is becoming more unified: we have a national electronic medical record and a single patient identifier across the patient population, and fairly soon we'll be on Epic as our medical record system, at least on a nationwide basis in our public healthcare system, which serves the majority of Singaporeans. So from that perspective, I think we've reached that level of maturity. Now we are making investments in platforms like the one I'm the director of, the Biomedical Data Hub. Our remit is really to make that data maximally available for research and for innovation. There's a technical aspect to that, but there's also a bioinformatics and expertise aspect. So as innovators are looking to design trials to answer either R&D questions or research questions, teams like mine can potentially be of service and collaborate, and we're very interested in supporting our startup and SME (small and medium enterprise) ecosystem as well. Often we're able to provide grants or provide services and take future royalties or equity in lieu of immediate payment, for example. So those are some of the ways we are working in our local ecosystem to make data more available. Hopefully that answers the question from the Singapore perspective. Angela?
Angela Paterson 11:18
I spoke about trial design before, and it really is down to trial design. The world, in terms of where people live, is actually a small place. If you go to the US, you can probably find groups of patients, cohorts of patients, in healthcare systems from anywhere in the globe. So if you really work on designing that study so that you have different cohorts from different countries, different nationalities, different races, cultures and creeds, then you are likely to be able to achieve that global study even within the US healthcare system. The difficulty there is that Japan has its own unique way of doing things, and when you talk to Japanese clinicians, they say, but we do things differently here. So again, that comes down to the design of the study. Do you have Japanese-trained surgeons currently working in the US? Can they be involved in your study? It may make your study take slightly longer; recruitment might take slightly longer, and the design of the study might take longer. However, you end up with a truly global study that you have run in one country. The other thing, sorry, is European studies: historically, there was a real fear about asking questions about race. It was a completely unacceptable question to ask, and we're having to push forward and say no, you must ask, because otherwise your data only works for Europeans. In the US it's always been the case that there are boxes to tick for race; in Europe, up until maybe ten years ago, nobody dared to ask that question. So that is something else that, in Europe, we have had to learn, to ask those questions.
David Vu 13:26
Yeah, I think that's a really great point, Angela, and this is where I think it's important to have good advice, either from advisors like yourselves or from PIs, within the Singapore context for example. There's data that you need to collect in order to maximize the value of your data for the immediate study that you want to perform, but also to make sure the data has value for subsequent use and further interrogation, and designing things like your IRB submission and your patient consent is equally important to making sure all the technical aspects are in place. One trend we're seeing, just to dovetail on another point you made, Angela, is this idea of multinational or multilateral studies, where investigators are now looking to combine cohort data from multiple countries, these global studies, in order to maximize the value. There we're seeing challenges around things like data sovereignty and interoperability of data across national borders; in each area you may have various statutory or regulatory requirements when it comes to the treatment of data. I'd love to get some of your perspectives on how researchers are handling this. We've tried to throw technology at it, things like privacy-preserving data technologies, synthetic data and federated learning, but I'd love to hear how some of your collaborators are working on this problem.
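As a rough illustration of the federated-learning idea David mentions, here is a minimal single-round federated-averaging sketch in which only model coefficients leave each site, never patient-level records. The logistic-regression model, the site layout and the single aggregation round are simplifying assumptions; production systems iterate over many rounds and add safeguards such as secure aggregation.

```python
# Single-round federated averaging sketch: each site fits a model locally and
# shares only its coefficients and sample size; patient-level data never moves.
import numpy as np
from sklearn.linear_model import LogisticRegression

def local_update(X_site, y_site):
    """Fit on one site's data; return (parameter vector, number of patients)."""
    m = LogisticRegression(max_iter=1000).fit(X_site, y_site)
    return np.concatenate([m.coef_.ravel(), m.intercept_]), len(y_site)

def federated_average(site_updates):
    """Average the site parameter vectors, weighted by each site's sample size."""
    params, sizes = zip(*site_updates)
    weights = np.array(sizes, dtype=float) / sum(sizes)
    return np.average(np.vstack(params), axis=0, weights=weights)

# Usage (hypothetical sites):
# global_params = federated_average([local_update(X, y) for X, y in [(X_sg, y_sg), (X_eu, y_eu)]])
```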
Angela Paterson 15:39
So I think the biggest thing is that you need to select the right principal investigator or chief investigator, one who can actually communicate with clinicians in other countries and get them talking. They become almost like an advisory board, because not only is all data not equal, but things are not measured the same way in different countries: there are different operating techniques, different facilities available, different measures of things. In one country an MRI scan might be what they do every day; in another country they don't do that, they do CT. And it's about what imaging everybody can agree on, because you can't compare apples and bananas in a study; everything has to be the same. So yes, you've got all these different patients, but they all need to be treated the same, they all need to have the same treatment pathway, they need to have the same imaging, otherwise you don't have anything to compare. So the real work is in the planning, it's in the design, and it's also in talking to the regulators, because you could have designed what you think is the best study, but if you haven't been to FDA or PMDA to discuss the design of your study and had them agree that it's a good design, or had their input, then you get to the point where you submit, thinking you've got wonderful clinical data, and somebody at one of these agencies says, sorry, no good. And that's a disaster, especially for startups, because the money has already been spent, and trying to find an investor who's going to fund you again when you've made a mistake is not easy.
Edwin Lindsay 17:34
So following on from that: post-market, when you need clinical data to keep the product on the marketplace and to support claims. The standard of patient care is different in different countries. You can control that in a clinical study for approvals, but how do you think people should look at collecting that data post-market, to keep the product on the marketplace? If some countries use CT and others use MRI, how do you manage that and still convince the regulators that the data is relevant? Because if you follow back through the clinical trial, you only used MRI, so why are they now using CT? Another example is a lot of the diagnostics I've worked with, where the clinical process flow is completely different from the US to Europe to the UK. They've got different ways of collecting samples: in the US you've got small units where people go along, give the sample and it's tested there; in the UK or Europe, it's shipped for an hour and then processed. So you've got these different ways of working. In a clinical trial you can control it, fairly well at least, but when you start to collect clinical data in the real world, what advice would you give?
David Vu 18:52
Yeah, for this one, Edwin, I think it's very critical to incorporate any post-market surveillance requirements into the product design up front, and then to make sure you have the infrastructure and the procedures, the SOPs, to monitor the telemetry or the data coming in from post-market use and trigger the appropriate responses, and then looping that in, and I'm sure you could tell me better than I can, to things like complaint-handling procedures. And I think regulators are definitely going to be looking at things like AI model performance post-market very carefully. As these solutions, especially generative ones, begin to enter clinical practice, and we haven't had an LLM cleared yet, but as they do, it's certainly going to be top of mind. I hope that answered your question.
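One way to picture the post-market monitoring loop David describes is a simple drift check: compare a rolling window of confirmed real-world outcomes against the performance established at clearance and escalate when it drops past a pre-specified action limit. The window size, the accuracy metric and the escalation message below are illustrative assumptions, not a published regulatory standard.

```python
# Minimal post-market performance monitor: flag when rolling real-world accuracy
# drifts below the level established at clearance. Thresholds are illustrative.
from collections import deque

class PostMarketMonitor:
    def __init__(self, baseline_accuracy, action_limit=0.05, window=200):
        self.baseline = baseline_accuracy      # accuracy demonstrated pre-market
        self.action_limit = action_limit       # allowed drop before escalation
        self.outcomes = deque(maxlen=window)   # most recent confirmed outcomes

    def record(self, prediction_correct: bool) -> str:
        """Log one confirmed outcome and return 'ok' or an escalation signal."""
        self.outcomes.append(prediction_correct)
        if len(self.outcomes) == self.outcomes.maxlen:
            observed = sum(self.outcomes) / len(self.outcomes)
            if observed < self.baseline - self.action_limit:
                return "escalate: trigger investigation / complaint-handling workflow"
        return "ok"
```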
Angela Paterson 20:07
In the IVD space it's easier, because you can conduct validation and verification of your sites. Whether it's an instant test run on site within minutes, or you need to send the sample away for a couple of hours, you can do some work to show that the results are equivalent. It's slightly different with an implant. However, there are differences, and there are economics: globally, some countries do not have the same access to imaging as the US and Europe. That doesn't mean these patients don't get treated; it just means the imaging will be slightly different. And it means the clinical people at the companies who are working on the post-market data need to get to the point where they can become a little bit creative. How do I compare these two completely different data sets? What does a positive outcome look like on CT? What does that outcome look like on MRI? Nothing's impossible, it's just more difficult. People measure all sorts of things in different units and different ways, but there's always a way to normalize that data and bring it together so that you can actually compare one with the other without them being the same.
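A small, concrete example of the normalization Angela describes is harmonizing units before pooling sites. The glucose conversion factor below is standard; the site layout and field names are hypothetical.

```python
# Normalize glucose readings reported in different units to a common scale
# before pooling. The conversion factor is standard; the site data is hypothetical.
GLUCOSE_MGDL_PER_MMOLL = 18.016   # 1 mmol/L of glucose = 18.016 mg/dL

def glucose_to_mmol_per_l(value, unit):
    if unit == "mmol/L":
        return value
    if unit == "mg/dL":
        return value / GLUCOSE_MGDL_PER_MMOLL
    raise ValueError(f"unexpected unit: {unit}")

# Two hypothetical sites reporting in different units, pooled onto one scale.
site_a = [{"glucose": 5.4, "unit": "mmol/L"}]
site_b = [{"glucose": 126.0, "unit": "mg/dL"}]
pooled = [glucose_to_mmol_per_l(r["glucose"], r["unit"]) for r in site_a + site_b]
print(pooled)   # [5.4, ~7.0] mmol/L
```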
Edwin Lindsay 21:41
Okay, thank you. Following on from that: there are a lot of devices now being designed and developed with banked data, data that's already in the hospital. How do we know that the data you're putting in is good? And it goes to the buzzword everybody's got, the use of AI, trying to speed up access to data and the analysis of that data to validate your device. There are a number of AI-type devices where they've got thousands of samples and the software is analyzing them. But how easy is it to get to that data? How easy is it, from a retrospective point of view, to use the data that's there to help develop products, analyzing it and comparing it?
David Vu 22:31
Do you want to start with this one, or...
Angela Paterson 22:33
I think it's sometimes quite difficult to get data out of hospital organizations, and that's what you specialize in. The data quite often becomes locked in, and there are not many people out there facilitating extracting that information from the hospital systems. Especially in a lot of Europe, it's public health information and there's GDPR protection for everyone, so it can become very difficult to get the data out. The easiest way that I find to get the data is through the clinicians: go and speak to key opinion leaders, senior clinicians, even some really young, fresh surgeons who want to write papers and get on podiums and talk. You go talk to them and you say, what data do you have in your hospital? They are usually better equipped to get that information out of the hospital system and publish on it than a device company ever would be, and that's where the relationships between the device companies and the clinicians are absolutely pivotal in that post-market phase. Because if you don't have those relationships, the first time you really hear about your device in use is when somebody complains, and that's a whole different problem. So I know that your whole business is around getting this data, isn't it?
David Vu 24:06
Yeah, it is, Angela, and I fully agree with you. A lot of data flows, at least historically, have been through relationships. If I could talk about the evolution from a product development standpoint, and set post-market aside, because there do need to be controls post-market. From a development standpoint, my experience, especially with AI, has been that the first generation of AI algos, models, were developed using publicly available data. NIH and others made massive imaging data sets available, chest X-rays, mammos, things like that, so the first-generation models were based on those. The problem with that approach is that everyone was training off essentially the same data, so there was very little differentiation. Then there was a second wave, concurrently, that used data following relationships, just like you talked about. I worked on a project in my last role with A*STAR Singapore which involved a key opinion leader in a specific clinical domain, which I'd better not name because Singapore is a very small place, but this KOL, a head of department, was able to use his international standing to pull data from Singapore, of course, but also from Europe and from Latin America, and they developed this amazing international data set which they were able to train models on. So that was an extremely powerful relationship. I don't think that would work nowadays, because they were literally emailing these data sets around, so to speak; our last panel would have vomited all over that. And now there's this third generation where, okay, we recognize the need to access data, but we also recognize the need to respect patient consent, governance and so on. That's where I think you see governments, at least Singapore's government, and we modeled our system after HDR UK and UK Biobank, recognizing that innovation in AI, drug discovery and so on is going to require access to high-quality data, but in a carefully governed and controlled way. That's the third generation.
Edwin Lindsay 27:15
So following on from that, when you say quality data: with the data that's there, maybe not in Singapore, but in the UK a lot of it is still paper-based and you're gathering it. How do we validate, or how do we make sure, that the data is correct going in? I've worked with a number of companies in the NHS and across Europe when we've been collecting clinical data, and they've got researchers inputting the data. How do we know that that data is right? Because we're hoping it's right, but you're not allowed to go and validate it.
David Vu 27:49
Well, it's not just researchers. I read a study in the US that around 60% of the data in electronic medical records is wrong, or did not reflect the actual clinical encounter, and the reason, and maybe this is US-specific, is that medical record systems are first and foremost revenue-cycle systems: they need to support the billing and reimbursement process. So relying on medical records data to train AI models or to do validation of diagnostics may not necessarily be the best approach. It gives you volume for sure, but quality is an issue. And this is where, dovetailing off Angela's point, having a strong clinical investigator embedded in the data curation process is critical, because that person is going to have the domain knowledge to look at the data and say, there are some quality issues here, or this result is way off what we'd expect, we need to dig into it. We've seen cases where, over a longitudinal, multi-year period, you would see a sudden spike in things like glucose readings, and it took a knowledgeable clinician investigator to go in and say, well, that was because we changed how we measured in this month, in this year. If we did not have that insight, we would just be looking at that anomaly and scratching our heads wondering why.
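The glucose-spike anecdote David gives is essentially a change-point problem: before retrospective data is used for training, abrupt level shifts should be flagged for a clinician to review rather than silently modelled. Below is a minimal sketch; the trailing-window size and 3-sigma rule are illustrative choices, not what any particular registry uses.

```python
# Flag abrupt jumps in a longitudinal lab series (e.g. an assay or unit change)
# so a clinician can review them before the data is used for model training.
import numpy as np

def flag_level_shifts(values, window=12, z_threshold=3.0):
    """Return indices where a reading sits more than z_threshold standard
    deviations from the mean of the preceding `window` readings."""
    values = np.asarray(values, dtype=float)
    flagged = []
    for i in range(window, len(values)):
        past = values[i - window:i]
        sd = past.std()
        if sd > 0 and abs(values[i] - past.mean()) > z_threshold * sd:
            flagged.append(i)
    return flagged

# Example: a unit or assay change partway through the series is caught at index 6.
print(flag_level_shifts([5.2, 5.5, 5.1, 5.6, 5.4, 5.3, 98.0], window=6))
```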
Edwin Lindsay 30:00
Yeah, and I think one of the things going forward is that the regulators are worried about using AI and the data behind it, and they're starting to scrutinize that a lot more, because they'll ask: how was it carried out? How was it validated? Where did you get the information from? Especially if it's a retrospective analysis for a new device, I think that's going to come more and more to the forefront. How do we show it's good data, and how do we convince the regulators it's good data?
David Vu 30:33
Yeah, that's a great question. I mean...
Edwin Lindsay 30:36
How do we do that, do you think?
David Vu 30:38
I think it comes down to being able to say: we had this team that validated and scrubbed the data and curated it according to these standards. Or is there another way...
Edwin Lindsay 30:51
I think it's a learning curve, and I think we're going to keep going. One of the things I always say with any type of device that is using software, or using clinical data that's been collected retrospectively, is to make sure they're always talking to the regulator on an ongoing basis, especially the FDA. It's a bit harder in Europe; in Singapore you can still talk to them, and in Japan you can talk to them. But talk to the regulators and keep going, because some of the companies we've seen just go in with the data and say, there you go, and it gets rejected, and they end up with something like a 35-patient study and a deficiency report saying it doesn't make sense. So one of the key things we tell most startups is, please talk to the regulators. A lot of them are very wary of these regulators, but if they don't talk to them, they're not going to go very far. And with the way things are moving with technology, and working with people like yourselves, I think it has to be done in partnership, and the regulators are a major part of that.
Angela Paterson 31:45
And I think what you're saying, David, about the models and outliers: so long as the models are able to spot the outliers the way a human would. If you get a bunch of patient data, there are always spurious results that you don't understand. That's okay in a clinical trial, because if you're the clinical monitor you can go and pick up the patient records and see what it should have been; you can pick up the actual scans and results, look at them, figure out what the value should have been, and you can correct it during the study. But when it's retrospective, you're relying on the hospital staff having filled in that information. When humans are looking at it, you can look and say, okay, that value for that patient seems a little bit different to what I would anticipate, and sometimes you can investigate it and sometimes you can't. So it really depends on whether the AI is able to make that distinction: is this an outlier or is this not an outlier? And that, again, comes with the training, doesn't it?
David Vu 32:50
Yeah, I agree.
Edwin Lindsay 32:53
Okay, we're coming close to the end, so just going forward, to give startups some advice from your experience: what would you tell them about what investors are going to be looking for from a clinical point of view, from the clinical trial plan? Because obviously they look for a return; they give you the money on the basis that the clinical trial is going to be successful. From your experience, both of you, what do you see as the selling point to an investor? We've heard lots of presentations over the last two days, all about "please show me the money". But how do you convince them to give you the money when you've got a massive clinical trial ahead and your product may or may not work? Are there any tips that can potentially help startups show that they are ready for that?
David Vu 33:51
Yeah, I think, aside from the usual things, the product has to work, it has to be differentiated, all of those types of things, maybe I'll try to answer your question from a data and AI perspective. Like I mentioned before, we saw a lot of first-generation AI models that were developed purely on public domain data; chest X-ray triage, I think, was a classic example of that. Everyone had a chest X-ray algo, and mammo, I think, was pretty close in terms of homogeneity. So I think investors will begin to look at the training data underlying a model as a potential source of differentiation, in addition to all the usual things: clinical utility, competitive landscape, et cetera. To me, if my model is trained on a longitudinal data set that is substantial, unique and difficult to obtain or reproduce, then I would argue that is a competitive moat. I'll give you one example of a space where I worked with some collaborators: wound care. These are solutions that use a smartphone or some type of optical device to take a picture of a wound, a diabetic foot ulcer or whatever, and make some type of assessment, a fairly typical use case. We saw a lot of these types of models in the marketplace, and a lot of them, again, were trained either on (a) public domain data or (b) data that was obtained in a specific patient encounter and then annotated by a clinician at that moment in time. What we didn't see a lot of, and what attracted me to one company I looked at, was the fact that they were working with a longitudinal wound care database that was developed over the course of many years by a PI with a specific focus on studying the evolution of wounds and long-term outcomes. Being able to have a data set that matched the images at a certain point in time to outcomes downstream made a model that performed better in its intended use, which was to forecast whether a wound would develop into a foot ulcer that had to be amputated, for example. So to me, these are factors that I would argue, or have observed, influence valuation.
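As a sketch of why such a longitudinal registry is valuable from a modelling point of view: the follow-up data is what turns baseline images into supervised labels about downstream outcomes. The table names, columns and 180-day horizon below are assumptions about what a registry like this might look like, not the actual schema of any company David mentions.

```python
# Turn a hypothetical longitudinal registry into supervised labels: did an
# amputation occur within a follow-up horizon of each baseline wound image?
import pandas as pd

def build_outcome_labels(images: pd.DataFrame, outcomes: pd.DataFrame,
                         horizon_days: int = 180) -> pd.DataFrame:
    """images: patient_id, image_id, visit_date (datetime).
    outcomes: patient_id, amputation_date (datetime, NaT if no event)."""
    df = images.merge(outcomes, on="patient_id", how="left")
    days_to_event = (df["amputation_date"] - df["visit_date"]).dt.days
    df["label"] = days_to_event.between(0, horizon_days).astype(int)  # 1 = event within horizon
    return df[["image_id", "label"]]
```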
Angela Paterson 37:26
For me, it's all about a clear strategy. If you're looking for funding for a clinical trial, you need to know what that trial looks like. You need to have done a pre-sub (Q-sub) with FDA if you're going to the US: have them look at your study design, even just the study synopsis. How many patients? What follow-up are you going to have? What's the demographic of those patients? And have that letter back from FDA saying, yes, we agree. The same goes for Japan: I would never dream of doing a clinical trial in Japan without having a consultation with PMDA. Have them look at your study, tell you what they think, tell you if you need more patients, tell you if you need a different demographic, maybe even different ages or different weights of patients; it could be anything. Japan is a particularly interesting case because of its very aged population: you see patients being treated in Japan who would be denied treatment in most of Europe. So coming from Europe or the US, you might be thinking, I'm only going to design my study for patients up to 75 years old. If you go to Japan with that, they'll say, well, what about our people? Our people need the surgery when they're 90, and they still get that surgery when they're 90. That's the reason to go talk to PMDA and make sure you've done that. Having done that research, having that strategy, having those communications with the regulators, that's really what you need to show investors that you are serious, that you have done your homework, and that you actually know how much your study will cost. Because if you've just designed the study by yourself and haven't spoken to anyone, you haven't validated what you're doing, and therefore they're unlikely to fund what you're doing.
David Vu 39:44
And that's why they need great advisors.
Angela Paterson 39:47
Yes, absolutely.
Edwin Lindsay 39:49
And I think that's time up, but thank you very much. Thank you.
David Vu 39:54
Thanks for hanging with us.
Edwin Lindsay 0:05
Ed, thank you very much. And I'm Edwin Lindsay from CS Life Sciences. We are a quality regulatory clinical consultancy, and we're going to talk about borderless data in clinical trials. Angela, who's one of the panel, introduced herself.
Angela Paterson 0:22
I'm Angela Paterson. I also work for CS life sciences, and I look after all of our clinical trials, clinical data, post market data collection globally.
David Vu 0:36
Okay? Ed Wade and Angela, hi everyone. My name is David Wu. I'm the director for the biomedical data hub hosted by a star Singapore. Very much appreciate the chance to be here with LSI Asia for the inaugural symposium. And thank you very much to CS Life Sciences for sponsoring as well. Thank you.
Edwin Lindsay 0:59
So we're going to talk about borderless data. And when we're collecting clinical data as it and the current climate, people are wanting more and more data, for regulatory approvals, for evidence, and not just for pre but post as well. Like Europe, you have ne collect a lot of clinical data, but people are always asking, can we collect it in different countries? Is all data equal where you've got data, in Asia, Japan, UK, US, but it comes across that sometimes you all need the same data, or it's all different data. Where do you see in what way is the best way for starting to collect data that you can use everybody's data?
David Vu 1:41
Yeah, yeah, maybe I'll start to jump in with that. So from the, you know, from the Singapore ecosystem perspective, from the Singapore government perspective, we see data in two broad categories, right? And it really depends on who's generating the data. First of all, you have data coming out of our healthcare systems. You know, this is typically what you think about in terms of medical records data, it's patient identified and it's very high volume data. The other category is research data, right? These is typically smaller data sets cohorts. They're de identified most typically, and they're consented and curated as well. Each one of these types of data have different uses to them, in terms of evidence generation, sometimes you need the larger, high volume data sets, and you are willing to or able to sacrifice from a quality perspective, right? On the other hand, sometimes you need consented data, sometimes you need very highly curated data, right? And so that's why it's important to understand the type of data that you need, because there will be different sort of regulatory and governance considerations that you need to take into account in order to access the data. And then, of course, Edwin, you alluded to data coming from different countries and regions, and in terms of validation, that is also very much a consideration, especially now that we're getting into areas like AI, for example, using data sets to do trial selection, either for devices or therapeutics, data that was used to train a model, for instance, in in San Francisco, may or may not be applicable to clinical practice in Singapore, Vietnam, in other Asian markets. And so minimally, it's important, and regulators will acquire revalidation of the models, if not outright retraining of data, of models.
Angela Paterson 4:08
I think the biggest thing in clinical data is we ask about one size doesn't fit all. I always say that all data is not created equal, and the reason for that is in the US. When you go to fda, if you go with European clinical data, they say, Where's your American data? If you go to Japan with American data, European data, they say, Where's the Japanese data. So there are countries where you could have done a 200 patient study in Europe, and it's not going to get you through FDA, and it's definitely not going to get you through the PMDA in Japan. There are ways around that in what we call global studies, which are slightly more expensive because you're talking to different agencies in different countries to get regulatory clearance for your study. But you can talk to FDA, you can. PMDA, and you can work out a patient cohort, which can be majority us and a minority in Japan. Or you can even have a third arm, which can be a European arm. So you can design your study in such a way that you can make the data one size fits all. But when it comes to startups, they tend to have very limited budget, therefore they need to pick what they can do to get the most likely regulatory clearance first, and that is nine times out of 10 US data, because that US data can then be used going forward to get you clearance in Europe. So you there are ways to use data like that. In a post market perspective, it's it's difficult collecting data because a lot of people are not interested in filling in forms and questionnaires and providing information about their patients. And what you tend to find is that countries like Singapore, Thailand, Malaysia, are really good at collecting clinical data, because the clinicians want to be able to write papers and get on podiums at international conferences and talk about their practice, and if you can join up with Someone, a key opinion leader in a big Japanese Singapore hospital or something like that, they will publish about your product and that is good quality data. There is a hierarchy of clinical data, where data coming out of countries which is really good quality but is not internationally accepted, especially in the US, and that would be places like Thailand turkey. You need to work really hard to get FDA and US clinicians to accept that data. So it's a it's a very difficult arena, and if you have all the money in the world, then a global trial is the way to do it. However, most startups need to start somewhere, and it's usually us studies.
Edwin Lindsay 7:31
So following on from that, and you're talking about Global Studies and the collection of data, David, you mentioned clinical flow, clinical processes, you throw in demographics. There's different ways of working in different countries, with devices, diagnostics, where do you see the clinical trials been set up with that? How do you get the best out of these trials? If you setting up a global study, and you have a lot of finance, very easy, you can start to talk, but when you start up, or you're just trying to collect enough data to get your product to market. How, what kind of strategies do you look at with regards to governments and understanding, what data can I hopefully use across a number of countries, but maybe not all, but especially as I say, process flows, clinical flows, different ways of working, and then obviously demographics, because everybody's not this. It's not one size fits all for when it comes to demographics at times. And you've got the you've got different devices, you've got ones that it doesn't really affect with demographics, but then you've got wearables with skin color, size, etc, as well. So how do you find the best strategy for managing that?
David Vu 8:39
Yeah, yeah. Edwin, it's a great question. I'll do my best to answer that question from Singapore perspective, and I'd love to get both of your perspective on how other markets are tackling this issue. Certainly here in Singapore, we see and recognize the value of biomedical data, both for research as well as for innovation, right? And so and we also have reached a point or maturity in our ecosystem where, you know, we are very good at generating data, right? So our healthcare, IT stack, is becoming more unified. We have a national electronic medical record, single patient identifier, across the patient population. Fairly soon we'll be on Epic as our medical record system, from from at least a nationwide perspective, in our public health care system, which serves the majority of Singaporeans. So from that perspective, I think we've we've reached sort of that level of maturity. Now we are making investments in platforms like the one that that I'm the director of the biomedical data hub. Our remit is really to make that data maximally available. Well, you know, for research and for for for innovation, right? And so there's a technical aspect to that, but there's also bioformatics and expertise aspect to that. So as innovators are looking to design trials to answer either R and D questions or research questions. Teams like mine can be of service, potentially, to collaborate, and you know, we're very interested in supporting our startup and SME, small, medium, enterprise ecosystem as well and so often, you know, we're able to provide grants or provide services, take royalty, future royalties and equity in lieu of payment, you know, immediate payment, for example. So some, those are some of the ways that I think we are working in our local ecosystem to make data more available, hopefully that, you know, kind of answer a question from the from the Singapore perspective, yeah, Angela,
Angela Paterson 11:18
I spoke about trial design before, and it really is down to trial design. The world in terms of where people live, is actually a small place. You go to the US, you can probably find someone groups of patients, cohorts of patients in health care systems from anywhere in the globe. So if you really work on design in that study where you have different cohorts from different countries, different nationalities, different races, cultures, creeds, then you are likely to be able to really achieve that global study, even within the US healthcare system. The difficulty there is Japan has its own unique way of doing things, and when you talk to Japanese clinicians, they say, but we do things differently here. So again, that's down to design of the study. Do you have Japanese trained surgeons currently working in the US? Can they be involved in your study? And it may take make your study take slightly longer. Recruitment might take slightly longer. The design of the study might take longer. However, you end up with a truly global study that you could have run in one country. And the other thing, sorry, the other thing there is European Studies, historically, there was a real fear about asking questions about race. It was a completely unacceptable question to ask, and we're having to push forward and say no, you must ask because otherwise your data only works for Europeans. So in the US, it's always been the case. There's boxes to tick for race in Europe, up until maybe 10 years ago, nobody dared to ask that question. So that is something else that in Europe, we have had to learn to ask those questions.
David Vu 13:26
Yeah, I think it's a really great point Angela, and this is where I think it's, it's important to have good advice, right, either from my advisors, you know, like, like yourselves, or from PIs, you know, within the Singapore context, for example, because, you know, there's data that you need to collect in order to maximize the value of your data for the immediate, you know, study that you want to perform, but also to make sure that the data has value for for subsequent use and further interrogation and understanding and designing things like your IRB and your patient consent are, you know, kind of equally important to making sure all the technical aspects are in place. And you know, one thing we're seeing, one trend we're seeing, you know, just to kind of dovetail on another point you made, Angela, is this, this idea of of of multinational or multilateral studies, right where investigators are now looking to combine cohort data from, you know, multiple countries, these global studies, you know, in order to maximize the value, and you know, there, we're seeing challenges around things like data sovereignty, right, and interoperability of data across, you know, a national borders. Area, you may have various types of statutory requirements or regulatory requirements when it comes to treatment of data. I'd love to get some of your perspectives, you know, on on how researchers are are handling this. We've kind of taken sort of a technical we've tried to try to throw technology at this, you know, things like privacy, preserving data, data technology, synthetic data, federated learning. But I'd love to see, you know, kind of hear about, you know, how some of your collaborators are working on this problem.
Angela Paterson 15:39
So I think the biggest thing is when you need to select the right principal investigator, chief investigator, one that can actually communicate with other people in other countries, different clinicians in different countries, and get them talking. You know, they become like almost an advisory board, because what you have is all data is not equal, but also things are not measured the same in different countries, or different operating techniques, different facilities available, there's different measures of things. You know in one country, an MRI scan might be what they do every day. In another country, they don't do that. They do CT. And it's about what imaging can everybody agree on? Because you can't compare apples and bananas in a study, you need to everything has to be the same. So, yeah, you've got all these different patients, but they all need to be treated the same. They all need to have the same treatment pathway. They need to have the same imaging. Otherwise you don't have anything to compare. So there's the real it's in the planning, it's in the design, and it's also talking to the regulators, because you could have designed what you think is the best study, and if you haven't been to FDA, PMDA to discuss the design of your study and have them agree that it's a good design or have their input, then you get to the point where you submit, thinking you've got wonderful clinical data, and somebody At one of these agencies says, Sorry, no good. And that's a disaster, especially for the startups, because the money has already been spent, and trying to find an investor who's going to fund you again when you made a mistake is not easy.
Edwin Lindsay 17:34
So following on from that post market when you need clinical data in the marketplace to keep the product on the marketplace. For claims, if the standard of patient care is different in different countries, you can control it in a clinical study. For approvals, how, what? How do people who do you think people should look at it to collect that data, to keep the product on the marketplace, to collect that data. If there's somebody uses a C some countries use CT, the other one that uses MRI. How do you manage that to be able to convince the regulators that the data is still relevant? Because the if the follow back through the clinical trial, you only use the MRI, but why are they now using CT? So you another example is a lot of diagnostics that I've worked with, we've the clinical process flow is completely different from the US to Europe to the UK. They've got different ways of collecting samples. In the US, you've got small units where people go along, give the sample and it's tested there, and the UK or Europe, it's like shipped for an hour, it's then processed. So you've got these different ways of working. Clinical Trial you can control it's fairly you can control it. But when you start to click clinical data in the normal world, how do you what advice would you give?
David Vu 18:52
Yeah, I think that thing for this one, Edwin, very critical to incorporate any types of post market surveillance requirements into the product design up front, right and then making sure that you have the infrastructure and the procedures, the SOPs to monitor the telemetry or the data coming in from the post market use of the data and trigger You know the appropriate responses, and then looping that in, and I'm sure you guys could tell me better than I can, but looping that into, you know, complaint handling procedures, etc. And I think regulators definitely going to be looking at things like AI, model, performance, post market. You know, very carefully. You know, as as these solutions, especially generative ones, begin to enter into we haven't had an LLM clear yet, but as they enter into clinical practice, you know. Certainly it's going to be top of mind. Yeah, I hope, I, you know, I hope that answered your question
Angela Paterson 20:07
in the in the IBD space, it's easier, because you can conduct validation and verification of your sites. You know, you can show that the results, whether it's an instant test, tested on the site within minutes, or you need to send the sample for a couple of hours. You can do some work to show that the results are equivalent, slightly different with an implant. However, there are differences. There's economics. Globally, some countries do not have the same access to imaging as they do in the US and Europe. That doesn't mean these patients don't get treated. It just means that the imaging will be slightly different. And all it means is people at the companies, the clinical people that are working on the post market data, you need to, need to be get to the point where you can become a little bit creative. How do I compare these two completely different data sets? What is a positive outcome look like in CT? What does that outcome look like in MRI? Nothing's impossible. It's just like it's just more difficult. People measure all sorts of things in different units, different ways and but there's always a way to sort of normalize that data and bring it together so that you can actually compare one with the other without them being the same.
Edwin Lindsay 21:41
Okay. Thank you. And following on from that, now, there's a lot of devices been designed and developed with bank data, data, it's already that you've got in the hospital. With regards to that, how do we know that the data that you're putting in, and it could go to the buzzword everybody's got is the use of AI, and trying to speed up that access to data, and analyzing that data, trying to validate your device, number of AI type devices where they've got 1000s of samples there, the software is analyzing it. But how easy is it to get to that data? How easy is it from a retrospective point of view, from taking, from helping develop products, when you've got the data that's there for analyzing it and comparing it, you
David Vu 22:31
want to start out this, or,
Angela Paterson 22:33
I think things like, it's sometimes quite difficult to get data out of hospital organizations. And the things that's what that's what you specialize in, the data becomes locked in quite often, and there are not many people out there facilitating, extracting that information from the hospital systems, especially in a lot of Europe, it's public health information. You know, there's GDPR protection for everyone. So it can become very difficult to get the data out. The easiest way that I find to get the data is through the clinicians, you know, go speak to key opinion leaders, senior clinicians, even some really young, fresh surgeons that are want to write papers and get on podiums and talk you go talk to them, and you say, what data do you have in your hospital? And they are usually more equipped to get that information out of the hospital system and publish on it than a device company ever would be, and that's where the relationships between the device companies and the clinicians are absolutely pivotal in that post market phase. Because if you don't have those relationships, the first time you really hear about your device in use is when somebody complains, and that's a whole different problem. So I know that your whole business is around getting this data, isn't
David Vu 24:06
it? Yeah, yeah, it is Angela and but I fully agree with you. You know, it's a lot of data flows, at least historically, has been through relationships. I mean, if I could talk about the evolution, maybe from a product development standpoint, you know, set aside post market separately, because there, there do need to be controls per post market. But from a development standpoint, I, my experience, especially with AI, has been, you know, the first generation of AI, Al goes models were developed using public, publicly available data, right? NIH and others made massive imaging data sets available, chest X rays, mammos, things like that. So the first generation models were kind of based on that. The problem with that approach is. That is, everyone was trading off of essentially the same data, right? And so there was very little differentiation, right? Then there was kind of a second wave concurrently, right, that that used data following relationships, just like you talked about, you know, I worked on a project in my last role with a star Singapore, which involved a key opinion leader in a specific clinical domain, I better not say, because Singapore is a very small place, but, but this, this, this, this, K, L, yeah, these guys know who I'm talking this Kol, head of department, was able to use his international standing to pull data from Singapore, of course, from Europe, from Latin America, and they developed this amazing International data set, which they were able to train models on. And so that was an extremely powerful relationship. I don't think that would work nowadays, because they were literally emailing these data sets around, so say, but yeah, our last panel would have vomited all over that. But, and now there's this third generation right where, okay, we recognize the need to access data, but we also recognize the need to respect patient consent, governance, etc, etc, and, and that's where, you know, I think you see governments at least, you know, Singapore's government, and we modeled our system after the UK HDR, UK and UK Biobank, and with the recognition that, hey, innovation in AI, drug discovery, etc, etc, are going to require access to high quality data, but in a carefully governed and controlled way. And that's kind of the third generation.
Edwin Lindsay 27:15
So following on from that, again, from from that, when you said quality data, with the data that's there some, maybe not in Singapore, but in the UK. A lot of it still paper based, and you're gathering how, how do we validate, or how do we make sure that the date the data is correct going in? Because I've worked with a number of companies in the NHS across Europe when we've been collecting clinical data, and it's, they've got researchers and putting the data. And how do we know that that data is right? Because we're hoping it's right, but you're not allowed to go and validate that.
David Vu 27:49
Well, it's not just Yeah, it's great. It's not just researchers. You know, I read a study in the US that 60% of of of data in electronic medical records is wrong, right or did not reflect the actual clinical encounter, and it's because, and maybe this is a US specific system is medical record systems are first and foremost revenue cycle systems. They're billing. They need to support the billing and reimbursement process, and so to rely on on medical records data to train AI models or to do validation of diagnostics may not necessarily be you know, the best approach. It gives you volume for sure, but quality is an issue, and so this is where I think, and dovetailing off of Angela's point is having a strong clinical investigator embedded into the data curation process is, is critical, because that person is going to have the domain knowledge to, you know, look at the data and say, You know that, you know that there's, there's some quality issues here, or this, this, this result is way off the expected. We need to dig into it. You know, we've seen cases where, you know, there would be a certain, you know, over a longitudinal period of time, over a multi year period, you see, you would see a sudden spike right in, in like glucose readings and things like that. And it took a knowledgeable clinician investigator to go in and say, well, it was because we changed how we measured, you know, in this month, in this year, right? And and that's where, you know, if we did not have that, that insight, we would just be looking at that anomaly and just scratching our heads when. Wondering
Edwin Lindsay 30:00
why? Yeah, because I think one of the things going forward is the regulators are worried about using AI and the data, and they're starting to scrutinize that a lot more like when you because they'll ask about, how was it carried out? How was it validated? What information? Where did you get the information from, especially if it's been retrospective analysis for a new device, I think that's going to be come more and more to the forefront of where you're going forward. How do we show it's good data, and how do we ensure that we can convince the regulators is good data?
David Vu 30:33
Yeah, that's a great question. I mean,
Edwin Lindsay 30:36
how we think,
David Vu 30:38
I think that, you know, is it enough to say, we had this team that validated and scrubbed the data and curated it according to these standards? Or is there another way, you know,
Edwin Lindsay 30:51
I think it's a learning curve, and I think we're going to keep going. One of the things I always say with any type of device that uses software, or that uses clinical data collected retrospectively, is to make sure they're always talking to the regulator on an ongoing basis, especially the FDA. It's a bit harder in Europe; in Singapore you can still talk to them, in Japan you can talk to them, but talk to the regulators and keep going. Some of the companies we've seen just go in with the data and say, there you go, and it's rejected; they've got something like a 35-patient efficacy report that, to me, doesn't make sense. So one of the key things we tell most startups is: people can be very wary of these regulators, but if they don't talk to them, they're not going to go very far. With the way things are moving with technology, and working with people like yourselves, I think it has to be a kind of partnership, and the regulators are a major part of that.
Angela Paterson 31:45
And I think what you're saying, David, about the models and the outliers: so long as the models are able to spot the outliers the way that a human would. If you get a bunch of patient data, there are always spurious results that you don't understand. That's okay if it's a clinical trial, because if you're the clinical monitor you can go and pick up the patient records and see what it should have been; you can pick up the actual scans and results, look at them, figure out what the value should have been, and you can correct it during the study. But when it's retrospective, you're relying on the hospital staff having filled in that information. When humans are looking at it, you can say, okay, that value for that patient seems a little bit different to what I would anticipate, and sometimes you can investigate it, but often you can't. So it really depends on whether the AI is able to make that distinction: is this an outlier, is this not an outlier? And that, again, comes with the training, doesn't it?
David Vu 32:50
Yeah, I agree.
Edwin Lindsay 32:53
Okay, we're coming close to the end. Just going forward, to give startups some advice from your experience: what would you tell them about what investors are going to be looking for from a clinical point of view, from your clinical trial plan? Obviously they're looking for a return, and they're betting that your clinical trial is going to be successful. From your experience, both of you, what gives a selling point to investors? We've heard lots of presentations over the last two days, all about, please show me the money. But how do you convince them to give you the money when you've got a massive clinical trial ahead and your product may or may not work? Are there any tips that could help startups show that they are ready for that?
David Vu 33:51
Yeah, you know, aside from the usual things, the product has to work, it has to be differentiated, all of those things, let me try to answer your question from a data perspective, and specifically from an AI perspective. Like I mentioned before, we saw a lot of first generation AI models that were developed just on public domain data; chest X-ray triage was a classic example of that. Everyone had a chest X-ray algo, and mammo was pretty close in terms of homogeneity. So I think investors will begin to look at the training data underlying a model as a potential source of differentiation, in addition to all the usual things: clinical utility, competitive landscape, et cetera. To me, if my model is trained on a longitudinal data set that is substantial, unique, and difficult to obtain or reproduce, I would argue that is a competitive moat. I'll give you one example from a space where I worked with some collaborators: wound care. These are solutions that use a smartphone or some type of optical device to take a picture of a wound, a diabetic foot ulcer or whatever, and make some type of assessment; a fairly typical use case. We saw a lot of these models in the marketplace, and a lot of them, again, were trained either on public domain data or on data obtained in a specific patient encounter and then annotated by a clinician at that moment in time. What we didn't see a lot of, and what attracted me to one company I looked at, was that they were working with a longitudinal wound care database developed over the course of many years by a PI with a specific focus on studying the evolution of wounds and long-term outcomes. Being able to have a data set that matched the images at a certain point in time to outcomes downstream made a model that performed better in its intended use, which was to forecast whether a wound would develop into a foot ulcer that had to be amputated, for example. So to me, these are factors that I would argue, or that I have observed, influence valuation.
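As a rough illustration of what pairing images at one point in time with downstream outcomes can look like, here is a minimal sketch. The record fields, the 180-day horizon and the labelling rule are hypothetical assumptions for illustration only, not the company's actual schema or method.

```python
# Minimal sketch: turn longitudinal wound records into (image, future-outcome)
# training pairs, rather than single-visit snapshots. Field names, dates and
# the labelling rule are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional, Tuple

@dataclass
class WoundVisit:
    patient_id: str
    visit_date: date
    image_path: str                         # photo captured at this visit
    amputation_date: Optional[date] = None  # downstream outcome, if it ever occurred

def make_training_pairs(visits: List[WoundVisit], horizon_days: int = 180) -> List[Tuple[str, int]]:
    """Label each visit image by whether amputation occurred within the horizon after it."""
    pairs = []
    for v in visits:
        if v.amputation_date is None:
            label = 0
        else:
            label = int((v.amputation_date - v.visit_date).days <= horizon_days)
        pairs.append((v.image_path, label))
    return pairs

# Usage: two visits for the same hypothetical patient; only the later visit
# falls within 180 days of the eventual amputation.
visits = [
    WoundVisit("P001", date(2022, 1, 10), "p001_visit1.jpg", amputation_date=date(2022, 9, 1)),
    WoundVisit("P001", date(2022, 6, 20), "p001_visit2.jpg", amputation_date=date(2022, 9, 1)),
]
print(make_training_pairs(visits))  # [('p001_visit1.jpg', 0), ('p001_visit2.jpg', 1)]
```

The point of the sketch is the join between an image at one timepoint and an outcome recorded much later, which is exactly what a single-encounter, annotate-on-the-spot data set cannot provide.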
Angela Paterson 37:26
For me, it's all about a clear strategy. If you're looking for funding for a clinical trial, you need to know what that trial looks like. You need to have done a pre-sub or Q-sub with FDA if you're going to the US: have them look at your study design, even just the study synopsis, how many patients, what follow-up you're going to have, what the demographic of those patients is, and have that letter back from FDA saying, yes, we agree. The same goes for Japan. I would never dream of doing a clinical trial in Japan without having a consultation with PMDA. Have them look at your study, tell you what they think, tell you if you need more patients, tell you if you need a different demographic; it could be different ages, different weights of patients, it could be anything. Japan is a particularly interesting case because of its very aged population: you see patients being treated in Japan who would be denied treatment in most of Europe. Coming from Europe or the US, you're perhaps thinking, I'm only going to design my study for patients up to 75 years old. If you go to Japan with that, they'll say, well, what about our people? Our people need the surgery when they're 90, and they still get that surgery when they're 90. That's the reason to go and talk to PMDA and make sure you've done that. Having done that research, having that strategy, having those communications with the regulators, that's really what you need to show investors: that you are serious, that you have actually done your homework, and that you actually know how much your study will cost. Because if you've just designed the study by yourself and haven't spoken to anyone, you haven't validated what you're doing, and therefore they're unlikely to fund what you're doing.
David Vu 39:44
And that's why they need great advisors.
Angela Paterson 39:47
Yes, absolutely,
Edwin Lindsay 39:49
I think that's our time up. Thank you very much. Thank you.
David Vu 39:54
Thanks for hanging with us.