Simon Turner 0:05
So for those of you who were here five minutes ago, when we were talking about this flywheel effect and recycling content and things like that, well, here I am again, so happy to be here. So everyone, this is a critical topic that we're starting to see evolve right now in the entire healthcare ecosystem. You've heard about big things happening across the entire AI use case space: everything from the implementation of quality management systems, the evolution of CE and regulatory guidelines, the implementation of the AI Act, which is currently ongoing, and how all of these things are impacting us across the healthcare ecosystem, healthcare of course being one of the high-risk categories. So it's fundamental that we know about this, that we've considered it, and that we see how to apply it in reality. But maybe let's take a step back and go into this entire space: what is digital medicine? The way we've been thinking about digital medicine as a scope and a field is that it's where data and computational techniques come together to generate novel insights across the healthcare continuum. So when we think about it, we imagine the patient really being at the center of this, in the precision medicine space. But then you can go upstream into what we call enabling technologies: the fundamental infrastructure, the AI tools to discover new biology, the adoption and implementation of AI into the hospitals themselves. And then downstream of the patient is, of course, the treatment side: how are we able to change and modulate patient behavior, and therefore actually improve outcomes over time? So with that entire preface set, I'd love for our two panelists to introduce themselves. I think you'll find that Hélène is particularly focused on the latter half of that, and of course, Pall, you're much more on the actual QMS side of things. So please, Hélène.
Hélène Viatgé 1:41
So I'm Hélène Viatgé, co-founder and co-CEO of Agora Health. Agora Health is a venture studio dedicated to digital health, and we have a strong focus on medical devices. So not all of digital medicine as Simon just described it, but really a strong focus on digital medical devices, that is, regulated devices. We have two main streams: one being tools for physicians, which are often AI-based, used to stratify your patients, predict the trajectories of your patients, and so on; and then tools really dedicated to patients, so apps, digital therapeutics, and so on. The idea with Agora is really to go from all these ideas to tools that are actually used. We feel there is a big step missing between the development and R&D being performed and the usage in the field by physicians and patients, and we are trying to really fill the gap between innovation and adoption.
Pall Johannesson 2:50
Yeah, pleasure to be here. So I represent the clinical department, if you like, of Greenlight Guru. Greenlight Guru is a software company providing software solutions for medical device companies throughout the whole lifecycle of their device: QMS, clinical data capture, and what we call the Academy. I think the reason I'm here is that I just absolutely love data; I'm a data nerd. I've been in this space for a while: I founded a company called SMART-TRIAL, which was acquired last year by Greenlight Guru, and which was that data capture component. I've been working with medtech entrepreneurs and medical device companies, seeing the different waves, if you like, of digital health and eHealth and all these terms we've seen along the way, blockchain and now AI. And this is probably the most intriguing one from my perspective in terms of applicability. So I'm excited about our discussion. All right, very cool.
Simon Turner 3:49
So we're going to treat this as a conversation. If anyone in the audience has questions, please do throw your two cents in, because that's what we like here. So maybe let's reposition things a little bit. When we think about investing in technologies, there are three fundamental ways of doing it. One is to invest in companies taking an existing technology through a new channel. Netflix is a perfect example of this: it's the same existing content we've known for years, decades even, and suddenly a new approach, so you actually have it on demand. What we've classically done in biotech and medtech is new technology with an existing channel: we'll develop totally novel therapeutic agents or approaches, we'll develop new medical devices, and then traditionally we'll plug into the sales and distribution channels that are well known to us, medical device companies and pharma, for example. Where we're at now, in this digital medicine space, we actually see that there is both a new technology and a new channel risk that potentially plays itself out. So if I break this down, where I see it is: Hélène, you're working on that new channel component, and of course you still need to build a new technology, or somebody does, and that's where you are providing the services to be able to really do that. So when we think about that space, what are the critical pieces we need to start thinking about in terms of regulations, safety, and explainability? Maybe let's start with the data capture, the data collection. Pall, over to you: what's fundamental there?
Pall Johannesson 5:15
It follows the principle of garbage in, garbage out, right? And I think that's where everybody's mind starts. If you want to build continuous learning algorithms, you need to ensure that the data you're trying to teach on is accurate and meets a certain quality standard, if you like. That's probably the first issue, and I'm not even sure people can solve it individually; I think Hélène can probably pitch in on that one too. But then, in terms of the regulation, and this is interesting, the more I thought about it, the more I've come to the conclusion that regulations are by nature always reactive to technology, right? So we are not in a position, from a regulatory perspective, where you could say that this can be approved in that setting or this setting, because quite frankly that just doesn't exist. But I do think that the better the input, the better the output, is the foundation of actually collecting the data. We were discussing this on a quick call last Friday, and we even talked about how people are so excited about technology, but a lot of us are still using paper, right? We're still on analog methods, and we're talking about AI. There's a misalignment between what we want to do, what it is exciting to do, and what we actually have at our disposal.
Simon Turner 6:36
So fundamentally, we've already got this massive issue of: do we get anything but garbage to even build models on?
Hélène Viatgé 6:44
Yeah. And on data capture there are lots of points to address, one being that physicians actually capture the data during clinical time, but in different tools than the ones that will be used for research and development. So in most countries you will find physicians really re-duplicating the data they have collected in the clinical pathway for the research and development of new algorithms. In terms of regulatory, I think everyone is progressing in the right direction. There are lots and lots of new warehouses for health data being set up in France and in Europe; at the European level there is the European Health Data Space. So the infrastructure is starting to be in place to allow much more adaptive and much quicker access to the data. But you will still find so many physicians developing AI-based medical devices on data sets that are not always compliant in terms of patient consent. A few months ago I had a cardiologist come to me with a nice algorithm, really good, and a great data set, a strong one. And when I dived into the contracts in place for the data collection and the training data set: the data had been collected by the research team, used by the research team, with no contract in place. You won't go far with that; you can't go to the FDA or EMA with something like this. So there is still a lot of education and infrastructure to put in place to allow for simple data capture.
Simon Turner 8:30
So even if you've generated the data sets, we need to make sure they can be leveraged in the context you want to use them in; it's not Research Use Only suddenly translated into this. And I guess this is something that's pretty different in digital medicine. In the past, let's say when you were developing hardware, or you had some anecdotal evidence for maybe a new biological pathway, you didn't necessarily need that patient data or patient consent piece of the puzzle. Now, suddenly, you do. So you have to integrate that already into the first phases of building a product, not even necessarily thinking about the next steps of it.
Hélène Viatgé 9:02
But it takes time.
Simon Turner 9:03
Yeah, huge amounts. But what about the next step of that? So you've built your AI; now you need to validate it. What does that look like, and where are we seeing the hurdles? Because we are dealing with people on the one hand, and patient data on the other. What we found interesting in digital medicine is that it sits between medicine on the one hand, which we know traditionally, and technology on the other, which has more of a run-fast-and-keep-iterating approach. How do we think about how this now needs to fit in, from a quality management and compliance perspective, to be able to actually build stuff? To your point, Pall, regulation is very static. Where do we find the middle ground that works here?
Pall Johannesson 9:48
I think some of it has essentially been found in certain solution types, right? So if you take diagnostics: you can debate whether it's AI or whether it's just predictive analytics, if you're literally just using your model to, you know, infer something, but it doesn't necessarily learn over time. It's not continuously learning; it's not a continuous learning model.
Pall Johannesson 10:14
Exactly. You basically say: I will teach it up to the point where it has a certain sensitivity, where it detects, at a certain rate, for example, a tumor on an image. And then we'll basically make it static: we'll take that and lock it in place. Then we can go through the traditional validation routes that you would with medical device software, if you like, or software as a medical device, and then you can get it approved and put it on the market. The component that I haven't seen yet, and if anybody in the audience has seen it, please say so, is anything that's continuously learning. Right now that just contradicts what the regulation says. The regulation is: we know what happens. I've validated that this is how I made the device, this is how it's going to act, these are the known risks, this is the probability of failure, and it's acceptable to us as authorities to put this into humans, if you like. And on AI, and maybe we're overreacting on some of these things, it's still like a black box to a lot of people, right? They don't know what to expect, and there's no way of calculating risk other than saying the risk is a 100% probability of failure. It's similar to the regular software standards: that's why in the IEC 62304 standard there is always the last component that can't be software, because software is treated as having a 100% probability of failing. That's what the standard says; that's how we've regulated it. So if you take AI and say the probability of failing is 100%, you're never going to get it to patients, which is probably not what we want; we want to use technology to advance healthcare the best we can.
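The "teach it to a target sensitivity, then lock it in place" pattern Pall describes can be sketched roughly as follows. The acceptance threshold, the label convention, and the held-out results below are all invented for illustration; a real validation plan would specify its own criteria.

```python
# Illustrative sketch (hypothetical numbers): check a frozen classifier's
# sensitivity on a held-out validation set, and only "lock" that version
# for release if it meets the pre-specified acceptance criterion.
TARGET_SENSITIVITY = 0.92  # assumed acceptance criterion from a validation plan

def sensitivity(predictions, labels):
    """True-positive rate: of all real positives, how many did the model flag?"""
    true_pos = sum(1 for p, l in zip(predictions, labels) if p == 1 and l == 1)
    actual_pos = sum(labels)
    return true_pos / actual_pos

def validate_and_lock(model_version, predictions, labels):
    """Freeze this model version only if it meets the target on held-out data."""
    sens = sensitivity(predictions, labels)
    return {"version": model_version, "sensitivity": sens,
            "locked": sens >= TARGET_SENSITIVITY}

# Hypothetical held-out labels (1 = tumor present) and model output:
labels      = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
predictions = [1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0]
release = validate_and_lock("v1.0", predictions, labels)
# Here sensitivity is 7/8 = 0.875, below the target, so the version
# is not locked for release and training would continue.
```

Once a version is locked, its weights never change in the field, which is exactly why the traditional validation route applies and why a continuously learning model does not fit it.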
Hélène Viatgé 12:04
And in terms of iteration as well: you certify your medical device, and as you've mentioned, it's often done like this. You've trained it, and then you freeze it, you freeze version zero; your model won't move.
Simon Turner 12:17
Now you know the performance. This is the performance of the model.
Hélène Viatgé 12:21
It's validated, and you get your CE mark or your FDA approval. And then if you want to make changes, minor or major, you have to go through all the usual notified body communication and processes for them to validate, and that takes time. That's fine, but the notified bodies are completely overwhelmed for MDR reasons and so on. So explaining to them that you want to update your model every two months is not something they are happy with. Yes, they can charge you and charge you, but they don't have time for that. So you basically fall into a situation where you submit amendments and changes to your AI model on a regular basis, and they cannot really follow. There is a big risk of them saying yes, yes, we validated it, without really having the expertise to do so. So I think it's also about dealing with the fact that with digital solutions we are used to upgrades to an app every week, sometimes every day, and that doesn't fit into the regulatory landscape.
Simon Turner 13:26
That's a really interesting point, actually. This is a topic we haven't discussed in the prep for the panel, so let's go completely off the board here. You're bringing up a really interesting point, which is that you need to start thinking about quality management and regulatory from a tactical perspective. How much do you play it? In fact, how much do you skirt the issue, while at the same time creating the competitive barriers of validation required to get your product to market on the one hand, but also getting enough, let's say, buy-in for people to reimburse it? So what are the nuances to think about when looking at it and saying: do I actually go for maybe a clinical decision support solution, versus a full-blown, let's say, class two or class three medical device? Are there things to play with here, or does it really just depend on the use case?
Hélène Viatgé 14:14
I think there are. It depends on what you put under your CE marking or FDA approval, on the level of detail that you give to the notified body in terms of what the algorithm does. If you've structured this correctly, in a clever way, you can probably make the algorithm evolve from time to time and explain or justify that it's a minor adjustment instead of a major one. The other point is the validation data set. Usually, if you have your AI regulated, you have a training data set and then an external validation data set, and the larger and more complete it is, the quicker you will be able to re-validate your iterations. So if you want to do new iterations and you have all the data sets available for you to reuse, that's perfect. If you then realize that you're missing some data in your external validation data set, then you're gonna...
Hélène Viatgé 15:16
Oh, sugar. You agree?
Pall Johannesson 15:19
Yeah, absolutely. I think it probably comes down to us figuring out the applicability of machine learning, or AI in general, in our solutions, right? You can design your solution so that you, not necessarily evade regulations, but slide yourself into maybe a little bit of a softer regulatory class. But you could also think about other applications for the technology in our healthcare system. One thing I personally think we're going to see, probably faster than we're going to see the latest technology in devices being used on humans, is the technology being used on our healthcare systems themselves: analyzing effective care pathways to figure out where the best outcomes can be found for certain patient phenotypes, essentially at a macro level rather than an individual level. That's where I would put my money if I was a betting man.
Simon Turner 16:28
Okay, gotcha. So you're thinking about it from a hospital workflow perspective: what optimization can you get there? So you're actually having a clinical benefit, but you're also having a health-economic benefit for the hospital, for the payer systems. The AI won't necessarily, though, directly impact the patients.
Pall Johannesson 16:47
Exactly. And it goes back to the fact that AI sounds so fancy, and I apologize, I'm not smart enough to even understand all of it myself. But there's been a lot of work done on regression modeling for healthcare systems, for example. An interesting project, probably more than 15 years ago now, that I was exposed to was analyzing whether you could predict patient load at a certain hospital center based on the geography of that center, the prevalence and incidence rates of diseases in the area, the weather, the time of year, and whatever other variables you could put into it, so that we could then plan our staffing for that hospital site. And that's not necessarily AI. If you then made it a continuous learning model, it would get smarter and smarter over time, but in its essence it's just a regression model: it's literally a set of standard coefficients, if you like, that you multiply to estimate how many patients you're going to see in a given department.
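The patient-load model Pall recalls is, as he says, just ordinary least squares. A minimal sketch might look like this; the feature choices (incidence rate, temperature, week of year) and every number below are invented for illustration, not from the project he describes.

```python
import numpy as np

# Hypothetical historical data: each row is one week at one hospital site.
# Features: local disease incidence rate, mean temperature (C), week of year.
X = np.array([
    [12.0,  3.1,  2],
    [15.5, -1.0,  5],
    [ 9.8, 14.2, 24],
    [11.2, 18.0, 30],
    [14.9,  0.5, 50],
    [13.1,  2.2, 51],
], dtype=float)
y = np.array([310, 355, 240, 255, 340, 330], dtype=float)  # patients seen

# Add an intercept column and fit ordinary least squares: this produces
# the "set of standard coefficients" Pall mentions.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_load(incidence, temperature, week):
    """Multiply the coefficients by next week's known variables."""
    return float(coef @ np.array([1.0, incidence, temperature, week]))

# Forecast staffing needs for a hypothetical upcoming week:
forecast = predict_load(incidence=13.0, temperature=1.5, week=49)
```

Making this "continuously learning" would simply mean refitting the coefficients as each new week of data arrives; the model family itself stays a plain regression.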
Simon Turner 17:55
And then you can have that very specific per hospital, I guess, or even per function, exactly. But that brings us nicely to another topic, and again, I'm a bit of a geek when it comes to these things, so I find this super interesting: it's also that static nature of a model we were talking about. So you've trained your AI, it's a class two medical device, let's say, fully validated, et cetera. But you've trained it, hypothetically, in Stockholm, on a patient population coming out of the Karolinska Institute, for example. And then suddenly you're going to deploy it in other countries, in totally different, heterogeneous populations. Do we need translation? Is it just okay, are we going to get the same performance? What are the other hurdles and challenges? I mean, we just had a session before on the impact on business models and such. What are your thoughts there?
Hélène Viatgé 18:40
It depends? Obviously, it depends. You have to justify why the population is similar and why you feel that your validation is valid, and if not, then you have to reproduce it. In my past business, in renal transplantation, we had validation data sets in the US but also in Europe, and we were intending to develop some in Asia, because these are always different populations. And you also have to justify that the care pathways aren't different. So it's not only about the regulatory situation or the demographics; it's also about the care pathway. If your patients are followed very frequently, or not frequently, what are the tests being done? Sometimes the data you need for your validation is actually not available in one country because they don't do a biopsy, for instance. So it's actually really tricky.
Simon Turner 19:35
So a lot of this ties back to the data collection: the care pathways might change, or the patient populations will be different, and hence the actual outcomes of the algorithms are totally different.
Hélène Viatgé 19:46
Yes. And as I think we've mentioned when we've spoken before, care pathways change. So if you've developed your data set and your algorithm on one way of working, and then three or five years later, whenever you're finally on the market, things have changed...
Simon Turner 20:03
The standard of care is something totally different.
Hélène Viatgé 20:05
New drugs have come along.
Pall Johannesson 20:09
I can't help making the comparison to all other devices, right? It's not that different when you put it in this perspective: you say, well, I did my clinical trials on this population in this area, and you want to go to another market, to take your device to Japan. They're going to look at you and say: why are you certain that there are no additional risks introduced by our care pathways, or our population demographics, and things like that? So it's actually not that different from regular devices when push comes to shove, if you like; you're going to get the same questions.
Simon Turner 20:49
From that static perspective, I guess you're totally right. But as soon as you say: actually, digital medicine has the power to iterate continuously, if we give it that ability, and yet we're saying we need to lock it in place, we need to maintain that static nature. That's a huge difference, at least in my mindset, between what could be the power on the one hand, and what is allowed on the other. But maybe on that subject, and you've raised a really interesting one: how much do you feel the healthcare system is aware of all of these issues being faced? I mean, we speak with physicians all the time when we're looking at interesting investments, and they say: yeah, it sounds great, we've tested it in our systems and it seems to be working. But fundamentally they know nothing about how these systems work. Do you think that's going to need to change in the future? Or is it still a bit like a therapeutic: as long as it works in the patients, you take it and do what it says on the pack?
Pall Johannesson 21:43
When you look at technology in healthcare just in general, it takes so much time for adoption. Like we said, we're still working on paper forms in a lot of healthcare settings; we talked about prescriptions in France, right, that are still being done on paper. So you have that component. And we are fundamentally risk-averse when it comes to healthcare, right? It doesn't take more than one failure, one patient dying, one serious adverse event, for everybody to throw their hands in the air and say: I'm not touching that thing again.
Hélène Viatgé 22:31
Or for Europe to change the regulation.
Simon Turner 22:35
We'd never do that... we did, right?
Pall Johannesson 22:39
So to me, we need to figure out what our risk appetite is when it comes to using the technology, and to what end. If you paint a picture of healthcare and technology and health tech in general: usually, when I was explaining my business back in the days when I was searching for investors, I drew a circle and said, well, you'll see that the population is ever aging, we're getting more chronic diseases, and the world is getting wealthier, which means we're going to pay for more healthcare, which means we're going to live longer and get more chronic diseases and pay for them. So it's a circle that keeps spinning and keeps growing; unless we do something else, the market is just going to continue to expand. And then, people want to live longer and be healthier; to what end can we use technology to do that? What's the willingness to pay for it, and are we willing to accept the risk? At which point do we accept the risk of, you know, leaving it to AI to decide which treatment I'm going to have for a cancer? Is it after I've tried all the others, and then I get to that point, and then it's acceptable for me as a patient? And again, how do we even regulate that? I know I've just taken us down a rabbit hole.
Simon Turner 23:57
But this, I think, is the power of this kind of space, because the reality is you can start framing it and shifting it in whichever way you want. You've opened up a really interesting topic there. For me, it's also that we're very reactive as a healthcare system today. When we start looking at what digital medicine can bring in the future, it's almost becoming proactive, or at least, let's say, participative, and eventually proactive, moving the entire needle away from just: okay, you've fallen ill, now we treat; toward: you can actually start taking part in it. Hélène, I'd love to get your perspective on this in terms of the requirement for trustworthiness, from the clinicians' perspective, I think, but also the patients' perspective on these types of solutions. Because, as you were saying, you've got clinical decision support solutions, you've got digital therapeutics, et cetera. Is that something that really enters into the game, or is it more just: as long as it has a regulatory stamp, is that enough or sufficient?
Hélène Viatgé 24:51
No, I think there is a big gap. The regulatory stamp is necessary, but it's not enough. And also, on your former question about the healthcare system: I think the healthcare systems have actually realized that there was significant work to be done on the regulatory part, and they've done it. They've worked on the regulation, they've worked on the infrastructure; there are still lots of improvements to be made, fine. Now they are looking into business models and how to actually get the industry to go there, and they are moving towards reimbursing apps as medical devices, and hopefully AI-based solutions for physicians as well. So I think they are trying to accelerate the field.
Simon Turner 25:37
So the payers are saying yes to this?
Hélène Viatgé 25:38
Payers, yes. And then obviously the last point is about adoption, and clinicians, currently, I think, are in between. They know there is value behind those solutions, they know they should be able to use them, but it's still way too complicated for them to use them seamlessly in their clinical practice. Really, interoperability is a nightmare, and then they have to duplicate work, and so on. So I think we are in this in-between situation where the IT systems in most clinical institutions are far behind the industry. As a result, clinicians are struggling, because they want to use these things. There is more and more clinical evidence available to prove that these solutions are efficient and will make patient care better; however, it's not really doable within their real work resources. So what they mostly do is use those tools outside any regulated landscape: they use in-house developed tools within their IT system only. They will never try to replicate this outside and make a business out of it, but they are using these tools, they see the outcomes, and I think they are more and more convinced about the efficiency of those tools. However, we need to really bridge the gap between the wonderful AI algorithms that are being developed and the paper-based clinical data that is actually being used in the care pathway.
Simon Turner 27:23
Okay, that's an interesting kind of stranglehold situation: you need to unlock the one to get the other. So maybe thinking about that from a practical perspective: how do you think about what's enough provision of trust, provision of regulatory insight, versus too much, so that we find this happy middle ground? And I'm thinking about this more from the perspective of a company that wants to build something here, rather than the regulator's perspective, which I think will always be much more risk-averse. How can these companies most efficiently navigate this regulatory quagmire, I guess, that we've created for ourselves? Are there any particular learnings you've had, where you'd say: I wish I'd known this 12 months ago, 24 months ago, and now I can apply it?
Hélène Viatgé 28:07
I'll start, because I've got one. For AI-based algorithms, at least, there is a really clever balance to strike between the input data and the output data. You can design the most brilliant algorithm, with great predictivity and sensitivity, but with 12 input data points that are actually not available in the current care pathway...
Simon Turner 28:36
So like patient data compliance as an issue, or...
Hélène Viatgé 28:39
You don't do these blood tests regularly enough, or when you do the blood tests, you don't collect this specific data. So yes, your algorithm is extremely good with this data, but will it be good without this data, with the data that is already in the care pathway? It's a bit like when you started by talking about innovation in the technology and innovation in the channel: I think there is a nice balance to strike, to say my algorithm is good enough and better than the standard of care with this data, and I don't want to reach for the sky, I'm just going to be better than what's already available. Because if I go and try to improve my algorithm more and more and more, the data that will be needed won't be available, and you will never be able to deploy your algorithm in current care. So the balance, really, and I wish I had known that before, is between inputs and outputs, and also about looking at the care pathway in each hospital, because it's never the same: despite national or international guidelines, you will find differences in care pathways in most hospitals. There are some standards; you need to rely on them for your algorithms. So look at that before trying to develop the best algorithm. I would say that this input data topic is quite important.
Pall Johannesson 30:12
Yeah, I would just add a little bit of an angle to that. If you want to go into building a health tech solution that has components of AI in it, I would say figure out the shortest way to, you know, the money, if you like; figure out the shortest way to revenue, to be honest. Because you can spend a lot of time, like you said, improving on algorithms, improving on integrations into different things, but when push comes to shove, nobody wants to pay for it if they're not willing to accept the risk you're introducing, or not willing to accept your business case from a payer's perspective, either for the patient or for the system. You can develop anything you like, but you're not going to survive as a business. So from that perspective, find that balance, like you said, as soon as you can, even with smaller things, and then grow from there. Incremental improvements over time are usually a safer bet than the big leap of: I'm going to completely change the way we do heart surgery, or the...
Simon Turner 31:21
2.0 revolution rather,
Simon Turner 31:23
Yeah. The thing I love about both of your models, and the way you've structured your companies, is that you're also providing this very important service layer, if you will, for everyone now developing novel applications. On your side, the data capture and the quality management system that goes with it; Hélène, that regulatory support layer of it. In the future, what do you think the space is going to do? Is each company going to keep building its own internal regulatory requirements and regulatory practices? Or is it actually going to be a bit more like we've seen in antivirus software, where each company focuses on building what it does best and then buys in a third-party service provider? So that ultimately you become an underlying platform that everyone else can build on top of.
Pall Johannesson 32:08
I might be completely wrong on this, but I do think that we're going to see more solutions that integrate together in different components of that pathway. I don't think we're going to have some fundamental thing that everybody needs to integrate into as such, but I think we will see the verticals expand, where somebody gets really good at one thing. From a healthcare system perspective, you'll find that even in a country like Denmark, with five million people and five different healthcare regions, they could only agree on three different electronic patient record systems, right? So they ended up trying to get it down to that, but now implementation is a nightmare, because guess what, they don't work in the same way across these five different regions. The systems don't match the care pathways, so they're struggling with implementation, instead of going for verticals and saying, well, you're going to need this component and this component and this component, and making sure that you force the vendors to work together for a better outcome. That would be my take on it. And I think it's going to be similar here: everybody wants to own the data right now, which I completely understand; there's a lot of value in owning the data set, and if you have a valuable data set, then ultimately somebody will want to get their hands on it to train on or to use in their business. But I think the more we open it up, if you like, the better it's going to be for us as a healthcare system. Whether there's a business in that, your guess is as good as mine.
Hélène Viatgé 33:49
I think one thing we may see: in the digital world, you had AOL a while ago, and then you had the explosion of all those technical bricks, Stripe and so on, a segmentation into just one task to be performed. And now, after all those years, we are seeing the aggregation again, in solutions like Notion and so on. I think in the digital health space, we are still at the stage where some companies think they can do everything on their own. And when you try to deploy an algorithm, you will be asked by physicians: can I have a brick for prescription? Can I have a brick for data collection? Can I do this and this and this? The physician will tell you, I won't be able to adopt your algorithm if I don't have all the bricks around it. But I think it would be a big mistake to try to do everything. I hope we will see, in the coming years, more and more digital health companies going specific, really targeting one brick, and then being much more into this API integration, stack mindset that we've seen in the digital field, because that's necessary for adoption. You won't find one solution that fits all for the physician world; they all need something specific. So you need to be prepared to see these technical bricks working together.
Simon Turner 35:25
And I guess all of that is built on standardized quality management and regulatory compliance. It's funny, one of the things we've seen a lot is a growing interest from the healthcare community in saying, look, I want this trustworthiness, I want this explainability. And what they're valuing more and more are companies that are able to show the metrics, "this is the functionality of my algorithm, of the products and services I'm offering," versus a competitor who, let's just throw it out there, says "here's the publication, don't worry about the rest, we know what we're doing." How are you thinking about that tactically now as you build Agora? Is it something you've built in and baked into the core offering? Or is it more of a "we will do this, but in the fullness of time"?
Hélène Viatgé 36:06
No, we want to go in with this mindset from the start. I'm just thinking, I had a conversation with an academic team around bulimia, and they were not clear about the outcome. What do they want to prove? They were like, it's clear, we need to do this, and this will help the patient. But what's the clinical impact? Why are you doing this? If you don't have that in mind, you will go nowhere. And it's not enough to say it will help the patient; you need to know what you want to prove, and then start collecting the clinical evidence for that. So really, we don't go into any business now without having this clear view: this is the organizational impact or clinical impact, this is what we want to achieve, and this is the roadmap for data collection to prove what we are doing.
Simon Turner 36:58
So I guess we're coming towards the end of our panel. For me, I've taken a couple of notes here, and I want to try to summarize. On the first aspect, data quality, the input: if you get that wrong, you're basically stuck from day one; I think we can all agree on that. This is why platforms such as yours, providing this quality management system, making sure that you understand the data heterogeneity, the data ownership, and the rights associated with it, are so critical. On the second, when you're actually validating that model, one of the problems we still face in this space is its very static nature: build it, validate it, and then you're stuck with the performance you've got until you go through the entire process again, if you choose to go down the regulatory path. If not, you can try to navigate it from a tactical approach, but then you might have different claims, or at least different costs and pricing models you can get to. And then the third, being a bit more pragmatic: you can't build everything; you need to be able to rely on third-party services to support you in these approaches. But ultimately, it all boils down to this: today we're still in the process of seeing AI development and adoption happen. We need to continue that regulatory journey of development, but also the trustworthiness component of being able to say, look, we can demonstrate trust, we can demonstrate the utility of these types of approaches. Is that a fair summary? Or did I miss any critical topics where you think, oh, we need to talk about that?
Pall Johannesson 38:23
No, I think you're spot on, exactly. It comes down to making sure that we understand the stakes that the different actors have, and the better we are at that, from the regulators to the founders, to the startups, to the healthcare systems, to the patients, the sooner we're going to find a mutual playground, if you like, and then start evolving from there and
Simon Turner 38:46
grow the standards into the future, I guess, for everything we see. Exactly. Excellent. Hélène, Pall, thanks very much for taking the time.
Pall Johannesson 38:53