Proprio | Gabriel Jones, CEO

Proprio is using multi-camera imaging to create digital representations of the surgical field through augmented reality and machine learning.
Speakers
Gabriel Jones
CEO, Proprio

(Transcription)

Gabriel Jones  0:03  

Okay, so I'm going to take a different tack with this presentation. I think you've seen a lot of pitches, and we're going to talk about the future of data, the definition of data, the future of surgery, augmented reality, computer vision. But this is really not a pitch; this is more of a call to action. So if this call to action speaks to you, I'd love to talk to you about what Proprio is doing. This is a big, big swing. Last night, Manny talked about taking risks and being rewarded for those risks, so that's the underlying theme here. And please come up afterwards if you'd like to talk about programming our vision.

First, who in the audience knows who Alanis Morissette is? Come on, don't be shy. Alanis Morissette is the person who ruined the word "ironic" for all of us, right? There are a lot of words that are misinterpreted and misunderstood. I would say "democratization" is an overused term, especially in business. "Disruption" is another word that has a real meaning and a real strategic value to it, but it's often misused. "Data" is very similar. Data is often referred to as the currency of products or understanding or knowledge. I think about it more like the fuel: there's bad fuel, there's dirty fuel, there's garbage-in, garbage-out bad data. And there's really good, clean, high-octane fuel: good data. And that's really what Proprio is all about. We're actually capturing some of the most valuable data in the world in the operating room. So that's a theme for you to keep in mind here as we think about those big bets.

In artificial intelligence and machine learning, we think about black boxes, right? There are multiple inputs that go into a process, like AI or machine learning, and an outcome emerges from the other side. It's not necessarily explainable what's actually happened in that process. This is referred to as explainability in AI: the internal behavior of the code is unknown.
I'm looking at my chief medical officer over here, Dr. Sam Browd, a practicing pediatric neurosurgeon. Twenty years in practice, and he's got all this knowledge that he takes with him into the operating room. All of these factors contribute to his performance every time he goes in, and the surgical outcome emerges. It's not necessarily explainable, all those factors that are playing into that outcome. We're on a mission to actually map all of those factors and quantify them, and we have some breakthrough technology that we think is going to help us do that. The difference here from AI and machine learning in general is that we're now talking about the practice of medicine. So the internal behavior of the code may be unknown, but we're measuring the impact in lives. That measurability is incredibly important.

So how do you shine a light into the black box of surgery? We've come a long way in medicine, right? From X-rays to neurosurgery to ECG, and all these technologies: microsurgery, microscopy, ultrasound, CT, MRI, PET scans. These are the dates of commercialization. The reason I leave this one up is because we're in the long tail of innovation from any of these technologies. What do I mean by that? We're essentially squeezing the last bits that we can out of these major innovations, and we need to aim higher, take bigger swings, and take risks.

So NASA knows how to take risks, right? We're living in this era of just having launched the James Webb telescope, which is going to look back into the origins of our universe, in a way time traveling. That's very much looking backward. But what we really need to do is look forward: look back at Earth, at humans, at biology, and look deeper, look further, because every time we do that, we make all these incredible innovations, these leaps. We're all living through a pandemic; hopefully this is the last wave and we can all enjoy this safely.
But every time we look much closer into human biology, that's where we make the leaps, right? So there's a company called Eikon, actually a Lux Capital company; they've raised a few hundred million dollars recently, and they're doing real-time mapping of cellular biology down to the protein. There are going to be many developments and therapies that come out of these kinds of innovations. This is a big swing. Several Nobel laureates spent 20 years working on this, and then Lux made a big bet. This is what we all need to be thinking about, whether we look in medicine for these innovations or we look elsewhere.

Who here has a recent iPhone? I'm not seeing any hands, but I'm sure you do. Well, then you've got some application of lidar technology in your pocket. Where did this come from? Well, we looked to the moon. We were trying to calculate the average position of the moon so the Apollo missions and others wouldn't miss it. In so doing, we understood that bouncing light off of the moon allowed us to get a texture map of the features of the moon. And in doing that, we understood that bouncing light and lasers off the moon could actually show us some element of what was beneath its surface. This is a discovery we made only because we had the audacity to point lights at the moon and try to land there.

So we've actually brought these kinds of technologies into the operating room now, today, whether you know it or not. It took too long, and we need to innovate faster and take some bigger swings. This is an image from one of our sensors in our camera array on a robotic arm, dynamically repositioned in the operating room, doing a similar thing to what we did with the moon. We use light field imaging, and we're a pioneer of this type of imaging. The way that I think about it is that it's a computational vision approach: a mathematical formula for every bit of information traveling through every pixel in a scene.
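For context, the "mathematical formula for every bit of information traveling through every pixel" is usually written in the computer vision literature as the plenoptic function (a standard formulation, not something stated explicitly in the talk):

```latex
L = L(x, y, z, \theta, \phi)
```

That is, for each 3D position (x, y, z) and each viewing direction (θ, φ), the function gives the radiance of the light ray passing through that point in that direction. Sampling this function densely with a multi-camera array is what allows a light field system to re-render the scene from viewpoints where no physical camera was placed.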
And if you can point a bunch of sensors at that scene, whether they're cameras, depth sensors, infrared, you name it, you can actually extract bits of data from every pixel in that scene and navigate through that scene immersively. Innumerable viewers from anywhere in the world can watch and participate in a surgery. And very importantly, you can visualize and map everything that happens in that case. That's really interesting.

So we actually realized we were working at the cutting edge of computer vision. This is a paper from CVPR, which is a major computer vision conference, applying a technology at the intersection of computer vision and machine learning called neural radiance fields, or NeRFs. What it's doing is essentially the same thing your eyes are doing right now: as you look at me, you don't have all the information, and your brain is helping to reconstruct that scene. Now, if I left this up here long enough, it might hypnotize you. And since we're at an investor conference, maybe that's a good idea, and you'll start throwing your wallets at me or something.

But we're actually doing this in the operating room. This is three years ago, with one of our arrays pointed at a porcine study, taking a single shot from a camera and reconstructing that scene with modern computer vision techniques that have been developed elsewhere and brought into the operating room. Facebook, Apple, Google, companies like this are using these technologies for other applications. Proprio is, I think, the only company bringing these things into the operating room and actually inventing them for the practice of medicine. It's really exciting. And that's the call to action I want to bring to everyone today: we need to take these kinds of bigger bets. So we've invented a system that does this. The 510(k) will be cleared this year, and we'll do first-in-human, which is very exciting.
This concept of being able to map everything that happens in the surgery: we're launching in spine, but this is relevant for every place where you could get a sensor, whether an optical or infrared sensor, into the scene. And then you can layer on these other types of data, working with CT, MR, ultrasound, you name it, because remember, we're unlocking the value of every pixel in the scene. It's pretty exciting. That's about 50 gigabytes of data per hour while we're running a surgery, which is pretty massive. Different brains can capture and store different amounts of information, but a general approximation is about 2.5 petabytes of data in a human brain. So that adds up to an entire human brain full of surgical data in one year of practice, just a massive amount of information. As we collect all those data, parse them, process them, and analyze them, we will be able to actually explain and unlock and shine a light into this black box of surgery, and in so doing unlock a tremendous amount of economic value.

So that's happening right now, and I'm happy to talk about it with anyone who'd like to. It's going to take more than one company to accomplish this audacious goal. We're going to need implant partners, we're going to need data partners. We're partners with Microsoft; we're using the Azure ML tools in the cloud to post-process all these data. We need help. This is a big vision, and it's a valuable one. So if that's interesting to you, I'd love to talk. Thank you.
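The two figures quoted above (roughly 50 GB of capture per hour and roughly 2.5 PB as a popular estimate of human brain capacity) can be sanity-checked with quick arithmetic; this is a back-of-envelope sketch using only the numbers from the talk:

```python
# Figures quoted in the talk (both approximations):
GB_PER_SURGICAL_HOUR = 50          # ~50 GB captured per hour of surgery
BRAIN_CAPACITY_PB = 2.5            # rough estimate of human brain capacity

# Convert petabytes to gigabytes using decimal units (1 PB = 1,000,000 GB).
brain_capacity_gb = BRAIN_CAPACITY_PB * 1_000_000

# Surgical hours needed to accumulate one "brain's worth" of data.
hours_for_one_brain = brain_capacity_gb / GB_PER_SURGICAL_HOUR
print(f"{hours_for_one_brain:,.0f} surgical hours")  # 50,000 surgical hours

# Hitting that within a single year (8,760 hours) implies capture aggregated
# across many operating rooms running in parallel, not one surgeon's caseload.
equivalent_parallel_rooms = hours_for_one_brain / (365 * 24)
print(f"~{equivalent_parallel_rooms:.1f} rooms recording around the clock")
```

At these rates, the "one year" framing reads as a claim about a deployed fleet of systems rather than a single operating room, which fits the partnership pitch that follows.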

 
