Designing for Trust: How to Create Trust in Health Tech
Speakers: Matt Parker, Dan Lock & Jordan Abdi
[Music Playing]
Matt: Hello, and welcome to Invent Health, a podcast from technology and product development company, TTP. I'm your host, Matt Parker.
Over the course of this season, we're going to be exploring the fascinating future of health technologies. Today, we ask, how can we design for trust in health technologies and medical devices?
What does it take for you to trust someone? A long-standing personal relationship? Them delivering on something they said they would do? Or is it something innate in someone's character, an intrinsic sense that they're not going to let you down?
One sector which answers positively to all those questions around trust is healthcare. Polling shows that trust in doctors is amongst the highest of any profession, and it's obvious why.
Doctors have some of the most rigorous education and training of any profession, and as custodians of the world's wellbeing, we don't really have a choice but to trust them.
But there's another industry which would answer those questions more negatively: technology. Trust in technology, and specifically in digital media and the internet, has plummeted in recent years.
Some think tanks claim that 75% of Americans now distrust social media, while trust in the industry behind it has followed a similar trajectory. This raises an issue for healthcare.
As we've seen over this series, health and technology are becoming ever more intertwined. With telehealth, remote monitoring, and the rise of AI, health tech is now a fundamental part of the healthcare landscape.
So, as trust drops for technology in general, what will be the consequences for health tech? And what can people working in this sector do to ensure this doesn't happen? This is what I wanted to find out.
So, I spoke to a couple of people from TTP and beyond to get some answers. First up, I spoke with Dan Lock, a colleague of mine at TTP, who you might remember from our first series.
Dan is a principal consultant in human factors at TTP. His background is in psychology, with a master's in Ergonomics and Human-Computer Interaction. His primary interests are around human performance, usability, behaviour change, and user experience in medical technology.
He also has a particular focus on supporting design for engagement, something he talked with us about previously on the podcast.
With a topic this broad and fundamental to healthcare, I started off with the basics: what do we mean when we talk about trust?
So, today, we're talking about trust and design for trust as well. I wondered if a good place to start us off is just to think about what does trust actually mean? What do we actually mean when we say someone trusts another person? And why is that so important to healthcare?
Dan: That's a really interesting question. I think trust is … everyone knows what it means, but you have to actually dig into what it really means. I think it's to do with taking a confident step into something that is unknown.
And I guess it comes back to the fact that society itself is built on trust. Trade wouldn't exist without trust; cooperation between people wouldn't exist without trust. And people have obviously evolved to trust each other, or to find mechanisms by which they can determine whether someone can be trusted.
And it's quite interesting; it's obviously been the subject of a lot of research on the economic side. John Nash, the great economist played by Russell Crowe in A Beautiful Mind (whose work I'm sure you're familiar with), wrote, I think, one of the first papers on game theory.
I guess the three foundations for that are: there's got to be a potential win-win for people in order to trust each other. It can't be a zero-sum game where one person's gain is always everyone else's loss.
There has to be repetition, so it has to be a game that is played multiple times. If you only need to deal with someone once in your life, there's no incentive for you to trust or to care for that person.
And then, also I guess miscommunication has to be handled carefully so that people don't get the wrong idea and think that they're being tricked when they're not.
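Dan's three conditions map neatly onto the classic iterated game from that literature. Below is a minimal illustrative sketch in Python (hypothetical payoffs and strategy names, not anything discussed in the episode): a forgiving tit-for-tat player only sustains cooperation because the game repeats, because both sides can win, and because occasional "miscommunication" is forgiven rather than punished.

```python
import random

# Payoffs for a single round (my move, their move).
# Mutual cooperation beats mutual defection, but defecting against a
# cooperator pays best in any single round -- trust only wins because
# the game repeats.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def generous_tit_for_tat(opponent_history, forgiveness=0.2):
    """Cooperate first; copy the opponent's last move, but sometimes
    forgive a defection -- this is what makes occasional
    'miscommunication' (noise) survivable."""
    if not opponent_history:
        return "C"
    last = opponent_history[-1]
    if last == "D" and random.random() < forgiveness:
        return "C"
    return last

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200, noise=0.05):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each side sees the other's past moves
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        # Noise: a move is occasionally transmitted wrongly.
        if random.random() < noise:
            move_a = "D" if move_a == "C" else "C"
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    random.seed(0)
    print(play(generous_tit_for_tat, generous_tit_for_tat))  # high joint score
    print(play(generous_tit_for_tat, always_defect))         # trust collapses
```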
Matt: I wonder, has there been any research looking into trust not just between one individual and another, but in the relationships people might have with computers, between humans and machines, where it's not an equal footing in that sense?
Dan: I think it's difficult to say with a machine whether or not you are trusting the machine or actually the intent of the person that is in control of that machine.
When it's something that's electronic, when it's actually a communication or a representation of a corporation or another agent, and how they're going to react with you, then it's more about whether that is effectively communicating what you need to know in order to trust that item. Is it filling in the gaps in your knowledge to your satisfaction that you'll be able to work with it?
And with computers, things have got increasingly sophisticated over time, and then there's talk about persuasion: computers as persuasive technology, which is captology — BJ Fogg kind of coined that term.
Matt: What is captology?
Dan: So, captology is basically what feeds into a computer's ability to change your behaviour, to persuade you to do something that's good for you, perhaps like take some medication or live a healthy lifestyle.
It's kind of exploring, you know, how computers can help people change. And I guess the next phase (we've talked about functionality through to persuasion), the current wave, or the one that might come soon, would be around whether computers can represent us or advocate for us, in a kind of AI sense.
Can we trust them enough to make decisions on our behalf, without us really being involved; to take actions for us, to do the things we think are best? Do we trust them to do that?
Let's say everyone has a personal assistant on their phone. The phone knows them well enough, and they trust it well enough, to let it deal with life admin and the things they don't want to do: respond to emails, even, or set things up for them. Maybe that's the next wave, when people trust AI well enough to do that. Then I guess that will be something that's more commonplace.
Matt: In terms of people trusting their technology, health is maybe an emerging field. Have there been other examples of industries where the science, the understanding here, is more advanced?
Dan: I did read some research looking at the extent to which people trust certain institutions, and top of the pile was actually the doctor. I think it was something like 87% who trusted their GP or their primary care provider, and the lowest was social media, at around 50%, which is still quite high, I think.
And then somewhere in the middle there was your bank, at something like 76%.
[Music Playing]
Matt: The field of captology is a really interesting example of why trust in healthcare is so important, but also so fragile. It might seem sinister that computers are being built with the express motive of changing behaviours and opinions, but in healthcare contexts, it's really vital.
So, as technology and healthcare become ever more entwined, from telehealth to AI and more, how do we ensure that the 87% trust in doctors that Dan mentioned is maintained?
Well, one person who's been working to ensure these figures remain as high as they are, is Jordan Abdi. Jordan is a World Economic Forum global shaper and Schwarzman Scholar who's passionate about preventative health.
He's a medical doctor by background and has more than eight years of experience working in the health technology sector. He now works as the life sciences business development lead at PicnicHealth, a company that tracks health records for patients across the U.S.
So, no matter which provider patients go to, their historic health records stay with them. With something as personal as health records, trust in the institution dealing with them is obviously key.
So, I started off by asking Jordan how fundamental trust is in healthcare, especially between doctors and patients, before getting into his work with PicnicHealth.
So, thank you very much for coming to talk to us today.
Jordan: Absolutely. Looking forward to it.
Matt: I wonder if we could start by talking a little bit about why trust is so important in the sort of doctor-patient relationship?
Jordan: Well, I think to be candid, we don't really have a choice. If you think about what you go to see clinicians for, physical or mental health, it's often deeply personal, deeply intimate problems that you wouldn't even want to disclose to your loved ones.
And so, implicit in engaging with doctors and nurses is a trust that the privileged knowledge they have of your ailments, of the things that perhaps aren't going so well in your mental or physical life, is treated with respect.
And if you look at the history of clinical practice, as much as we do respect and have a great deal of trust in clinicians, it hasn't always been so smooth. There's been scandal after scandal going back centuries.
And the profession has very actively built up sets of statutes, codes, and practices that allow patients to have a degree of trust, because it's really, as I say, not something we have a choice in if we want to have access to healthcare.
And I think where things are getting interesting is that that is becoming less and less true, because more and more of healthcare is delivered not in person but online and via applications, certainly in the mental health space.
And actually, there's no longer this obligation that there be another human being to whom you have to disclose things you might not be super excited about disclosing.
But yeah, I think if you go back to the core of it, the core of that relationship between patients, their doctors, their nurses, if there isn't trust, there isn't healthcare, and the two go hand in hand.
Matt: Increasingly, many pillars of health are no longer on pieces of paper; computers and computer systems are a vital part of delivering care. Do you think there's a change there, from trusting people to trusting computers?
Jordan: Yeah, although I will say it's a little tragic; we haven't made as robust a move toward digital records as you might hope. There's far too much paper still being used to record information.
But it's a very interesting point that you raise, this notion of trusting systems and computers more than trusting individuals. And that comes partly from how healthcare has evolved, but also with the rise of technology.
I mean, you go back 10 years, and one of the big debates in the medical profession was around Dr Google, and the intense frustration that the patient would come to you not with problems but with solutions.
Fast forward 10, 15 years, and what we're seeing is that Dr Google has got a lot better. And it's not just Dr Google; there's a whole range of healthcare technologies out there that play some role in helping you understand your own health, and how different behaviours can map to different health outcomes.
And people are much more informed about their own health. And with that, people feel more empowered to take more control over their healthcare and their decisions.
Matt: I wonder if you could tell me a little bit about the work you do with PicnicHealth and what you're doing there?
Jordan: Yeah, so perhaps your listeners won't be super familiar with PicnicHealth. They're primarily based in the United States, and where Picnic slots into the patient journey is as almost a records concierge service.
Patients register onto the PicnicHealth platform, which takes maybe 5 to 10 minutes, to give their basic information and some of the providers they've been to before. And then, Picnic will go around the United States to every provider that patient has ever engaged with, and retrieve those records on behalf of the patient.
And then, what's provided back to the patient is a full sort of timeline of all of their clinical information, every blood test, every scan, every office visit, every consultation in a single record in their pocket that they can then share onwards with other clinicians as they move around their journey.
And so, we talked a little about trust earlier. A big part of being able to trust PicnicHealth with health record information is that you can see that the information is being provided back to the patients.
So, we work primarily with rare disease patients, who more commonly move around providers due to the nature of their conditions. And yeah, we work with those communities to support their record curation.
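To picture what a "records concierge" output might look like, here is a minimal sketch of a consolidated patient timeline as a data structure. It is purely illustrative, with invented field names and values; it is not PicnicHealth's actual data model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(order=True)
class ClinicalEvent:
    # Sorting by date gives the chronological "timeline" view.
    when: date
    kind: str = field(compare=False)      # e.g. "blood_test", "scan", "visit"
    provider: str = field(compare=False)  # which clinic or hospital it came from
    summary: str = field(compare=False)

@dataclass
class PatientTimeline:
    patient_id: str
    events: list[ClinicalEvent] = field(default_factory=list)

    def add(self, event: ClinicalEvent) -> None:
        """Merge an event retrieved from any provider into one record."""
        self.events.append(event)
        self.events.sort()

    def share(self) -> list[str]:
        """A portable, human-readable view the patient can pass on."""
        return [f"{e.when} [{e.provider}] {e.kind}: {e.summary}"
                for e in self.events]

# Usage: records fetched from two different providers, one timeline.
timeline = PatientTimeline("patient-001")
timeline.add(ClinicalEvent(date(2023, 5, 2), "blood_test", "Clinic A", "HbA1c 41 mmol/mol"))
timeline.add(ClinicalEvent(date(2022, 11, 9), "scan", "Hospital B", "Chest X-ray, clear"))
print("\n".join(timeline.share()))
```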
Matt: I think that's really interesting because we talked a bit about trusting the system and trusting healthcare systems, and I guess Picnic as a private organisation, you are saying, “Yeah, actually, trust us with your records.”
And I wondered if that's something that's factored into the company's thinking. And I guess, how do you show patients that they can trust Picnic to curate and look after their healthcare records, some of their really personal medical information?
Jordan: You're absolutely spot on there. I mean, there's very little an individual could share with a private company that is more personal, more intimate. And what it really comes back to, at heart, for PicnicHealth is that the patient always comes first.
Everything we do, it's all about making sure the patient is at the centre of their data. And so, that means, yes, they get all their data back, but also, we can leverage those data for research, but only with patient consent.
And we do all kinds of projects to help communities of people with rare diseases advance the knowledge and understanding of what's happening. But really, we work directly with patients on that research.
We provide information back from the outputs of that research, and the feedback we get from the patients we speak to is pretty resoundingly positive, along the lines that they're able to be really at the core of their care delivery, and at the core of how their data is being used.
[Music Playing]
Matt: This distinction between an NHS doctor and a private healthcare provider like PicnicHealth is really interesting. For the former, trust has developed over a long time. It's about personal relationships as much as anything else, but for the latter, it's something that has to be built from scratch.
Putting patients at the core of what they do and focusing on the personal side of health is how Picnic have been so successful in doing this.
But in order for companies like Picnic to succeed, they will also need to align with some core principles for trust that Dan was alluding to earlier.
These lean into the more philosophical side of things. They're about the innate trust between user and machine. I went back to Dan to explore what such a set of principles might look like.
So, if we were to dive into some of the principles for trust that kind of underpin whether we trust something or not, what would you say that they are?
Dan: Well, I mentioned already the alignment of outcomes; I think that's a big one. And that is obviously to do with whether the other person has anything to gain from you losing out.
If you go to a restaurant and you order something, and the waiter takes you aside and says, “I wouldn't order that, actually, I'd get this one instead, it's cheaper and it's tastier,” you're more likely to trust him. And then when he recommends something else, you'll be like, “Great.”
Whereas if he comes in and says, “I wouldn't get that; I'd advise you to get this one, which costs three times as much,” then you're less likely to trust him. You're going to think, he's actually trying to get something out of me here, he's trying to make me buy something more expensive.
And therefore, you might not trust the further recommendations that he makes. So, the alignment of outcomes, I think, is especially important.
The other ones would be … well, transparency is a big one. The fact that you know: who are these people? What are they going to do with this data? Where's it going? What's going to happen next? And how are they going to keep me aware of any problems? Those kinds of things are quite powerful as well.
And then the other one, I think, is accountability. So, showing how you will react and make things right if things go wrong. Do you have a track record of being trustworthy, or have there been data breaches before, and have you dealt with them badly? Those kinds of things also feed into it.
Matt: And from those elements, I guess the belief in an authority or organisation that endorses something, the confidence that some technology is going to keep my data safe or keep me safe, or those kind of human elements, that human touch: which of those do you think is the most significant if we're thinking about why people, or patients, might trust a system?
Dan: You don't need regulations to trust someone, and you don't necessarily need the technology either. I think at the end of the day, who that person is, is the most important thing.
Matt: That human connection.
Dan: I think so. Believing that there is a real person there and that they are going to react to this would be something that makes you feel you can trust them more as a real person.
And if you sign up for some kind of digital app and, on confirmation, you get a phone call from a real person who's clearly looked at your data, has a few questions, and just wants to double-check that you've understood your answers, you're going to feel much more cared for.
You're going to feel like there's a real person there, that the system is quality, it's not just an automated response. You're going to be more trusting of it and more likely to participate, I guess.
Matt: I wonder if there've been any studies which explore trust in this sense, trust between humans, trust between humans and machines?
Dan: I mean, there are some very famous studies on trust and authority. Milgram's experiments especially are well-known in psychology: people were instructed to turn a dial to deliver electric shocks to a third party who was behind a wall and pretending to be shocked. Obviously, they weren't really shocked.
And so, yeah, that experiment's famous because Milgram was trying to look at why people followed orders from the Nazis. That's kind of the original impetus for it, I think. And he was focused very much on authority.
And so, people in white coats were trusted more, and were more likely to be obeyed, than people who were not wearing white coats. And they've redone this multiple times, with lots of different factors.
A more professional, clean experimental room was trusted more; people were more likely to obey than if it was a dishevelled place at the wrong end of town. So, there are all these things that were explored with those experiments, all very interesting.
But I guess authority is one of the things, and I think trust also features in there, because obviously, it's not just the authority that you see in the white coat, it's credibility. That's why doctors have certificates on their walls, and why people look at reviews and personal recommendations. All of these factors feed into trust, and into whether or not you're likely to take that person seriously.
So, I think those initial experiments were very interesting, and building on them, you can probably extend the findings to these other kinds of markers.
[Music Playing]
Matt: Authority and credibility: these two things stick out for me as key if we want to engender trust in an institution or a person. And they're things that healthcare has more of than most institutions, especially in that doctor-patient relationship.
But as we've seen, that personal kind of credibility is much harder to develop in technology. Just take a look at the lack of trust in social media in this new post-truth age of misinformation.
So, how do we develop tools which simultaneously safeguard trust and also make healthcare more efficient? Jordan told me about some really interesting examples which have done just this.
I wonder if we could look towards the future a bit. I guess we're talking about designing new tools, and I wonder if you've got any examples that come to mind of systems that do this really well.
Digital tools and digital systems where this kind of thinking around trust has been built in from the core, and which are very effective at communicating that to the patient.
Jordan: Maybe not a particularly creative answer, but I'd point to the NHS app that gained popularity during the COVID pandemic, when all of a sudden, pretty much the entire population — and other countries, I think, had equivalent apps — was QR-coding their vaccination status.
People were suddenly uploading their medical records, or at the very minimum their vaccination records, onto a single platform. And really, the reason they were so comfortable doing so, I think, is because they were so used to trusting the system, in this particular instance, the NHS.
Just to give this data to a new platform, one that hadn't really existed long before COVID, certainly not deployed at that kind of scale, was remarkable. And you saw the trust really across the board.
You saw the trust from patients, who were like, “You know, this is going to be the platform I will share my vaccination and other records on.” But also trust from industry: from airlines, open spaces, and events venues, where you showed this app and there was just no ambiguity.
There was no uncertainty. If this app says you've had a vaccine, 100% you've had a vaccine. If it doesn't, then you haven't. And it's just remarkable; almost instantaneously, everyone had complete trust in this system, and no one even thought to question, “Are there errors? Are there erroneous data here? Is the data 100% secure and safe?”
Matt: Do you think that's to do with the platform itself, or do you think that, in the UK, the weight and the reputation of the NHS behind it kind of drew that forward?
Jordan: Pretty hard to say. My sense is that the fact that it was an NHS platform went a long way in ensuring adoption.
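Part of what made that "no ambiguity" possible is worth unpacking: passes of this kind are typically digitally signed by the issuer, so a venue can check a credential against a trusted public key without contacting anyone. Here is a minimal sketch of that principle using Ed25519 signatures from Python's cryptography package; the payload format is invented for illustration, and this is not the NHS app's actual scheme.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The issuer (e.g. a health service) holds the private key and signs
# each credential; verifiers only ever need the public key.
issuer_key = Ed25519PrivateKey.generate()
public_key = issuer_key.public_key()

credential = b'{"subject": "patient-001", "vaccine": "dose-2", "date": "2021-06-01"}'
signature = issuer_key.sign(credential)

def verify(payload: bytes, sig: bytes) -> bool:
    """What a venue's scanner does: either the signature checks out
    against the trusted issuer key, or the pass is rejected."""
    try:
        public_key.verify(sig, payload)
        return True
    except InvalidSignature:
        return False

print(verify(credential, signature))                                # True: genuine pass
print(verify(credential.replace(b"dose-2", b"dose-3"), signature))  # False: tampered
```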
Matt: That's really interesting. I guess it kind of leapfrogs into one of the areas that's come up a lot when we're talking about new technologies, which is using new artificial intelligence tools and machine learning to develop new systems based on the big data sets that are being input.
And I'm interested in your thoughts on this: with systems that have been trained on huge data sets, where there's not such a clear link as to exactly why a system is making a particular decision or a particular recommendation, how can we know that we can trust those types of technologies when it's not exactly clear how the decision-making process has happened?
Jordan: It's a slippery slope. I'm probably quite conservative when it comes to sort of black-box, big-data-type models, because there are two big risks. Risk number one is that the data that the models have been designed on are not truly reflective of an average patient population.
And what I really mean there is that there are biases: biases in terms of age or any other demographic, maybe biases in the disease profile, maybe biases in the way in which care was being delivered.
And so, when you try and map any single model that's being trained on a big data set, onto a real population, you need to be very confident that that data set is really reflective of the wider society.
Otherwise, you can get erroneous decision-making, which leads to what I think is the bigger risk, risk two: that these decisions become indefensible but unavoidable, that patients, or even clinicians, are using tools and applications and making decisions they cannot justify, and the outcomes of which may not be known for a very long time.
And so, I think where the industry needs to go, where we need to push the needle, is towards more and more explainable AI, where the outputs can at least be explained in some way, such that patients and clinicians and regulators can really understand whether the models are excellent or, in some instances, actually harmful.
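Both of Jordan's risks can be probed with fairly simple tooling. The sketch below, on synthetic data with invented numbers, illustrates the two ideas: checking whether a training cohort's demographics match the target population, and using permutation importance from scikit-learn as one rough, model-agnostic ingredient of explainable AI. It is an illustration of the concepts, not a recipe for any particular product.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# --- Risk 1: is the training cohort representative? ---
# Hypothetical share of patients over 65 in the training data vs.
# the population the model will actually be used on.
train_over65, population_over65 = 0.12, 0.31
if abs(train_over65 - population_over65) > 0.05:
    print("Warning: training cohort under-represents older patients")

# --- Risk 2: can the model's output be explained at all? ---
# Synthetic cohort: age and a biomarker genuinely drive the outcome;
# a third, irrelevant feature is included as a control.
n = 2000
X = np.column_stack([
    rng.normal(60, 10, n),   # age
    rng.normal(5, 2, n),     # biomarker
    rng.normal(0, 1, n),     # noise feature
])
risk = 0.06 * (X[:, 0] - 60) + 0.8 * (X[:, 1] - 5)
y = (risk + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Permutation importance: how much does accuracy drop when each
# feature is shuffled? Crude, but model-agnostic and explainable.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["age", "biomarker", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```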
[Music Playing]
Matt: The difference between tools like the NHS COVID app and AI models in terms of trust comes down to that idea of ambiguity.
With AI, inherent biases and black-box issues mean that outcomes can feel intangible, but the simplicity of something like the COVID app, and the credibility its creator already has, means there's no room for ambiguity. It does what it says on the tin.
I went back to Dan to find out some more about how we might design other, more complicated tools which can mirror that simplicity.
Thinking about when we're designing new solutions that are going to go into healthcare systems, and tapping into some of these worries, these anxieties, around the reliability of data: how do we build systems that both patients and providers can actually trust and interact with?
So, maybe the context of monitoring is a really good one to start with. How could we design a system that a doctor would trust to create a log, or some kind of medical record, for a particular patient?
Dan: I think the trust for doctors, the challenge for them, comes from the volume of data that they're expected to keep on top of, and being anxious that they've not missed something. Because obviously, the more of these sorts of systems exist, the more sources you're going to have to keep an eye on, and the more false alarms you're going to get.
You're going to start to find it very tedious. So, there needs to be trust that the algorithm, or whatever it is, is doing its job: that it's selectively flagging people only when they have a very high chance of having a certain problem.
And also, they have to understand the system and how it works. And that brings us onto explainable AI, I guess: you don't want systems making decisions, or suggesting people are looked at or brought in for an examination on the basis of some remote monitoring, without it being clear why.
Matt: Is that because, a lot of the time with AI systems, it's not entirely clear why the system's making the decision it's making? Is that what explainable AI is trying to get at?
Dan: Yeah, that's it. I think it's understanding why it's made a decision. When it becomes a more complicated thing, like diagnosing a particular condition on the basis of symptoms or signs, rather than on any kind of chemical or genetic test, then it becomes much more important that you understand the steps it's taken.
Matt: I wonder, with all the development in this space, do you think trust in these digital interventions, from patients and from doctors, is increasing over time? Or do you see it continuing to fragment as we see more and more chaos online with social media? Can we bring it back by designing systems that are good, that people do trust?
Dan: I think as with humans, you learn to trust something over time as you have increased interactions, frequencies of interactions with that institution, person, whatever.
So, trust will come when people have enough familiarity with it, that they've had good experiences, they've got value, they've been happy that something good has come out of it on a case-by-case basis in terms of like individual digital interventions.
In terms of general trust, it's hard to see past the increased tendency towards mistrust and conspiracy, and that extends into healthcare. And so, there will probably always be some people who just say, “I'm not going to do that.” And there's going to be a cultural element to that as well.
I know that there are different cultures which have different attitudes to their personal data, and that's probably ongoing today; in other countries they may be less trusting with certain types of data, and that will be something that might require a different approach in those cultures.
[Music Playing]
Matt: Designing systems to be robust enough to withstand questions around trust is one thing, but there's something else which could go a long way to maintaining it: regulation.
Now, this is something more obviously possible for state-run institutions like the NHS, but regulation has long been absent from many areas of the technology industry, especially among the Silicon Valley-based tech behemoths.
So, as these companies encroach further into the health space, I wanted to know how regulation could play a role in making us trust the healthcare devices they've been developing.
I let Jordan talk us through this alongside some conclusions on the future of trust in healthcare more broadly.
I wonder about the big tech giants increasingly moving into the healthcare space. Apple Health is now increasing its reach into more and more areas of your healthcare, and maybe more people are used to having their healthcare, at least aspects of it, living on their phone, living on a wearable device, living on their wrist.
And I wonder if that kind of process of normalisation there might help some of the onboarding of other technologies that go in a similar space?
Jordan: Yes, I mean, it's a very interesting point, the slow but consistent role these big tech companies are playing in healthcare.
I think mobile phone and wearable companies, and Apple is probably one of the biggest and most notable, are making a very deliberate attempt to help people, consciously and unconsciously, collect more data about their health.
Everything from quality of sleep, to how many steps you're taking a day, to what your heart rate is if you have one of those watches, what the rhythm of your heart is, whether there are risks of developing certain conditions.
And they're pretty consistently trying to build this sort of macro data set around your body while you're not unwell, to try and help support healthcare long-term.
I'm sceptical, certainly in the short to medium term, about how much value they're going to really add for the average patient, because there's so much noise in your day-to-day data generation, and disease cycles are so long, that with the exception of a few very specific use cases, you may not find early diagnosis of anything with any degree of accuracy.
But it's the space that they're invariably moving into. I think there's probably also a degree (if I may say so) of naivety about healthcare data from some of the bigger companies, just because they've been so successful with data in other areas.
In consumer advertising and across the board, they've worked with massive data sets and been able to predict outcomes with such immense precision that there's almost this expectation that healthcare data is just another type of data for which we can build clever models to get to the same conclusions.
But there's a few key pieces that people are, I guess, missing, which is that healthcare isn't just a science; it's to a great extent an art. And it comes back to our earlier conversation around trust.
When you go and see a physician because you have a new pain in your back, you're not just going there for the diagnosis and medication. You're actually going there for a much wider consultation: what are my options more broadly, what are the pros and cons, why do you think I'm having this, and are there other areas of my life I should be thinking about?
And for any given patient, that answer might be different because it's very context-dependent and people have different tolerances for how they want to live their life.
And if you try and distil healthcare down into a pure exact science and you try and map that science onto a very diverse population, you'll find that actually, one shoe does not fit all.
And that's before we even get into some of the regulatory hurdles that working with healthcare data brings with it.
Matt: You touched there on the regulatory aspect, which is in its infancy in many areas. Do you think a strong regulatory environment is something that can help build trust in tools, so that a patient or a doctor knows that a tool has a particular quality to it, which means they can trust it?
Jordan: Yes. And this is, I think, where it gets very tricky for the regulator. A core part of healthcare delivery — think back to the Hippocratic Oath — is just “do no harm.”
Do no harm does not mean that the patient can never, under any circumstance, come to any harm. Because of course, all kinds of medications and interventions come with risks and side effects.
What it really means is that the intention of what you are doing has to be that the patient does not come to undue harm, and that their wellbeing is at the very forefront.
And when you think about what regulators are now having to confront with some of these newer technologies, with that type of thinking, mapping risk at a macro, population level is much, much harder when you're dealing with system-wide technologies than when you're dealing with a professional body.
It's going to be an evolving game. I think we're not going to see regulators go about this in a very gung-ho fashion. Certainly, the Silicon Valley “move fast and break things” approach is not going to apply.
Matt: And do you think digital tools are going to be an essential part of delivering that kind of new model of healthcare?
Jordan: I think they can be. I think they can play an accelerant role, certainly if they're designed in the right way, where patients do have the trust to share information with them, and do have the confidence that what they're sharing is being used appropriately and with consent.
Then we will see digital tools play an important role in helping patients understand their own health and their own risk, but also in helping clinicians make better decisions when thinking about a patient's long-term risk, and a community's long-term risk, so that policymakers can invest the right resources in communities to actually meet local demand and stave off a potential tsunami of illness in a few decades.
Matt: That's absolutely fantastic. I think that's a really interesting way to wrap up, with that look to the future. And yeah, I think it's been a really interesting conversation, Jordan. Thank you so much for coming on.
Jordan: Yeah, I've really enjoyed this, thank you for having me.
[Music Playing]
Matt: Thanks so much for listening to this week's episode of Invent Health from TTP, and a big thanks to our returning guests, Dan and Jordan, for their insight.
We'll be back next time with a new episode looking at cardiac health, to find out about the future of cardiac monitoring, and with some more conversation around the big tech companies moving into this space.
If you enjoyed this episode and want to let us know, please do get in touch on LinkedIn, Twitter, or Instagram. You can find us at TTP. And don't forget to subscribe and review Invent Health on your favourite podcast app because it really helps others find our show. We'll see you next time.