The Empathy Algorithm: The Pros and Cons of Using AI for Therapy
In this special live episode of Cyber Made Human, Host Alice Violet is joined by Senior Leadership Coach and Consultant Katie Muldoon and Alexander Giles, Chief Commercial Officer at GHS Clinics, for an in-depth discussion exploring the rise of AI as a companion, therapist, and conversational support. This was part of Gloucestershire’s first Tech Week by CyNam.
Thank you to this episode’s sponsor CyNam. CyNam is a thriving community that brings together start-ups, investors, government, academia, and industry to drive innovation and knowledge-sharing in cyber tech.
Find out more and get involved at CyNam.org.
You can watch the full episode on our YouTube and Spotify pages. Check out the full episode transcript below to learn all about this topic and our discussion on it.
Disclaimer: This transcript is an outline of the dialogue exchanged in this episode and may therefore contain inconsistencies with the video version.
Our Horror Recommendations For This Episode Were:
Alice: Bonds of Love by Jessica Benjamin
Katie: Dr. Erica Matluck’s blogs
Alexander: If Anybody Builds It, Everyone Dies by Eliezer Yudkowsky
To discover more book recommendations, check out the Cyber Made Human Bookshelf
The Empathy Algorithm: The Pros and Cons of Using AI for Therapy Transcript
Intro: Welcome to this very special episode of Cyber Made Human Live, here at the CyNam Secure Futures Tech Week. Today we’re gonna be discussing the rise of using AI as a companion in emotionally intelligent spaces, such as therapy, with an incredible panel comprising Katie Muldoon, who is a former aeronautical engineer and now a psychodynamic coach, and Alexander Giles, former innovation and commercial officer at the NHS, who’s now the Chief Commercial Officer of GHS Clinics. We are gonna be diving into what this means for human relationships, the risks, and also the benefits of using AI in emotional intelligence spaces.
Alice: Okay, everybody. Welcome, thank you so much for joining us today. Um, I wanna start by thanking CyNam. There’s two members of CyNam here today: we’ve got Daisy over here, and Hugo, Events Manager, over here. This is part of CyNam Secure Futures Tech Week, so there’s lots of events happening throughout the week.
Some of you may have come to a few of them, or this might be your first ever CyNam event. So welcome to the community. There’s also other people like Reed in the audience. Big part of the community. And Charlie, I mean I’d probably pick out everybody here to be honest. There’s also a lot of team AVC here, so we have grown.
So Jack is part of the furniture now, but we’ve also got Di at the back, Derek over there, and Josh as well, roaming around with a camera. I can’t quite see Josh at the back there, but welcome to Cyber Made Human. So this started as a passion project for me about a year ago, and it was really about me challenging the industry to speak more inclusively, but also to make cybersecurity more accessible, because I think it’s quite complex, sometimes unnecessarily so.
And so this is all about taking kind of trending topics and making them accessible. Um, so today I’m delighted to be joined by an amazing panel of Katie and Alex. Thank you both for joining me today. So Katie is a former aeronautical engineer turned psychodynamic coach, and Alex is the former commercial innovation officer at the NHS and is now the Chief Commercial Officer of GHS Clinics here in Cheltenham.
So to begin with, I’m gonna invite you both to just talk a bit about your journeys to where you are today.
Katie: Thank you for the lovely introduction and it’s, it’s lovely to be with you. Um, today, um, CyNam has become, um, home for me, um, because it’s a place where people are really talking about integrating the arts and the sciences to create a prosperous and secure future for Britain.
And I wanna start there because it then tells me a little bit about my story. I started as an engineer. I loved structure, problem solving, um, seeing ways to de-risk things. And then, as I did that for 10 years, I started to see how teams performed and realised that it wasn’t really about that, that it’s important to have it, but actually it is about how people believe, what they believe, what they feel, and their motivations.
And so I turned to the arts to understand a little bit more about that. So I went back, I studied international relations, and then more recently I studied, um, psychoanalysis. What is our unconscious actually telling us? How do we go about life without actually realising what is motivating us and telling us what to do?
And the reason I think it’s really important is when artificial intelligence is in our daily lives, our home lives, our work lives, we won’t necessarily know why we’re behaving the way we are. So that’s why I’m here today to talk about the importance of recognising our uniquely human parts in an AI enabled world.
Alice: Thank you. Amazing. And Alex?
Alexander: Uh, I’m, I’m only local to Cheltenham for the last, uh, three years. I’m a, I’m a Londoner, um, by trade. I thought they’d drag me out of there, uh, feet first. Uh, but when I came here, uh, three years ago, I realised this is just, uh, another London borough. Just doesn’t have a tube.
Doesn’t have a tube stop. Apart from that, pretty much the same, um. And, funnily enough, the month that I came here, three years ago this month, was the same month that ChatGPT was kind of launched and became a thing. So I’ve been here as long as ChatGPT has been a thing. Um, I’m, if anything, an entrepreneur, uh, by trade, um, probably
more failures than successes, which is, I think, standard for entrepreneurs. But I’ve, I’ve been okay. Uh, I’ve worked in defence and security. I’ve worked in ag tech, uh, advanced materials. Um, but it was only when I came here and did a, did a stint at the local trust, um, in an innovation, uh, job, that AI became a thing and the discussions around AI and the NHS suddenly, you know, uh, exploded. And that’s a really fascinating, um, situation, and that’s why we’re looking forward to having this, this discussion now.
Alice: Yeah, definitely. And I wanna start by saying this is a non-judgmental conversation. I think when we’re talking about using AI as a therapist, you feel like you don’t want to admit to it, because some of the headlines are so negative. And we did run a survey with CyNam about who has used, uh, ChatGPT or Claude, or large language models as a therapist, and overwhelmingly the answer was most people have. And I’m open in saying that I have, uh, used it. So I have my own therapist, but I have also, you know, navigated difficult conversations with AI, and I think most of us have. Um, I wouldn’t necessarily put very personal emotional information into it, and I think that’s when it becomes a bit of a grey area, which we’ll get into. Um, but aside from the marketing side of my life, I’m also a Samaritan, and we’ve got a couple of Senior Samaritans here today from the Cheltenham branch.
Um, so do, after the, uh, episode finishes, get involved in conversations with them around mental health as well. Um, it’s something that I’m passionate about as well. Um, and I’d love to invite both of you to talk and offer your insights: have you ever turned to AI for therapy or emotional conversations?
Katie: Um, well, the, the answer is, this is an interesting one, because, um, I’ll give the short answer first, which is no, I haven’t. Okay. Um, but the reason for that is, um, I was training as a psychoanalyst, um, when ChatGPT came in. And so I was in a different position. In my day job, um, I work as a leadership consultant and coach.
I work in innovation environments. I work with tech, I work with AI, I work with people developing it. But these were two things that I understood to be quite distinct. And because of my training, I didn’t, and that’s the thing. So I had a structure where I wasn’t necessarily invited to try that; it was, um, a boundary.
And let me just say a little bit about that. Um, the reason I didn’t is because, um, fundamental to my training was that therapy is a human relationship. And so, um, the organisations that run therapeutic training, um, do not know how to deal with this, right? They haven’t addressed this. They dunno whether it’s helpful.
They dunno whether it’s not helpful. And so, um, there was no restriction on me to do that, but I had my own therapist as part of my, um, training, and so I had a lot of kind of barriers in the way. Um, and so I didn’t. I was curious, but I didn’t. Okay.
Alexander: Uh, the shorter answer is yes. Um, but interestingly enough, this year has been the first time that I’ve used AI as a therapist. I created, on our Gemini system, an AI stoic, uh, advisor, which basically tells me that pain is weakness leaving the body. And that’s handy in the morning, isn’t it, to know that, um.
But then it’s also the first year, um, that I got a real therapist. Can’t recommend it highly enough. Everyone should, should have a therapist. I didn’t realise; my therapist probably regrets it. Um, but, uh, yeah, so, so in parallel: the first time I had a real-life, um, therapist, and the first time that I used AI.
Alice: And it’s interesting that you chose to make yours a stoic therapist, because I think with ChatGPT, it’s very much what people call an agreeable mirror, where it rubs your ego and agrees with everything you say to keep you engaging with it. And we’ll talk about some of the reasons, with the commercialisation of the data that they’re taking in, what their incentive is for doing that.
But what was it about yours that’s different, then, I wonder? Even if you’ve trained it in a certain model, was it actually giving you different answers, or how was that working?
Alexander: I, I doubt it. I mean, it was a, it was a curiosity, which I think lots of people, you know, go into AI and start playing around with. I was curious how the Gems worked. I was using those at work, and, um, that was really interesting. Um, and I just thought, well, stoicism is one of those things, one of those sort of masculine things that you feel, you know, that that’s good therapy. That’s okay.
And so I thought, play around with it. Um, and it’s, it’s absolutely a plaything. And I absolutely do not see it as a real therapist, or, or useful in that way. It’s the thing that generates those posters that you put on your wall, you know: keep striving, everything’s going to be fine, this too will pass.
All of that good stuff. And that’s what it’s there for. I don’t see it as anything else than that.
Alice: From my perspective, I think when I’ve used it, it’s probably immediacy. So I also have my own therapist, and obviously you only see them once a week, and that’s a good thing, I think, because it contains the conversation, and you’ve probably got time that you can go away and work on it.
And you, Katie, know what the purpose of all of that is. But in terms of using ChatGPT, if you’ve got a difficult situation or a message you need to send immediately, having it in your pocket, that’s one of the benefits. And I’d love to start, we will obviously get into the risks, but what do you think the benefit of the immediacy is?
If there is no alternative? Not everybody can have a therapist; that is a privilege. They’re expensive, they’re not always available, they’re not always good. Um, obviously we have the wonderful Samaritans, but if you don’t have access to that, is this better than nothing?
Katie: I think, um, I think we probably have to be a little careful around, um, what is better than nothing. And, um, the idea is, um, in an immediate situation... Um, and I may actually go back to, um, my time in the military. Yeah. Okay. Um, there was something called, um, trauma, um, trauma training. Okay. Um, uh, and it was about trauma risk incident management. And essentially it was: as soon as you can, talk to one of your peers about what happened.
Okay. And this helps, in the immediacy, for the memory to start to process. And it’s very much about human to human, so I do make that distinction. Um, but let me fast forward to, um, when people leave the military, and, um, it can be quite an overwhelming experience to leave and do something else.
Seven years ago I was working in innovation consulting, and, um, a charity had an idea about immediacy, access to immediacy, um, for when people were stuck in a moment that they thought might be existential. And by that I mean, um, people going for their first job interview. So they had gone for their first job interview, having been in the military for 10 or 12 years, and the bus doesn’t show up.
Their brain freezes. They don’t want me; I wasn’t meant to get this job. Now, if you had something then that said: phone a cab, let them know, and we can continue the journey. That, absolutely, that kind of moment of advice. And I would say that’s not therapy. I would say that’s immediacy of advice. Okay. I’m all for that.
And when people are in some sort of high-stakes, um, high-anxiety moments, yeah, a bit of good advice can go a long way. So that is what I would say is really helpful. Okay, Alex?
Alexander: Um, I mean, I suppose the first, you know, hot take of the afternoon is the statement that the NHS is broken and isn’t going to be fixed by any amount of money that you throw at it.
Um, and mental health, um, as a service within the NHS is under a ridiculous amount of strain. So you will happily spend potentially years on a waiting list, uh, before seeing anyone. So, to Katie’s point, yes: from an immediacy point of view, for people that can’t afford a private therapist, which is the vast majority of people, um, there is a very fair argument that for those people in certain situations, it does provide, I think, advice; advice is the word, rather than therapy.
I think that’s a really important distinction. I agree entirely. Um, as an advice tool when, um, the other systems, um, do not work, it has to have some value.
Alice: And I remember when we were having our kind of pre-planning conversations about this, we talked about war zones, and obviously we know what’s happening in Gaza, which is hopefully coming to an end.
But you were saying that actually, through some of the research that you’ve done, which I’d love to hear more about, some of the insights that you’ve found, that it’s not only military personnel, if they’ve got access to phones and things, but people who have been displaced using ChatGPT, because there isn’t anyone else to deal with the severity of what’s happening.
Katie: Yeah. And I think there’s, um, there’s something about the, um, the, the absolute tragedy of course that’s happening there. So I want to place some context that this is a very, very, um, uh, difficult situation for people. And the, the idea that there is some light somewhere means something.
Alice: Yeah.
Katie: Um, and that’s really important. Um, if I could take it out of that situation a little bit, just to explain: what is actually happening, um, in that situation is an idea called adhesive identification.
So this is the idea that people believe they are talking to someone, and that, um, makes them believe in hope.
It gives them the idea that someone might help them, that someone might save them. But the important thing is, if something like that happens in a war, you know, this is a crisis moment. If they get into a space of safety and then become addicted to that as the place that provides them safety, that’s where we get a little bit into, that’s not okay.
That’s the thing that we want to be able to really, um, recognise. Um, but adhesive identification is the idea that, um, you are imagining that this is something that it is not. And that’s the important thing: um, you may find your phone has no battery. Yeah. It’s not there anymore.
You know, this is the thing: it’s not quite as always-available as we imagine. There are real-life things that happen, um, that mean you can’t access it.
Alice: And we have actually seen, I think, a lot recently, where people have used ChatGPT or other large language models and sadly taken their own lives, because they were looking for advice and they’ve kind of gone down a certain rabbit hole that it hasn’t directed them out of, or signposted them out of.
And I think there’s been a lot of huge stories on the BBC and in other places recently about that. So they might have started changing it, but it’s still not a catch-all, and it is a problem. But would those people have actually spoken to a therapist? Alex, I wonder what you think.
Alexander: Well, that, you know, that is the problem: people turn to it not because they prefer it to a human therapist, but because the waiting lists are, are too long.
And as I said, sadly, I don’t see that situation changing in the immediate future, or in any future. You know, that mental health crisis, however you want to describe it, just keeps on growing, for whatever reasons. And so it continues to outpace, like so many things for the NHS, you know, the need for a particular service continues to outpace how much money, um, we as taxpayers are going to be willing to, uh, to use on it. So I don’t think that can change, no.
Katie: Can I challenge a little bit on that? We have got our first challenge.
Alexander: Oh yes. Thank God.
Sponsorship break: This episode is sponsored by CyNam, which stands for Cyber Cheltenham. This is an incredible not-for-profit group which helps to educate the wider ecosystem in Cheltenham and beyond about cybersecurity in all industries, from government to business, and also education.
Most of their events, just like this one tonight, are free, so do come along and join the community. This is a great way to learn more about cyber and how you can apply it to whatever industry you are in. Thank you, CyNam, for sponsoring this episode.
Katie: I don’t know that people necessarily do want a real therapist.
Okay. Um, because therapists forget, they mess up, they mishear, they make assumptions, they get stuff wrong. And when that happens, it evokes something in you that says: why didn’t you remember that? That’s really important to me, and you didn’t remember that. That is what we need to work on.
Right. That’s the hurt that wasn’t caused by a large language model. That was caused by someone in your past, right? And what you want, what you need to be able to experience, is somebody saying: I got that wrong. I’m sorry, I forgot. And you’re gonna experience that somebody can admit they got it wrong and will repair this relationship.
ChatGPT doesn’t forget. Right. And it will remind you of stuff that you wish it did forget. And this is the thing about being human: that vulnerability of witnessing mistakes and repairing those errors is relationship. And that’s the thing. But sometimes, when we can talk to machines, we don’t feel embarrassed. We don’t feel shame.
Alice: That’s it, right? That’s the thing. I think it’s the shame, right? If you are feeling suicidal, or you’ve got some extreme trauma, or you are still thinking about somebody that you’ve talked about to everybody a thousand times by this point, that’s the thing, I think, that potentially encourages people to use ChatGPT if you haven’t got a therapist.
And is it more about friendship and companionship and advice, as you say? And we’re calling it therapy, and some of them are presenting themselves as therapeutic models and commercialising that, but really, is it more of a friendship and a companionship?
Alexander: Yeah, but this is where we start getting into the dangers. You know, friendship is not about, as you say, a system always agreeing with you, which, if everyone’s used the AI, every idea you come up with, it’s: what a great idea.
You know, South Park, if everyone’s seen the latest South Park episode, has done some great, great jokes around that.
Really, they’ve done it really cleverly. Mm. And that’s not what friendship is. You know, outside of very juvenile, uh, friendship, friendship is about having an argument. It’s absolutely about: your best friend is the person that will take you aside and say, you are a swear word, and you need to stop doing that.
That is the definition, isn’t it, of your best friend: it’s the person that will take you aside and say you’re out of order. And AI is never going to do that. It is absolutely designed not to.
Katie: Yeah. And I think there’s a nice bit of research, you’ll see I like research, there’s a nice bit of research I read, um, just the other day: Neanderthals had bigger brains than we do, but they died out,
because Homo sapiens needed to get on. They needed to work together to survive, and they developed blushing. And blushing is a social cue with no words that says: I care what you think.
Alice: All right.
Katie: And when we learned to express that we care what one another thinks, we learned to cooperate. And we learned, actually, we don’t need to be individuals; we need to be a team. And some people will be better at other things, and that’s okay, because we’ll collectively be better together. AI is your bigger brain. It tells you: you don’t need anyone else, you’ve got me.
Alice: So just before we move on to the negatives, and also some of your research, Alex, you mentioned earlier about the NHS being broken, and, you know, you obviously used to be a leader at the NHS and have amazing insights there.
Alexander: No, no, don’t make me a leader. It’s not my fault.
Alice: It is your responsibility
Alexander: To be really clear about this: I was not in charge.
Alice: So what do you think AI can mitigate if they’re not the therapist, what can we use AI for positively to alleviate some of that at the NHS?
Alexander: I think that the drive by the NHS to get overly enthusiastic about AI is a, is a time bomb, um, and we have yet to explore what that’s going to do. Now, what drives it, in my absolutely personal opinion, is cost savings. I don’t believe that leadership in the NHS, like leadership in almost all organisations, properly understands AI, and, for the record, that’s not the NHS being unique.
I don’t think that people, ’cause it’s so new and so complex, actually understand it. And so you see this sort of drive now in the NHS: oh, we’re going to adopt AI. Why? Because if you believe the hype, it will make it hugely efficient. And it goes back to the point: there isn’t this huge pot of money, so it will make it more efficient.
Um, and I think that is deeply, deeply troubling. Um, so even in areas where AI has been, first of all, you know, a great opportunity, like looking over scans, for example, uh, tumour scans. And, you know, we have a huge shortage in the NHS of trained radiologists. So there’s been lots of research and lots of really interesting technology that says an AI assisting, uh, a radiologist can give them that, um, assistance.
The danger, because the research is now coming in, is what that actually means: it makes that original radiologist lazy. They just accept what the AI says, and that’s an unintended consequence. That wasn’t the intention of bringing it in, but it’s what’s happening, and there isn’t enough research
challenging: well, actually, are we now getting better results? You can see what, you know, drives it. You put the word AI behind your new health tech startup and you will get a multiplier from VCs saying what a marvellous idea it is. I mean, the drivers are pretty obvious, but yeah, the use in, uh, the use in the NHS is, I think, dangerous to me.
Alice: So my question was, what are the positives? None. Do we not think in terms of admin, though, like with things like phone calls? I know that there’s this new e-identification or whatever, whatever we feel about that. Um, I think your instant reaction to that suggests to me you are not keen on it.
Katie: Um, I would, I would say we’re in a very evolutionary state of it, so we’re very, we’re very embryonic in it.
Yeah. So, um, what we’ve got to be, um, aware of is what is currently being labelled as metacognitive laziness. Mm-hmm. And so this, this idea that, well, the AI is probably right, and, you know, I feel great that I don’t have to do that anymore. Yeah. Right. This is the thing: you feel great, and this is a sort of, um, oh, thank goodness.
Like, that used to take me hours, and now it doesn’t; it takes me five minutes, you know? Yeah. Yes, there is this kind of, um, release. But what we find is that, um, if we talk about the triage piece, mistakes are made, right? Um, because large language models, um, sort of anticipate, and they make mistakes.
And so if I give an example of what happened with policing in, in Spain, of all places: um, they wanted an algorithm to triage, um, emergency calls, 999 calls, for domestic violence. And, um, they were receiving so many that they couldn’t work out which were the most immediate. Right. And they installed one of these systems, and it said: do not go to this domestic violence call;
through all the research, I have this. And that’s what the AI said, and the woman was killed. Oh. And this is the thing: you are going to find this evidence, because we are so embryonic in, in what we are using it for at the moment. Mm-hmm. And there’s things like, we do have things such as sentiment analysis.
We do have ways of understanding tone, the way things are said. But think about the phone call where the person cannot speak, because they mustn’t allow their perpetrator to know they’ve dialled 999. What does the algorithm do? Mm. And this is it: we’re still trying to understand how this works.
We’re still trying to learn about it. So there’s a triage aspect, which I would say: not yet.
Alice: That’s a really interesting point, because I think with Samaritans as well, when we’re answering calls, a lot of the time people take a while to speak, either because they’ve been on hold, or they’re in prison, or they’re overwhelmed, and we know to give them a chance and say, you know, take your time, I’m here when you’re ready. An algorithm might do that in future, but you are right that it might think there’s no one there and hang up.
Katie: If I can make a lighter note on it as well: um, I dunno if many of you, um, ever saw, when Siri came out, and apologies if I’ve set anyone’s phone off there,
um, that it couldn’t understand Scottish accents. Right. And, uh, there’s a great little skit about a Scottish man in a lift. Yes. That’s the one. That’s the one, yes, yes. We’ve got a Scottish man, yeah, and he’s stuck in a lift because it doesn’t understand Scottish accents. And that’s, and that’s the thing again: we’ve got to be culturally aware.
There’s all sorts of things we need to incorporate into, um, the places where we put that. And in triaging emergencies, that’s too high-stakes. Hmm.
Alice: That’s really interesting as well, ’cause it’s not just accents and language and culture, it’s also nuance. Yeah. Because sometimes I’ve used a word with my therapist and she’s completely misinterpreted it, and I’ve been like, oh no, wait a second, when I used that word, that was positive. Um, so yeah, that’s a really interesting point there. So, I mean, I tried to squeeze out some positives and we didn’t really get any, but let’s move on to, let’s move on to the risks.
Katie: We’ll bring some positives, we’ll bring some positives!
Alice: But actually, before we move on to the risks, Katie, uh, you have done some incredible research. Your background is amazing; you’ve touched on it, the fact that you went from a high-stakes military environment to psychodynamic coaching. Like, the work that you’ve done is amazing. I’ve had lots of conversations with you about it. Um, but I’d love for you to share some of the research that you’ve done.
And I know you’ve got a book coming out. I do, yes. So exciting, can’t wait for that. But for now, what are the things that you found in your research?
Katie: Yeah, sure, sure. And, and thank you. And I’ll bring Alex in as well; we have a lot of conversations, um, on, on this. So, um, please jump in.
Alexander: I do not have a book coming out.
Katie: Not yet! But, um, the research that I’ve been, that I’ve been looking at is how AI is going to change the workplace.
And, um, when we talk about these relief moments: that legal summary that used to take me two, two hours now takes me two minutes, whilst I scroll on my phone and it does a nice summary for me.
Um, I’ve been looking at, um, what does it actually mean to be human in that space, and how do leaders of businesses actually develop and grow their employees to have meaningful lives, not just be more productive, because actually you’re gonna be more productive. But how do you get meaning from work?
How do we work together to create good lives? Um, and so this is what the book is gonna be about. And, um, part of the research I’ve done is, um, I conducted a survey, um, a neat survey that went globally, and 50 people answered it, um, around their, their use. And I used, um, a concept called Gestalt.
And Gestalt is what’s in your life space. Mm-hmm. So, um, right now: how does your environment affect who you are? Because my belief is that we are all interrelated, we are all affected by one another and our environment, and we change because of it. So I wanted to know: um, do people with pets like to work with AI? Do people who work in tech like to work with AI? Do people in health? And four distinct clusters came out of it. Mm-hmm. And, um, what was very interesting was that the people most adaptive to AI, the ones who were least worried about the risks, were women. And that flew in the face of my, my intuition that they’d be quite wary.
But, um, what came out of it was: the relief that they had some assistance to meal plan, the relief that they had some assistance to get some thinking time back, was so great that actually, do they get my data? Fine, whatever. Oh, you know, and this is quite extraordinary, but the instant relief of somebody helping, somebody helping, um, was much needed.
And the people that were more critically engaged, um, were people who worked in tech. Yeah. Um, but, um, we had, um, a group, um, that were far more reluctant to engage, um, who were in health. Now, my sample size is quite small; this is not, this is not a sort of, um, a thing to say that the world is changing in this particular way, but it does help us understand, from a sample, where people are engaging with it.
Um, but it shows to me that it isn’t about men or women necessarily. It is about the life space: do they have kids, do they have commitments, do they have caring responsibilities? And we need to change entirely the, um, the demographics we use to understand how people make decisions, because AI’s gonna change it.
Alice: Yeah, that’s a really interesting point, and it reminds me of some of the conversations we had previously, about women actually being the biggest data set that you found using AI, for the things that you’ve mentioned, and that actually AI is predominantly made by men. And some of the things that you found there were quite interesting, if you could share those.
Katie: Yes. The programming is largely made and developed by men. Mm-hmm. And so a lot of the ways that suggestions were made were designed with males in mind, and some of the answers that we got back were not exactly the way that people would want them.
So there’s a lot of nuance in the way that people would want it, and again, it really calls for diversity in this space. The only thing is, you might find that future generations are very grateful to these women for feeding all this data in, as the large language models learn from it.
Alice: So that leads me on to thinking about the Cambridge Analytica scandal, where they were using Facebook data to sway elections in different countries. They were manipulating data for commercial gain, but also causing massive political shifts in terms of emotional changes in populations, and propaganda.
And the thing that worries me personally about large language models, ChatGPT and the rest (other ones are available), is the amount of personal data that people share. I’ve seen people send screenshots to ChatGPT and ask it to write responses, upload personal photos and ask it, you know, what do you think I look like in this photo?
My biggest worry is that in 20 years, or 10 years, or five years, we’re going to see a BBC documentary about how this was manipulated and used. And I’d love to dive into both of your thoughts, starting with you, Alex, about the commercialisation of some of this very personal data.
Alexander: Yeah, I mean, it’s a huge problem in the health sector. Particularly when ChatGPT and these other systems were created, you absolutely had clinicians uploading things onto them that they absolutely shouldn’t have done. They did, and they probably still do.
You know, there is a big push for all of these large language models to get themselves into the NHS, to create environments in which the clinicians and other medical teams can use them to collect data and to get answers. And it appears to me to be a dangerously unregulated system where the consequences are not properly thought through.
Because the fact that AIs are now being taught by other AIs, and that the AI engineers are then wanting to put these systems into the NHS, or into the Ministry of Defence, or any other agency, is really problematic. And I don’t think that government or any other regulator has really challenged what happens when you put in these systems that begin to grow
without the oversight of the people who installed them, and what that means for all of us and all of that data. You know, what happens? I mean, we were talking about this earlier today. I was using an AI to put a presentation together, and at one point I asked it, have I missed anything?
It said, you haven’t put your team in. So, okay, I said, well, can you look at my website? It’s got our team on it. Can you just put that in? Yeah, no problem at all. Only it created four completely fictitious people. These were not people in the company.
Four names with job titles. There you go. I was like, that’s not correct. I know this company, it’s my company; these aren’t real people. And it said, no, no, no, I’ve looked at your website, this is correct. So it’s lying to me. What is that behaviour, and what does that mean in a health context, in an MOD context?
I obviously know that’s wrong. Okay, but what if I was just asking about more general data, which we all do every day? So if at some point someone is asking, are there contraindications between these two medicines, should you not take X with Y? Now, that’s a really complex question, and even very experienced clinicians may not know the answer to it.
So at the point at which it hallucinates and says, yeah, it’s absolutely no problem at all, you should absolutely do that; or, the way it’s been trained, it’s been deliberately designed by an outside actor to start saying things, mm-hmm, to cause disruption.
Alice: Yeah, there’s a few things that it makes me think of there.
I mean, in terms of SEO, search engine optimisation, for any fellow marketers: websites have certain keywords crammed into them to come up in the search results, and ChatGPT, or whichever model you’re using, is now the new SEO. Basically, it could recommend, yeah, take this medication, and it’s actually a paid advertisement, and I don’t think people realise that.
Katie: Yeah, there’s a great example: a coffee company, I won’t name the coffee company, which I thought was quite smart actually from a marketing perspective. They have on their website: you can drink coffee eight days a week. Okay. And if you put into ChatGPT, can you drink coffee eight days a week?
Yes, you can. Yeah. And it quoted their website. Yeah. Great marketing. It’s funny, but it’s also like, oh my God. Mm-hmm.
Alice: Yeah. Going into a slightly darker side. Katie, because of your background in the military,
Alexander: Sorry, I thought we talked quite dark already. How much darker do you want to get?
Katie: Yeah, turn the lights down,
Alice: the most depressing Cyber made human episode ever.
Um, because of your background in the military, and we touched earlier on war zones, I worry not just about big tech using this for advertising and selling products, but also about it potentially getting into the hands of countries you might be against. If your people or your, I don’t know, army are using ChatGPT to talk about where you are, or anything like that, those bits of information being used in literal war zones could also become a problem.
Do you think that’s something to consider?
Katie: It’s already happening. Yeah. Right, but it’s happening on Instagram and TikTok. Oh. What’s happening is, Russian soldiers are putting their pictures up, and intelligence organisations are going, I think we can work out where that soldier is.
Yep, yep, yep. And there’s a fantastic company, the name escapes me right now, maybe somebody in the audience might be able to help me with this, that uses open source intelligence by crowdsourcing people to say, where is this place? Yeah. And people just start to put this together, going, I went on holiday there,
that looks familiar, pull up that snap from the holiday.
Audience member: It’s not Bellingcat, is it?
Katie: Bellingcat, yes. This is the stuff, you know, that is happening right now. So, could you weaponise what people are putting into these systems? Yes, yes, of course, but this is already happening. Yeah.
It’s part of the challenge around disinformation and things like that, and that’s a big problem that we have again, which is the weaponisation. So it’s already happening.
Alice: Yeah. So, one final question, and then we will open the floor to questions. I’ve got so many questions, I’ve got to choose which is going to be my final one, because we’ve talked about so much.
I’ve only got a tiny bit of time left. Ah, I know where I’m going to go. So, because of your psychodynamic coaching, though obviously this is open to Alex too: you’ve talked a bit before about the frame, having a specific place and a set time when you leave and exit and have time to reflect, versus the immediacy and the availability of AI.
Let’s discuss the cons, and if you can squeeze in a pro I’d love it, of the availability of these systems as therapists.
Katie: Sure. Well, do you want me to talk about the frame and what that is? So, yeah, with AI the frame isn’t there. The frame is the idea that there is something that allows you to collapse into it. The frame can be structural: the same time, the same location, the same person.
These things don’t change, and when you have that thing that can hold you, you can dissolve a little bit in the session, and then you can be sort of put back together and exit. But if the day changes, the time changes, the therapist changes, you don’t feel that you can actually start to dissolve and
recompose. The other thing is the time. You can have therapy more than once a week; psychoanalysis often asks for more than that. But what it is about is building your inner world.
Alice: Yeah.
Katie: Okay. And so when you go into the outer world, you start to ask, have I learned anything new?
Do you know, when you have children, young babies, toddlers, they run off and play and then they come back and touch your knee: you’re still here; have I got this right? I’m going out to explore the world again. That’s what your therapist is doing. Your therapist is saying, let’s understand what’s happened; now go out and play; now come back and tell me, has your inner world changed? You’re not literally going to be asked, has your inner world changed, tell me about it. But your therapist is going to be able to listen to you and understand: has anything changed? Yeah.
Because the way you see the outside world is your inner world. Right? Okay. So how you behave with other people is how you are. Mm. And to be able to engage in the world in the way that you want to, you have to look inside.
Alice: Yeah.
Katie: But if you have this immediacy, you don’t have to. Right? Because you’ve always got your ChatGPT to help you, and you are never going to change. Yeah. And the thing is, you could say, well, so what? But do you know what, the bits that make life worth living are the unplanned, unexpected, serendipitous moments. Mm-hmm. Right? The things that you didn’t think would happen.
If you don’t want to, or can’t, engage in those meaningfully, vulnerably, open-mindedly, because you don’t have your ChatGPT with you, you’re going to miss out on life.
Alice: Wow. Thank you. Alex, what are your final thoughts on immediacy and over availability?
Alexander: I can’t improve on the expert. I’d just say, in my personal opinion, that therapy should be one of those almost protected last areas, where it should only be human to human.
And if we give that up, what else are we giving up? We’re giving that part of our humanity up. If we’re not going to have those conversations human to human, those most private and intimate conversations, if we want to start relying on a computer to do that for us, then really, what hope is there for us all?
Alice: Yeah. Thank you. So my actual final question, for anyone who’s never watched the Cyber Made Human podcast, is the Cyber Made Human Bookshelf. This is our opportunity at the end of each episode to get to know our guests a bit more: it’s either a book that’s changed your thinking, your current read, or something relevant to what we’re talking about.
Um, so I’d love to invite each of you to recommend a book to our audience. Katie, I’ll begin with you.
Katie: Yes, and I, I failed in the brief already, Alice, because I’d like to recommend a blog.
Alice: What?
Katie: Yeah. Um, apologies. Surprise.
Alice: I’ve been to your house and I’ve looked at your bookshelf, and I remember seeing Carl Jung’s work on the shadow self, and reading it was incredible. So I am sad you’re not recommending a book, but go for your blog.
Katie: Well, what I would say though is, um, I do have a virtual bookshelf on my website.
Alice: Do you?
Katie: Yes. It’s literally a widget, and it creates a bookshelf, right, so you can actually put the books on it.
Alice: Right. Amazing. We need that for the Cyber Made Human books.
Katie: It’s great fun. It’s honestly great fun, and it just looks like you have the actual books, all sorted. Very cool. It’s really cool. Yeah. So, yes, I love reading, but, as you’ve kindly noted, I’ve been doing a lot of research, and I’ve turned to blogs more recently just to keep going. And I’d like to recommend a doctor, Dr. Erica Matluck. She’s an American doctor, and she has embraced understanding Eastern medicine and links it to Western medicine.
And the bit that I love about it is that she recognises that the psyche, which is what we are and who we are, is led from the heart, not the head. If you read her blogs, all the stuff you know from living in the West, she just clicks into the Eastern perspective, and life just makes sense when you read her blogs.
Alice: Wow. Thank you. Love that. And Alex, what’s your book recommendation, please?
Alexander: Well, so far it’s been a very negative sort of half an hour, hasn’t it? So, my recommendation has just come out this year. It’s If Anyone Builds It, Everyone Dies, which I genuinely think is a great book.
It’s obviously about the dangers of an AGI. I think it’s brilliant and very thought-provoking. I think it should make you nervous; it should make you question where AI is going. I’m by no means an expert; there are people in this room who are experts, who might have read it and might have an absolutely concrete opinion on it.
But I think it’s a fundamentally important book to read, and then you can go out and be positive about AI. But if you want to understand the position of the danger, it’s an absolutely fabulous book.
Alice: Amazing. Well, I’m actually going to recommend Bonds of Love, which I think is a philosophy book, and it’s feminist literature, surprise, surprise.
It’s interesting in the frame of this conversation because it’s all about vulnerability and power dynamics in love. And I think that with ChatGPT, or other large language models, you are basically shielding yourself from that vulnerability. By using it as a therapist, it’s because you haven’t got the strength to be vulnerable and be rejected, or have your heart broken, or whatever it is.
And I think Bonds of Love talks about that. It was written, I don’t know, 30 years ago, maybe longer. So it’s not at all about AI, but it is about that human connection, and I think that’s an important recommendation.
Um, so now, do we have any questions from our audience?
Audience member: So I saw an interesting blog, funnily enough, the other day, talking about
the future, specifically in healthcare. And, coming back to your point about the NHS, they were ultimately saying that at the minute it’s a bit weird if your GP is putting your symptoms into ChatGPT, or something like that, but in about five years’ time it will probably be a bit weird if they’re not. And that really got me thinking, because it’s coming whether we like it or not. At the end of the day, everybody’s going to be using it, across every industry, every vertical.
Five years for most of us, ten for the NHS, depending on, you know, speed and money and everything else. But what do you think about that? Are we going to be more comfortable with it? Is it going to become the norm in healthcare?
Alexander: I think, like anything, the generation behind us will find it normal.
They’ll expect it to happen. I think it does go to the point that the NHS, again in my personal opinion, cannot survive in its existing state. It can’t. And so what that means, and whether AI plays a part in that sort of fragmentation, is anybody’s guess.
You know, I think one of the things to say is that ChatGPT is, as I said, three years old. I know the underlying technology is a lot older than that, but in terms of anyone having a general AI debate like this, it’s three years. Thirty-six months, that’s all. So to predict what that looks like in five years’ time is crazy.
And it really worries me that senior people in the NHS are actually saying, oh yes, in five years’ time it’ll do this, in ten years’ time it’ll do that. And it’s like, that’s kind of madness, because it’s accelerating so quickly. What will that actually mean for us all? If nothing else, just think about what it actually means for the employment of GPs, right?
They’ve already got an issue in which GPs are being trained right now and then can’t find jobs, because the system hasn’t got the money to pay them. It can train them, but won’t employ them. So what are we saying, that we just stop training GPs? Because if it’s going to happen in five years’ time, we should stop now.
We should not train any more GPs today. Just give up and assume we’re all going to be replaced by an AI. Everyone happy with that? Anyone okay with that? No?
Katie: I doubt it. There’s also a little quirk about us as humans, right? Which is that we could automate airplanes now, but we still put two pilots at the front, because we do not want to get on an airplane and go, where are the people flying this? The aircraft can land themselves.
Right. You know, and I think it’s in Hong Kong where the transport is fully automated, and they have people sitting very positively at the front who have no control. There is no emergency stop, nothing. But we just want the reassurance that there’s someone there.
So there’s going to be a tolerance that humans have. And this is the thing: if you look at how we are as humans, we have been turning humans into machines since the Industrial Revolution. Mm-hmm. Right? Why do you have KPIs? Why do you have feedback? Because that’s what machines have.
Right. We have a real opportunity now to not be turned into machines.
Because then the machines can do the machine things, and we can return to being human. Right. And your GP can go, here’s 20 minutes rather than five; what’s really going on? And that’s the magic.
Audience member: Yeah. And I think that’s where we’ve got to be careful, I suppose, as a society, and, to get really deep, as a human race, about where that line is between what we use AI to help with and what we get AI to do, right?
Katie: And this is, do you want to swap? This is the mic drop. So this is, kind of, my book. Yes, yes. Thank you. Thank you. This is what I’m talking about.
Alexander: Are you planting people in this audience?
Katie: Oh yeah.
Alice: Thank you. Well, we have an incredible audience here tonight, so there are some really interesting people. Florence, I see at the back; she’s brought Jeff with her. Jeff’s come from Kenya. Are you only here for this week, Jeff? Yes, till Saturday. So do chat to him. As I’ve mentioned, we’ve got some Samaritans, we’ve got tech people.
A very interesting group. So go and help yourselves to some food and mix together, and like and subscribe to Cyber Made Human. Thank you so much, Katie and Alex, for your input. Really, really insightful. And I hope you enjoyed watching. Thank you, everybody.
Watch the episode now!
Watch on Spotify
Watch on YouTube