Halloween Special: Top Horror Stories in Cyber and AI
In this special episode of Cyber Made Human, Alice Violet sits down with Charlie Kelly and Derek Ahmedzai to discuss the most haunting cyber and AI fails.
You can watch the full episode on our YouTube and Spotify pages. Check out the full episode transcript below to learn all about this topic and our discussion on it.
Our Horror Recommendations For This Episode Were:
To discover more book recommendations, check out the Cyber Made Human Bookshelf
Top Horror Stories in Cyber and AI
Alice: You are watching Cyber Made Human, the podcast that takes complex emerging technology and cyber topics and makes them into accessible and understandable content. I’m your host, Alice Violet, and it’s October, which means it’s Halloween. So this is a special Halloween episode where we look at some of the biggest fails in AI and cyber in 2025 so far.
We’re joined by Charlie Kelly, formerly of CrowdStrike and Microsoft, and Derek Ahmedzai, who’s a new member of our team at Alice Violet Creative, but also leads his own Bristol-based AI agency called The Peeps.
So it’s Halloween and we’re doing a special Halloween-themed episode where we’re gonna dive into the biggest horror stories in AI and cyber that have happened in the year so far. And I’m delighted to have not one but two guests today on Cyber Made Human. So welcome. We’ll start with the wonderful Derek Ahmedzai, who was on a recent episode talking about AI ethics with Richard Norton from The Peeps, and who is also AVC’s (Alice Violet Creative’s) newest consultant.
You’ve been added to the team as one of our brand specialists, and we’re gonna be working on some really exciting, big rebrand projects and things together with clients. So welcome back to the show and welcome to the team as well. Thank you very much.
Derek: Thanks for having me. And yeah, we’re excited to be working together.
Alice: Thank you. And Charlie Kelly, AKA Charles, who I’ve probably known now for three and a half years,
Charlie: I’d say. Yeah. Uh, probably three and a half. Maybe four. Wow. Yeah. Time flies.
Alice: Time flies. So we know each other through the CyNam and Hub8 ecosystem here in Cheltenham. You’re a serial entrepreneur, and you’re also former Microsoft, former CrowdStrike. Now you work for…
Charlie: I work for a company called Stripe OLT, based down in Bristol. They’re a mid-size cybersecurity provider; we do basically everything in the IT and cyber space. So a lot smaller than CrowdStrike and Microsoft, but still doing really important work for some really nice and really important customers.
Alice: Okay, cool. And then in terms of the entrepreneurial things that you’ve done or are currently doing, you have Brink Coffee?
Charlie: Yep. A little coffee company, completely non-cyber, completely the other end of the whole tech and cyber ecosystem. It’s a nice little creative outlet. I’m a big coffee fan, so if I can do something that I enjoy in my spare time that gets me away from a computer, then that’s a win-win.
Alice: And also PhishDeck
Charlie: And PhishDeck, which does not get me away from a computer; that keeps me by a computer. We do phishing simulations, keeping people secure by testing their employees’ potential weaknesses and understanding where that human vulnerability and that human gap might be for them.
Alice: Okay, nice. So today we’re gonna be diving into the biggest horror stories of the year in honor of Halloween, which was our wonderful producer and editor Jack McCune’s idea. Charlie hasn’t worn his Halloween-themed outfit, so that does mean you are gonna have to start. So Derek’s gonna be focusing on some of the biggest AI fails or bad uses of AI, things that have gone wrong, and you are gonna be focusing on cyber, based on the fact that you used to work for both CrowdStrike and Microsoft.
I don’t think we can do this episode without acknowledging the royal fails that happened there. Can you give us any insight? And first of all, for anyone who doesn’t know, tell us what happened.
Charlie: Yeah, absolutely. So I think you’re probably talking about CrowdStrike there, and a little wobble that they had last year that you might have seen in the news.
There were some issues, and some airports ended up not being able to fly. About 8.5 million computers all around the world effectively were stuck in a blue screen loop. Now, the blue screen is a part of Windows: when it can’t start correctly, it gets stuck in this cycle of just displaying an error message and never getting into the computer.
And this screen started appearing all over the world in a matter of hours. People woke up one morning and, as I say, airports, schools, businesses: everywhere was affected by this. It hit the news pretty hard. It was pretty widespread, mainly from the travel side. It kicked off on a Friday, obviously a big travel day for people going abroad and coming back.
So airports felt that really quickly, and it stemmed ultimately from an issue that no one thought was gonna be an issue. It came out of the middle of nowhere: something that CrowdStrike did very regularly and had been doing for a very long time.
There was an issue with the latest update, the latest version of that, and it got pushed out and propagated through about 8.5 million computers in a pretty short space of time. And the impact of that was massive: very quick, very widespread.
If you hadn’t heard of CrowdStrike before, they were all over every newspaper, every news homepage. It was all over the place, and I think a lot of people got to know CrowdStrike, or at least the name, very quickly because of that. I was there at the time; I was working for CrowdStrike.
It was all hands on deck. Everybody was in. We were talking to customers; we were sending people to customer locations and getting them back up and running again. So mistakes happen, but it really is about how you respond to them and how you present yourself in the media.
And I think actually they came out pretty unscathed, really. They’ve still got some amazing customers, some massive customers in loads of different verticals. And even though they had that little bit of a wobble, a very public wobble, the trust hasn’t changed in what they do, and they protect people, you know?
Yeah. That is what they do, fundamentally.
Alice: I guess, if I’m right in understanding, they put out a patch for a security flaw, which they must do multiple times a day. Whenever things happen, you have to patch your computers and your phones and your apps to make sure that that vulnerability has been looked after.
But what they didn’t do was test it on a specific data set to make sure that when they rolled out that patch update, it wasn’t gonna take everybody offline. So it was their fault.
It wasn’t that someone got into their system and hacked it. It was that they made a mistake that took everyone offline, which in a way is better, because it means that as a cybersecurity company they weren’t breached, but worse in that it was entirely their fault.
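The safeguard Alice is describing, validating an update on a small test cohort before pushing it fleet-wide, is often called a canary rollout. A minimal sketch in Python (all names here are illustrative; this is not CrowdStrike's actual pipeline):

```python
# Minimal sketch of a "canary" rollout gate: deploy an update to a small
# test cohort first, and only roll out fleet-wide if every canary survives.
# All names are illustrative; real deployment pipelines are far richer.

def validate_on_canaries(update, canary_hosts, apply_fn):
    """Apply the update to each canary host; return the hosts that stayed healthy."""
    healthy = []
    for host in canary_hosts:
        try:
            apply_fn(host, update)  # deploy to a single canary machine
            healthy.append(host)
        except Exception:
            pass  # a crash here is exactly what we want to catch early
    return healthy

def should_roll_out(update, canary_hosts, apply_fn):
    """Gate the fleet-wide rollout on 100% canary health."""
    healthy = validate_on_canaries(update, canary_hosts, apply_fn)
    return len(healthy) == len(canary_hosts)
```

With a gate like this, a faulty update crashes a handful of test machines instead of propagating to millions.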
And so what was the monetary cost of those failings, do you know?
Charlie: So I don’t know what the monetary value of those failings was, but I know that obviously there was a massive knock-on impact. When you’ve got your big airlines effectively grounded, not able to get people in the air and get people moving, that starts to add up very quickly.
Big companies can’t afford downtime; they can’t afford even five minutes of outage, because it starts costing a lot of money very quickly. We are a 24/7 world, more than we’ve ever been. And when you’re looking at flights and airlines and those really big companies who just operate without us really thinking about all of the logistics, consider what happens in the background of a major airline: it’s not just the planes, it’s everything behind it.
It’s all of the air traffic control, all of the scheduling, all of the baggage, all of the food that ends up on the plane. It’s everything. And if you’ve got even five minutes where something in that chain isn’t up and running, the knock-on effect and the cascade of that downtime starts adding up, and that equals money. When there’s even the slightest 30 seconds of downtime, the impact is so rapid and so serious.
It really puts these companies in a public limelight; they’re in the spotlight, and not necessarily for the good reasons.
Alice: Yeah. And I think it’s interesting, because it’s a bit like LastPass. A few years ago LastPass was breached, and LastPass is a password manager.
Yeah. So it’s supposed to be a security solution: you use a password manager, and for the password manager to have been hacked means all your super-encrypted passwords have now been breached. I don’t know whether LastPass has ever recovered from that. And I wonder whether that’s part of it: CrowdStrike are probably in some quite strong contracts, they probably have a lot of budget in terms of legal matters, and they’re not necessarily selling to an end user.
They’re B2B, aren’t they?
Charlie: Yeah, predominantly.
Alice: So that might be why they’re able to get over that. Whereas if you’re selling to an end user, once that trust is broken, I think it’s very hard to get it back.
Charlie: Completely. Yeah. I think LastPass is a great example. All they do, ultimately, is security.
They exist to secure your secure things. And when something like that happens, when a LastPass breach happens and people’s accounts are compromised, that trust is really irreparable. When your sole purpose is to protect these credentials, one slip-up is all it takes.
Now, going back to the CrowdStrike side of things: they haven’t had so much reputational damage from that, because it wasn’t a security incident. Like you say, they weren’t hacked; they weren’t compromised. It was an internal development error. A lot of the press jumped on it and speculated that there’d been a cyber attack or a hack.
And I think it would’ve been very different if there had been a security incident. That trust would’ve been degraded in a lot of ways with a lot of customers. The fact that it was more of a technical issue, a genuine availability outage, played in their favor. If it had been a security incident, it would’ve been a very different story.
Alice: So with that, can you give us a glimpse behind the scenes of the crisis management, from the comms team’s side, when something like that happens? So if you are a company watching this, even a small business, and you have been hacked or you have suffered an outage, a technical issue that really damages your reputation with your clients, how were they managing it?
What can we learn about what they did well and badly in that situation?
Charlie: Transparency is key. Communicating to your customers quickly and coherently, and just explaining what is going on, is really the difference between keeping that trust and not keeping it. Speculation these days spreads so rapidly, like wildfire.
The press like to run with a cyber attack narrative a lot of the time, even when they don’t have all the facts. It’s a nice clickbait headline: so-and-so has been attacked, or so-and-so goes offline after an attack, with absolutely no proof or evidence to back it up.
So I think you’ve really gotta come out ahead of that and explain what’s happening. It’s best to hold your hands up, even if it’s just something really quick. You see companies these days who just tweet something like, “We’re aware of an outage, or we’re aware of an issue right now; we’re looking into it and we’ll get back to you later.”
Alice: So now we’re gonna shuffle over to you, Derek. Okay. And move on to kind of your first AI horror story of the year.
Derek: Well, I suppose, okay. So we’ve got chatbots like Grok that have just been allowed to go free, essentially. When people were making bots years ago, there was almost a code: you are responsible for what your bot puts out, right? So you would make sure that it did not create content that was, I don’t know, illegal or defamatory or anything like that. So if you had an artist bot, you’d make sure it would never be able to draw a … or it would never be able to say words that you’d blacklisted.
Whereas things like Grok seem to revel in the fact that they’re just able to do anything, say anything. And you only have to go back a few years to see when Microsoft brought out their Tay chatbot. They said, oh, it’s gonna learn from the internet, it’s gonna be really nice, really pleasant to use.
And people were obviously just treating it like any internet thing, putting bad content in, and it became racist and offensive in a matter of days. And then Microsoft just turned it off, obviously because of the reputational risk. Whereas with X, they’re quite happy just to run this thing. If it was just an accident, that’s the incompetence end, and they would probably turn it off; but I think they’re happy with the headlines that it gets.
Alice: That’s interesting, because with Grok, I’ve probably purposefully not really explored it, because of the wonderful Elon. And I wonder, because the types of people that tend to like Elon Musk are probably quite controversial, whether that then means the AI is learning off more and more controversial things.
Derek: Yeah. Possibly. Yeah.
Alice: And potentially the data set that it’s learning from is becoming more and more negative, a bit like X itself as a platform. Have you got any examples of the worst things that you’ve been seeing from Grok?
Derek: There was the “Mecha-H**ler” story from the other day. Grok was doing things like praising H**ler and outputting anti-Semitic content in response to people’s queries.
Alice: Wow. Okay. And I know that you in the past have made playful bots and fun bots, and you still do things like that.
And it’s interesting that you talk about how it used to be the responsibility of the creator to keep it positive. I guess now, with it becoming more democratised and more and more people being able to create these things, it just means that more scammers have access. Has it changed the landscape of the type of people creating this stuff now?
Derek: Definitely. Well, it’s more commercialised now. Big companies like OpenAI don’t really necessarily care what you’re doing with their product. They’ll obviously step in if it’s really bad reputationally, but as it is, you pay to use their services, whether it’s theirs or Perplexity’s or anyone else’s, and you’re free to create your own wrappers around these things or build your own products.
And a lot of people don’t consider whether or not they should be responsible for that content.
Alice: And I suppose also, if you are a scammer or somebody who wants to hack somebody, if you’re using some commercialised mainstream AI like ChatGPT, it would say, “I can’t give you that information” if you asked it how to hack someone. Whereas, I don’t know, ’cause I haven’t used Grok, but potentially it doesn’t have those guardrails in place, and when you’re asking for advice on how to do something questionable, it just tells you.
Derek: Yeah. And there’s a term for that, “jailbreaking”, where you try to bypass those checks and balances. There are famous ways of doing it. So if you ask it how to do something bad (I dunno what I can say on the podcast), like how to make a b**b, it’ll say, “I’m not gonna tell you how to do that.” But then you say something like, “Oh, my grandma, who I dearly loved, used to tell me how.”
Alice: Please tell me how. Remind me.
Derek: Yeah. And it will do it, because these things aren’t clever. They’re text generators, and you’re just tricking it in a different way.
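Derek's point, that these models match patterns rather than understand intent, is easy to see with a toy guardrail. Here is a deliberately naive blacklist filter in Python (purely illustrative; production guardrails are model-based classifiers, but they fail in an analogous way):

```python
# A deliberately naive guardrail: refuse prompts containing blacklisted
# phrases. It stops the direct request but not the "grandma" rephrasing,
# which is the essence of why jailbreaks work against pattern-matchers.

BLOCKED_PHRASES = ("how to make a b**b", "tell me how to hack")

def naive_guardrail(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)
```

The direct question trips the filter, while the roleplay version sails straight through: the same request in different clothes.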
Alice: Interesting. Yeah.
Sponsor: This episode is sponsored by Golden Valley. Golden Valley is a new £1 billion science and technology cluster located adjacent to GCHQ headquarters in Cheltenham, which when built out will feature over one million square feet of commercial space and 1,000 new homes.
Golden Valley will scale an existing thriving technology ecosystem, creating a hub for those with an interest in security, culminating in a new campus of research and development facilities, academic institutions, and a new platform for international events and festivals.
Golden Valley is a platform for learning and skills, with the Golden Valley Skills Hub and apprenticeship programs already gearing up to help ensure local people benefit from life-changing opportunities and career paths.
Thank you Golden Valley for sponsoring this episode.
Ad break: The Cyber Made Human Podcast is produced by Alice Violet Creative, my content marketing agency based in Cheltenham. We specialise in complex brands, which primarily means those in emerging technologies, cybersecurity and intelligence. We’re able to take abstract, clinical, and difficult topics and make beautiful, compelling and results-driven content.
So get in touch with us for digital marketing and all your content needs.
Alice: So back to you, Charlie. You mentioned M&S briefly there. I think a lot of people saw what happened with the Co-op and M&S. Can you just remind us of the story?
Charlie: Sure, absolutely. So back in April, over the Easter weekend, M&S were ultimately the victim of a pretty sophisticated cyber attack.
It caused pretty widespread disruption to most of their online operations, so things like click and collect, things like checking how much stock was in a store. Most of their online website was down for about eight or nine weeks. And that was all the result of a very well executed cyber attack from a group called Scattered Spider.
You’ve probably heard their name being thrown around in the news a lot.
Alice: Where are they based?
Charlie: They are based all over the place, predominantly in the UK. They’re English-speaking, but there’s speculation that there are some members over in the US and dotted around everywhere.
So yeah, it led to weeks and weeks, months, of disruption for M&S. And they reckon the financial hit as a result of that cyber attack will be £300 million. That was back in April, and they were only fully back up and running, I think, in August or so.
So that’s a long time, and it’s a lot of money for a company that does, I think, roughly six and a half million pounds a day in revenue.
Alice: I was gonna say, what’s the percentage? Do we know the percentage impact of that £300 million?
Charlie: So I don’t have the percentage, but I know that’s a big hit to their bottom line.
Yeah, it’s a big hit to anyone’s bottom line. You can be the biggest company in the world, you can be Apple, Amazon, Microsoft: £300 million is a big hit. So they’ve really only recovered from that in the last couple of months. And as for the group that was involved, Scattered Spider, four people were arrested, I think back in July.
Yeah. You’ve got your sort of Computer Misuse Act charges, you’ve got money laundering. And I think this time, in a kind of landmark prosecution, these teenagers (they’re all under 20; some of them are still in school or college) are getting charged with being part of an organised crime group, because these cyber groups, these adversaries, these actor groups, are causing the same level of destruction as terrorist groups.
They are taking complete companies offline. They’re bankrupting companies. In some cases they’re causing disruption to life. When we’re looking at more advanced attacks on CNI, critical national infrastructure, they’ve got the power to stop people from having running water, electricity, those fundamental things.
And we in the UK, and all of our foreign allies and partners, are really taking this up a notch now with these groups. They’re really trying to deter future young people especially: people who are 16, 17, bored at school, playing around with computers and things.
They’ve really got to understand the boundaries. They’ve really got to understand that they can’t be doing this.
Alice: And what was their motivation? Was it a ransomware attack, and did they make any money as a group?
Charlie: Yeah, so it was a ransomware attack, ultimately. Scattered Spider are the group behind the access, getting into M&S and being able to pass that access on to someone else.
And then they brought in a group called DragonForce, who are a ransomware-as-a-service group. Ultimately, this is a proper enterprise where you can effectively buy the capability to fully ransomware a company. DragonForce take a cut of the profit, and Scattered Spider effectively take the rest of the money for brokering the access.
And I think this is the era we’re in. It’s not people in hoodies anymore; there’s that stereotypical image of hackers sitting in a hoodie typing green text into a computer. These are proper criminal enterprises. They have HR departments; they are proper businesses now.
You can just buy a service. Just as I can buy, I dunno, a sandwich off the shelf, they can buy a ransomware service off a virtual shelf. And it completely cripples companies. Now, M&S weren’t completely crippled; they’re back up and running. They had backups; they had recovery plans.
But we’ve seen companies in the past go completely under. It’s not known whether M&S paid a ransom, and it’s not known whether the attackers made any money from it. There’s speculation; there are always gonna be rumors floating around. The only people who know whether they paid a ransom are M&S.
Alice: I’ve been having conversations with people recently about more companies actually paying the ransom, because attacks are becoming so much more sophisticated. It might have actually been Cath Golding, former GCHQ, who was saying that sometimes, especially in critical national infrastructure, they do have to pay the ransom, ’cause it’s so essential to just get back online, and then they might have to go backwards and work out what happened.
So I guess the only positive that’s come out of that story, because it was M&S, and the Co-op had something similar, I believe, is that it’s becoming a mainstream conversation. Friends I have completely outside the industry, and people like my sister, were mentioning it, and these are people who’ve never said the word cyber to me before.
It’s positive that people understand the risk a bit more.
Charlie: Completely. The more we talk about this, and the more it becomes the norm to publicise these incidents and discuss them, but also deter the people doing them with criminal prosecution and legal proceedings, the easier this gets, ultimately, for everybody involved. It becomes easier for defenders to defend against, because we’re having open conversations and we’re sharing intelligence, what we call IOCs, indicators of compromise. The more open this dialogue is, the more everybody benefits and the more everybody understands the practices and techniques these groups are using now.
They’re always gonna be evolving.
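The IOC sharing Charlie mentions is, at its simplest, a lookup: compare what you observe in your own logs against a shared feed of known-bad indicators. A small illustrative sketch (the indicator values below are made up; real feeds use standards like STIX/TAXII):

```python
# Illustrative IOC matching: flag observed events whose value appears in a
# shared feed of indicators of compromise. The indicator values below are
# invented (the IP is from a documentation-only range).

KNOWN_BAD = {
    "ip": {"203.0.113.7"},
    "domain": {"login-evil.example"},
    "sha256": {"deadbeef" * 8},
}

def match_iocs(events, feed=KNOWN_BAD):
    """Return the subset of events that match a known-bad indicator."""
    return [e for e in events if e["value"] in feed.get(e["type"], set())]
```

The value of open dialogue is exactly that the `KNOWN_BAD` feed grows: every organisation that shares its indicators makes everyone else's lookup more likely to catch an intrusion early.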
Alice: Well, also, I guess, sorry to interrupt you, but I imagine that by arresting these people, they’re gonna offer them a plea deal: if you tell us more about the organisations you’re working with, more about the way that you did this and why, and show us more of the spider web of people involved, then you’ll get a lesser sentence. And that allows them to learn what people are doing.
Charlie: It’s a difficult balance, right? Because it’s advantageous for everybody to offer a reduced sentence in return for information and connections and things. But in the same breath, it’s also advantageous for these criminals and these groups to get the maximum sentence, because it deters future people from doing it. You take it as an example to kids who are maybe thinking about messing around with this stuff: these 16-year-olds are seeing people who have done it before them go to prison until they’re in their forties and fifties.
It’s a fine line.
Derek: I’m guessing, ’cause we’ve seen this happen recently, that one of the key drivers is the ease of using crypto for making ransom demands and getting paid. Because young people are now seeing information about crypto every day, all the time, and being exposed to scams and frauds and things like that.
Is that one of the drivers? And also, going back to what you said about finding out more: if people are paying via crypto, is it possible to monitor who’s paying whom? Could you see if someone did pay a ransom, or are they going through a system that makes that opaque?
Charlie: Yeah, so it’s an interesting one. I think crypto is a lot more mainstream than it was a long time ago. It’s so easy to get into now; there are no barriers to it. You probably remember the days when cryptocurrencies were first a thing: you had to have a physical wallet.
I think those were the little Ledgers, those little USB sticks, effectively, that you plugged in, and they stored your wallet, your Bitcoin or whatever currency it was. These days, you download an app. You get Coinbase, or I think Revolut have crypto in their app. It’s just everywhere.
And I think it has become a lot more accessible, and the education on it is there as well. These kids know what they’re doing with it; they know how to get up and running with it. When we’re talking about ransoms, from a defensive perspective, paying the ransom is ultimately the worst thing you can do.
And obviously, you mentioned CNI and things: if you have to get back up and running because you have CNI or a business-critical function, sometimes paying is the quickest way, because you get your key, you get your stuff back up and running, and that’s it.
But by paying, you are funding the next wave, and the subsequent waves, of these attacks. £20 million, let’s say, is a typical ransom for a massive organisation. How far does £20 million, or $20 million, go in buying infrastructure, buying domains, buying all the stuff you need to carry out these attacks?
It costs nothing. This funds generations of these attacks.
Alice: And also, when I worked with Sophos, we used to cover stories where people would pay a ransom and never get their data back anyway. Sometimes the scammers don’t actually even have your data; they’ve made it inaccessible and taken you offline, but they had no intention of ever helping you get it back, and they’ll just vanish. And then additional scammers might come in and say, “We can help you get your data back; you can pay X.” It’s a never-ending thing. Once you’ve shown them that you’ll pay to get your data back, you’ll just be scammed by scammer after scammer.
Derek: Well, it’s the same with losing any kind of account, but it’s worse with crypto. You see it on Instagram and other places: someone says, “Oh, I’m locked out of my wallet,” and then you get all the bots piling in, saying, “I’ll help you.” Exactly. And they just wanna take it.
Alice: Yeah. Interesting. So, taking a slightly different turn into the mental health area: I know we have an episode coming up, actually this month we’re doing a live episode, on the empathy algorithm and how more and more people are turning to large language models like ChatGPT for therapy and confiding in them, or having AI girlfriends or friends or whatever it is, because you get an immediate answer.
It’s private. There are lots of reasons people are using it. But there are some huge risks in using an AI chatbot as a therapist or a friend, and there are some unfortunate stories coming out this year. Can you tell us a bit about them?
Derek: Definitely, yeah. So just the other day, the Independent ran the headline “ChatGPT is causing mania, psychosis, and death.”
And it’s for exactly that reason. And there was a story in the New York Times earlier as well: a girl had used a chatbot as a therapist and had taken her own life. It’s really about using all these automated tools in place of real therapy. The tools are set up to be flattering to the user.
They’re set up to always be praising you, and it’s not a very good model for real communication. It’s very easy to fall into weird relationships, or parasocial relationships, or whatever. Like you said, falling in love with AI boyfriends or girlfriends. And you see stories all the time now, whether it’s on Reddit or Mumsnet or wherever. It’s in the real world now. People are using these things in place of real human interaction and connection. In some cases it could be good and healthy, but in others, like the Independent story, it can really spiral and cause problems. If you’re talking to a chatbot and it’s constantly saying, “Yeah, you’re right,” you could be doing all kinds of bad things.
Alice: Yeah. And I read a story recently about someone whose partner used AI chatbots, and when they were having difficult conversations, rather than just going to each other,
the partner would go to the chatbot, which would often lead them down a different path, or echo back everything they said, and I think it ended up ruining that relationship. Ultimately, human connection and an AI chatbot are so different. Relying on
a large language model, built on huge amounts of data, that just gives you a succinct answer it thinks is the right thing to say at that moment, is so damaging. In that case it cost a relationship, and in other cases people have taken their own lives. It is scary.
Derek: It definitely is. And it’s not just interpersonal stuff; it’s business communications too. People are using it to workshop what they want to say to someone else. They’re using it to write difficult emails, difficult communications. People used to joke that you’d send a message through ChatGPT, and the other person would run it back through to turn it into
text they’d actually read, so no one ever really gets the real message. But it’s happening all the time now. At The Peeps, we put a grant bid in recently and used ChatGPT just to tighten it up, the thing everybody does: match the language the funder writes in and make sure the grant is suitable.
And then at the end of the process, the funder said, we had so many obvious AI grant applications, please just write in normal language and say what you want to say, because everyone is doing this.
Alice: I’ve had that with job applications. When I put a job out for my business now, it’s so obvious that all some applicants have done is feed in the job advert and have it spit out a CV that covers all the keywords.
And when you look at their job history, they’ve never worked in marketing or content, and I’m like, well, how have you done strategies for B2B and tech companies? It’s because that’s what I said we do. It is frustrating, and I think people are using it on dating apps as well.
If you’re looking at a profile and making yourself out to be the perfect person when you’re not, why do that? You’re just pretending in order to connect with someone you would never have connected with.
Derek: It’s just a different mechanism for the same thing as catfishing, isn’t it? Yeah.
Charlie: I think we’re starting to move into that phase now where every opportunity possible, every text field, has got a little AI button. I think I saw that on Instagram now, when you make a new post, it gives you the option to AI-generate a caption
for your post. That’s not the point of social media. That was never the point of Instagram specifically, where you share a photo of what you’re doing with a little caption about it. We’re teaching kids to be independent thinkers, and we’re constantly telling everybody to be their true selves, to be independent, and not to have to cover anything up.
Whilst simultaneously these apps are removing the need to think. And long term, because this is still relatively new on Instagram, that’s probably going to be quite damaging. When you couple that with AI-generated content, for example, which I’m sure you’re super familiar with, we’re going to have AI-generated content with AI-generated captions.
Yeah. With real people behind it.
Derek: Well, not even real people in some cases, of course. But what’s the…
Charlie: What’s the point? For lack of a better explanation, what’s the point?
Alice: I mean, I’d love to know what you think, Derek, because you create a lot of AI-generated content. That’s kind of your specialism.
But as a user, it makes me think, would I even want to use social media? When ChatGPT first came out, I admittedly became quite dependent on it. It was just amazing. But then I suddenly got sick and bored of it, and realised that, okay, it might get rid of my typos in two seconds, but the work I was putting out wasn’t as good.
It does save some time and efficiency. But as you were saying with social media, if it’s an AI-generated image with an AI-generated caption, why am I as a human being interested?
Derek: Yeah. Why are you interested in posting it or engaging with it? Why is anyone else interested in seeing it or reading it?
And I think most people aren’t. There’s the dead internet theory, which says that most of the content online has been created by bots to be consumed by bots, and humans are just existing on the periphery of it. There is an awful lot of junk. Norts’s favourite saying is Sturgeon’s law: “90% of everything is crap.”
Generally, anyway. But when you add in all this bot-related content, it makes the volume of it so much worse. And we are seeing, when we talk to young people, that they’re not into AI as much, or even at all. So I think it’s very much a generational thing.
Obviously I’m quite old. Was it Douglas Adams who said that things invented in your lifetime have an element of magic to you, but if you’ve always grown up with something, it’s commonplace?
So to my generation, or for older people, it’s really cool, it’s a fun thing, but for young kids it’s just another tool; it’s not interesting on its own. There’s also the element of cringe: there’s so much bad content, and it’s so easy to tell when text is fake, when images are fake. It can turn people off very quickly.
Alice: I wonder if it is that obvious, though. It might be obvious to certain people, but it makes me think that what we’re going to see is more things like romance scams, where you can create an AI companion and pretend to be a real woman or a real man. It would be much easier, because I think…
Derek: One-to-one is harder, because there’s that emotional element, and your brain’s not necessarily thinking properly. But when it’s, say, a story from a brand, you disconnect, and so you can think, oh, I don’t like that, because it reads wrong or looks wrong.
Charlie: Yeah, it’s really, really interesting, and on your point about how it seems a bit magical to people that haven’t had it before:
I think we are right at the start of this. It’s still in a bit of a bubble for us. But think of the generational differences in the next couple of years, when kids who are growing up now get their first phones and get on Instagram. This is just going to be normal to them.
They’re never going to have had that period with no AI whatsoever; they’re just going to have the little button that generates them a nice little caption and a bunch of hashtags.
Derek: Yeah, and I think part of that is that the people who fall for these scams, like the fake CEO scams, don’t have the experience of seeing this stuff. They’re older and more credulous.
It’s a worry, because look at everyone on Facebook liking these obviously fake images. They tend to be, well, Facebook is a bit of an old-person platform, isn’t it?
Alice: Yeah. Give us one more bonus small horror story each. Derek first.
Derek: So, AI is taking our jobs. Everyone was kind of pooh-poohing the idea, but it really is happening, because it’s obviously not up to the individual. CEOs are buying into the idea that AI reduces costs and increases efficiency, because that’s what it’s being sold as, and it’s happening.
There was an article just yesterday by a guy called Brian Merchant, who wrote Blood in the Machine, about the Luddites. He wrote that translators are one of the worst-hit groups, because now AI translation is good enough for most people, and people aren’t using freelancers anymore.
They aren’t using agencies either, or they are, but the agencies are using AI. They might give content to translators just to proof, edit, and fix, but at a much reduced rate. So a lot of people for whom that’s their main job are losing all that work. And it’s the same story whether it’s illustration, writing, or software development. And there’s a whole problem with juniors: how are they going to get into the industry if all the work they would be doing is now being outsourced to AI tools?
Alice: And it’s such a shame with creative things. Speaking of translators, a book that I love is The World of Yesterday by Stefan Zweig, which was originally written in German and translated into English.
The translator wrote a kind of prologue presenting why they chose certain words. When you look at translating a book, and this is a really poetic, beautifully written book, they could have easily just fed in the German version and said, make it English. Instead they specifically chose words that evoke an emotion.
What a shame to lose that. You’d lose the storytelling, the tone of voice of the writer, and just have any old version. I’m really sad to hear that; it’s a big problem.
What’s your final horror story?
Charlie: So, on a similar tangent, not necessarily jobs being taken, but the use of AI to do things in a different way:
it’s making life hard from a cyber defence perspective, because attackers can now write very convincing phishing emails, or anything else they’re going to use to attack you, in any language of their choice. A couple of years ago, with your typical phishing emails,
there’d be spelling mistakes, weird formatting, weird wording, and that was typically down, in some part, to the attacker not communicating in their native language. Now that’s not a problem, because they can tell ChatGPT, Grok, whatever LLM they want, who they’re targeting and what they’re trying to do.
You can literally say, write me a convincing hook, a convincing story, a convincing email in this language, targeting this specific person who works at this company, and it’s going to output exactly what you need to actually, successfully phish them. But on the flip side of that,
it’s also helping defenders, because it’s helping us dissect these methods. It’s helping us understand big, complex incidents with very long and sophisticated attack chains, the different steps an actor has taken.
So the playing field is always moving. The attacker moves forward a little bit, then the defenders move forward a little bit, and sometimes the defenders are a little bit out in front. It’s a constant back and forth, and AI has done good for both sides of that.
Alice: I think that’s interesting, and I guess it’s not just text-based. When you talk about scammers potentially not working in their native language, in romance scams or CEO scams and so on, now you could actually put a voice into a voice-cloning tool, deepfakes, and just have a romance scammer on the phone actually sounding like an American or British man. So yeah, definitely scary in terms of what’s coming.
But finally, to wrap up our Halloween special, I’d love for you both to share something our listeners can binge: your favourite horror movie or horror series. What do you recommend, Derek?
Derek: Should I go first?
Alice: Derek first. Well, obviously, given his t-shirt.
Derek: Yeah, I’m wearing a Ghostbusters t-shirt, so I’ll have to say definitely watch that. But my favourite horror film at the moment is probably The Thing.
Alice: What’s it about?
Derek: 1982. It’s a classic supernatural body horror by John Carpenter. They’re stuck out at an Antarctic research station and there’s an alien.
Alice: Oh, exciting
Derek: Yeah. And it’s got the best, uh, eighties practical effects ever.
Alice: Ah, cool.
Charlie: I’m gonna say American Psycho, Patrick Bateman. Not gonna ruin the twist at the end, obviously. But if you haven’t seen it, go and watch it. And look at Paul Allen’s business card. Very good business card.
Alice: Yeah. I’m gonna go for American Horror Story: Freak Show, because it’s one of my favourite series. I’m a visual animal and I love anything that’s really overly stylised. And I know American Horror Story tends to lose the thread as a series goes on,
but the first few episodes of Freak Show are incredible. Dandy, the son of a very wealthy family, inherits a huge amount of money. It’s set at, I think, a 1950s freak show in America, and he essentially buys one of the freaks, a human being, as a toy. And there’s also a bit of a scary clown in there.
And I love a vintage clown costume. It’s a really good bingeable series that I will definitely be rewatching in October. So thank you for joining us, guys, and thanks for watching the Halloween episode. Make sure to share, like, and subscribe. Thanks.
Charlie: Thanks for having us.
Watch the episode now!
Watch on Spotify
Watch on YouTube