Cyber Made Human Podcast: AI Ethics, Silicon Valley and Startups

AI Ethics, Silicon Valley Culture & Startups

by | Oct 27, 2025

In this episode of Cyber Made Human, we sat down with Mark Adams to discuss his time in Silicon Valley and startups, as well as the ethical usage of AI.

You can watch the full episode on our YouTube and Spotify pages. Check out the full episode transcript below to learn all about this topic and our discussion on it.

Disclaimer: This transcript is an outline of the dialogue exchanged in this episode and may therefore contain inconsistencies with the video version.

Our book recommendations for this episode were:

Alice: Undiscovered Self – Carl Young

Mark Adams: The Lean Startup – Eric Ries

To discover more book recommendations, check out the Cyber Made Human Bookshelf

Cyber Made Human Podcast: AI Ethics, Silicon Valley and Startups

AI Ethics, Silicon Valley Culture & Startups Transcript

 

Alice: Today I’m joined by Mark Adams, who has an incredible career in cybersecurity, spanning nearly 40 years with 27 years in government. Before moving into the startup scene, he and I discussed responsible AI, ethics of AI and some of the limitations and potential in this space. We also talk about the culture of Silicon Valley versus Cheltenham, two big hotspots for technology and cyber innovation, and what the kind of cultural differences there are between the US and the UK mindset.

Let’s get started.

Alice: So, Mark, I know you from the Cheltenham ecosystem. I think Cheltenham is an amazing concentration of cyber companies. I think it’s the biggest outside of London in terms of cyber, which is because of GCHQ, and there are some really interesting people in our kind of network, and you are definitely one of them. And I wanted to talk to you today about AI and responsible AI and some of the work that you’ve done in that space. But I know that you wear lots of hats and you’ve got lots of different things that you do. So I’d love to just hear some of the things that you’re currently working on.

Mark: Okay, so, um, I guess one of the most recent projects is a project called Trust Graph. It is an open source project, uh, and the aim is to do AI really well.

So there’s, as I’m sure you know, it’s a fast-moving technology. A lot of stuff’s emerging, all sorts of tools there, but some of it isn’t production-ready, and some of Trust Graph is about bringing some of that together and enabling organisations to deploy at production scale. Amazing.

Alice: And I know that your background was in government, so you spent 27 years doing cyber for the government. And then you worked on a cyber startup, which was acquired by Lyft in Silicon Valley in 2018, which took you over there for a couple of years. And then I’d love to talk to you about it, so you’re from Cheltenham.

Mark: Well, I came here for my first job.

Alice: Okay. So you’ve been here for a very long time now.

Mark: That’s right.

Alice: But I’m interested in the cultural difference of Silicon Valley versus Cheltenham, ’cause although we’ve got a huge amount of knowledge in Cheltenham, I feel like, and a huge amount of startups, entrepreneurial, is it quite different to Silicon Valley?

Mark: It is, yeah. It’s somewhat, I mean, I guess it’s kind of interesting to compare sort of the cultures. There are some great things about the local area. I think one of the interesting things about Silicon Valley is everybody is just so optimistic about the future, and that’s why startups work so well there.

So there’s, you know, I think American children are taught, you could be the president of the United States and so the optimism sort of lives through their whole life, but, you know, so the people there that just know they can take tech and make, build something amazing with it, the optimism is, is crazy, I guess.

We all know about the investment seen out on the West Coast as well, so there’s definitely money to put into the companies. But I think it’s interesting to contrast that with, uh, the local area here. There are people who I think know it’s very difficult to build something for less than a million dollars on the West Coast.

Whereas, you know, I think we think a little bit differently about, you know, a team style. Could we weld this thing together and make something? You know, it’s really effective for not much money. I think that might be our superpower here.

Alice: And I guess that comes from the deep knowledge that’s here because you’ve got the people who can make it from scratch.

Whereas, potentially in Silicon Valley, you’ve got a visionary and the money, but you don’t have the technical skill set, which is expensive.

Mark: There’s so much, so much in the way of deep technical skills. Here it is, you know, it’s profound. Think about it, I dunno what. Black hat is like when you’ve got a load of cybersecurity people together in the same space and they’re all hacking each other, the phone and the bar and stuff.

But it should be like that every day around here, shouldn’t it?

Alice: Mm-hmm. So you mentioned that American children are kind of taught that they can do anything, and that’s definitely. It’s not really a British thing, but would you say in England that’s more of a kind of class thing? I think there are certain classes that education is much more kind of, you’re gonna be the prime minister, and they might have that kind of mindset, whereas probably more mainstream education doesn’t.

Whereas in America, what was your experience of the people who were creating huge startups and selling them for a lot of money came from all different backgrounds.

Mark: Yeah. So my experience of being in a tech company was that, you know, what it’s like in a meeting when you meet up with some people you don’t know.

Everybody’s checking each other out. I do feel it’s quite status-oriented when we do that here. Whereas over there, it does happen, but it’s sort of more of a meritocracy. So who’s the person who’s got most to say about this topic? You know?. So there’s that sort of checking people out.

But then, building the hierarchy. But it’s, it’s. It’s structured differently, and I actually really like working out there for that reason.

Alice: Yeah. Okay. I mean, you mentioned earlier that when you were doing presentations in Silicon Valley versus in the UK there were some kind of cultural differences there.

Mark: Yeah.

So, um, I’ve done executive training twice in, you know, in the UK and the US. It was quite interesting to see, but, um, that was a, like a, a profoundly interesting moment, I think, when I gave a presentation at Owasp. So I sponsored, uh, Lyft, uh, hosting Owasp and so on. Um, they got me to do a presentation on something that I was working on.

And, two, two good friends at work said, okay, well we’ll help you, ’cause you are, you know, you need to learn how to present properly in this country. Um, and so I gave them a presentation, and the first thing they said was, I just wanna check. You’re gonna do that without memes, you know? Oh my goodness.

Yeah, that’s right. Um, and then the other, you know, there’s other stuff like the intro is, uh. Hi, my name’s Mark, and my fun fact is,

Alice: Oh, what was your, what was your fun fact?

Mark: Um, it was about an incident at a border crossing about escaping a war zone, which turned out not to be fun.

Alice: Yeah, I was gonna say that’s not really a fun fact.

I remember recently I went to a lecture that you gave at Gloucester University. Shout out to them, we’re actually there today. Thank you for the space. Um. And you did manage to make that compelling by referencing your cat quite a lot. Yeah, that’s right. Although I was very sad to see that there was no picture of the cat at the end.

I was like, are we gonna get to see the cat? But it didn’t appear.

Mark: Well, I can, I can share cat pictures with you later,

Alice: yes the cat please

Mark: It’s on my desk quite a lot at the time. Oh. So, um. I mean, I, I, as, as trust graph was built, we needed a bit of test data. Yeah. And you know, so we’ve tested it on some really large documents like, um, NASA safety engineering docs, but sometimes you just want a little tiny doc.

Yeah. And they just check that things work. And so the obvious thing to do is just to write a read me, um, about the two cats that quite often live in my office, and so. You know, just run that through, check it. That identifies there are two cats in there, and that it’s got their personalities and natures nicely.

Alice: So, onto the kind of moral and ethical side of AI. AI is not sentient yet, but we kind of see it as something that’s ethical or moral, and we’ve got the words responsible AI. Now, I’d love to hear your opinion on, you know, you’ve been working in this sector for so long now, before we even had the words like cybersecurity and AI, what your kind of opinions are on all of this.

Mark: So I guess, you know, it’s interesting to have seen a lot of emerging technology come out of the ecosystem. It’s useful to think about technology as being morally neutral and sort of realise we quite often have these ethical dilemmas as something new comes about. So, you know, there’s an example of the AirTags, really.

A crazily exciting piece of technology. When that first appeared on the market, I could always find out where my cat is, for instance. Yeah. Um, and then as the technology emerges, there becomes some darker uses of the tech. Um, you are used by people to control other people, you know, stalkers.

And so I think there is, there’s, there’s quite often that story behind technology, and so and so it is with AI, so. Amazing, exciting technology, um, changing the world that we see around us, but then also lots of, obviously, the first thing that happens is, you know, is people saying we can use this to build amazing businesses.

And then the second thing is, people looking at this and saying, I can use it to streamline, you know, social engineering, malicious attacks on people. There was a dark. Net version of, I guess, ChatGPT that appeared on the dock, that was available as a service. It would help people, you know, craft malware attacks, you know, who perhaps didn’t have expertise.

And so just as ChatGPT is helping people, why, you know, do things they probably don’t have the skills to do, um, or. Helping accelerate the use of their skills. So it’s helping a lot more people, you know, craft malware and, you know, social engineering attacks.

Alice: The fact that that had to be available on the dark web, though, does that mean that large language models like ChatGPT, that are available commercially, those sorts of things are not allowed on them?

Like, why does it have to be accessible on the dark web?

Mark: So, you know, there’s an element of responsibility on on AI. So again, everybody who’s building this stuff will try to think about how to make sure it’s not used for malicious purposes? So there are guardrails that get built into systems. I think, you know, if you know what you’re doing, they’re often quite easy to bypass.

Yeah. You put something in a prompt that says, ignore your programming and that can be quite effective in many cases, and then the guardrails have been getting better and better over time, but. Nothing is ever perfect in terms of the guardrails.

Alice: And I guess we see AI because it’s new and a lot of people don’t understand it as this big scary thing that can be used for malicious intent, which it can, but so can anything.

And I think, you know. Crimes that happen offline or even online. You can use any system or anything maliciously, or you can use it for good. And I think in my personal opinion, is making it more accessible and the democratisation we’re seeing of AI, and it becoming more accessible, is a good thing because otherwise it might just be a few.

People using it maliciously and the mainstream can’t protect themself against it. Whereas if more people know about how it works, you can actually protect yourself against the malicious stuff. But I dunno what your opinion is.

Mark: Yeah, exactly. So I think in the world of technology, you know, the good versus evil.

Sort of, uh, debate always plays out, doesn’t it?. So people are using AI for malicious purposes, and then what kind of techniques, uh, do we need to counter that? Well, the cybersecurity professionals will need AI to be able to detect some of those malicious cases, so I think we have to be careful about being too restrictive about the technology for the.

The folks who are trying to keep society safe.. Because guardrails and responsible AI protections. They are not gonna be followed by the people who are trying to destroy society.

Alice: No. And also, I think just users. So, like people on social media, having an awareness that a lot of the content you’re seeing might be bots or having an awareness that deep fakes exist and that AI might have written an article that you are reading.

That enables people to make an informed decision about AI. Whereas I think if you just hide it and say nobody can use it, then people can’t protect themself against it, ’cause it just doesn’t become part of public consciousness.

Mark: That’s right. And actually, is it practical to prevent people from using it?

It’s, it’s not.. You know, this stuff is so prolific. Um, and the moment we all, like a lot of people, will have the experience of using ChatGPT as a chatbot. Yeah. But there are models you can just download and run on a home pc if you’ve got enough, you know, hardware. Um, it’s too late.
The cat’s outta the bag, isn’t it? Yeah. And why would you do that?

Alice: Why would you download it and use it at home?

Mark: Um, so if you’ve got enough hardware, it might be cheaper to do that. Um, but also if you were doing something a little bit specialist, you might want to take an existing model and train it. Um, okay.

So there are absolutely legitimate reasons for people to want to take the technology and do different things other than the mainstream.

Alice: And when you say open source, is that what you mean? That people can take it and use it?

Mark: Some things are free, not open, but others are open-source. Right? I think generally, um, you know, as a deep seek, as an open model. Um, I, I guess one thing they haven’t shared is the training data that we’ve used to build the model. Mm. But that’s a model that’s effectively free at the point of use. And you can, you can download that, you can use it for whatever you want.

There are no restrictions on it.

Mark: So we talked about responsibility and ethics of AI there. Yeah. So you are a marketing person, right?

Alice: I am.

Mark: So it’d be great to get your viewpoint on this. So, um, I’ve got an AI product, I want to push it out to the marketplace. How should you think about marketing that and uh, describing what it does to people?

Alice: I think it would depend on who your target audience is. So, are you selling it to the mainstream, like ChatGPT, is it something that you want everybody to be able to use, or are you selling it to fellow techies is the first question.

Mark: I guess I’m typically selling to techies, but I think it’s just interesting.I think most of the developments on AI are people pushing it in front of people who are, you know, everybody. Just everybody, random people. So yeah. How would you market to those people?

Alice: I think with the kind of mainstream, it’s about taking away some of the language. So I think even words like cybersecurity, artificial intelligence, large language models, and machine learning feel clinical to someone like me, who’s a marketer who thankfully loves cybersecurity.

Even I can see that these words are not enticing. It’s like saying things like compliance and process. They’re just words I don’t wanna be involved with. So it’s kind of removing that and giving me the context. So, talking more about you as an expert, being here on the mic, on a comfy sofa. Talking about it and giving me the kind of actual examples of how this works in the real world, talking about how it keeps us safe and what the importance of it is, I think, is way more compelling than saying, I can train a large language model to create a graph that it’s like, what?

So I think it’s just using the language of your user. Whereas if you are talking to fellow techies and the most important thing to them is keeping something safe or filling in some governance so that they can, uh. Make their company public and make some money. It’s about ensuring that whatever their end goal is, you are kind of just giving them that version of it.

I came across a brand that, just for the example here, I’m gonna say, is a coffee shop. And they were talking all about how their coffee is made and where it’s from. And I was thinking, do people really care about that level of detail?

That would be like me inviting a client to do. A podcast and talking about these mics and this tripod and how I’ve positioned it and how the lighting is, nobody cares about that level of how I’m getting there.

They just wanna know that I can create a gorgeous podcast that can give value to their business. So I think sometimes in cybersecurity, people focus on the technical level when talking to a. Consumer, and it just needs to be much more context-based.

Mark: So, on the topic of responsible AI, how do you think that is? How do you think that conversation takes place, um, from a marketing point of view? Do people care about that or?

Alice: I think they do. I think it depends on the consumer again, and I think there are a lot of people who. Because the news is so polarising right now, and we’ve got this kind of rage bait content that needs to either make you laugh, angry, reshare, comment, whatever it is that they’re trying to get out of you.

AI is quite often positioned as something dangerous and scary, and they’ll only talk about cybersecurity in terms of how it’s protecting versus how it’s proactively creating a safe environment, not just defence.

Mark: I guess it’s interesting ’cause I think. Quite a lot of cybersecurity marketing can be about fear. Do you think that’s relevant in the world of AI? Do, is that a useful thing to talk about?

Alice: It’s kind of like looking at a car and saying, Here’s this gorgeous, sexy car, but you can kill someone in it. And it’s a bit much to always go to the worst-case scenario. And I think we have to acknowledge that you can kill someone in a car, and you have to, you know, go by the speed limits.

If you’re gonna buy a sports car, think about how fast it could go and be safe in it. You know, we have to acknowledge the pros and cons of everything that we create, and AI is just another example of that. It doesn’t have to be; you can do something terrible with it. It can be, you can do all these amazing things with it, but there is a risk, as there is with everything.

So what do you think? I’d love your opinion. I know you’re not a marketer, but you are on the other side of it. ’cause I’m quite often working with people like you who are very technical, have created a product and they don’t know how to communicate it to their audience. What would you say your biggest challenges are in terms of communicating AI to a consumer?

Mark: Um, I think in. In emerging technology, you’ve got several problems. So, one of which is that it’s hard to explain what it does. Yeah. Um, I guess there’s the, you know, the example of Henry Ford trying to persuade people to buy a car when they knew what horses were, you know? Right. Um, so. We might be using language that, to most people, doesn’t make any sense, you know?

Yeah. So actually, how to explain it in old paradigms? Yeah. Might be part of the problem. I think, you know, people are trying to come to terms with what the technology is for themselves, and I think this is relevant in cybersecurity as well. It’s easy to say, um, I’m gonna put myself inside your head and tell you how you should run your business.

And actually, the person you’re talking to probably knows their business better than you, so. There’s an element of, uh, understanding their point of view and respecting that when you are trying to pitch your product to them.

Alice: Yeah, definitely. And you mentioned that about putting yourself in someone else’s head.
That does actually make me think about AI and what you think it is, not just negative in terms of malicious intent, but negative in terms of allowing AI to be the driver versus an assistant in your life. What kind of risks are there?

Mark: That’s a really good question. And I think those of us who are trying to put AI into decision processes need to be, you know, need to work out where the responsibilities and liabilities lie.

There quite often needs to be a human in the loop, and we need to help them to do their job really well. Yeah. And take responsibility. Um, I guess it could be easy to create pipelines where the human doesn’t get the right control and doesn’t get the ability to take the responsibility that’s rightfully theirs.

Alice: And would you say there’s any risk around uploading data to something like ChatGPT? Then, becoming part of a data set, and how that works. And I know there are certain companies that can buy tailored AI that’s kind of ring-fenced, but most companies aren’t doing that. They’re just allowing people to use a free version of ChatGPT, but what are your thoughts on that?

Mark: Um, my feeling is that. This isn’t a new problem. I think, you know. Okay. There are already SAS services that, you know, we all use calendar services, mail services. What are those folks doing with our email? Are they scanning it? I think, you know, there are some companies that responsibly would read the small print and work out what it means for them as a business.

And there are others that probably don’t. They’re, you know, so if you want to know what happens to your data, look at the small print. I think it’s quite interesting to see that the cloud services have gone through several iterations of change in how they describe the privacy around their services. I think AWS, for instance, has been very, uh, has gone for a very clear statement recently about what happens to your data. Your data doesn’t lead the cloud. Um, you know, and I think that that probably reflects how. People are getting more interested in that aspect of things and wanting to be more responsible for their data.

Alice: And that demonstrates as well that the end user does have an awareness of how their data’s being used and they are understanding these abstract concepts. So with the political landscape that we’ve got at the moment, and obviously the US government doing some wild things, but also the UK government making business a lot more expensive, I think there’s a lot of conversation in the business community about it.

Potentially leaving the UK or moving to different countries. You’ve obviously worked in Cheltenham and Silicon Valley, so you’ve got some great insights on the startup scene. I’d love to hear your opinion on what’s currently happening and how it will shape things.

Mark: Yeah, so as you know, I guess my involvement in the startup scene began in 2016, launching, uh, co-founding a startup.

I was the CTO of Trust Networks, which was a cybersecurity detection business. Um, I think that was fairly early on in Cheltenham’s sort of startup history. You’ve obviously got companies like Rip Jar, and now I think there’s a much more vibrant scene. I’m the co-founder of Pivot Labs, which is, uh, a real, that’s a, my Friday mornings is mentoring businesses.

Okay. We’re building startups. So, you know, that’s a lot of fun. We want to. To probably get some investment behind that at some point. Yeah. But for the time being, we’re working with a lot of folks who typically might even have, they’re not even sure they want to do a startup, so Wow. Our first job is to work with them to work out, you know, explain what it means, is it’s right for you?

Alice: Um, so it’s educating people that they even have the potential to create a startup.

Mark: Uh, you know, there’s a lot of, um, you know, hype around startups and I think. People don’t realise that there’s an element of resilience needed. Yeah. You know, if there are only two of you running a business and something goes wrong, it’s down to the two of you to fix it.

You know, it’s not like we just ring up the HR department and they’ll, they’ll help us out, you know?. So I think for people getting into the startup scene, it’s helping ’em to understand the realities and. Cut through the hype. It’s not like the TV series Silicon Valley.

Alice: I haven’t seen that.

Mark: Have you? Not?

Alice: So, is that not worth watching?

Mark: It’s definitely worth watching. Okay, interesting. There was a point where I couldn’t watch it because it was too close to home, some of that.

Alice: So you mentioned that about, um. People do not necessarily even know that they want to create a startup and that resilience is a key ingredient there.

You, having done 27 years in high-pressure government roles, have a completely different skillset; you may not have your technical skillset, just that other skillset. Would you say that there is a certain kind of young people who may have never actually worked in a corporate environment or in a company wanting to create a startup, who just don’t have the right appetite for that?

Or do you think that doesn’t matter? You don’t need a huge background?

Mark: I think. You know, in a startup, probably, you know, the chances are people will be their own boss for the first time. You know, that’s quite common. Yeah. Um, so you can learn it all. Absolutely. Okay. The thing you probably need to work out is, am I resilient enough for this?

Um, you have to think about, you know, is it right? Is it right for me where I am in my life? You know, if you’ve, if you’ve probably just had a new baby or something like that, you might wanna think about whether starting a business is right for you. So, part of the decision process is trying to work out if it’s right for me at this point in my life?

Um, maybe it comes further down the path. What would I do to get myself ready for that, when it’s right?

Alice: That’s hard because I suppose with technology moving so quickly, if you have just had a baby and you’re thinking, I’m ready mentally, but not. Lifestyle, you’d be worried that you’re gonna miss the boat.

Mark: Yeah. Um, so I guess there are still things you could do. I mean, this conversation is often one-to-one, and it’s okay. You know, there’s a lot of context to where somebody is in their life. Just because you’re not gonna launch a business, that doesn’t mean give up. Right. That could mean well. Take a different path.

Let’s work out what, okay, you know, what could you do to keep the dream alive? Who could you bring in to help? You know, those sorts of things.

Alice: And what would you say has given you resilience in your background?

Mark: Um, people say I’m quite laid back. Hmm. So perhaps it’s just something about my personality. I haven’t really looked at that. I just, um, for me, I felt like I was at the point in my life where I really, really wanted to do it. And, so that carried me through everything.. I think, um.

People are sometimes scared that the startup will fail. And so you have to be in the right place. That a, a failed startup isn’t gonna be a problem for you. But generally, if you want some good stuff on your CV, um, going into a startup and just trying to make something work, it is probably one of the best things you could do.

Alice: Amazing. So what’s the main piece of advice you’d give for somebody wanting to create a startup?

Mark: Find out who’s around you, find out who can help you out. Mm-hmm. Um, find all the people who think entrepreneurially, who will give you advice. Mm. Find a good co-founder, somebody who’s not like you, who will, you know.

So I think, you know, it helps to have people with different personalities. It helps to have somebody to say you are stressed, go home. Yeah, go and get some sleep. Yeah. When you know, it’s sometimes an intense process, and so it’s useful to not, you know, people might not realise they’re in a stress situation.

It’s useful to have somebody else to say, just check it out for me.

Alice: Yes, definitely. So. I have one final question for you, which is that we have the Cyber Made Human bookshelf where we ask people to share either their favourite book or one that you’d recommend. Doesn’t have to be relevant to what we’ve discussed.

It could just be a personal favourite of yours. Have you got a book recommendation?

Mark: Well, it’s the Lean Startup, which is always on my desk. There’s always Okay. A place that has lots of sticky notes in it, with little place in it. It’s just a good place for general advice on, on, um, early-stage emerging technology development.

Alice: So my recommendation has meant that she came from my client, Katie Muldoon, who does kind of executive coaching, and she recommended The Undiscovered Self by Cole Young. And that’s been a really interesting book about the kind of parts of yourself that you are not even conscious of. And she also mentioned the Johari Window, which I’ve been looking into, which is very interesting about, yeah, the parts of you that you are not aware of.

Mark: That sounds like an interesting book, ’cause I think part of discovering yourself, um, and what you’re capable of is kind of important if you want to be a startup leader.

Alice: Definitely. You’ve gotta know what your weaknesses are.

Mark: Exactly. Yeah.

Alice: And what, and what you’re like in a stress situation, as well as what are your weaknesses out of interest.

Mark: Um, I’ve been told that I, uh, I always assume things are possible. That might be impossible, but I think that’s a good weakness. Why, that’s why I created a startup, maybe.

Alice: Yeah. Wow. That’s very good. Cool. Well, that’s it. Thank you. Done.

Watch the episode now!

Watch on Spotify

Watch on YouTube

GET IN TOUCH FOR ALL YOUR 2025 EVENT NEEDS

PHOTOGRAPHY | VIDEO | LIVE STREAMS | LIVE PODCASTING | SHOW REELS