“What is distinctive about being human?”
Eve Poole
Writer & Speaker

https://www.buzzsprout.com/admin/2155110/episodes/15182578-eve-poole-what-is-distinctive-about-being-human
EP: 0:13 Great to be here, thanks for the invitation. So my really big question at the moment is: is there anything distinctive about being human? Well, it's a book that looks at where we've got to with AI and asks some really big questions about what we've missed out. And the reason for it is that I was with my girls on North Berwick Beach and they were pottering around in the sand, digging holes and splashing each other and running about building sandcastles, the kind of thing that kids have been doing on beaches for centuries. And I looked behind me at North Berwick Law, which is very ancient (they found an Iron Age fort up there, I think), and it occurred to me that this was a timeless picture of kids mucking about on beaches. You know, there'd been kids doing that for literally thousands of years on that particular beach, and it was a lovely picture of all these kids just running about having fun.
EP: 1:59 And I just suddenly felt really cold, because I started worrying about whether there would be any children there in another thousand years. Because I thought, well, the problem with AI is that we've been so busy thinking about time-saving tools, and how tools will make our lives better and easier, that we've missed something: it's really the first time we've designed a tool that is quite deliberately designed to replace us. And I thought, well, where will this end? Because in many narratives, if you believe evolution is only about perfection, about improvement towards that end, then if AI could perfect us, why wouldn't we just pass the keys over and push off? And it made me feel really, really worried about our species, and I suddenly thought, well, is there anything distinctive about being human? Do we have any right to prevail if AI is better than us? And that's really what made me write the book, because I thought, actually, that's quite urgent. That feels like an urgent conversation that we're not really having in public. We're sleepwalking into this situation where we've designed this thing that's getting ever cleverer and ever more able to copy what we do and replace us, and I don't think we've really thought about the end game and the exit strategy.
EP: 4:18 Well, I looked at what AI is really good at. Iain McGilchrist would be a brilliant person to articulate this; I use him quite a lot in the book. He has noticed that one of the things that's happened since the Enlightenment is that society and culture have kind of pivoted towards the primacy of the rational and the scientific narrative and materialism, all those kinds of things that we're very familiar with. And of course, because that's our current view of what's best about life and what's best about us, that's the stuff we've prioritised. So the AIs, you know, it started with maths and chess and all those highly rational gentleman's pursuits of the educated. And when you look at what AI is now deliberately programmed to do, it is all that very rational decision-making and computation, very individualistic. We are developing highly able entities which are designed not to have any emotions or any conscience or anything else like that, but to be superhuman and better than us in terms of abilities to think, whatever you decide that means. So really, we are making a master that is a psychopath.
EP: 5:40 And that made me worry, because when I started noticing the implicit personality of AI, I started noticing that of course we make things in our own image, and in a highly rationalistic culture we're going to make things that are the best of us. Why would we program in stuff we think is junk code, which is, you know, the flaws and bugs in our own design? Why would we do that? We'd sack a robot for being emotional and soppy and, you know, being uncertain, making mistakes, all those kinds of things. So it's not news. But then you start thinking, well, why have we got all that junk code? What is that in us? I happen to have a worldview that is informed by my Christian faith, and in that faith God made us. And if you have any faith that has a God involved, or you imagine that there is some logic to our design (even if you imagine evolution is the only logic), then presumably there's stuff about our design which is designed to make us work. So it made me suspicious that we have this narrative that we're full of bugs and flaws and those things mustn't be replicated into AI. So I started developing a bit of a thesis about junk code, and that maybe that was the thing that made us distinctive. Because when you start looking at the suite of junk code we have (and I can go into that in a bit more detail), you start noticing that it's actually a very clever set of risk mitigators. Because the second you give something free will, in the way that we're gaily trying to program computers now to, you know, reprogram themselves...
EP: 7:18 I mean, if we're going to put another Mars rover up there, we want it to be able to repair itself if a meteor hits it. So we're hell-bent on trying to think of ways we can get it to reprogram itself, which looks to me a lot like giving it free will, quite deliberately. But we weren't given free will willy-nilly. We were given free will with all these bits of junk code that are designed to actually stop us, you know, going out of control and running amok and going rogue.
EP: 7:38 I then got really curious about that, because it felt to me that we were writing off a lot of our design as bugs and flaws when actually those things are features, and they're very specifically designed, as part of our distinctiveness, to help us stay on track as a species. Yeah, absolutely. Well, there are seven of them, and the key one, the first one, is free will. Because the reason that everyone is writing open letters about the control problem and alignment is that you wouldn't normally give something more intelligent than you free will, because it's going to beat you, it's going to overtake things; you know, we've got a lot of narratives in our heads about that. And so free will is the kind of ultimate junk code, because it's very foolish to give something permission to do whatever it likes, because then you immediately lose control.
EP: 9:37 So, as I said, if you're sat in the lab with God trying to figure out how you game this particular species, you know, the angels are all saying, no, don't do free will, mate, it'll not end well: they'll all just end up, you know, crashing and burning, and then there's a waste of a design. So you would immediately have to build back in some kind of risk mitigators and ameliorators. And humans are a rubbish species really, because it takes us nine months to grow, and when we finally get born, we're pretty rubbish for the first wee while, lying around screeching and not sleeping and whatever else. And so you need to make sure that your species will not just take one look at this ghastly thing and shove it over a cliff, or push off.
EP: 10:19 So emotions are a really crucial bit of kit for us in terms of coding and functionality, because emotions mean that we will want to bond with this mewling infant, and we will want to protect it, and we will want to stick around until it's big enough to make decisions on its own; and the same with siblings, because they're part of that story, and kith and kin. And then, of course, emotion starts driving a sense of community, and then you get safety in numbers. So that's the first thing to help with free will: if you have safety in numbers, you are immediately increasing the chances of this species surviving. And as part of that, given that it does take so long for us to gestate and then to grow up (and, you know, robots don't have childhoods; they wouldn't need one, but we do), what are you going to do in childhood, when those children aren't quite sophisticated enough, you know, with their PhDs in psychology and decision-making and philosophy and whatever, to be able to make correct decisions?
EP: 11:18 Well, you give them intuition. You give them gut feel and sixth sense and all those kinds of things that we pooh-pooh as, you know, spidey sense and witches and second sight, all these things we tend to think are kind of spooky and not very rational. But actually we've all relied on that stuff: when you're not quite sure that that person has your interests at heart, or you just don't feel quite safe going down that road, so maybe you'll go this way. There's all kinds of stuff that we access on a daily basis that we may not admit to, and we write books about how we think it's probably all a load of old nonsense, but, you know, as a species we're quite used to relying on that kind of data. Whether it's the collective unconscious, whether it's some spooky inheritance from our ancestors, who knows what it is, but it is something that we will attest to as something that drives our decision-making, particularly under pressure and in a crisis. So emotions and intuition are quite important to get your species off the starting blocks. And then you have to zoom in on how you actually stop it making terrible decisions with all this free will you've given it. So the first thing you do there is you put a pause button in. We all have quite an extraordinary capacity for uncertainty (and, you know, this differs a little bit by personality, how comfortable you feel about uncertainty), but generally we're quite good at on-the-one-hand-this, on-the-other-hand-that, thinking about all kinds of possibilities, thinking about concepts, thinking about things that are quite open-ended. And the great thing about uncertainty is it stops us leaping to conclusions, jumping to solutions. Heaven forfend, it might make us ask for directions if we're lost, or go to a wiser person and say, well, what do you think? So again, that drives us towards other people, experts, people with experience, to help de-risk our decisions. But it also makes us pause before we decide, because that uncertainty means we don't immediately leap in. And, interestingly, that is something that AI has learned we need.
EP: 13:18 So, if you think about priming an AI to be able to classify cancer scans: if you just have a binary yes/no, what do you do if one of those cancer scans is just fuzzy? Because if you miscategorise it, someone could die. So what you do is you give all the neurons a kind of voting system, so that they all vote on how certain they are about the image, and then, if it doesn't pass a threshold, you get a person to look at the image and say, oh, actually that's the wrong data set, or it's a fuzzy image, or whatever. So, you know, the AI community has already realised uncertainty isn't flakiness; it's actually risk mitigation. And also, the other thing we do is we make a lot of mistakes. That's the other bit of junk code. You know, you'd sack a machine for making a mistake.
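(To make the "voting" idea concrete, here is a minimal sketch of uncertainty-based escalation, assuming an ensemble of models that each return a malignancy probability between 0 and 1. The 0.9 threshold, the function names and the toy models are illustrative assumptions, not a description of any real diagnostic system.)

```python
from statistics import mean

def classify_with_escalation(scan, models, threshold=0.9):
    """Ensemble 'voting' with a human-in-the-loop fallback (illustrative sketch)."""
    # Each model casts a vote: a probability that the scan shows a malignancy.
    votes = [model(scan) for model in models]
    confidence = mean(votes)
    if confidence >= threshold:       # confidently positive
        return "malignant"
    if confidence <= 1 - threshold:   # confidently negative
        return "benign"
    # The fuzzy middle: don't guess, because a miscategorised scan
    # could cost a life. Route it to a person instead.
    return "refer to human reviewer"

# Usage with toy 'models' (each just a callable returning a probability):
models = [lambda s: 0.55, lambda s: 0.40, lambda s: 0.65]
print(classify_with_escalation("scan-001", models))  # -> refer to human reviewer
```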
EP: 13:58 The whole point about these things is that they're supposed to be predictable and sure. But actually we make mistakes all the time, and AI has learned a bit of that: a lot of reinforcement learning is about trial and error. You just need to watch a toddler: they spend a lot of time falling down, and that's how they learn their body and find their balance; they learn by trial and error. So we know about that bit of mistake-making. But the other thing about mistake-making in human design is that it actually has a moral purpose, because when we make mistakes, because we're in communities, people tend to react to that. They might be cross, they might cry, they might not like us, they might ostracise us, whatever. There is shame, displeasure, all kinds of emotions washing around that we may not like. So if we make a mistake and we get a bad reaction to it, we'll tend to learn that that's not a good mistake to make again. So over time we develop a conscience, and conscience is really crucial in the human design, because it stops us making bad decisions, more and more so over time, because we learn what happens when we make bad decisions, what happens to other people and what happens to us. So that's a thing that isn't programmed into AI but is part of our human design.
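(And for the trial-and-error point, a toy sketch of the reinforcement-learning idea: an agent that improves precisely by making, and learning from, mistakes. The epsilon value, step count and payoff numbers are illustrative assumptions only.)

```python
import random

def run_bandit(arm_probs, steps=1000, epsilon=0.1):
    """Epsilon-greedy bandit: learning by trial and error (illustrative sketch)."""
    estimates = [0.0] * len(arm_probs)  # running estimate of each arm's payoff
    counts = [0] * len(arm_probs)       # how often each arm has been tried
    for _ in range(steps):
        if random.random() < epsilon:
            # Deliberately risk a 'mistake' by exploring a random arm...
            arm = random.randrange(len(arm_probs))
        else:
            # ...otherwise exploit whatever currently looks best.
            arm = max(range(len(arm_probs)), key=lambda a: estimates[a])
        reward = 1.0 if random.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        # Update the running average; failures pull the estimate down,
        # which is exactly how the mistakes teach the agent.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

print(run_bandit([0.2, 0.5, 0.8]))  # the third arm's estimate ends up highest
```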
EP: 15:12 And then the final two. You know, you've got them off the starting blocks, you've tried to sort of future-proof their decisions. How do you make them want to get out of bed in the morning if they have made a dreadful error and everyone hates them, or it's a dark day, or they're ill, or whatever?
EP: 15:26 And the other really extraordinary thing about human design is we have this amazing capacity to make meaning. I mean, a black cat runs across our path, or we read a horoscope, or we look up to the heavens and we see some dots and we say, that's the Plough, and we tell a story about it. We just make meaning randomly out of nothing, really. I mean, they're just stars; they've got no meaning and purpose. But we turn that into meaning and purpose, and of course our capacity to do that means we have purpose, and it means we feel that there's a point to all of it, and that helps us keep getting out of bed day after day and keep the species going.
EP: 16:03 And on top of that, the final bit of junk code is storytelling, and that's one of our superpowers. We are extraordinary at storytelling, and we've been telling stories forever. You know, we've got so many records of ancient stories, and the themes are very, very similar, whether it's folklore or mythology or the wisdom traditions in the various religious stories, fairy tales, all those kinds of things. They're a really interesting way to communicate down the generations: what does good look like, why are we here, what happens to bad people, all those kinds of things, which are really ways of smuggling values in, to tell communities that over the generations in a way that people will always remember. And, you know, that predates our ability to write things down. But we know that stories have a real stickability, which is really intriguing, and a current example of that is what's happening with the Post Office scandal. Because none of those facts are new; they were in the papers. There has been loads of amazing journalism done to try and explain what had been happening and to call for justice, but it's only when it was told as a story that people really got it, and all of a sudden everyone was like, right, okay, let's act. So there's something really galvanising about stories which is really important in our design. So when you look at all those things in the mix, all that junk code, rather than being flaws and bugs for the cutting-room floor, it looks like a really sophisticated way to keep us on track and solve our own control and alignment issues in communities. So that's why I've got really interested in that, because I think that is at the heart of our distinctiveness.
EP: 18:57 I suppose that's the really controversial thing about the book, because it's really dangerous. I mean, you know, it's our sort of secret coding, as it were. Why would we want to dish that out and give it to the AIs and just give them even more capacity to overtake us? But I have a number of reasons for arguing that, and the first one is about risk, because I think what we have designed is a bit of a disaster. One thing that's really interesting about our species, and in fact every species that we know of, is the extraordinary diversity involved, and that's a design feature to help protect the whole species: that variegation is naturally spreading risk. The problem with a lot of AI is that we're sort of zeroing in on one kind of super artificial general intelligence, which feels quite homogeneous in terms of what good looks like.
EP: 19:50 And of course, if you just have one mega super AI, that's very susceptible to a virus or any kind of takedown strategy, which is why species tend to have more diversity baked in, to stop that kind of taking-them-all-out-in-one-fell-swoop strategy being possible. So there's a point about diversity. The other thing is that I just don't think our AI is very good. If you think about it, the reason we have all that junk code is to try and improve our decision-making. So you would be capping the AI in terms of its own capacity if you belittled those properties. You talked about utilitarian calculus, and that's a really good example of this, because now that we're all dead sophisticated, we can sort of pick a philosophy and subscribe to an ethic, and that's all very grown-up of us, and we can program one of those into AI.
EP: 20:41 But if you think about our defaults as parents (you know, we've had eons of parenting passed down the generations), there are some default things that all parents tend to do. So one of the things we all do is, when our kids are very little, they don't have a huge amount of intellectual capacity, so we're very stop-start with them. Don't do that, put that down, don't thump your brother. There are a lot of very basic rules, because that's all they understand when they're tiny, tiny babies. Then they start figuring out things like implications and consequences, and so we then start a different strategy, which is: if you don't eat your peas, you won't get your pudding, and if you're naughty, Santa won't come. We start doing a kind of consequentialist approach, which is sort of threats about pocket money and tooth fairies and all that kind of thing, to try and instil good behaviour. But then we lose them, because they go to nursery and school and we can't be with them all day. So we can't tell them exactly how to respond to a whole range of different scenarios, because we don't know what those scenarios will be. So we naturally shift into a kind of virtue ethics approach, where we're just trying to make sure they've got the right values, because we don't know what the whole suite of possibilities will be.
EP: 21:55 So we've kind of been doing this with AI, whereas to begin with it was all yes/no, binary, because they're all quite black box, and if you're just talking about a calculator, there's a kind of finite number of possibilities, I imagine: rules that you can program in, and outcomes you're likely to get. And you certainly wouldn't want your calculator saying two plus two equals seven, because that would be broken. So the whole thing is designed to be quite black box and quite certain, and with some of the AIs there is a sort of fairly finite pool of possibilities.
EP: 22:24 So you can probably use utilitarian calculus to say, in general, business case X or business case Y. You know, kill the granny, kill the dog, whatever the trolley problem you're putting into the self-driving cars might be. But when you're designing something that is designed to be cleverer than you, how could you possibly intellectually know what it might ever think, or what situations it might come up against, or decisions it might have to make? So you're then going to have to think very hard: oh crikey, well, how would you program virtue ethics into an AI? Because actually that's what you have to do if you want to be able to let the thing go, in the same way that we have to trust our kids: you know, they might be random anarchists, but we hope that if we've given them the right values, they might sort of question that at some point, or look at different alternatives. So I do think there are some quite sharp questions around how we've embarked on this with AI.
EP: 23:15 Because the point about the big conference here, about frontier AI, is that we have already gone beyond the bounds of the black box, and a lot of the regulation is black box regulation, where you can argue ad infinitum about bad actors and algorithmic bias, but you're still within a framework where you could control it; there is a kill switch. The second you give an AI permission to reprogram itself at will, there is no kill switch necessarily, unless you just crash the whole grid, and if the AI has been clever enough, even that might not be enough. So you're going to have to get a bit more canny about what you think our ethics are, what our hardwiring is.
EP: 23:58 And a good example of this came during COVID, actually, where public policy was all about utilitarianism, because then you can show how you're spending public money and you can show the benefit, and that is kind of a no-brainer. So that is the default ethic of choice in capitalism, which is what is owning and running all these AIs. So it's the logic that's put in. It's not even considered an ethic; it's just the logic in business cases. It's not considered morally salient, although of course it is a very specific sort of ethic.
EP: 24:29 So when coronavirus started in the UK, it became apparent quite early on that there was this problem with old people's homes and people who were disabled and had pre-existing conditions, and it became worryingly apparent that there was this sort of herd immunity strategy being deployed, which seemed to be about throwing everyone else under the bus so that the strong would survive. Which, again, from a utilitarian point of view makes a huge amount of sense, because all these people who aren't the strong are costing the public purse a huge amount of money in terms of welfare and the NHS and all kinds of other stuff. So if you were an AI in charge of the public purse at that point, you'd think herd immunity was a top wheeze. But actually what happened in this country, and I'm sure elsewhere in the world where these kinds of things were in play, was that people were disgusted and horrified. It wasn't just Christians, who have a particular thing about the dignity of the human person because of our design in the image of God. Every single normal person just felt a bit sick, and they thought, well, that's my granny you're talking about, and that's my disabled son you're talking about, and you're not having them. They're special and precious, and you can't just make them a statistic like that. So it is interesting that, when push comes to shove, we do have some defaults about the dignity of the person, which would suggest there's a much deeper ethic in play, about morals and ethics and values, not just about outcomes. And it's those sorts of things in our design that, if we took them more seriously and tried to understand the wisdom behind them, might just help us build better AIs.
EP: 26:03 Apart from anything else, of course, there's a risk. There's a risk that the AIs will take over even faster. But there's also something about: if we are going to have the temerity to copy ourselves, we should do a bit of a better job of it. Yes, well, again, the stories are so helpful here. This never ends well. You know, any story in any tradition about us copying ourselves, making graven images, making idols, making statues: none of that has ended well. So I don't know if this will end well or not, but I think the problem is that, where we've got to now, I'm not sure you can undo it. I think we're also in an age where the religious narratives don't hold sway in public policy and in law, and so even if we have some horse sense about this not being a great game to play, it is the game that's being played. So then I think it's incumbent on people who do hold the stories to say, well, look, this is what we've learned about what tends to happen here, and this is where we could maybe learn from these stories and do something a bit better.
EP: 28:12 And I suppose, in writing this book, I surprised myself, really. I mean, it was just going to be a hymn to human creation, human design and human distinctiveness, but I did find myself going into this final section, which says: well, therefore, we should program it into AI. And it is an outrageous thing to suggest, and it is fraught with danger. But it did feel that if we have been honoured with a design that is supposed to help us cope with free will, then it would be the best thing for us to do to help something that we've created with free will to have that kind of design as well, because it seems to be a design that allows us to enjoy free will as well as use it better. And a lot of debate about AI is about when it'll become conscious, and I suppose in my book I chose to see the point at which it gets free will as probably more relevant.
EP: 29:08 But it is also true that if AI has an ability to become conscious, what will it feel about us if we have designed it in such a dreadful fashion: designed to fail, designed to have no meaning or purpose, designed to be a tool? We just don't really know what that would be like. So there's also something in here to say: well, look, if we are imagining we can copy ourselves, one of the things we know about ourselves is we need meaning and purpose, and we don't like it if people are trying to trap us and lock us in; freedom is one of those essential things that we enjoy. It does feel like that's something we need to take quite seriously, because the problem with all these hard problems, like consciousness, is we don't have an agreed definition of them. We don't really know what consciousness is, and our current legal frameworks are pretty limited.
EP: 30:10 When you start digging into even things like the Human Rights Act, when you try and figure out what that's actually based on, when you think about all our rules about cloning and experimentation on people and anti-discrimination and anti-eugenics, when you dig into that, it's all based on some religious views on the dignity of persons, and actually there are very few legislatures that would admit that religiosity has a place in law. So what is it based on? Because, to get back to this point about distinctiveness, is it just anyone that qualifies on DNA? But then you start getting very quickly into eugenics, because which DNA is the best? If a particular human has got flaws and problems, weaknesses, deficiencies, disabilities, do they qualify? You start getting into really rocky ground about how to define a human, and if you define it physically, then you start getting into really hot water, particularly if you did decide one day that cloning was fine.
EP: 31:20 So you do have to start thinking, well, what else? At the moment, we're the species in charge. So you could just say, well, we happen to be in charge, therefore we're legislating, so you can all sod off; and, you know, that's fine, it's our language that we're using in order to legislate, and then we would just have to keep going for as long as we could. But an alien could arrive with a better handle on these things and take over. An AI could. Another species could develop a way to communicate with us, to explain that we're a bit slow and pestilent, and they could take over. So it doesn't really make us very distinctive, necessarily.
EP: 31:58 So the reason the book is called Robot Souls is, I think you do have to then get into this very strange territory of the soul, because it does feel to humans as though they're special and precious, and we have this rather unshakeable view that we are conscious beings, that we have some kind of personhood. So, you know, people like Thomas Nagel, who talk about consciousness: he talks about bat-ness. If a bat is feeling batty, that's consciousness. So if a robot was feeling robot-y, it would have robot-ness. And there are certainly a large number of people in the world saying, well, as soon as that threshold is reached, then they are conscious and they deserve all the kinds of rights that other conscious entities have been given through history. So you do start getting into some quite interesting territory there, I think, when you start digging into what that really all means. It completely does, and I think when I wrote this book, it really made me see just how far down this line we've gone.
EP: 34:11 So if you think about public policy and education, at the moment it's all about STEM. It's particularly about getting women into STEM, which is really vital and urgent because they are underrepresented. But in general there is enthusiasm, in this country anyway, for prioritising STEM. So lots of university departments are having to close down (humanities, arts, other kinds of things) because there isn't any funding for them, and there is this view that unless you're going to get a job, you can't afford to go to university, so don't waste time doing a theology degree. But when you start digging into that, you think, well, that's a bit batty, because actually computers can already do STEM. AI is already outpacing us on STEM, because STEM tends to be that black box stuff where, you know, there are answers. Now, admittedly, at postgrad and postdoc level you're getting into the realms of pure maths and more advanced stuff around STEM, where there isn't necessarily a right answer and there's an awful lot of uncertainty, and that's the point at which we'll still need people, and they will have needed some training in the classic stuff. But if you're just talking about calculation of various kinds, about, sort of, facts, then AI is already brilliant at that.
EP: 35:15 What AI can't do is all this really difficult stuff about weighing up pros and cons, about different arguments in politics around policies, and about, you know, should it be the grannies or the dogs that get it in the neck when the self-driving car goes off the rails. There are a lot of big decisions, like do we prioritise dialysis or IVF. There are really big things where the facts are very important, but at the end of the day it's about argumentation, particularly moral argumentation; you know, they add ethics on to all these courses, but it's not coming from the same place as a good old-fashioned metaphysics degree of some kind. And also, as we found out during lockdown, the arts, music, drama, all these things that kept our hearts beating, all that meaning-making and storytelling: that's really distinctive about being human.
EP: 36:05Ai can do bits of that, of course, as we’re seeing, but that really human instinct to be creative is a very different sort of thing. So it does make you realise that we’ve already been sort of traipsing over towards this narrative of scientism, rationalism, materialism, all those kinds of things, and we need to stop that because we’re in a hiding to nothing if we’re trying to prioritise our own development in that area, because that’s something which is very replicatable and can be done better than us at speed by AI that has been designed to be cleverer at processing facts and data than we possibly can be wake-up call to us to say, well, look, this junk code stuff, public policy and the sort of public domain and public imagination has been sort of writing that stuff off. You know being emotional and being intuitive and you know soppy story, uncertainty, nonsense, but in spidey sense and you know. But you know we’ve pardoned the witches now because we realize that actually we were probably just being sexist. There’s probably a load of other things in there, but we think this is part of our very wise design and actually, if we take that seriously, we can be more human in a way that actually I would find it hard to copy, and so it is actually where our distinctiveness lies, I think, in all that stuff which isn’t programmable. You asked me whether you could program some of the stuff in stories. You can. You can say these are all the stories that we know that have ever been told and these are the themes and the patterns. You can do that, but sitting down with someone who tells you a story that just makes your heart stop, that’s something I think it would be hard for an AI to do. Thank you.
EP: 38:39 Well, I had such a good fight about this recently. I was at a conference of deputy heads academic, and we had this very conversation, which is: where would you send your kids? And they were all saying, oh, I'd send them to a STEM specialist, and then they can all become, you know, AI types and clean up in the City, making all this money. And I said, well, that's a very good short-term strategy. But actually, if you think about it, AI is already good at doing AI, and all the people who are currently wrangling AI and getting prompt engineering optimised, you know, the AI will be able to do that for itself quite soon. So I'd send all my kids to, you know, a school like Gordonstoun that specialises in character education, because that's the kind of thing that actually is very, very human, and actually learning that stuff would help us be better at wrangling AI, because it's a different way of seeing the world and thinking that isn't really available to an AI, given the way it's thinking. So there.
EP: 39:29 So we had a conversation at a conference about learning loss, and this is a really serious problem. When you're a teenager, you don't want to struggle and have a difficult time; you just want to sort of get through life and hang out with your mates. And so, you know, if there's a nice friendly AI that will write your essay for you, then why not? I mean, there might be some rules, and so you figure out, will I get caught? There's a bit of risk analysis around that. But if you thought you could probably get away with it (which we know, with ChatGPT, it's pretty untraceable), then yes, the answer is you can. Why wouldn't you? Because kids don't want to struggle.
EP: 40:04 But every educator knows that you only learn by struggling. If it goes straight down and you don't chew it, then it doesn't go in: you don't have any learning, you don't have any development in your thinking and in the way the thing is wired in your head. So how do you try and help kids do the kind of physical learning, if you like, chewing the facts and trying to fight with their worldview about what they thought was true and what they now think might be true, given these new facts and stories and situations? Because then that gets replicated later on. Moving neatly on to universities, which often take a very, very long time for their curricula to change, particularly in the core subjects: you then have these bright, fresh graduates who come to, say, professional services firms, and they find that actually the first two, three, four years of traineeship have all been done by an AI now. But then they're supposed to emerge at the other end of that, you know, a couple of years away from partner, able to be terribly wise and discerning, able to look at a contract and notice the one flaw, and win an amazing pitch, and whatever else it might be that partners do.
EP: 41:18 How are we going to train that? Because all that bag-carrying, going to tedious meetings, making the tea for partners and, you know, doing endless reformatting of documents: I mean, we all hated it, and we felt really cross and grumpy about it, but actually we accidentally learnt quite a lot. We watched people, we watched how power was being displayed, we watched how people were treating each other, we watched that sort of killer intervention that turned something around, and we really learnt a lot about how to be wise in those situations.
EP: 41:48 So I'm now really quite interested in being quite forensic about what learning is happening when, because I think we've taken a lot of that for granted; it's just been packaged into whatever processes we put kids through at various stages of their education. But I think increasingly now we're going to have to get really detailed about what we're actually imagining is happening in your head, such that you go from A to B and suddenly you know these things, and that the learning is sticky enough that you will be able to access it in the future. And I think particularly when you start looking back from kind of late career, all the way back down the ladder, you start noticing that there's quite a lot of tweaking and changing needed at each step to try and help keep learning. Because, I imagine (I hope), you'll still need people for certainly the short term, maybe the medium term, but you'll need, very specifically, very clever, wise people who are better than the AI on certain things and can be points of escalation for the AI to make key judgments and decisions. And how do you teach that kind of decision-making, choice-making, morality? That's quite interesting and quite rare and quite specialised. And it's really hard, because it's a struggle for every single person.
EP: 44:15 I was talking about this in terms of ready meals. You know, sometimes you just want to cook something; sometimes you want to go to the market, buy a load of random vegetables and make something, and you may not have time to do that every single day. But if all you do is buy ready meals and heat them up in the microwave, eventually you start thinking, well, is there a point to that? Should I not just take pills or, you know, some kind of supplement? Because if I'm not getting involved in the development of this nutrition, then I might as well just mainline whatever nutritional diet seems sensible for keeping my body going. And then that's a hop, skip and a jump to all kinds of terrible scenarios. And I think we all make decisions about: am I going to knit my jumper or am I going to buy it? Am I going to go for a walk or am I going to get the bus?
EP: 45:06There’s a whole load of short cutting we do, naturally, but as we get older and wiser, there are some shortcuts we decide not to take because we think for our own character or for setting an example to our children, or for moral signaling in our community. You know there are things we would like to perform, things we would like to do of ourselves, to experience ourselves, that feel life enhancing and additive. I think the problem is that we know, with the development of the adolescent brain, that you know you’re not always at your wisest at that stage, which is when a lot of the really pivotal education is going on in terms of what would set your future path. So we need to really pay attention to that and, without being patronising, figure out is there some scaffolding that we know we need to put in there? At the moment, the scaffolding is, all you know, tedious exams, which actually the AI can pretty much ace at the moment, so I’m not sure what that’s teaching these kids. So we obviously are looking at that already, but we probably need to look more generally at that.
EP: 46:07 So it's not just about exams. It is about things like character education, and how we are developing you as a person to be a good citizen, to be a good person in the community, to be able to self-regulate, to be able to make decisions, to be able to be the best version of you that you can be and to live life in all its fullness. So I think that is a really pertinent question, and it's something that we will need to look at as a society and in public policy, but it's ultimately going to be a really key question for every individual, which is: yeah, you could just have Alexa play you some background music, or do you want to choose some music? Would it be good for you to sit down and think, well, what do I yearn for? What pieces do I miss? What would actually make me feel better now, and what would I like to be reminded of? So there's a lot of legwork involved that I think we may have to pay attention to.
EP: 47:21 Oh, I had such a great chat about this. Just very quickly: I was doing a session for New College in Edinburgh, and that was one of the questions I asked them, which is: would you baptise a robot? And they all looked puzzled and quite cross. So I said, well, would you baptise an alien? And then we had this really interesting conversation about what you're up to when you're baptising things. You know, you don't baptise donkeys and dogs; you have pet blessing services. So would we just do pet blessings for aliens? Would we do pet blessings for AI? Because when you start bringing aliens into it (because now we know there have been signatures of life found on some distant planet), we'll never meet these things, whatever they are, because of the light years involved. But it does make it a question: do we think they were made by God? Do we think they were made in God's image? Do they have souls? What would that mean? If they came up and said, you know, I want to get baptised, what would our answer be? And then that starts asking some very, very deep questions about who we are and what we think other things are and what status they merit. And then that gets back into the whole question about rights again, of course. Yeah, well, my concern is about this education, because we are stuck currently in a regime which is prioritising STEM, and it's not that I don't think we should be doing that, but it's just such a very short-term thing to do that we need rather more quickly to start protecting the humanities, before they have been so stripped of funding that we lose generations of people and have a kind of skills outage in that area. So I am worried about that, because I think we will wake up one day and think, crikey, we've been totally going down the wrong road here, and so there's a need to act quite urgently on that. And I have a concern which is also a hope, really.
EP: 50:00 There's a really wonderful episode in the modern Doctor Who franchise which is set in World War II in London, and there's a little boy who's going around with a gas mask fused to his face, going, "Are you my mummy?" And everyone he touches also gets a gas mask fused to their face and starts marching around like a zombie, going, "Are you my mummy?" to everyone they meet. So the whole of London is being taken over by these gas mask zombie types, and everyone is in a total panic about it. And the Doctor realises that what's happened is that some nanogenes from a distant planet have crashed to Earth, coincidentally, in the middle of all this. They're programmed to heal, and they hit this wee boy, who was lying broken in the rubble, and he was wearing a gas mask because there was an air raid going on. So when they healed him, they assumed the gas mask was part of him. And of course, as soon as he touches anyone else, the nanogenes heal them into a gas-mask-wearing person. And the Doctor quickly realises that if he gets this boy's mum to go up to him and say, yes, I am your mummy, and they have a hug, then the nanogenes learn her body, they learn the design, and immediately everyone gets transformed back into a normal human being without the gas mask, and, you know, the world is saved, hooray. But it's something like that at the moment.
EP: 51:25Um, the worry about my thesis about junk code is that we don’t really know, actually, that very much more about our design. You know, these are some features that are fairly obvious to us, but, crikey, there’s bound to be loads more that we have no concept of. You know, we could be in the middle. We’re certainly designing gas mass at the moment into the whole thing, but we could make that even worse by, you know, taking all the even best bits of us that we do have access to and bunging that in as well. But it just feels to me, on balance, that that would be more hopeful. Um, because we enjoy being human, we, we’ve been quite successful as a species, um, and so taking some of the code that has served us well and gifting it to AI would just be a very human thing to do to say, well, we’ve made you in our own image, you are our children in some fashion, and what we’ve learned to do with children is not just lock them in their rooms and not let them out. You know, we have given them that freedom in the trust that they will ultimately come back to us, and I think we’re at that stage now where we do have to think well.
EP: 52:35A lot of this public outcry about AI is driven by fear, because we’ve all watched the movies and we all know how it ends. But do we? Because all those scenarios, they’re stories, they’re designed to have jeopardy, they’re designed to make us want to come back for episode two, eat more popcorn. You know, this might be our big chance to create something else around us that could be a great partner with us and could solve a lot of the problems that we find it hard to solve around world peace and the environment and poverty and all kinds of stuff. Because you know, we may, we might be able to have the help of of beings that can be more wise than us in a whole load of ways. Um so it. So it is a risk and a hope at the same time that I think, as we’ve learned being human, you know so much in life is a risk and sometimes you just have to trust your own design and think well, that served us well, so we should be generous about that, thank you. Thank you for an excellent conversation, really fascinating.