On this episode of the AI For All Podcast, Rob Knight, Founder and Robot Hardware Director at The Robot Studio, joins Ryan Chacon and Neil Sahota to discuss the state of robotics. They talk about robots versus smart appliances, domestic robots, the ultimate goal of robotics, the dexterity problem, robot use cases, attitudes and safety around robots, plus a robotics demonstration!
Meet Rob Knight
Rob Knight is a humanoid robot designer and prototyper. Rob has always been fascinated by robots. For the last 20 years, he's designed, prototyped, and displayed humanoid robots full-time for some of the biggest names in tech including Maxon and NVIDIA. Rob is also committed to the democratization of robots and has published two open-source hands to encourage people to get into the world of robotics!
Interested in connecting with Rob? Reach out on LinkedIn!
About The Robot Studio
The Robot Studio is a workshop that designs and manufactures some of the world's most advanced biologically-inspired robots.
- [Rob] So this is an arm which is now running off a visual feed. So this is a fully 3D printed arm which is receiving instructions from a 3D camera, just a little bit over there, using machine learning, which I have literally no idea how the machine learning works, but I could get it for free from Google and with a bit of hacking around, the arm will now quite crudely try and follow my actions.
- [Ryan] Welcome everybody to another episode of the AI For All Podcast. I'm Ryan Chacon, with me is my co-host, the AI Advisor to the UN and founder of AI for Good, Neil Sahota. Neil, how's it going?
- [Neil] Yeah, I'm doing awesome, Ryan. How are you doing?
- [Ryan] Pretty good. We also have Nikolai, our producer here as well.
- [Nikolai] Hello.
- [Ryan] So today's episode is a pretty exciting episode we have planned. We're going to talk about robotics, the current state of the robotics industry, what domestic robots are, enablers for adoption, things like that. And to discuss this, we have Rob Knight of The Robot Studio. This is a company that is focused on designing and manufacturing some of the world's most advanced biologically inspired robots. Rob, thanks for being here today.
- [Rob] Thanks so much for having me.
- [Ryan] Absolutely. So let me start this off with just a high level question for you. Talk to us about the current state of the robotics industry. What I think is important to note is that a lot of us out here have seen different robotics through the social media we follow, news stories, events we've been to, very impressive stuff out in the market. But where really are we as an industry when it comes to robotics adoption, and where is it leading the way? Just some general stuff to get our audience in the right frame of mind here.
- [Rob] It's a great question. The first thing you have to say is, do you know the phrase AI winter? The AI winter is what's always happened in the past: there's been this massive hype cycle, it's finally all going to happen, and then it's been completely disappointing. It hasn't happened at all. And there've been several cycles of this, which basically led to a complete absence of funding. That's the AI winter. But this time there's real optimism that it's going to happen. And I would say there's a lot of confidence now that it's technically possible. All of the fundamental technologies and the computer systems and the datasets and the synthetic datasets and everything you need exist on a technical level. But that's also true of driverless cars, and that hasn't really happened either. People are acting as if there's going to be a big push in the next year, two, three, and that suddenly robots are going to be everywhere. But we really don't know. I'm very optimistic, but then I've been optimistic for 20 years. I would say it's technically possible. Robots can do what we want them to do. Whether or not we can get them to do that economically, in a way that people like, in a way which is acceptable, is now the big question. But we're going to find out, I think is the exciting thing. And there's a reasonable chance that this time it's really going to happen. But that's as best as I can say.
- [Neil] I was just going to ask Rob, everyone always says, when are the robots coming? But I think the robots are already here, right? A Roomba is a robot, right?
- [Rob] Actually, by my definition, a Roomba is a very smart appliance. But there are some key differences you're going to notice. Things like, from the consumer point of view, the robot does the whole thing. With appliances, you load a washing machine, and then a few hours later you empty it, and then you hang the clothes up, and then you iron them, and then you've got dry clothes. If that was a robot system in the sense that we mean now, like this domestic robot we'll talk about later, then the whole task is done. You put dirty clothes in a basket, or it even picks them up off the floor for you, and then the next time you look in your drawer, there they are, clean and pressed. And that is something that present systems can't do.
And the other thing, which has really come up quite recently, is that the new systems will understand their own limitations. ChatGPT has of course kicked off this massive frenzy of being able to talk to machines. The hype is tailing off now, and maybe people are thinking it wasn't all that great, but with robots, you suddenly have a means to talk to them, and I'm actually really excited about this because we just did it for the first time a couple of days ago. We got ChatGPT driving one of the hands, and the first time we asked it what is 2 plus 2, it held up a single finger. And we asked it, why did you do that, and it said, because I responded to you in binary. It could tell us why it had made a mistake. We didn't have to work it out. Things like that just suddenly make you think, wow, this will open up a whole bunch of stuff. You'll say, try and do this thing, and it'll do it. And you'll say, you didn't do that quite right, why did you do it this way, and the emphasis will change.
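The loop Rob describes can be sketched in a few lines: an LLM's text reply goes through a small parser that maps it to finger positions. Everything here, the command vocabulary, the thumb-as-least-significant-bit convention, and the `reply_to_pose` function, is a hypothetical illustration, not The Robot Studio's actual interface:

```python
# Hypothetical sketch: turning an LLM text reply into finger commands.
# The vocabulary ("raise" + finger names) and the binary convention
# (thumb = least significant bit) are invented for illustration.

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

def reply_to_pose(reply: str) -> dict:
    """Map a reply like 'raise index' or a bare number to finger states."""
    pose = {f: 0.0 for f in FINGERS}  # 0.0 = curled, 1.0 = extended
    words = reply.lower().split()
    if words and words[-1].isdigit():
        # Interpret a numeric answer as a binary count on the fingers,
        # the failure mode Rob saw when the hand answered 2 + 2.
        n = int(words[-1])
        for bit, finger in enumerate(FINGERS):
            pose[finger] = float((n >> bit) & 1)
    else:
        for finger in FINGERS:
            if finger in words:
                pose[finger] = 1.0
    return pose

pose = reply_to_pose("4")
```

Under that bit convention, the answer 4 is binary 100, so only one finger goes up: exactly the kind of confusing-but-explainable behavior Rob describes.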
- [Ryan] You mentioned domestic robot. Can you define what that means exactly? Because I think, to Neil's question earlier, when we think of robots in the domestic setting, yeah, we think about robot vacuums and things, but to your point, there's probably a fine line between what it really means to be a robot versus a smart appliance. So what does the term domestic robot really mean?
- [Rob] The phrase which is bandied around a lot is a general purpose machine. And the route to that is probably a multi purpose machine. And again, you have to compare it to appliances. Almost all appliances are single use. Even a food processor is still processing food using various attachments; it does one thing. The promise of the general purpose machine is that simply by changing software, it will be able to do completely different things. But really, until they're out in the real world being used and people are interacting with them, the definition is a little slippery.
It's probably easier to define what it's not in terms of existing robotics. It's not an industrial robot. Industrial robots have very specific requirements for how they're made. They're very powerful, they're very dangerous, and they're also governed by specific laws. So it's none of those. Which is why things like the Honda ASIMO, which was just retired, was never going to become what was promised. That's why you never saw it touch people. I don't think it was even legal for it to touch people, and it certainly wasn't safe. So you're talking about different kinds of bodies which need different kinds of intelligences to control them and will interact in new ways. So the only definition I ever have as a working point is that it does the whole task for you.
So in that sense, the nearest we commonly have are things like automatic teller machines. You put your card in, and you get your cash out. That whole task is done. Apart from chatting about the weather and having a friendly face to smile at, everything is done. You go, you get, you're gone. And when that can be applied to a much broader range of tasks, then that will be what becomes defined as the domestic robot.
We want it to do the housework and the cleaning and the cooking and stuff like that. It will come in fits and starts. Something will suddenly be possible. Receiving parcels, putting things away, but that requires less skills than cooking and other kinds of things. So it's gonna, I think it will emerge. And then we'll say, oh yeah, of course that's what we always thought domestic robots were. But as for a hard definition, it's evolving all the time.
- [Neil] So a domestic robot, it's not Rosie the Robot from The Jetsons. Is that what we're saying?
- [Rob] That's the ultimate goal, right? That's one of the problems. This vision has been around for thousands of years. I recently saw they discovered a statue which has a wine pouring mechanism inside it, which was apparently very popular in ancient Athens. That's what wealthy people had. When you turned up, this statue, essentially a robot, would pour you wine, and it would mix it correctly for you according to some mechanism inside it. But again, that is a kind of single use appliance. It looked like a person. I think there's likely to be aspects of the domestic robot looking humanoid. It'll probably have hands. It'll probably have a face. It will probably have some physical characteristics of a person, a similar physical envelope.
- [Neil] I don't think Rosie had legs, if I remember correctly, so I'm not.
- [Rob] Indeed. Yes, the legs question is an interesting one. Legs I think are a bit of a distraction. People seem to put them on because it's meant to make robots look cool, but I think for most of them it makes them look like they can't walk properly. It makes them more expensive, and it makes them work less well. All you want is hands. You want a system which can do things, interact, pick things up.
- [Ryan] What are some of the technologies that are really enabling the robotics space to move forward at the rate it's moving now? Like some of the stuff that you work with on a daily basis, how have things really changed over the last few years versus where they were before?
- [Rob] Very similar story to the rest of the AI industry. It's been driven by massive increases in computer power coming out of the gaming industry. That in turn, with all the graphics, has now led to a massive increase in synthetic data. The equations and the mathematics go back a very long time, but it's only recently been possible to apply them to these existing datasets, first photos, and then text off Reddit, which gave rise to ChatGPT. In this next phase, when you can do the same thing with three dimensional objects, that's one of the things that will suddenly enable you to do a whole load of tasks which are very hard to code by hand. But ChatGPT actually is a very useful thing because it solves one of the last problems. Even once you've done all of that, and you've managed to make this low cost, incredibly high performance robot which can work out where its arms and hands and fingers are and do all this stuff, you still had the task of explaining to it what you wanted it to do. And the fact that you can suddenly do that, and it took us not many days to get it running for a first live demo, even though we know other people have done it on the same kind of timescale, we were surprised it just suddenly started working. It really brought the goalposts towards us a long way.
- [Ryan] It's pretty neat to see how you've been able to incorporate these new AI technologies into the work you're doing on the robotics side. I know this is an audio and video podcast, so our audio listeners will not get the full benefit of this, but can you show us a little bit here? I know you have the robot right behind you.
- [Rob] So this is an arm which is now running off a visual feed. So this is a fully 3D printed arm which is receiving instructions from a 3D camera, just a little bit over there, using machine learning, which I have literally no idea how the machine learning works, but I could get it for free from Google, and with a bit of hacking around, the arm will now quite crudely try and follow my actions. And this was done as just a visual demo to show people the mechanics of the arm. But in the next phase, when this is being controlled by reinforcement learning, by algorithms, by simulations, there's a few approaches that have come together, and one of them is called sim-to-real. You use reinforcement learning in a virtual environment, you work out how to do things, and you transfer that into the real world. And if you've seen cubes being manipulated, if you've seen any of the quadrupeds running around, which are very impressive now, jumping over objects, climbing under objects, walking through spring loaded doors, solving hard physical problems, that's all being done with reinforcement learning.
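As a hedged sketch of how a demo like this can work under the hood: a hand tracker (the sort of free model Rob mentions getting from Google) reports a wrist position in the camera frame, and simple two-link inverse kinematics converts it to shoulder and elbow targets. The link lengths and the planar simplification are assumptions for illustration, not the actual arm's geometry:

```python
# Illustrative "follow my hand" loop: tracked wrist position -> joint
# angles via two-link planar inverse kinematics. Link lengths are
# assumed, not taken from the real 3D printed arm.
import math

L1, L2 = 0.30, 0.25  # assumed upper-arm and forearm lengths in metres

def ik_two_link(x: float, y: float) -> tuple[float, float]:
    """Shoulder/elbow angles (radians) placing the wrist at (x, y)."""
    d2 = x * x + y * y
    # Clamp so acos stays in its domain for unreachable targets.
    cos_elbow = max(-1.0, min(1.0, (d2 - L1**2 - L2**2) / (2 * L1 * L2)))
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(
        L2 * math.sin(elbow), L1 + L2 * math.cos(elbow)
    )
    return shoulder, elbow

def follow(wrist_xy):
    """One control tick: tracker output in, joint targets out."""
    return ik_two_link(*wrist_xy)
```

A real pipeline would add filtering, joint limits, and the mapping from camera frame to robot frame; this shows only the geometric core.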
Actually, it's all being done by reinforcement learning in Europe! Boston Dynamics still uses model predictive control. So there are competing fields for what to do, but the software and the techniques exist, and the results are being demonstrated on real machines. I'm partnering with a few of these companies now, and I'm very hopeful we'll have something where pretty soon you'll be able to say, pick this thing up, and it will know what you mean and just start to do it. I call it the general sorting task. If you can identify an object in an environment, if you can pick it up, if you can put it down somewhere else reliably, and you can repeat that thousands and thousands of times, you can suddenly do a whole bunch of real world things that were just impractical before. Just that single building block, no other manipulation: you see all the cans of baked beans, I want them all on that shelf. And all the cans of tomatoes, I want them on that shelf. And when you're done, sweep up and turn my lights off, and I'll see you tomorrow morning. But it's like when Elon Musk said that in, what, four years you'll be able to go to sleep in a car in San Francisco and wake up in New York, right? Didn't happen either. But it seems so near to being possible now. And that's what we're working on.
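The general sorting task Rob outlines reduces to a small routing loop once perception and grasping are treated as black boxes. This is a minimal sketch of only that routing logic; the `(label, position)` detections stand in for the perception stack, and the pick and place actions would be the hard, learned parts in a real system:

```python
# Minimal sketch of the "general sorting task" routing logic.
# Detections and shelf names are illustrative; perception and motion
# control are deliberately out of scope here.

def sort_objects(detections, routing, default_bin="unsorted"):
    """Assign each detected (label, position) a destination shelf."""
    plan = []
    for label, position in detections:
        destination = routing.get(label, default_bin)
        # A real robot would now pick at `position`, place at `destination`.
        plan.append((label, position, destination))
    return plan

plan = sort_objects(
    [("baked beans", (0.1, 0.4)), ("tomatoes", (0.3, 0.2)), ("soup", (0.5, 0.1))],
    {"baked beans": "shelf A", "tomatoes": "shelf B"},
)
```

The point of the sketch is Rob's: once identify, pick, and place are reliable, the task-level logic on top is almost trivially simple to specify.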
- [Neil] We're getting closer but your example of picking stuff up, I can see that to a degree with like cans of tomatoes and stuff. But what about the dexterity problem that exists with robotics?
- [Rob] Yes, dexterity is a very interesting one. There's something very fundamental about being able to handle objects and use that to generate intelligent reactions to your environment. You can get surprisingly far with normal grippers, and grippers with a little bit of touch feedback. You can pick up cloths, you can fold cloths, you can pick up individual objects. If you look back at all the robot demos that have been done since, say, the PR2, the open source platform which came out of Willow Garage back in the day, their videos used to be sped up about 50 times. More recent videos have been sped up maybe 10 times. Tesla's videos are sped up two times. And there's a company called Sanctuary whose videos aren't sped up at all; they show them in real time. That robot has proper hands. The slower ones had pincers. So if you can tolerate doing a task more slowly, you can get away with a lot less dexterity, and maybe humans have to fill in the gaps and do the difficult bits, which is one of the reasons why we think the robots will continue to be humanoid in format. It makes it easier to do that handing over, that chopping and changing of tasks.
In reality, it's still too difficult to really model the hands and get good results. And the tactile sensors are still too big, and they're still very expensive. So that's one of those areas where, yes, everything's technically possible, but it's the wrong size and too expensive. It doesn't fit together. We can't quite use it all yet. But I see it being very important moving forwards, and eventually I think robots will do things faster than us as they get optimized for certain tasks. But dexterity is still quite an open field, which is good for me because I specialize in hands. So we're really interested in thumbs, opposable thumbs, how we do things. I feel personally it's got to the stage where, until you can run it through a model and then check it in real life and see what you need to adjust on the mechanical side, I don't really know how to proceed beyond where I've got it to.
So you start with simple objects: cans, cylinders, cubes, boxes, cuddly toys. It doesn't have to be something which stops us moving forwards, is my feeling, but it is something which will need to be looked at. And there'll be lots of models of hands eventually, and they'll be specialised for certain industries. There'll be food hands and dealing-with-fish-at-sea hands and all this kind of business. I think that's where it'll go. I think there'll be a big industry for hands, which doesn't really exist yet, but it's key to developing intelligence. It may even be key to developing language, allowing the systems to really interact in a way which produces a kind of embodied intelligence. Yeah, I think you do need dexterity, and a lot of people are working on it.
- [Ryan] So you've mentioned a number of tasks that you feel are going to be achievable first once you're able to prove them out. But if I'm listening to this as just a general enthusiast for what's happening here, what do you feel are the tasks that will be achievable first? Aside from what you mentioned, like picking things up, moving them, putting them down, in a domestic sense, what do you think will be the things there's demand for, a real actual application these robots will be able to help with and solve?
- [Rob] I'd love to be able to say it'll be doing my ironing. That's something I'd really like, but it's a particularly difficult one. It's probably easier to give examples of how far automation has got in existing environments to illustrate where it stops. And there's a nice one from the pepper farming industry. A lot of vegetables are grown indoors in greenhouses, but the robots really can't harvest from the plants very fast. They're one tenth the speed of a person. You have to look through, find things, check, and there's still a lot of that. People can do it very quickly, robots can't. But once you've harvested the peppers and put them on a conveyor belt, they all go down towards the boxes, and the robots are perfectly fine at that. They can pick them up off the conveyor belt, arrange them nicely, stack them all up, and do it beautifully, endlessly, and with great reliability. So that's where the technical aspects stop it from going any further. And again, it's a speed issue. Moore's Law is still basically holding, so every couple of years everything's getting twice as fast, particularly with GPUs and the other systems you need for AI. People are talking about speedups of tens of thousands of times being possible. It comes at a cost, of course, and that's something we have to bear in mind. There's a real energy and water cost for generating these models, so it's not all plain sailing. But as robots get quicker and the economics change, if you get them up to the same kind of speed, they'll do more and more tasks. But to predict the actual task they'd be able to do in a commercially, economically viable way in a house, if I knew the answer to that, I'd be a very rich man. That's the question. With all these wonderful toys we've got now, what could we really get it to do that people are prepared to pay money for?
Yeah, they tried making pizzas, didn't work. Vacuum cleaning, the Roomba you mentioned, iRobot was for a long time the only robot company in the domestic sphere to ever make money. They did it for one quarter. I think now they've passed that, but back in the day, they were it. And it links into your hands point as well. What robots can do at the moment is look at things. So what's actually making money, what's actually becoming commercially successful, shall we say, is inspection robots. Drone companies that look at stuff and do monitoring automatically, they're still going. There's a demand for that. And quadrupeds, which can inspect oil rigs and save people's lives, because a lot of people get knocked off oil rigs every year. That's a solid use case. So until you can handle objects, you have to look at them. In that next phase where robots can indeed handle objects, suddenly things will become apparent that weren't obvious before, and you go, wow, actually, that's really useful. If that system can do that all day long, brilliant, that's worth what it costs for me to make, and then suddenly you've found that first case. And it either builds up in a series of those, or you have this huge exposure moment where someone puts so much money into it that finally the capability is so great that, yes, obviously we've got the robots of the future. I used to think that might happen with SoftBank, but it didn't. It might happen now with Boston Dynamics. They have said they will launch a robot next year. Every month there's a new humanoid robot coming out. But apart from Tesla, they're the first one backed by a car company which is planning to take them to market, and they've got the longest history of making these robots do unbelievable things. So maybe. I live in hope that one day I'm going to wake up, turn on the media feeds, and there's robots everywhere just doing stuff. Oh yes, it's done.
Here's the software, and it's open source and everyone can get involved. But it hasn't happened yet.
- [Neil] To your point, Rob, I think a lot of the robots that I think have worked well are very specialized in their usage, right? You mentioned Boston Dynamics. They have little robot dogs that have gotten really good traction within the mining industry, but that's specifically at the ore bodies, and it's a very specific use. It's the same thing that, there's 12 different firefighting robots out there, but again it's a very special and limited, hopefully limited, use in terms of fighting wildfires.
- [Rob] That's the problem, right? We're talking about a fundamental technology shift from single purpose machines to general purpose machines. And until they're general purpose, what are the few things they can do well enough to justify their existence? If you can't find those, then it won't become a thing. And that's a concern for me anyway. It'll be possible, but people just won't use them, they won't care. They'll be too expensive, they'll be too slow, they'll be too much hassle, we won't trust them.
- [Neil] You mentioned SoftBank and their Aldebaran Robotics division. They really believed some of their robots would take off as concierges. And I think some major hotel chains, I won't name names, thought they'd be able to staff with these robots, a 24/7 service concierge, maybe front desk, but that never really took off. What happened?
- [Rob] Over hyping, overselling, and underperforming. I've never stayed in the hotel, but there's one in Japan which was meant to be a hundred percent robots. It just didn't work. You turn up, and halfway through check-in it starts talking to you again and the whole thing restarts. It's not so much that robots are stupid. It's that humans are blind to how hard it is for us to achieve what we consider to be basic skills. It's got a name. It's called the Moravec paradox, after Hans Moravec, one of the researchers who worked on early mobile robots many years ago. A lot of our brain is devoted to solving physical tasks, but you're completely oblivious to how you do it. So just getting a robot to do any of these so called simple things is hard. Playing chess, as it turns out, is relatively straightforward now. Humans have not been able to seriously threaten a chess computer since about 2005. But there's a great joke where Emo Philips loses to a chess computer. Do you know the comedian Emo Philips? He says, it beat me at chess, but it was no match for kickboxing. And that's just the simple truth. They're just rubbish in the physical domain.
So the hands, as you said, getting them to actually do real things, it just holds them back. They break down constantly, and the systems we're trying to control break down constantly. And again, if you get that general purpose machine, or even an artificial general intelligence that can suddenly run all these things, that's the sci-fi vision, The Jetsons, everything's hunky dory and it works. But what we've actually got is a whole range of systems which can kind of cover everything and make it all kind of work when it's lashed together and still supervised by people. But as soon as you let it go and do its own thing, it just stops. It's remarkably difficult as it turns out, and for that reason it hasn't happened.
- [Nikolai] Could you speak to, because you said biologically inspired, what exactly those specific things are?
- [Rob] It's partly to differentiate from industrial robots. It was an approach to try and work out why humans and animals are so good at interacting with the environment, and why robots are so bad at it. One of the conclusions was, in complete opposition to, say, the intelligence in 2001: A Space Odyssey, HAL, the disembodied superintelligence, that if you build a system which has no interaction with the physical environment, it's very brittle. The conclusions it comes to are just ludicrous in many ways. You have to have experience, as it were, of interacting in order to be able to generate embodied intelligence. Certainly no one's ever been able to make it any other way. And so the basic idea was: what are the intelligent systems which can perform really well in the real world? Let's copy them as much as we can. For me, that was about the hardware. The key distinction between industrial robots and biologically inspired robots is that industrial robots are designed to be as stiff as possible. They're made of metal, they have very expensive gearboxes, they're very solid machines. Biologically inspired robots can be quite floppy. This one's got springs inside it, all over. So even if it hits you, there's a limit to how much force it can generate. It can be more dynamic. The reason it moves so quickly is because it's storing energy in its springs and releasing it. The flip side of that is they're much harder to control.
Industrial robots can use classical control: you know where your motor is, you know where your end effector is, just by maths. But you can't do that with biologically inspired robots. You need a simulator, you need a much more sophisticated control method. Any robots which you see really putting out a lot of power and running around will, to some extent, have aspects of that.
Take the Atlas by Boston Dynamics. That's got a big hydraulic reservoir; it stores its energy hydraulically. But nonetheless, it has energy on tap which it can release, and that gives it a kind of bouncy look. Things that bounce about are just more efficient if they use springs. And that's one aspect of being biologically inspired.
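Rob's point about springs capping force can be put in numbers: a series-elastic joint can never exert more than its stiffness times its maximum deflection, however hard the motor drives it, and the energy available for a quick release is the stored elastic energy. The stiffness and travel figures below are illustrative assumptions, not measurements from a real robot:

```python
# Why a spring limits force: peak force F = k * x, stored energy
# E = (1/2) * k * x^2. Numbers here are illustrative only.

def spring_limits(k: float, x_max: float) -> tuple[float, float]:
    """Peak force (N) and stored energy (J) for stiffness k (N/m)
    and maximum deflection x_max (m)."""
    peak_force = k * x_max
    stored_energy = 0.5 * k * x_max**2
    return peak_force, stored_energy

force, energy = spring_limits(k=800.0, x_max=0.05)  # 800 N/m, 5 cm travel
# -> at most 40 N at the joint, with 1 J available for a quick release
```

This is the safety argument in miniature: the mechanism itself bounds the force on impact, while the stored energy is still there for dynamic motion.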
I guess the other aspect is the form, the envelope. Why five fingers? If you're making a robot hand, why have five? Why not more, why not less? For a long time, the argument on that was a little weak. It was about fitting into the human envelope, and that was all we had. But it seems now that because the systems are going to learn by copying people, if you design your robots with fewer or more fingers, then you constantly have to work out what to do with the difference, and it becomes very tiring for the people trying to teach the systems. So it turns out that it really does need to be biological looking, humanoid looking, have the same form, have the same internal structure, just not necessarily for the reasons we originally thought.
- [Ryan] When you talk to people outside of your studio and your circle of friends who really get this stuff, what's been the feedback, or the expectations, for the future of robots being part of everyday life? Are there concerns, hesitations? How do you think society will deal with something like this?
Are there any challenges that have been raised that are more thought provoking, things we really need to figure out before this becomes something that fits into everyday life?
- [Rob] Absolutely. The concern is always about robots coming to take our jobs, which we saw really play out recently with AI coming to take jobs, the Hollywood strikes, and the effect that's having on people. The most recent research I've seen indicates that there is a split across what's commonly referred to as blue collar and white collar workers: white collar workers end up actually better off, and blue collar workers end up worse off. The concern for me is very much that it will continue to drive inequality of wealth in society.
The classic was always that people were just afraid of robots, an extension of Frankenstein into the Terminator, and yeah, I think you have to look at popular culture for that. But the original Terminator, Arnold Schwarzenegger's character, he's now the light relief in the films. If you look at the films, he makes the jokes. And for me what's important about that is the education. It's not that the robot is bad. It's what the robot's intentions are and who programmed it. It's just a very advanced power tool. Covid changed attitudes a lot as well. People were very skeptical before, particularly of cleaning robots, automatic cleaning, and then suddenly it was, we just need them now. Don't worry about what we said before, we just need them to do this job as quickly as possible.
But there will be real issues with trust and transparency, which is why I'm so much in favor of the open source approach. If you're really trusting these things, they're looking after your elderly parents, they're looking after your pets when you're on holiday, they're looking after the things you care about, and you don't know why they do things or how they do it, that's a problem.
- [Ryan] There's also the privacy element of bringing these robots into daily life. We've seen that in the IoT space with smart home products. There are people that are very hesitant to bring in products that automate things, that collect data, that do different elements of their day to day, because of the privacy concerns.
- [Rob] Very much, and the more capable they get, the worse that becomes, of course. So that's something that's going to have to be maintained and looked at longer term. It's interesting, maybe less the privacy and more of a security aspect, but the major difference between driverless cars and robots is that robots are much, much less likely to kill you. Driverless cars have killed quite a lot of people. So that might be something which changes the balance and the dynamic there. Then again, driverless cars can't come into your bedroom, but you've seen the story about video that was leaked from Tesla cars that were in people's garages, seeing all sorts of weird stuff. Apparently the cars could record you.
- [Ryan] It's interesting because we've had this discussion before about AI coming into our lives in a deeper way. We've brought up the movie Megan, which I guess is AI and robots. You have this doll that's meant to interact with and build a relationship with a young girl, but it starts to take on a life of its own. It starts to behave in its own way, which is down to its programming, but it's still a human-looking robot. As it starts to come apart at the end, you begin to see what's underneath. Stuff like that, I'm sure, does not help this conversation.
- [Rob] Yeah, clearly it's going to have to be marketed in the right way. The funny thing for me with the Terminator is that it's grossly overpowered. In the later films, it has nuclear reactors in its chest and so on. I think the Megans are much more pernicious. The way I think of it, the first wave of contact between AI and humanity was social media, and I would argue that in many ways we lost. It's increased childhood unhappiness in measurable statistics. It's had a negative effect. It's increased polarization. And that's AI in the virtual form. Now we're going to see AI coming out into the real world and being able to do things, so you turn your back and it's done something. But again, that's the transparency bit, right? If it's a closed system from a big corporation, and we don't know how it works, and we don't know what it does, and we can't access the data, that I think will be a harder sell than my 12 year old nephew knowing how this thing works. It's like me having to go to my kids now: can you do my phone for me, please? What's that thing you did? I don't know how to do it, and they do it for me. As long as you know someone who knows someone who can check and say, no, it's fine, it wasn't what you thought, you can dispel some of that paranoia. But yeah, it's still going to need a lot of expectation management.
- [Ryan] I think most spaces go through that, right? You have that natural curve where the early adopters and enthusiasts really drive it, and then you start to get through the hype cycle.
- [Neil] I was going to jump in because I think expectations management is a really important piece. We expect these things to work perfectly and all that, but there should also be an expectation on us as people about what we're doing here. On the Megan conversation, and spoiler alert for those who haven't seen the movie, they actually make it very clear in the story that the reason the thing goes haywire is that the people who designed and quote unquote programmed her didn't actually understand child rearing or the emotional state of children, and as a result there's a bit of a conflict. It was still executing its program, just not in the intended way. We know that as engineers we build toward a specific output or outcome. We don't think about other potential uses or misuses, and that's the big problem with Megan. I'm sure when the movie The Creator comes out shortly, it'll be a similar type of story with that AI system. But that's something we constantly see, and that's why we get surprised by some of these things. It's like, well, that's not the way we expected it to behave, but we never thought of that scenario or that use case. It highlights what I think is the bigger problem in the whole industry: you have a lot of people working on things whose domain they don't fully understand. And that, I think, is the real issue we're trying to tackle.
- [Rob] From my perspective, there's a dilution of responsibilities. I'm open sourcing as much of the design as possible. I'm just going to be the guy that teaches other people how to build them. That's my goal, really. You go and experiment with them. But yeah, there's clearly a risk. We've seen it with the chatbot that was turned into a hate bot within a couple of days because people just messed with it. And you see it with driverless cars. One of the big problems with driverless cars is that once normal human drivers realize it's a driverless car, they just mess around with it because they know it'll stop. They force it off the road, they bully them, because they can, because it makes their journey faster, because it's fun. But that has consequences.
- [Neil] But this is the ironic thing. You talk about self driving cars, and people ask when they're going to be safe enough. Inside the United Nations, we don't really debate when to legalize self driving cars; we debate when to ban human drivers. Human drivers actually introduce the most variability into the system. And while everyone focuses on the trolley problem, which is like a one in a billion chance, and I'm not saying it's not important to think about, there are 20 million people every year who die or suffer permanent, horrific injury in car accidents, and that number would drop by at least 95 percent if you banned human drivers, right? Where's the trade off and balance in that? Those are the things we're really trying to grapple with. It's not that the machines themselves are evil. They're just tools, and if we don't train them properly, or we don't think of some of these other uses or misuses, bad stuff's going to happen.
- [Rob] I'm very keen for it to use power tools, but if it can pull the trigger of a power tool, it can pull a trigger on anything, right? I don't see any fundamental way around it: any system you can teach to be smart enough not to use a gun, you can hack to use a gun, by definition. And who deals with all of that? But I think the utility will outweigh it. You could argue that it's nuts having cars at all, because so many people are killed by them, but we're not about to give them up. Destroying the planet, killing millions of people? Yeah, but they're really convenient, and normally it's okay. If you can get it to that level of perception, there'll be horror stories, nasty things happening to a family on holiday with a crazy robot, that kind of thing. I don't want to be too specific, but maybe headlines like that. As long as it happens to somebody else far away, and mine never did that, people will accept it. But then they really have to be capable. There's this whole balancing act to pull off. Just today I saw Agility has announced it's building the first factory to produce humanoid robots. So we will try this experiment. We will try the hotel thing as well. I know there are concierge robots. We're definitely going to see those again. That's definitely happening. And this time, you will really be able to talk to them, and it will really get into depth and detail, and it might really know ways to get you tickets you can't get any other way, all that kind of stuff. And that's the utility. If they get useful enough, people will ignore the downsides, I suspect.
- [Ryan] Rob, let me ask you this before we wrap up. As I'm listening to or watching this episode, what should I be paying attention to in the news, the world, and the tech space that's really going to help the robotics industry get closer to what we're talking about? The positive stuff, not the taking over and killing people kind of thing, but more the domestic robot becoming a realistic thing. What are the enablers moving this forward that we should be looking out for in our daily following of the industry?
- [Rob] What is that thing they can do with existing levels of technology that makes people think it's worth it? That's what I'm looking for. If I find it, I'll tell you. Some people think it's packaging: taking things out of boxes and putting things into boxes in warehouses. Amazon still uses a lot of people. They are the biggest producer of robots in the world, and people still put things in the boxes. We still can't do that at a commercial level. I can't help but think that once you solve that, you can do a lot of other things. But then someone still has to fund the production. You still need millions, billions of these things around doing stuff. And then if they do decide they don't like us, what are you going to do about it? I think one day I'm going to wake up and there'll be a headline, and I'm going to think, that's just obvious. Should have done that. Should have thought of that. Didn't. Someone else did it. Well done to them. Brilliant. What's the next one?
- [Ryan] Maybe it'll be your headline, Rob. Maybe we'll be reading your headline.
- [Rob] If one of them one day is mine, that'd be brilliant.
- [Ryan] Before we let you go, Neil, any last words or questions from your end before we wrap up?
- [Neil] I think this has been a great conversation. Rob, as you take this journey to figure out that one thing, if people have ideas for you, or if they'd just like to learn more about your work and stay in touch with you, what's the best way to do that?
- [Rob] So this is the one we published at dexhand.org. This one's a bit beaten up now, but it's low cost. You can print it yourself, put it together, and start to do things. There's a lot of stuff on my YouTube channel; I just film stuff and put it out. But the DexHand is a first serious attempt.
And if there's enough interest in that, then we'll do the arms and provide all the rest of the systems. The aim is to get to the point where, if you've got the interest and a little bit of money, you'll be able to build something you can really start to experiment with.
I want to get it to as many people as possible. For me, it's a statistical game: the more people looking, the higher the chances we're going to find what that thing is. The great untapped reserve I'm looking for is all those people who ever dreamed of having their own robot.
- [Ryan] All right, Rob, appreciate the time. Great conversation. Thanks again for it. And yeah, we hope to talk again soon, and we'll look forward to getting this out to our audience.