Over the last 10 years, Brett Adcock has gone from founding an online talent marketplace, to selling it for nine figures, to founding what’s now the third-ranked eVTOL aircraft company, to going after one of the greatest challenges in technology: general-purpose humanoid robots. That’s an extraordinary CV, and a meteoric high-risk career path.
The speed with which Archer Aviation hit the electric VTOL scene was extraordinary. We first wrote about the company in 2020 when it popped its head up out of stealth, having hired a bunch of top-level talent away from companies like Joby, Wisk and Airbus’s Vahana program. Six months later, it had teamed up with Fiat Chrysler, a month after that it had inked a billion-dollar provisional order with United Airlines, and four months after that it had a full-scale two-seat prototype built.
The Maker prototype was off the ground by the end of 2021, and by the end of 2022 it was celebrating a full transition from vertical takeoff and hover into efficient wing-supported cruise mode. Earlier this month, the company showed off the first fully functional, flight-ready prototype of its Midnight five-seater – and told us it’s already started making the “conforming prototype” that’ll go through certification with the FAA and EASA to become a commercially-operational electric air taxi.
The first flight-ready Midnight prototype is complete, and ready to begin testing (Archer Aviation)
Hundreds of companies have lined up to get into the eVTOL space, but according to the AAM Reality Index, only two are closer to getting these air taxis into service: Joby Aviation, founded in 2009, and Volocopter, founded in 2011.
Archer’s aircraft isn’t an outlier on the spec sheet; it’s the sheer aggression, ambition and speed of the business that have set Archer apart. And yet we were surprised again in April to learn that Adcock was launching another venture simultaneously, in a field even more difficult than next-gen electric flying taxis: general-purpose humanoid robotics.
These robots promise to be unparalleled money printing machines when they’re up and running, eventually doing more or less any manual job a human could. From ancient Egypt to early America, the world has seen time and again what’s possible when you own your workers instead of hiring them. And while we don’t yet know whether the promised avalanche of cheap, robotic labor will bring about a utopian world of plenty or a ravaged hellscape of inequality and human obsolescence, it’s clear enough that whoever makes a successful humanoid robot will be putting themselves in a much nicer position than people who haven’t.
With a screen for a face, the Figure 01 looks like it’ll be difficult to anthropomorphize (Figure.ai)
Figure, like Archer, appears somewhat late to the game. The world’s most advanced humanoid robot, Atlas from Boston Dynamics, is about ten years old already, and has been dazzling the world for years with parkour, dance moves and all kinds of developing abilities. And among other more recent entrants to the field is the world’s best-known high-tech renaissance man, a fellow who’s found success in online payments, electric vehicles, spaceships, neural interfaces and many other fields.
Elon Musk has repeated many times that he believes Tesla’s humanoid robot worker will make the company far more money than its cars. Tesla is putting a lot of resources into its robot program, and it’s already blooded as a large-volume manufacturer pushing extreme technology through under the heightened scrutiny of the auto sector.
But once these humanoid robots start paying their way, by doing crappy manual jobs faster, cheaper and more reliably than humans, they’ll sell faster than anyone can make them. There’s room for plenty of companies in this sector, and with the pace of AI progress seemingly going asymptotic in 2023, the timing couldn’t be better to get investment on board for a tilt at the robot game.
Still in his 30s, Adcock has the energy and appetite to attack the challenge of humanoid robotics with the kind of vigor he brought to next-gen aviation, hoping to move just as quickly. The company has already hired 50 people and built a functional alpha prototype, soon to be revealed, with a second in the works. Figure plans to hit the market with a commercially-active humanoid robot product next year, with limited-volume production as early as 2025 – an Archeriffic timeline if ever we saw one.
On the eve of announcing a US$70 million Series A capital raise, Adcock made time to catch up with us over a video call to talk about the Figure project, and the challenges ahead. What follows is an edited transcript.
Figure’s offices definitely have that startup feel going on (Figure.ai)
Loz: Between Archer and Figure, you’re doing some pretty interesting stuff, mate!
Brett Adcock: We’re trying, man! Trying to make it happen. So far, so good. The last 12 months have been incredible.
How has Archer prepared you for what you’re going into now with Figure?
Archer was a really tough one, because it was a problem that people felt couldn’t be solved. You know, battery energy density is not available to make this work, nobody’s done it before commercially. We’re kind of in a very similar spot.
You know, we had a lot of R&D in the space. There was a lot of groups out there flying aircraft and doing research, things like that, but nobody was really taking a commercial approach to it. And I think in many ways here, it feels quite similar.
You have these great brands out there, like Boston Dynamics and IHMC, doing great work in robotics. And I think there’s a real need for a commercial group that has a really good team, really well funded, bringing a robot into commercial opportunities as fast as possible.
Archer was like: raise a lot of capital, do great engineering work, bring in the right partners, build a great team, move extremely fast – all the same disciplines that you really need in a really healthy commercial group. I think we’re there with Archer, and now trying to replicate a great business here at Figure.
But yeah, it was really fun. Five years ago, everybody’s like, ‘Yeah, this is impossible.’ And now it’s the same thing. It’s like, ‘Humanoids? It’s just too complex. Why would you do that, versus making a specialty robot?’ I’m getting the same feeling. It feels like deja vu.
Yeah, the eVTOL thing feels like it’s really on the verge of happening now, just a few hard, boring years away from mass adoption. But this humanoid robot business, I don’t know. It just seems so much further away, conceptually, to me.
I think it’s the opposite. The eVTOL stuff has to go through the FAA and EASA approval. I wake up every day with Figure not understanding why this wasn’t done two years ago. Why don’t we see robots – humanoid robots – in places like Amazon? Why aren’t they in the warehouses or whatever? Not next to customers, but indoors, why aren’t they doing real work? What’s the limiting factor? What are the things that are not ready, or can’t be done, before that can happen?
Right. So part of that must come down to the ethos, I guess, of Boston Dynamics. The idea that it’s research, research, research, and they don’t want to get drawn into making products.
Only five years ago, Boston Dynamics said ‘we’re not going to do commercial work.’ 10 years ago, they said, ‘Atlas is an R&D project.’ It’s still an R&D project. So they’ve put up a flag from day one saying ‘we’re not going to be the guys to do this.’
Which is pretty remarkable, really.
It’s great, they’ve done a lot of research. This has happened in every space. It happened with AC Propulsion and Tesla and with Kitty Hawk in the eVTOL space… These were decade-long research programs, and it’s great. They’re moving the industry forward. They’ve shown us what’s possible. Ten years ago humanoids were falling down. Now, Atlas is doing front flips, and doing them really well.
They’ve helped pave the way for commercial groups to step in and make this work. And they’re great, Boston Dynamics is probably the best engineering team in robotics in the world, they’re unbelievable.
Well, I guess you’ve assembled a pretty crack team yourself to take a swing at this. Can you just quickly speak to the talent that you’ve brought on board?
Yeah, we’re 50 people today, the team is separated into mechanical – which is all of our hardware, so it’s actuators, batteries, kinematics, the base of the robot hardware you need. Then there’s what we call HMS, Humanoid Management Systems, that’s basically electrical engineering and platform software. We have a team doing software controls, we’ve got a team doing integration and testing, and we have a team doing AI. At a high level, those are the areas that we have in the company, and we have a whole business team.
I would say they’re obviously the best team ever assembled, to be confident! You know, Michael Rose on controls spent 10 years at Boston Dynamics. Our battery lead was the battery lead for the Tesla Model S Plaid. Our motor team built the drive unit for Lucid Motors. Our perception lead was ex-Cruise perception. Our SLAM lead is ex-Amazon. Our manipulation group is ex-Google Robotics. Across the board, the team is super slick. I spent a long time building it. I think the best asset we have today is the team. It’s quite an honor to wake up every day working alongside everybody. It’s really great.
Figure has taken an aggressive approach to hiring, drawing in talent from across the robotics industry, as well as from high-tech automotive and elsewhere (Figure.ai)
Awesome. So the Alpha prototype, you’ve got that built? What state’s it in? What can it do?
Yeah, it’s fully built. We haven’t announced what it’s done yet. But we will soon. In the next 30-60 days we’ll give a glimpse of what that looks like. But yeah, it’s fully built, it’s moving. And that’s gone extremely well. We’re now working on our next generation, that’ll be out later in the summer. Like in Q3 probably.
That’s quite a pace.
Yeah, we’re really moving fast. I think it’s what you’re going to see from us. It’s like what you see from a lot of successful commercial groups, we’re going to move really fast.
Yeah, Tesla comes to mind obviously. They’re building all their own actuators and motors and all that sort of thing. Which way are you guys going with that stuff?
We’re investing a lot in the actuation side, that’s what I’ll say. And I think it’s important; there aren’t really good off-the-shelf actuators available. There’s really not any good control software, there’s no good middleware, there’s no good actuators. Autonomy can be stitched together, but there’s really no good autonomy data engine you can just go buy and bring over. Hands, maybe; there’s some good work in prosthetics, but they’re really not at a grade where they’re good enough to put on the robot and scale it.
I think we look at everything and say OK, let’s say we’re at 10,000 units a year volumes in manufacturing. What does that state look like? And yeah, there’s no good off-the-shelf alternatives in those areas to get there. I think there’s some things where you can do off-the-shelf, like using ROS 2 and that kind of thing in the early days. But I think at some point you really cross the line where you’ve kinda got to do it yourself.
You want to get to market by 2024. That’s… Pretty close. So I guess you’ve got to identify the early tasks that these robots will be able to shine in. What kind of criteria will decide what’s a promising first task?
Yeah, our schedules are pretty ambitious. Over the next 12 months in our lab we’ll get the robot working, and then over the next 24 months we’ll ideally be able to step into the first footprints of what a pilot would look like, an early commercial opportunity. That would probably be very low volumes, just to set expectations.
And we would want the robot to demonstrate that it’s actually useful and doing real work. It can’t be 1/50th the speed of humans, it can’t mess up all the time. Performance wise, it’s got to do extremely well. We would hope that would be with a few of the partners that we’re gonna announce in the next 12-18 months.
We would hope those would be easier applications indoors, not next to customers, and it’d be able to demonstrate that the robot can be built to be useful. At the very highest level, the world hasn’t seen a useful humanoid built yet, or watched one do real work, like, go into a real commercial setting where somebody is willing to pay for it to do something. We’re designing towards that. We hope we can demonstrate that as fast as we can; it could be next year, could be the year after, but we really want to get there as fast as possible.
Do you have any guesses about what those first applications might be?
Yeah, we’re spending a lot of time in the warehouse right now. Supply chain. And to be really fair, we want to look at areas where there’s labor shortages, where we can be helpful, and also things that are tractable for the engineering, that the robot can do. We don’t want to set ourselves up for failure. We don’t want to go into something super complex for the sake of it, and not be able to deliver.
We also don’t want to go into a very easy task that nobody has any interest in having a useful robot for. So it’s really hard. We do have things in mind here. We haven’t announced those yet. Everything’s a little too early for us to do that. But these would be, you know… We think moving objects around the world is really important for humanoids and for humans alike. So we think there’s an area of manipulation, an area of perception, and autonomy is really important. And then there’ll be an interest in speed and reliability of the system, to hopefully build a useful robot.
So yeah, we’re looking at tasks within say, warehousing, that there’s a lot of demand for, that are tractable for the robot to do. The robot will do the easiest stuff that it can do first, and then over time, it will get more complex. I think it’s very similar to what you’re seeing in self-driving cars. We’re seeing highway driving start first, which is much easier than city driving. My Tesla does really well on the highway. It doesn’t drive well in the city.
So we’ll see humanoids in areas that are relatively constrained, I would say. Lower variability, indoors, not next to customers, things like that at first, and then as capabilities improve, you’ll see humanoids basically branching out to hundreds and ultimately thousands of applications. And then at some chapter in the book, it’ll go into the consumer household, but that’ll come after the humanoids in the commercial workforce.
Absolutely. It’s interesting you bring up self-driving, there’s a crossover there. You’ve hired people from Cruise, and obviously Tesla’s trying to make their robot work using their Full Self Driving computers and Autopilot software. Where does this stuff cross over, and where does it diverge between cars and robots?
I think what you’ve seen is that we have the ability to have algorithms and computation to perceive the world, understand where we’re at in it, and understand what things are. And to do that in real time, like human speeds. 10 years ago, that wasn’t really possible. Now you have cars driving very fast on the highway, building basic 3D maps in real time and then predicting where things are moving. And on the perception side, they’re doing that at 50 hertz.
So we’re in need of a way to autonomously control a fleet of robots, and to leverage advances in perception and planning in these early behaviors. We’re thankful there’s a whole industry spawning, that’s doing these things extremely well. And those same type of solutions that have worked for self driving cars will work here in humanoid robotics.
The good news is we’re operating at very different speeds and very different safety cases. So it’s almost looking more possible for us to use a lot of this work in robotics for humanoids moving at one or two meters per second.
Once they’re sophisticated enough, humanoid robots threaten to crash the value of human labor down near zero. Economies and societal structures had better be ready (Figure.ai)
Fair enough. How are you going to train these things? There seem to be a few different approaches, like virtualization, and then the Sanctuary guys up in Canada are doing a telepresence kind of thing where you remotely operate the robot using its own perception to teach it how to grab things and whatnot. What sort of approach are you guys taking?
Yeah, we have a combination of reinforcement learning and imitation learning driving our manipulation roadmap. And similar to what you said with the telepresence, they’re probably using some form of behavior cloning, or imitation learning, as a core to what they’re doing. We’re doing that work in-house right now in our lab. And then we are building an AI data engine that will be operating on the robot as it’s doing real tasks.
It’s similar to what they do in self driving cars, they’re driving around collecting data and then using that data to imitate and train their neural nets. Very similar here – you need a way to bootstrap your way of like going into market. We’re not a big fan of physically telepresencing the robot into real operations. We think it’s really tough to scale.
So we want to put robots out in warehousing, and train a whole fleet of robots how to do warehousing better, and when you’re working in a warehouse, you’re doing a bunch of things that you would do in other applications, you’re picking things up, manipulating them, putting them down… You basically want to build a fleet of useful robots, and use the data coming off of them to build an AI data engine, to train a larger fleet of robots.
Then it becomes a hive mind-type learning system where they all train each other.
Yeah. You need the data from the market. That’s why the self driving cars are driving around collecting data all the time; they need that real-world data. So tele-operation is one way you can bootstrap it there. But it’s certainly not the way you want to do it long term. You basically need to bootstrap your robots in the market somehow. And we have a combination of reinforcement learning and imitation learning that we’re using here. And then you want to basically build a fleet of robots collecting sensor data and position states for the robots, things like that. And you want to use that to train your policies over time.
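To make the imitation-learning idea Adcock describes a little more concrete, here is a deliberately minimal behavior-cloning sketch: demonstration data (state/action pairs, which in practice would come from tele-operation or a deployed fleet) is used to fit a policy that reproduces the demonstrated actions. All the names, dimensions, and the linear policy itself are hypothetical simplifications; real systems use deep networks and far richer sensor data, but the data flow is the same.

```python
import numpy as np

def fit_policy(states, actions):
    """Behavior cloning in its simplest form: fit a linear policy
    mapping observed robot states to demonstrated actions via
    least squares."""
    # Add a bias column so the policy can learn a constant offset.
    X = np.hstack([states, np.ones((states.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, actions, rcond=None)
    return W

def act(W, state):
    """Run the cloned policy on a new state."""
    x = np.append(state, 1.0)
    return x @ W

# Hypothetical demonstration data: 200 tele-operated steps with a
# 6-dimensional state (e.g. joint angles) and a 3-dimensional action.
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 6))
true_W = rng.normal(size=(7, 3))  # the "operator" behaves linearly here
actions = np.hstack([states, np.ones((200, 1))]) @ true_W

W = fit_policy(states, actions)
# The cloned policy reproduces the demonstrated action for a seen state.
print(np.allclose(act(W, states[0]), actions[0], atol=1e-6))
```

The "data engine" framing then amounts to closing the loop: the deployed policy generates new state/action logs, which are filtered and folded back into the training set.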
That makes sense. It just seems to me that the first few use cases will be a mind-boggling challenge.
You’ve got to choose that wisely, right. You got to make sure that the first use case is the right one. It’s really important to manage that well and get that right. And so we’re spending a tremendous amount of time here internally, making sure that we just nail the first applications. And it’s hard, right, because the robots are at the bleeding edge of possible. It’s not like ‘oh, they’ll do anything.’ It’s like, ‘hopefully it’ll do the first thing really well.’ I think it will, but you know, it’s got to work. It’s what I’ve built the company on.
So in the last six months, AI has had a massive public debut with ChatGPT and these other language models. Where does that intersect with what you guys are doing?
One thing that’s really clear is that we need robots to basically be able to understand real-world context. We need to be able to talk to robots, have them understand what that means, and understand what to do. That’s a big deal.
In most warehouse robots, you can basically do, like, behavior trees or state machines. You can basically say, like, if this happens, do this. But out in the real world it’s like, there’s billions or trillions of those types of possibilities when you’re talking to humans and interacting with the environment. Go park on this curb, go pick up the apple… It’s like, which apple? What curb? So how do you really understand, semantically, all the world’s information? How do you really understand what you should be doing all the time for robots?
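The "if this happens, do this" logic Adcock contrasts with language-model-driven understanding can be sketched as a tiny finite state machine. The states and events below are hypothetical, invented purely to illustrate the pattern; the point is that every situation must be enumerated in advance, which is workable in a warehouse and hopeless in the open world.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    MOVE_TO_SHELF = auto()
    PICK = auto()
    MOVE_TO_BIN = auto()
    PLACE = auto()

# Scripted "if this event happens in this state, do that" transitions,
# in the spirit of the state machines used by today's warehouse robots.
TRANSITIONS = {
    (State.IDLE, "order_received"): State.MOVE_TO_SHELF,
    (State.MOVE_TO_SHELF, "arrived"): State.PICK,
    (State.PICK, "item_grasped"): State.MOVE_TO_BIN,
    (State.MOVE_TO_BIN, "arrived"): State.PLACE,
    (State.PLACE, "item_released"): State.IDLE,
}

def step(state, event):
    # Any (state, event) pair not enumerated above is simply ignored;
    # this rigidity is exactly what breaks down in open-ended settings.
    return TRANSITIONS.get((state, event), state)

state = State.IDLE
for event in ["order_received", "arrived", "item_grasped", "arrived", "item_released"]:
    state = step(state, event)
print(state)  # → State.IDLE, after one full pick-and-place cycle
```

A behavior tree is the same idea with hierarchical composition; neither can answer "which apple? what curb?" without the semantic layer Adcock goes on to describe.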
We believe here that it’s probably not needed in first applications, meaning you don’t need a robot to understand all the world’s information to do warehouse work and manufacturing work and retail work. We think it’s relatively straightforward. Meaning, you have warehouse robots already in warehouses doing stuff today. They’re like Roombas on wheels moving around, and they’re not AI-powered.
But we do need that in your home, and interacting with humans long term. All that semantic understanding, and high level behaviors, and basically how we get instructions on what to do? That’ll come from vision plus large language models, combined with sensory data from the robot. We’re gonna bridge all that semantic understanding of the world mostly through language.
There’s been some great work coming out of Google Brain on this – now Google DeepMind. This whole generative AI thing that’s going on, this wave? It’s my belief now that we’ll get robots out of industrial areas and into the home through vision and language models.
Multimodal stuff is already pretty impressive in terms of understanding real world context.
Look at PaLM-SayCan at Google, and also their work with PaLM-E. Those are the best examples, they’re using vision plus large language models, to understand what the hell somebody’s saying and work out what to do. It’s just unbelievable.
It is pretty incredible what these language models have almost unexpectedly thrown out.
They’ve got this emergent property that’s going to be extremely helpful for robotics.
Yes, absolutely. But it’s not something you guys are implementing in the shorter term?
We’re gonna dual-path all that work. We’re trying to think about how do we build the right platform – it’s probably a platform business – that can scale to almost any physical thing that a human does in the world. At the same time, getting things right in the beginning; you know, getting to the market, making sure it works.
It’s really tough, right? If we go to market and it doesn’t work, we’re dead. If we go to market and it works, but it’s just this warehouse robot and it can’t scale anywhere, it just does warehouse stuff? It’s gonna be super expensive. It’s gonna be low volumes. This is a real juggling act here, that we have to do really well. We’ve got to basically build a robot with a lot of costs in it, that can be amortized over many tasks over time.
And it’s just a very hard thing to pull off. We’re going to try to do it here. And then over time, we’re going to work on these things that we mentioned here. We’ll be working on those over the next year or two, we’ll be starting those processes. We won’t have matured those, but we’ll have demonstrated that we’ll be deploying those and the robot will be testing them, things like that. So I would say we have a very strong focus on AI, we think in the limit this is basically an AI business.
Figure’s team has already built a functional alpha prototype, to be revealed soon (Figure.ai)
Yeah, the hardware is super cool, but at the end of the day it’s like ‘whose robot does the thing?’ That’s the one that gets out there first. Other than Atlas, which is extraordinary and lots of fun, which other humanoids have inspired what you guys are doing?
Yeah, I really like the work coming out of Tesla. I think it’s been great. Our CTO came from IHMC, the Institute for Human Machine Cognition. They’ve done a lot of great work. I would say those come to mind. There’s obviously been a large heritage of humanoid robotics over the last 20 years that have really inspired me. I think it’s about a whole class of folks working on robotics. It’s hard to name a few but like there’s been a lot of great work. Toyota’s done great work. Honda’s done great work. So there’s been some really good work in the last 20 years.
Little ASIMO! Way back when I started this job, I vaguely remember they were trying to build a thought-control system for ASIMO. We’ve come a ways! So you’ve just announced a $70 million raise, congratulations. That sounds like a good start. How far will it get you?
That’ll get us into 2025. So we’re gonna use that for basically four things. One is continued investment into the prototype development, the robots. We’re working on our second generation version now. It’ll help us with manufacturing and bringing more things in-house to help with that. It’ll help us build our AI data engine. And then it’ll help us on commercialization and going to market. So those are kind of the four big areas that we’re spending money on with the capital we’re taking on this week.
We thank Brett Adcock and Figure’s VP of Growth Lee Randaccio for their time and assistance on this story, and look forward to watching things progress in this wildly innovative and enormously significant field.