April 19th, 2018 by Mladen Barbaric
I’ve been asked “when are we all getting robots?” so many times I can’t even count. Being a product expert who’s always working on the cutting edge of technology has a downside: “Are you designing them? Are they like C-3PO? Will we just get rid of our spouses soon?”.
As I roll my eyes, I realize that hidden under the crude surface are actually solid questions. Here’s my translation: where are we really on the technology adoption S-curve, and what’s preventing robotics and AI from being widely adopted at home? The question isn’t binary (robotics or no robotics, AI or no AI), but rather: what level, combination and flavour of tech, features, services, cognitive triggers, experiences, embodiments, business models etc. makes the most sense today for robots and AI to become mass-adopted?
We should note that Artificial Intelligence IS in just about every home in some capacity, through home assistants or even through media platforms like Netflix, which keep learning our preferences and intelligently adjusting suggestions. Robotics, on the other hand, hasn’t permeated our homes en masse, except on a smaller scale through vacuums and toys.
So at what point, and for what reason, would we invite robots into our abodes, and is that time soon? I simply couldn’t get the proverbial gerbil to stop spinning, so I decided to rope the team at Pearl into a bit of a mind-bend study and said: “Let’s envision robots and home hubs we would love to have, and could make, today. Leave your chains at home and have a freakin’ blast.” An onslaught of creative juices ensued.
Before diving in, one first has to understand the advancements made to date. I could write a few thousand pages here discussing different nuances of history, motion in robotics, feedback, navigation, mimicry etc., but that isn’t the purpose of this blog, so I’ll just quickly touch on key points.
There’s incredible work being done in mobility and dexterity by companies like Boston Dynamics (check out Atlas! I can watch this thing in action all day, and yes, I’m a geek). Human likeness, among other sophisticated aspects of robotics, is studied and mimicked to the extreme at the Hiroshi Ishiguro Labs. The team at Sphero entertains us with ultra-fun, Star Wars-themed rolling toy robots, while Teenage Engineering really brings some funk to the game with the emotionally engaging and cultish “R” (props, love it!). And of course, if we talk about emotional connections, one can’t leave out Sony’s Aibo, a real pet-replacement robot whose owners cried over the last version when it was discontinued. On the opposite end of the spectrum, MIT’s media and robotics labs explore very practical applications, especially important for industrial uses. Then there is iRobot’s Roomba vacuum and the myriad of similar solutions it catalyzed (I’d go on forever listing them, so let’s not, and say we did).
Of course, when you think about robot navigation, you can’t ignore the advances made in driverless-vehicle technology, for which the requirements may be an order of magnitude more difficult (and mission-critical) than for robotics at home. But one could imagine the same techniques, like mapping and localization, ported and applied to robots.
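To make “ported” a little more concrete: one of the core primitives shared by self-driving stacks and home robots is mapping, e.g. an occupancy grid built up from range-sensor readings. Here’s a minimal, toy-scale sketch in Python; the resolution, room size, update increment and function names are all illustrative assumptions, not anything from a real stack.

```python
import math

# Minimal occupancy-grid sketch: the same mapping primitive used in
# driverless-vehicle stacks, scaled down to a single room.
# Resolution, room size and update increment are illustrative assumptions.

CELL = 0.1   # grid resolution in metres (assumed)
SIZE = 50    # 50 x 50 cells = a 5 m x 5 m room (assumed)

grid = [[0.5] * SIZE for _ in range(SIZE)]  # 0.5 = unknown occupancy

def mark_hit(x, y):
    """Raise the occupancy estimate of the cell containing point (x, y)."""
    i, j = int(x / CELL), int(y / CELL)
    if 0 <= i < SIZE and 0 <= j < SIZE:
        grid[i][j] = min(1.0, grid[i][j] + 0.2)

def integrate_scan(pose, ranges, fov=math.pi):
    """Fold one range scan (e.g. from a cheap lidar) into the grid."""
    px, py, heading = pose
    n = len(ranges)
    for k, r in enumerate(ranges):
        angle = heading - fov / 2 + fov * k / max(n - 1, 1)
        mark_hit(px + r * math.cos(angle), py + r * math.sin(angle))

# A robot at the room's centre, facing +x, seeing a wall about 2 m ahead:
integrate_scan(pose=(2.5, 2.5, 0.0), ranges=[2.0, 2.1, 2.0, 1.9, 2.0])
```

The room-scale version of the problem is friendlier than the street-scale one in almost every way: slower speeds, smaller maps, and a mistake means a bumped wall rather than an accident.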
Take even a quick glance at what’s out there and you realize that we may have all the technology needed. So what gives? I think there are several layers of the onion to peel here, so let me dive in.
First, our view of robots is primed by Hollywood. While movies are certainly fun, this isn’t helpful. There are basically two buckets of robots in film: the functional-creepy robots we fear, and the cute, playful robots we fall in love with, or laugh with. Each is imagined as the most sophisticated version of itself, one that either threatens or outgrows us (think HAL 9000, I, Robot, Samantha from Her, or “The Machines” from The Matrix), or evokes smiles (think WALL-E, R2-D2 or Chappie).
These movie-character robots are all very, very evolved, with human-like traits, because, well, they are bound only by human imagination and fuelled by the desire to entertain. It’s no wonder that when we think about robots at home, we imagine these characters and have similar expectations. As a result, we expect real tech to be either creepy or cute, and very, very smart.
But reality is quite different. While the technology pieces are available and rapidly evolving, a human replacement in all aspects (mobility, consciousness, empathy, learning etc.) is an extraordinary feat of multiple complex systems engineering (as Siri, Alexa and Google Assistant clearly illustrate when you ask for much more than the weather; not dissing, this is normal, this technology is based on learning, and you have to start at some point). Creepiness is also very hard to get around. People have mostly avoided it by taking the cute route.
But just because our expectations are set incorrectly, and we can’t duplicate and control Samantha or WALL-E just yet, it doesn’t mean we can’t start simple and rebuild expectations through devices that are profoundly useful, and perhaps different from what’s expected.
Maybe we should reset expectations. How about baby steps?
To jump-start mass adoption, a few things must align. Obviously, we must first solve a functional need that we aren’t addressing with static products. What can we solve with motion (robotics) and intelligent decision-making that we really want or need?
Next, we should embody the function in something completely non-threatening, yet create an object we can take seriously (unless we firmly believe that the mass-adoption robot is a toy). I tend to think that when designing something for the home, we really have to design for context. Creating another techy thing with lots of plastic and wire just isn’t something people want to bring home.
We have to remember: robot or chair, objects designed for the home are an integral part of the interior, and should complement, blend or accent.
We also need to balance cost vs. benefit, because we can all geek out and sprinkle on sensors, motors and other designers’ toys we can’t wait to deploy, but if the cost starts outweighing the benefit, we’ve just tipped the scale towards non-adoption. This is not to say that there isn’t a case for an expensive robot; we just have to create a perceived value equal to or greater than the appropriate multiple of cost. In simple terms: we can make a $2,000 robot as long as the user feels the robot is worth $2,000 or more, the cost to build is something like $500, and enough people would buy it.
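To put numbers on that scale-tipping test, here’s a back-of-napkin sketch in Python. The 4x cost-to-price multiple is just the assumption implied by the example above ($500 cost, $2,000 price), not an industry rule.

```python
# Back-of-napkin version of the adoption test above. The 4x cost-to-price
# multiple is the implied assumption from the example ($500 -> $2,000),
# not a rule; a real model would also fold in volume and service revenue.

def tips_the_scale(build_cost, price, perceived_value, multiple=4.0):
    """True when the user feels the robot is worth at least its price,
    and the price covers build cost at the assumed multiple."""
    return perceived_value >= price and price >= build_cost * multiple

print(tips_the_scale(build_cost=500, price=2000, perceived_value=2000))  # True
print(tips_the_scale(build_cost=500, price=2000, perceived_value=800))   # False
```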
We could also consider business models where the product is the service, and the robot is simply a window providing that service, sold at cost or given away for free. This sounds obvious, and everyone wants to jump on the “hardware as a service” bandwagon, but it’s a complex issue and a whole other blog, so I’ll table it (for now).
So, for our exercise, we assumed the anchor function is something quite basic, like a speaker, light, camera, or some combination of those, because they are readily found in our homes, usually placed in strategic locations and plugged in, and we could relatively easily evolve them to build services around or adjacent to the core functions. If so, then we could imagine restricting motion to a minimum. Why? KISS! Duh. Cameras, for example, don’t need much more than a pan and zoom. Or, if we needed to move about the environment, let’s imagine the simplest and least threatening way to do it.
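As a sketch of how little motion “pan and zoom” really demands, here’s a toy proportional pan controller in Python that nudges a camera toward whatever it’s tracking. The frame width, field of view, gain and mechanical limits are all assumptions for illustration, not the spec of any real product.

```python
# Minimal pan-controller sketch: nudge the camera so the tracked subject
# stays centred in frame. All constants are illustrative assumptions.

FRAME_WIDTH = 1280      # pixels (assumed)
FOV_DEG = 60.0          # horizontal field of view in degrees (assumed)
GAIN = 0.5              # proportional gain: how aggressively to correct

def pan_step(subject_x, current_pan_deg, max_pan_deg=170.0):
    """Return the new pan angle given the subject's x position in frame."""
    # Pixel offset from frame centre, converted to degrees of error.
    error_px = subject_x - FRAME_WIDTH / 2
    error_deg = error_px / FRAME_WIDTH * FOV_DEG
    new_pan = current_pan_deg + GAIN * error_deg
    # Respect the mechanical limits of a simple pan base.
    return max(-max_pan_deg, min(max_pan_deg, new_pan))

# Subject drifting right of centre: the camera pans right to follow.
pan = 0.0
for x in (760, 820, 700, 655, 641):
    pan = pan_step(x, pan)
    print(f"subject at x={x}px -> pan {pan:+.2f} deg")
```

One motor, one loop, and the camera already feels attentive: that’s the kind of payoff minimal motion can buy.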
Something amazing happens when you reduce elements and constrain design. You’re forced to work with limited resources, and if you’re focused on executing well, you can expend more energy on fewer tasks, making simple things great.
So, what can we accomplish with simple motion? First, note that when we place a robot in an environment, we create an interaction between object and environment, in motion. AKA: animated furniture. How does it play with its surroundings? (Of course you wouldn’t animate a light just because you can; bear with me for a second.) How do you use light, colour and space differently? Does the robot respond to its surroundings? Can the robot adapt to changing light over the course of a day? Should we use mobility to blend or to accent? Can we use motion as a simple way to draw attention in a new user interface?
Next, with motion we create another level of interaction, between the robot and the groups of occupants at home: people and pets, residents and guests, adults and children. How do we treat different occupants or occupant groups? Can we have different uses and behaviours for different people? Should we be more interested in children than adults? Heck, what if we focused on one group, just children, or just intruders? Would this thought process allow us to bend our view of what a robot could do or should look like?
I tend to think that the natural first step for robotics isn’t to take over all of our chores, but to provide passive services, with the robot remaining almost invisible. To be clear: active = I command, I get. Passive = it just happens, and I’m served with something I didn’t ask for, but want, at the right time. Imagine if, for example, your bot could photograph your kids passively and capture those silly dance moves, cute conversations with dolls etc. that you missed. Imagine the memories you could preserve; it’s priceless. Sure, there are some privacy issues to work out, but you can certainly sign me up for pro-level photography/videography of my kid that I get to decide how to use and share (sorry Abby, you don’t get a say until you’re older, I can’t get enough of your cuteness).
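For flavour, here’s a minimal sketch of what such a passive photographer’s trigger loop could look like, using OpenCV’s stock face detector and a webcam. The cooldown, file naming and the bare-bones “someone’s in frame” test are assumptions for illustration; a real product would need far smarter moment-picking and strict owner control over storage and sharing.

```python
import time
import cv2  # OpenCV: pip install opencv-python

# Passive-capture sketch: watch the room and quietly save a frame whenever
# a face appears. Detector choice and thresholds are illustrative assumptions.

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)          # default webcam
COOLDOWN_S = 10.0                     # don't spam near-identical shots
last_saved = 0.0

while True:                           # Ctrl+C to stop
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0 and time.time() - last_saved > COOLDOWN_S:
        cv2.imwrite(f"moment_{int(time.time())}.jpg", frame)
        last_saved = time.time()
```

The point isn’t the detector; it’s that the service happens without a command, which is exactly what “passive” means here.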
There is nothing more powerful in product development than inspiring a human emotion that results in a connection between object and owner. But I’m not sure that it necessarily needs to be done through human or pet mimicry. Creating a connection between owner and robot may be a lot more basic than trying to recreate a human-to-human or, say, feline-to-human relationship.
We tend to connect with things that fill our emotional gaps and needs, and sure, our most obvious model for connection is the relationships we’re familiar with, but that doesn’t mean we can’t dive a bit deeper and experiment.
Just think of “Wilson” in Cast Away: complexity is not the key to emotional response. We can be more imaginative, push the boundaries of connection, and immerse ourselves in these new paradigms of interaction. I personally can’t wait.
Full disclosure: what we’re showing here has been passed through a filter, holding back some of our more... mind-bending ideas. All I can say is, we’ve just started down this road, and for us, like many in the field, a renaissance in design for the home is coming. I’m crazy excited.
Screw the box!
* This blog is provided as an opinion and may not reflect the views of Pearl Studios Inc. All concepts, ideas and formats displayed in this communication are properties of Pearl Studios Inc. All trademarks and intellectual property mentioned in the text belong to their corresponding owners.