Have you ever looked at something and been freaked out by how almost human it looks, but not quite? Be honest: does the robot Sophia scare you? People find things that look nearly human, but not quite human, to be unsettling. The creepy feeling can be triggered by robots, CGI animation, theme-park animatronics, dolls, or even digital assistants. This concept is called the "uncanny valley". And, believe it or not, it is a surprisingly common reason why AI projects fail.
The uncanny valley describes the relationship between how closely an object resembles a human being and the emotional response that object evokes. Basically, people find things that look like humans, but are not actually human, to be creepy. Here we are specifically talking about the physical appearance of things like robots.
Industrial robots, for example, are generally not considered scary. They don't have faces and don't take the form of a human body. Make something a bit more humanoid, like WALL-E, and people think it's kind of cute. But once you get to animatronics, or to robots like Sophia that try to look and act like humans, they fall into the uncanny valley, where many people find them creepy and feel uncomfortable around them.
The uncanny valley of data
Generally, the concept of the uncanny valley applies to humanoid and anthropomorphic physical objects. You may think that because you're not building a robot, you don't have to worry about it. But it's not just humanoids that can be creepy: there is also a data version of the uncanny valley, and it is all too often overlooked. Because of the convenience/privacy trade-off, people are sometimes willing to be a little creeped out in exchange for extra convenience, but there is a line that, once crossed, makes it hard to earn people's trust back.
If you push past this line and fall into the uncanny valley, you risk causing the AI project to fail. If people find an application creepy, they won't use it, and the project fails. The uncanny valley is an interesting failure mode because we don't typically consider psychological responses among the reasons an AI project might fail. Say, for example, a museum or hospital builds an AI robot to interact with visitors or patients. If people don't want to use these robots, and patients don't want robots in their rooms because they find them creepy, you've wasted time, money, and other resources on an AI project that ultimately failed. That may be because you didn't have a solid understanding of the business context at the start of the project and didn't take these psychological responses into account.
Organizations, businesses, and government agencies are collecting more data than ever before. They use this data to better understand their customers, gain additional insights, and win a competitive advantage, but people often don't know how their data and information are being used. Some organizations collect your data to improve the customer experience and make helpful recommendations, and people are often comfortable with that because it benefits them.
However, if companies analyze your entire shopping behavior and start making recommendations for things you haven't researched but might be considering buying, it can quickly slip into the uncanny valley, and people start to find it creepy. People have different thresholds for what they consider creepy, which makes finding the line a tricky balancing act. You want to provide just enough personalization and convenience, but you don't want to reveal too much and give the impression that you know too much, which erodes trust and makes people uncomfortable using the technology. Once you enter the uncanny valley, you have eroded the benefits you would otherwise have gained from that technology.
The uncanny valley IRL
It's one thing to talk about this concept theoretically, but it's another to see it in action. In Japan, a hotel chain called Henn na Hotel opened, staffed mostly by robots handling tasks that people would otherwise have done: welcoming guests and checking them in, carrying bags to rooms, and providing wake-up calls. It showed an immediate return on investment by saving on labor costs, avoided staffing issues, was very efficient, and had the gimmick of being a "robot hotel".
Over the following months, however, problems showed that the hotel was slipping into the uncanny valley. Some of those problems were technical, like in-room robots waking guests at night because they mistook snoring for speech. Others arose when guests struggled to enter their rooms because of faulty facial recognition. And many guests complained about how slowly the robots moved when delivering bags to rooms. You could argue that technology can be replaced and updated, so that's not why this project failed. Even though the hotel was extremely efficient, these issues made guests uncomfortable and gave them unpleasant experiences. The hotel ultimately decided these tasks were better done by people. In the end, it hadn't realized how uncomfortable people would be with a hotel staffed roughly 90% by robots and only 10% by humans.
How to solve the uncanny valley problem
There's no hard and fast line when it comes to the uncanny valley. You may not want a robot to walk up to you and say, "Hi, how can I help you?" But if you're at McDonald's, chances are you're willing to use a self-service kiosk to order your food. The difference between the two systems is that the kiosk doesn't look like a human, and the human controls the kiosk. You don't have to engage in conversation with the kiosk, and you don't try to get it to do more than its primary function. It's easily controllable and very predictable, and that seems to solve most of these problems. The same goes for data: if you collect too much and are too pushy with it, people will simply stop using your service because they are uncomfortable with the perceived lack of privacy.
Some people are more comfortable with, and less "creeped out" by, technology than others. This is why organizations need to come up with alternatives to systems that might sit too close to the trigger points where people get spooked. One component of iterative project management methodologies for AI is testing different approaches in real-world pilots to see how people react. If people have an adverse reaction to the data or the physical system, you can either "tone down" the creepiness or come up with less unsettling alternatives that still deliver the value of the AI system. There are many important reasons why AI projects can fail, but you definitely don't want one of them to be the psychological creepiness of your AI solution.