Applying baby instincts to the design of new technology
This article is the beginning of my thinking on merging baby instincts with user experience design. The full article will be published this fall as part of ITP's first online journal, Adjacent Possible.
Here is a fascinating thing about babies: as soon as they are born, infants use their basic instincts to navigate a new platform, their mother's body. They set out to find something that will satisfy their hunger. Even with their still-developing senses of smell and vision, they will try to crawl toward objects that catch their eye. Most babies, for instance, will notice their mother's breast. Once they have found it, they will latch on and immediately know how to get milk (*1). From then on, babies face new interfaces every day. The world is theirs to test and learn without reading a manual. Watching an infant turned out to be a study in human interaction, and it prompted me to consider how we design digital experiences for adults.
Maternity leave, in a way, was a step back from working digitally to relearn basic human skills. Watching my infant often sparked thoughts and realizations like, 'Huh, so we did that when we were infants?' or 'Wow, he is completely vulnerable and depends on us for survival.' A few months later, the thoughts became more like, 'Wow, humans are such fast learners' and 'He understands gravity? Nobody taught him that!' My documentation of his milestones became a form of design research, if only specific to my own son. Could I use this documentation as an artifact of human instinct for behavior and learning? On Feb. 5th, he realized what a ball was. On Feb. 14th, nine days later, he recognized the same object and tried to reach for it. On March 4th, for the very first time, he laughed out loud when I waved the same ball in front of him.
It was fascinating to see that smiles and laughter were among my son's very first expressions. I started reading about babies' behavior and learned that laughter and tears are humans' earliest forms of communication (*2), offering an insight into how the brain works at a primitive stage.
Dr. Caspar Addyman is a scientist at the Babylab, one of the world's leading infant-research units. He said, "If you are trying to understand the psychology of humans, it makes sense to start with babies. Adults are far too complex. They either tell you what you want to hear or try to second-guess you." But if a baby does something, he concludes, "it's bound to be a genuine response." (*2)
Humans, with our superior brains, understand a lot of things in this world naturally. When babies interact with touch screens, we are amazed at how intuitively they use the device. But we forget that computer interfaces before touch screens were not intuitive but rather "interpretative" (*3).
VR: Create an environment of YES!
The relationship between people and their communication methods has come a long way in expressing different modes and concepts. Communication technologies range from the Internet and mobile phones to personal digital assistants (PDAs) (*4). The environments being created include "smart homes," "information oases," and virtual settings such as VR and AR. Yet while interfaces change their outfits over the years, the subject matter of our conversations remains the same. Phatic exchanges such as "How are you?" and "Did you have dinner?", or mundane tasks like buying fresh milk, are much the same as they have always been.
We are living in an exciting era where new communication technologies are being built with no history to reference. That is amazing for designers, since we are not conditioned by old paradigms that make us forget why things were made in the first place. In VR especially, there are opportunities to build a new world (and its interactions) from scratch. When a user first puts on a VR headset, the familiar physical laws might disappear: your balance is off, and your sense of orientation is gone. How can we bring back our baby instincts to navigate this foreign territory?
Research has shown that infants and toddlers absorb information faster when they are in a familiar, nurturing environment. When moving babies to a new environment, it is important to bring some of those familiarities with them, so they have a base from which to feel emotionally comfortable. According to Patricia K. Kuhl, a language-acquisition specialist, babies repeat certain behaviors until they are confident that they have mastered them (*5). Babies are active learners. They do not sit passively and take in information; they probe and test their way into knowledge. Treating the beginning of a VR experience as an opportunity to design repeating behaviors can shorten users' learning curve on the new platform. In January, Google published an article on how to create engaging VR experiences. Its first point, "#1: Make the viewer the protagonist, not just a spectator, in the VR experience," relates directly to the point I am making here: designing dynamic interaction is the key to helping users get used to new experiences.
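To make that idea concrete, here is a minimal sketch, in plain Python and not tied to any real VR SDK, of a "repeat until confident" onboarding loop: one familiar gesture is practiced with consistent feedback until the user has clearly mastered it, and only then does the main experience begin. Names such as wait_for_grab() and play_feedback() are hypothetical stand-ins for engine calls.

```python
# A "repeat until confident" onboarding loop, sketched as a plain simulation.
# In a real VR app, wait_for_grab() would come from the engine's input system.
import random

def wait_for_grab():
    """Stand-in for the engine reporting whether the user grabbed the target."""
    return random.random() > 0.4  # pretend the user succeeds about 60% of the time

def play_feedback(success):
    """Give the same gentle, consistent feedback every time."""
    print("Nice grab! Try it again." if success else "Almost - reach a little further.")

def onboard(required_successes=3, max_attempts=10):
    """Repeat one familiar gesture until the user has clearly mastered it."""
    successes = 0
    for _ in range(max_attempts):
        success = wait_for_grab()
        play_feedback(success)
        successes = successes + 1 if success else 0  # require consecutive successes
        if successes >= required_successes:
            print("Great - you're ready. Entering the main experience.")
            return True
    print("Let's keep practicing together.")  # never punish; stay in the safe space
    return False

if __name__ == "__main__":
    onboard()
```

The design choice worth noting is that the loop never punishes a miss; like a nurturing environment, it simply offers the same familiar interaction again until confidence is built.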
While it might seem like we are creating experiences without boundaries, it is important to think about their effects and outcomes. I asked my pediatrician why my one-year-old trips so frequently. He said it is because, in his mind, my son is already great at walking and capable of going fast, but his body cannot keep up with his brain. That was a moment of realization for me. With VR, where it is easy to make an experience feel as real as possible, how do we set boundaries, keeping it intuitive while preventing users from making dangerous mistakes in the real world? Will people think it is okay to jump off a cliff or shoot a stranger after fictional, digital experiences? Would doctors become desensitized to an individual who is undergoing a heart operation?
Voice-recognition A.I.: Building an open, universal-minded conversation vs. a single-minded one
When it comes to voice-recognition A.I., there is a choice between creating an open, globally minded conversation and a single-minded one. All people are born with phonetic sensibilities (*5). As infants, we can all differentiate one language from another even before we understand either of them. When creating a new conversation with voice A.I., it is crucial to apply this open-minded nature of infants and embrace different variables.
When my team at R/GA was working on an A.I. project, we found that the most frustrating pain point is when the A.I. cannot understand a user's accent. One interviewee from our research said, "It never really works for me (I guess I have a Japanese accent). Also, they never say my name right. I tried to train Siri how to pronounce my name, but it doesn't get it. It only works with common English names."
Recently, Backchannel published an article about this very issue. It describes how collecting data is expensive and cumbersome, which is why certain key demographics take priority; in the end, this leads to "a voice devoid of an identity and accent."
When we innately expect the other person to understand our background, it is hard to talk to A.I. as we would to another individual. That design thinking needs to be ingrained in the core programming of these machines so they can adapt to different accents and intonations. We need to start by giving computers the abilities that infants have, not the abilities that old scholars have.
When my family encountered Alexa for the very first time, I worried that she might confuse my 11-month-old son, mainly because of where the sound comes from. However, Alexa responds with a gentle light and sound that make my son smile and go along with the response; such cues represent happy human/machine interaction moments. Even with this in mind, more work needs to be done in the field of A.I., especially around human cues and natural language.
Generally, no user feels a conversation went well when the machine says it does not know the answer and then cuts off abruptly. The challenge for designers is how to create an engaging conversation loop instead of simply apologizing. Incorporating emotional intelligence (EQ) into A.I. should be the next challenge of our "Intelligent Age."
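As a rough illustration of what an engaging loop instead of "sorry" could look like, here is a minimal sketch in plain Python. It is not based on any real assistant SDK; the keyword matching and the canned suggestions are hypothetical stand-ins for real natural-language understanding. The point is only the shape of the fallback: when the machine does not understand, it re-engages with a clarifying question and concrete options rather than ending the conversation.

```python
# A toy fallback that keeps the conversation going instead of ending on
# "Sorry, I don't know." The intents and keyword matching are illustrative only.

def match_intent(utterance, known_intents):
    """Naive keyword match; a real system would use NLU and confidence scores."""
    for intent, keywords in known_intents.items():
        if any(word in utterance.lower() for word in keywords):
            return intent
    return None

def respond(utterance, known_intents):
    intent = match_intent(utterance, known_intents)
    if intent is not None:
        return f"Sure, let's {intent}."
    # Instead of an apology followed by a cutoff, re-engage with a
    # clarifying question and a concrete next step the user can take.
    options = ", ".join(known_intents)
    return (f"I didn't quite catch that. I can help with {options}. "
            "Which one sounds closest to what you meant?")

if __name__ == "__main__":
    intents = {
        "set a timer": ["timer", "minutes"],
        "play music": ["play", "music", "song"],
    }
    print(respond("put on some songs", intents))      # matched intent
    print(respond("mmnh buy fresh milk?", intents))   # fallback keeps talking
```

A real assistant would of course rely on confidence scores and learned models, but even this toy version never leaves the user at a dead end, which is the emotional part of the exchange.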
In my diary, I wrote down that my son gave a big kiss to "mong mong." "Mong mong" is the Korean phonetic sound of a dog barking. After fifteen months of living on this earth, he has identified a dog, can associate a Korean sound with it, and has built a relationship with an object that looks like a dog. Fifteen months can seem like dinosaur years in an age when three-second Instagram Stories can feel lengthy. I wonder if we just need to give everyone a break and understand that learning takes time for computers, too.
Reference *1: http://www.breastcrawl.org/science.shtml
Reference *2: http://www.independent.co.uk/hei-fi/entertainment/science-behind-a-babys-laugh-8225783.html
Reference *3: A Companion to Digital Literary Studies, edited by Ray Siemens and Susan Schreibman
Reference *4: Machines That Become Us: The Social Context of Personal Communication Technology, edited by James Everett Katz
Reference *5: http://ilabs.washington.edu/kuhl/pdf/Kuhl_2004.pdf
https://eclkc.ohs.acf.hhs.gov/hslc/tta-system/ehsnrc/cde/learning-environments/environment_nycu.htm
Reference *6: Andrew Garner, M.D., chair of the American Academy of Pediatrics Early Brain and Child Development Leadership Workgroup