Machines That Can Think and Learn and Feel and Do Other Stuff Real Good

This week we are reading Alan Turing’s “Computing Machinery and Intelligence” and thinking about “machines that can think and learn.” At the same time, I was clearing out my 1000 open browser tabs, bookmarking things I still wanted to get back to at some point, when I ran into “We Know How You Feel,” a New Yorker article about affective computing and the possibilities of teaching machines how to read and respond to emotions in ways that are meant to emulate human behavior. Reading these two texts together got me thinking about the tendency to anthropomorphize computers, such as when we ask, “Can machines think?” Maybe this is just meant to function rhetorically, like clickbait, but I also wonder if and how it contributes to the narrative of computers/robots/machines “taking over.”

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted… Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.

I don’t really mean to be dismissive of this idea (about machines taking over, that is). I realize that machines have replaced several kinds of jobs once performed by humans, rendering skills that once made lives sustainable unmarketable, and that these shifts have real implications for large groups of people, their families, and the communities around them. At the same time, it was curious to me that when I taught a course on feminisms and interaction design last semester, a topic that really seemed to interest students was this idea of how technologies might evolve to such an extent that they “take over,” rendering human beings useless and expendable. I believe we talked about IBM’s Jeopardy!-winning Watson and robots that write news stories as examples of how computers continue to outperform humans in certain activities, and how they do so in ways that move beyond physical mechanics and simple calculations. A major concern was how these kinds of technological developments might continue to threaten the kinds of work people will be able to do to survive. Still, while I sympathize with these real concerns, and while I understand that computers may one day surpass the abilities of human beings as a collective, what they are doing is still the result of programming and automation based on human ideas and actions, right? Isn’t this kind of Turing’s point, and the point of the discussion of these machines as learning machines? Will they really be able to do everything humans can do, to the point that humans become obsolete? I guess I’m not entirely convinced by cyber-apocalyptic narratives, but maybe I just haven’t read enough of them to imagine what this would look like.

Anyway, back to machines that feel. The New Yorker article discusses companies like Affectiva, which has collected visual data on human emotional responses, data that can be used to predict human behavior based on affect.

So, for example:

Affectiva is working with a Skype competitor, Oovoo, to integrate it into video calls. “People are doing more and more videoconferencing, but all this data is not captured in an analytic way,” she told me. Capturing analytics, it turns out, means using the software—say, during a business negotiation—to determine what the person on the other end of the call is not telling you. “The technology will say, ‘O.K., Mr. Whatever is showing signs of engagement—or he just smirked, and that means he was not persuaded.’”

What this means to me is that there is huge potential to revolutionize usability and methods for assessing user experience, in ways that could be both amazing and scary. Amazing in that it points to a possible future where machines might one day be able to think, feel, and do whatever it is humans can do, but more quickly and with better precision. Scary in that we tend to think of emotional responses as the most private and personal things we have, and the potential for them to be made public, to where we are constantly analyzed even in seemingly private activities like watching a movie, is pretty frightening, at least in terms of our current cultural values and our expectations of what is private and what emotions and affect mean. Affect, after all, is supposed to be what makes us human. It seems these understandings may change in the near future.

The other concern I had with these kinds of technologies, one that was touched on in the article, is that in order for these machines to work, besides needing a huge collection of data covering all possible emotions, there is a need to stabilize and universalize human behavior, embodiment, and affect to some extent. That can make humanists (like me?) uncomfortable, because it hints at another step toward normativizing human behaviors in ways that have historically left certain groups out. For instance,

Like every company in this field, Affectiva relies on the work of Paul Ekman, a research psychologist who, beginning in the sixties, built a convincing body of evidence that there are at least six universal human emotions, expressed by everyone’s face identically, regardless of gender, age, or cultural upbringing.

And even with over 90% accuracy, there remain the roughly 10% of folks whose affect might not fall in line with the computer’s standardized assessment of emotion. Are, for example, people with physical and mental disabilities (autism was mentioned in this article, and IMO represented problematically as something that needed fixing), or people who have undergone facial cosmetic surgery, included in the data? And how can the computer account for relatively unique exceptions within larger patterns? My concern is that for these sorts of technologies to work, they may well be contingent on able-bodiedness, normativizing human behavior in ways that may be harmful for those with disabilities and other groups, despite the statement that gender, age, and cultural upbringing don’t impact the results.
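To put a toy number on that worry (these figures are invented for illustration, not drawn from the article or from Affectiva): an overall accuracy rate says nothing about how the errors are distributed, so a classifier can hit 90% overall while failing most of the time on a smaller group whose expressions the training data underrepresents.

```python
# Toy illustration with invented numbers (not Affectiva's data):
# a headline "90% accurate" classifier can conceal much worse
# performance for a group underrepresented in its training data.

majority_n, majority_correct = 900, 855   # 95% within-group accuracy
minority_n, minority_correct = 100, 45    # 45% within-group accuracy

overall = (majority_correct + minority_correct) / (majority_n + minority_n)
print(f"Overall accuracy:        {overall:.0%}")                        # 90%
print(f"Minority-group accuracy: {minority_correct / minority_n:.0%}")  # 45%
```

The point is purely arithmetical: “over 90% accuracy” is entirely compatible with a system that reliably misreads exactly the bodies I’m worried about here.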

Specifically, I am thinking of my research on YouTube videos about East Asian double eyelid surgery and how people rationalize the decision to get the surgery through a lens of emotion: patients and surgeons say that they want to widen their eyes in order to appear more expressive, alert, friendly, and approachable. I also think of the corner lip lift, which is intended to create a sort of permanent smile for those who have pouty lips. What lies beneath statements like these is the idea that participants feel their faces, or eyes, or lips, are read in ways that don’t necessarily line up with who they actually are, how they actually feel, or at the very least how they want to be perceived. This also raises the question of whether there is a dominant paradigm of what it means to look friendly that certain bodies, perhaps racialized, perhaps gendered, don’t adhere to. This would seem to be true if we even think of emotionalized stereotypes like the “angry black woman,” the “dragon lady,” or the “lotus flower.” So what do these bodies mean for technologies like Affectiva?

Next time, on Machines That Do Stuff Real Good, Machines That Can Make You Big…

One Comment

  1. Thanks so much for pushing the implications of what we mean by “thinking” and “humanity” in ways that really resonate with your own research and contemporary concerns. There’s always this tension between categories and concepts we assume to be “universal” and the continuum of experience and expression that complicates those concepts. Your research on elective double-eyelid surgery really brings that tension out. Thanks! Can’t wait to read your next post!
