Illich, Deschooling Society (1971)

This piece by Illich is useful for how it illustrates some of the consequences of institutional bureaucracy in relation to education, the benefits of open access to information, and the problems that accompany de-contextualized learning, wherein objects are removed from everyday use and brought into educational settings, separated from the contexts in which they tend to be used. And yet, even though I agree with this point, I also want to acknowledge that different teaching and learning contexts–including institutionalized education, depending on what specifically it looks like–come with varied affordances that oftentimes benefit some and not others.

Today, I am opting to provide a set of images that represent a deschooled learning situation. In addition to these examples, I think of on-the-job training and learning, as well as internship opportunities, as reflective of deschooled learning:

These images depict fairly simple one-on-one, one-plus-object, or two-parents-on-one modes of learning, and the motivations and contexts are absent from the images, but I’d like to encourage us to imagine multiple possibilities for each. For instance, how can we flip who’s doing the learning and who’s doing the teaching? Is there an underlying message about the right and wrong answer, or is it more open-ended? I suppose these things can be harder to represent in photos.

So maybe the question now is: what can we learn from these examples of deschooled learning, and how can they inform how we approach teaching and learning in our current positions?

Two by Laurel

The readings by Laurel, “The Six Elements and the Causal Relations Among Them” and “Star Raiders: Dramatic Interaction in a Small World,” draw on Aristotle’s Poetics to understand human-computer interaction as drama, using six qualitative elements: action, character, thought, language, melody (pattern), and spectacle (enactment).

This comparison of HCI to drama is probably most clearly visible in plot-driven video games, but my sense is that these elements are widely applicable to many moments of interaction–among people and between people and machines. For instance, I think about the scene of getting directions from a smartphone using Google Maps. If we draw some boundaries around that act, surely we can use the heuristic to gain insight into what that interaction looks like and perhaps how it can be improved. In other words, within this particular interaction, it may be helpful to analyze: the action being taken and what physical, embodied movements are involved; who the characters are and what their predispositions and traits might be; what internal thought processes take place for the human, and what information is being processed by the application; what language is being used and how effective that use is in terms of the characters and their thoughts; what patterns are involved; and what the spectacle is.
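To make that heuristic a little more concrete, here is a rough sketch in Python (entirely my own toy framing, not Laurel’s) of the six elements as a simple data structure, filled in for the Google Maps scenario:

```python
from dataclasses import dataclass

# Laurel's six qualitative elements, applied (loosely and hypothetically)
# to the interaction of getting directions from Google Maps.
@dataclass
class DramaticAnalysis:
    action: str     # the whole action being represented
    character: str  # the agents (human and computational) and their traits
    thought: str    # inferred reasoning or processing on both sides
    language: str   # the signs and symbols exchanged
    pattern: str    # melody/pattern: recurring structures in the exchange
    enactment: str  # spectacle: the sensory surface of the interaction

maps_interaction = DramaticAnalysis(
    action="navigating from home to an unfamiliar address",
    character="a hurried driver; a turn-by-turn navigation agent",
    thought="driver weighs routes; app recomputes ETA from traffic data",
    language="spoken prompts, street names, map symbols",
    pattern="the recurring prompt-confirm-turn cycle of guidance",
    enactment="the glowing map, the voice, the rerouting animation",
)

# Each field is a prompt for analysis, not a measurement.
for element, note in vars(maps_interaction).items():
    print(f"{element}: {note}")
```

The point is only that the six elements can structure a walkthrough of an interaction; the values here are placeholders for the kind of analysis described above.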

As someone in rhetoric and composition, my mind also goes in the direction of Kenneth Burke’s work on dramatism, which is a way of understanding rhetoric and communication more broadly in terms of drama. Then, I wonder, how does the dramatistic pentad (act, agent, agency, scene, purpose) work within the same situation? What becomes visible and what is covered over?

For one, I don’t think there is the same kind of set causal relation as exists among the elements featured here. Laurel says, “Each element is the formal cause of all those below it, and each element is the material cause of all those above it.” Somehow it is really hard for me to conceptualize how these elements always take place within a set order, mainly because I don’t really believe this is how interactions work–or is this not what Laurel is saying?

One of the prompts for today was: Which of the six elements do you think is most critical for human-computer interaction, and why? My sense is that the element that resonates most with me within human-computer interaction does not fit cleanly into any of the six: feeling. While this element is not listed in and of itself among the six, it might come up within action, character, or thought (though the latter seems to focus on feelings that are consciously and actively thought). However, I think this element is crucial enough that another useful heuristic might feature it front and center.

Viola, Will There Be Condominiums in Data Space?

To start, I enjoyed Viola’s “Will There Be Condominiums in Data Space?”, and it seems like something I’d want and need to read a couple more times to take in more fully. Today, I’m reading it rather quickly as I’m using Audacity to edit audio, which seems to be an interesting (yet not quite fitting) illustration of the section on Viola’s imagined future of digital composing, in which we “shift away from the temporal, piece-by-piece approach of constructing a program […] and towards a spacial, total-field approach of carving out potentially multiple programs […] We are proceeding from models of the eye and ear to models of thought processes and conceptual structures in the brain.” I wonder if a better example of this might be Web 2.0 and the separation of form and content. The possibility of models of thought processes and conceptual structures in the brain is intriguing but difficult to imagine, especially considering how substantially brain structures have purportedly changed in even just the past twenty years.

At any rate, some other themes that came up in this piece include memory systems, educational models (constructive/additive versus a more inquiry-based model), the relationship between the whole and its parts, representational models of reality (branching, matrix, “schizo”), the relationship between art and data space, and, oh, that porcupine. I can’t say I get what that story was meant to do. Is it meant to show some relationship between varying technology users? Their varied perspectives about technology and how it encroaches (or retracts) onto varied (but the same) worlds? Is it an example of a “whole” with distinguishable parts? Who’s the porcupine and who’s the driver? And what does it mean to have condominiums in data space? Someone help me out here.

Zac says, “Viola’s essay is enigmatic. One key to deciphering the argument is his reference to Indian/South Asian spirituality. Viola tells us that the visual image, the geometric diagram and the mantra are all equal outward expressions of the same underlying thing.

Given this framework, I suggest we also understand the parable of the porcupine and the expository sections of the essay as equal outward expressions of the same underlying thing. So: what is that thing?”

So, what I’m understanding this to say is that Viola is interrogating the boundaries and overlapping nature of the visual image, the geometric diagram, technological memory systems, sound, the story, the essay. Is it that each (visual image, geometric diagram, etc.) is a “mnemo-technic” that illustrates a relationality of “our individual existence”?

Bonus Question: Is data space a sacred space, a secular space, or something else altogether?

I believe the answer to this question may lie near the end of Viola’s essay:

“Applications of tools are only reflections of the users–chopsticks may be a simple eating utensil or a weapon, depending on who uses them.”

My sense is that data space is rendered sacred, secular, sterile, virile, productive, damaging, a reflection of humanity, and a rejection of humanity depending on who uses and interprets it, as that user exists within a particular cultural, historical, and political context. And then the answer also depends on how one defines “sacred” or “secular” or “profane” and what it means to be any or all of these things.

Evaluating Communication Media

  • McLuhan, M. (1964). “The Medium is the Message.” Understanding Media.
  • “Post something about the communication medium you think has had the biggest impact on our world and why.”

I’m having a hard time responding to the above prompt, in part because my answer depends on what “impact” means, and for whom (who is impacted?)–or maybe it is more my concern about what it could mean than what it actually means–in part because it depends on where and when and how the answerer (me!) is situated (and is this okay?), in part because it’s impossible to quantify in a holistic way which communication medium has had the most impact (and wouldn’t I hate to have the “wrong” answer?), and in part because there is the question of what counts as a “communication medium.” I know, it’s fun to ask academics seemingly simple questions. Anyway, this last question comes up for me because McLuhan discusses the electric light bulb as a communication medium, but one that people tend not to think of as such due to its lack of alphabetic textual “content.”


And yet, according to McLuhan, electric light, along with power, “eliminate time and space factors in human association exactly as do radio, telegraph, telephone, and TV.” It shapes what we are able to do and when (at night, when there is no natural light), and where (underground, where there is also no natural light).

And this makes me think of the clock, which makes measurable the very medium by which we judge something as having a big impact–particularly in terms of efficiency. So much of our day-to-day lives has been shaped by timekeeping technologies, including the 9-to-5 work week, when and how we sleep, when and how we eat meals, when and how we learn and go to school, as well as what we value–I’m thinking of the importance of being on time, and the very ability to be on time, in particular cultures versus the acceptability of being 5-10 minutes late in others. 🙂


McLuhan also talks about:

“electricity, that ended sequence by making things instant. With instant speed the causes of things began to emerge to awareness again, as they had not done with things in sequence and in concatenation accordingly. Instead of asking which came first, the chicken or the egg, it suddenly seemed that a chicken was an egg’s idea for getting more eggs”

Instantaneousness–“instant gratification”–is a key feature of the internet, computation, and social media, one that some have argued has led to significant shifts in the way that human beings who engage with these technologies read, think, and behave, including in terms of their expectations. One of my greatest fears is the idea of having to use dial-up in the super inter-connected world of today. Okay, I’m kidding, sort of–and I suppose it would depend on where I am and whether that place is very high or low tech–but I mean to point to how our temporal expectations have shifted significantly.

This question (and others’ blog posts) also reminds me of Wesch’s “The Machine is Us/ing Us,” which illustrates how Web 2.0 technologies, including social media, have changed the way we think, process information, love, and live, as well as Prensky’s work on “Digital Natives, Digital Immigrants,” which talks about how changing technologies have led to distinct shifts in the way people think, with implications for teaching and learning.

McLuhan also mentions the printing press,

Print created individualism and nationalism in the sixteenth century.

and money

Money has reorganized the sense life of peoples just because it is an extension of our sense lives.


And the excerpt ends where McLuhan includes a quote by Jung, to illustrate how “our human senses, of which all media are extensions […] also configure the awareness and experience of each one of us”:

Every Roman was surrounded by slaves. The slave and his psychology flooded ancient Italy, and every Roman became inwardly, and of course unwittingly, a slave. Because living constantly in the atmosphere of slaves, he became infected through the unconscious with their psychology. No one can shield himself from such an influence.

This final quote makes me think about historical narratives and the relationship between slavery, the industrial revolution, and the impact of slavery on the U.S. economy, as well as how subjective questions about “impact” are. Maybe that’s part of the point and even what makes it fun to talk about, but perhaps a bigger question might be: What are the implications of positioning particular technologies as more or less impactful than others?

See also: Haas, A. (2007). “Wampum as Hypertext.” SAIL.

Rhetoric of Predictions in History of Computing Technologies

Hi, folks. I haven’t updated for a while, as we had a snow day over here at Virginia Tech, and I spent last week in Tampa for the Association of Teachers of Technical Writing conference and the Conference on College Composition and Communication. I had a great time. Someone I know pointed out how different it is to go to these conferences as faculty as opposed to as a graduate student, and I find that to be the case for me.

Anyway, for the New Media Seminar this week, we’re reading and discussing Alan Kay and Adele Goldberg’s “Personal Dynamic Media,” and I am to share a “nugget” and/or app that fulfilled Kay and Goldberg’s predictions.

“What will happen if everyone had a DynaBook?”

But before that, I’ll say in general that one idea I’ve been noticing in the readings since the start of the seminar is the idea of opening up access to computing technologies, whether in terms of use or in terms of production, to “ordinary” users, which I think is pretty cool. I’m also curious about the tendency I’m seeing to highlight the idea of “predicting” or “foretelling” the future of computing technologies. Where does this tendency come from? What are the implications of this rhetorical act? And what do we gain from seeing how people imagine the future in these sorts of ways, especially from a perspective situated much later, sometimes after those predictions have largely come to fruition? Right now it seems like the main reason we even consider several of these essays notable today is that predictive, future-oriented quality.

A preliminary thought is to link the attraction to future predictions to the kind of willingness to imagine a utopic vision of a possible future, whether in terms of computing technologies or something else, that is necessary for creative and effective innovation.

Machines That Can Think and Learn and Feel and Do Other Stuff Real Good

This week we are reading Alan Turing’s “Computing Machinery and Intelligence” and thinking about “machines that can think and learn.” At the same time, I was clearing out my 1000 open browser tabs, bookmarking things I still wanted to get back to at some point, when I ran into “We Know How You Feel,” a New Yorker article about affective computing and the possibilities of teaching machines how to read and respond to emotions in ways that are meant to emulate human behavior. Reading these two texts together got me thinking about the tendency to anthropomorphize computers, such as when we ask, “Can machines think?” Maybe this is just meant to function rhetorically like clickbait, but I also wonder if and how it contributes to the narrative of computers/robots/machines “taking over.”

The original question, “Can machines think?” I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted… Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.

I don’t really mean to be dismissive of this idea (about machines taking over, that is)–I realize that machines have replaced several kinds of jobs once performed by humans, rendering skills that once made lives sustainable unmarketable, and that these shifts have real implications for large groups of people, their families, and the communities around them.

At the same time, it was curious to me that when I taught a course on feminisms and interaction design last semester, a topic that really seemed to interest students was this idea of how technologies might evolve to such an extent that they “take over,” rendering human beings useless and expendable. I believe we talked about Jeopardy’s Watson and robots that write news stories as examples of how computers continue to outperform humans in certain activities, and how they do so in ways that move beyond physical mechanics and simple calculations. A major concern was how these kinds of technological developments might continue to threaten the kinds of work that people will be able to do to survive.

While I sympathize with these real concerns, and while I understand that computers may one day surpass the abilities of human beings as a collective, what they are doing is still the result of programming and automation based on human ideas and actions, right? Isn’t this kind of Turing’s point, and the point of the discussion of these machines as learning machines? Will they really be able to do everything humans can do, to the point that humans become obsolete? I guess I’m not entirely convinced by cyber-apocalyptic narratives, but maybe I just haven’t read enough of them to imagine what this would look like.

Anyway, back to machines that feel. The New Yorker article discusses companies like Affectiva, which has collected visual data on human emotional responses that can be used to predict human behavior based on affect.

So, for example:

Affectiva is working with a Skype competitor, Oovoo, to integrate it into video calls. “People are doing more and more videoconferencing, but all this data is not captured in an analytic way,” she told me. Capturing analytics, it turns out, means using the software—say, during a business negotiation—to determine what the person on the other end of the call is not telling you. “The technology will say, ‘O.K., Mr. Whatever is showing signs of engagement—or he just smirked, and that means he was not persuaded.’”

What this means to me is that there is huge potential to revolutionize usability and methods for assessing user experience in ways that could be both amazing and scary. Amazing, in that it points toward a possible future where machines might one day think, feel, and do whatever it is humans can do, but more quickly and with better precision. Scary, in the sense that we tend to think of emotional responses as the most private and personal things we have, and the potential for them to be made public–to where we are constantly analyzed in even seemingly private activities like watching a movie–is pretty frightening, at least in terms of our current cultural values and expectations of what is private and of what emotions and affect mean–they’re supposed to be what makes us human. It seems these understandings may change in the near future.

The other concern I had with regards to these kinds of technologies–a concern that was touched on in the article–is that, in order for these machines to work, besides needing a huge collection of data indicating all possible emotions, there is a need to stabilize and universalize human behavior, embodiment, and affect to some extent. This can make humanists (like me?) uncomfortable because it hints at another step toward normativizing human behaviors in ways that have left certain groups out. For instance:

Like every company in this field, Affectiva relies on the work of Paul Ekman, a research psychologist who, beginning in the sixties, built a convincing body of evidence that there are at least six universal human emotions, expressed by everyone’s face identically, regardless of gender, age, or cultural upbringing.

And even with over 90% accuracy, there are still the 10% of folks whose affect might not fall in line with the computer’s standardized assessment of emotion. Are, for example, those with physical and mental disabilities (autism was mentioned in this article, and IMO represented problematically, as something that needed fixing), or those who have undergone facial cosmetic surgery, included in the data? And how can the computer account for relatively unique exceptions within larger patterns? My concern is that for these sorts of technologies to work, they are likely contingent on able-bodiedness and on normativizing human behavior in ways that may be harmful for those with disabilities and other groups, despite the statement that gender, age, and cultural upbringing don’t impact the results.
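As a back-of-the-envelope illustration (my numbers, not the article’s), even an impressively accurate classifier misreads a lot of people once it operates at scale:

```python
# Hypothetical arithmetic: how many users does a 90%-accurate emotion
# classifier misread at scale? Illustrative numbers only; the article
# does not report accuracy this way.
accuracy = 0.90
users = 1_000_000  # imagine a million analyzed video calls

misread = round(users * (1 - accuracy))
print(f"At {accuracy:.0%} accuracy, about {misread:,} of {users:,} "
      f"users are misread.")  # about 100,000 people
```

That residual 10% is not randomly distributed, which is exactly the worry about whose faces the training data represents.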

Specifically, I am thinking of my research on YouTube videos about East Asian double eyelid surgery and how people rationalize the decision to get the surgery through a lens of emotion: patients and surgeons say that they want to widen their eyes in order to appear more expressive, alert, friendly, and approachable. I also think of the corner lip lift, which is intended to create a sort of permanent smile for those who have pouty lips. What lies beneath statements like these is the idea that participants feel their faces–or eyes, or lips–are read in ways that don’t necessarily line up with who they actually are, how they actually feel, or at the very least how they want to be perceived. This also raises the question: is there a dominant paradigm of what it means to look friendly that certain, perhaps racialized, perhaps gendered bodies don’t adhere to? This would seem to be true if we even think of emotionalized stereotypes like the “angry black woman,” the “dragon lady,” or the “lotus flower.” So what do these bodies mean for technologies like Affectiva?

Next time, on Machines That Do Stuff Real Good, Machines That Can Make You Big…

Bush, As We May Think

For our second meeting of the New Media Seminar I’m participating in, we were assigned to read Vannevar Bush’s “As We May Think” and to blog about a meaningful nugget/passage.

Some things that stand out for me about the text are the ways in which Bush was concerned with several dimensions of technological development, including its relationship to professional practice, widened access to information, speed, size (“microphotography”!), cost, and ethical issues of sorts. It also made me think about how both machines and human beings seem to have become increasingly multifunctional, most likely as a result of that increased access to knowledge. I’m thinking about the strong DIY ethic visible in Pinterest and YouTube tutorials, for instance. I wonder what factors contributed to this direction in tech development.

Anyway, my nugget. I think there were several interesting points worth discussing, and I’m reminded of some other conversations that I may be able to incorporate. But, because I’m doing this at the very last minute, my response will be pretty brief, and will be mostly questions. I’ll start with the following, near the end of the text:

Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory.

What immediately stands out to me is the clear rhetoric of progress, from “man’s” “shady past” toward “objectivity,” a “logical conclusion” wherein the goal seems to be “complete” and “objective” analysis of “his” problems. This theme regarding objective logic appears throughout the text, and it stands out to me because of its cultural undertones. A second example:

Whenever logical processes of thought are employed–that is, whenever thought for a time runs along an accepted groove–there is an opportunity for the machine. Formal logic used to be a keen instrument in the hands of the teacher in his trying of students’ souls. It is readily possible to construct a machine which will manipulate premises in accordance with formal logic, simply by the clever use of relay circuits. Put a set of premises into such a device and turn the crank, and it will readily pass out conclusion after conclusion, all in accordance with logical law, and with no more slips than would be expected of a keyboard adding machine. (42)

Are there limitations to conceptualizing logic in this sort of way? That is, what complex realities become hidden when we abide by logics that consist of a limited set of reasonable outcomes?
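As an aside, the premise-cranking machine Bush describes is easy to sketch. Here is a toy version in Python (my own illustration of mechanical logic, not anything from Bush’s text), which “turns the crank” on a set of premises and implication rules, deriving every conclusion that follows by modus ponens:

```python
# A toy version of Bush's "turn the crank" logic machine: given premises
# and rules of the form (antecedent, consequent), mechanically derive
# every conclusion that follows. (My illustration, not Bush's design.)
def crank(premises, rules):
    """Repeatedly apply modus ponens until nothing new is derived."""
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in known and consequent not in known:
                known.add(consequent)
                changed = True
    return known

rules = [("rain", "wet streets"), ("wet streets", "slow traffic")]
print(crank({"rain"}, rules))
```

The machine happily passes out “conclusion after conclusion,” but only within the accepted groove: anything not encoded as a premise or rule simply does not exist for it, which is one way of putting the limitation asked about above.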

These questions might be illustrated through a discussion surrounding this quote:

It is a far cry from the abacus to the modern keyboard accounting machine. (42)

While this statement may seem readily apparent in certain ways, is it worthwhile to talk about what logics, values, contexts, and goals this statement is contingent upon?

To turn toward a different–but related–direction,

As the scientist of the future moves about the laboratory or the field, every time he looks at something worthy of the record, he trips the shutter and in it goes, without even an audible click. Is this all fantastic? The only fantastic thing about it is the idea of making as many pictures as would result from its use. (39)

What repercussions, if any, come with an almost obsessive attitude toward amassing knowledge? I’m imagining if it got to the point where we wanted to keep track of everything within our experience–everything throughout our day-to-day professional and personal lives, including our sleep patterns–which I suppose some people already do with Fitbits and such. What, if anything, is lost?

Okay, so next time (if we end up having to blog and respond to something again) I will get this done earlier. I said it on the internet, so now it will magically happen, right?