Apr. 2nd, 2012

rfmcdonald: (photo)
Woman disembarking from the 29 Dufferin at Eglinton West.

IMG_0404.JPG
rfmcdonald: (Default)
On Facebook a week ago I'd linked to Hunger Games Tweets, a Tumblr blog that collected screenshots of the racist statements made by fans of The Hunger Games who were shocked and angered that major characters--in line with their explicit descriptions in the book--were played by African-American actors. Anna Holmes' New Yorker article goes into more detail about the genesis of the blog, created by a 29-year-old Canadian man of Caribbean descent who was stunned by the racism of so many of the reactions to the casting.

In retrospect, it’s easy to see why Hunger Games Tweets took off: the project is a potent mix of pop-culture criticism, social-media sharing, provocative statements, and public shaming. But more important, and no doubt more disturbing, is what Adam’s time line of ignorant tweets—what he calls “the repository of death”—says about a certain generation’s failure of imagination. (A look at the tweeters’ profile pictures suggests that most of the missives were written by people in their teens and early twenties. Jezebel reported in a postscript that most of the people quoted on Hunger Games Tweets have since taken down their accounts or made them private.)

In addition to offering object lessons in bad reading comprehension, Hunger Games Tweets—there are now more than two hundred up on the blog—illuminated long-standing racial biases and anxieties. The a-hundred-and-forty-character-long outbursts were microcosms of the ways in which the humanity of minorities is often denied and thwarted, and they underscored how infuriatingly conditional empathy can be. (“Kk call me racist but when I found out rue was black her death wasn’t as sad,” wrote @JashperParas, who amended his tweet with the hashtag #ihatemyself.) They also beg the question: If the stories we tell ourselves about the future, however disturbing, don’t include black people; if readers of “The Hunger Games” are so blind as to skip over the author’s specific details and themes of appearance, race, and class, then what does it say about the stories we tell ourselves regarding the present?

Adam says that the pivotal moment in the evolution of Hunger Games Tweets came on or around March 23rd, after he posted a tweet by someone named Alana Paul, a petite brunette who went by the handle @sw4q. Alana’s tweet was not the most offensive or nakedly racist of the bunch (that award could go to Cliff Kigar, who dropped the N-bomb, or to @GagasAlexander, who complained of “some ugly little girl with nappy…hair.”) but perhaps the most telling. “Awkward moment when Rue is some black girl and not the little blonde innocent girl you picture,” she wrote. She cc’ed a friend on the tweet, @EganMcCoy.

“That tweet was very telling, in terms of a mentality that is probably very widespread,” says Adam, speaking softly from his office high above Toronto’s downtown financial district. He doesn’t sound angry, but he also isn’t amused. The phrases “some black girl” and “little blonde innocent girl” are ringing in my head as he talks, as are thoughts about how the heroes in our imaginations are white until proven otherwise, a variation on the principle of innocent until proven guilty that, for so many minorities, is routinely upended.

Adam tells me that, on the post featuring a screenshot of Alana’s tweet, he added, “Remember that word innocent? This is why Trayvon Martin is dead.” As he says it, I am thinking the same thing: of our culture’s association of whiteness with innocence, of a child described without an accompanying adjective, of a child rendered insignificant and therefore invisible because of his or her particular shade of skin. “I am invisible, understand, simply because people refuse to see me,” explains the protagonist in another famous work of fiction, Ralph Ellison’s “Invisible Man,” which was published sixty years ago this month. “Invisible” can mean unseen, but just as often it speaks to others’ inability to see beyond something, or someone. The renaming of Rue as “some black girl” is a version of this, as is the pursuit and murder of the seventeen-year-old Martin, who, by some accounts, was shot dead by the self-professed neighborhood watchman of an Orlando-area community because all George Zimmerman could see was that he was young, male, and black.
rfmcdonald: (Default)
Savage Minds' Rex reviews The Captains, a 2011 documentary film in which William Shatner, the actor most famous for his portrayal of Captain James T. Kirk, interviews the five other actors who have played captains in the Star Trek universe. His review of the film, informed by his own knowledge of ethnographic practice, makes me curious as to whether I should actually try to find a copy.

Things get interesting quickly because it becomes obvious that the subject of the documentary is not the interviewees but the interviewer: Shatner's real intention is clearly to make a documentary about himself and the long road he's trod in life, and particularly to let the entire world know that he was once a classical thespian in the mould of Olivier and Gielgud. The other major theme is how ennobled and wise he has become from being forced to carry the entire weight of the Star Trek franchise on his back across the course of his career.

As a result the show focuses prominently on the fact that the other captains also started out in theater, mostly so Shatner can tell them about his time treading the boards. He asks them how Star Trek has changed them, so he can tell them how it has changed him. He asks them their views on life after death and the nature of infinity so that he can brood over his inevitable mortality. It is, in short, a clinic on how not to interview people, with special focus on the preoccupied and narcissistic interviewer. Absolutely fascinating to watch.

Actually, at times, the movie is almost unwatchable — most notably when Shatner asks Kate Mulgrew how women can realistically expect to be considered for leadership positions given the fact that they menstruate. But a lot of the time Shatner gets it right: his interviewees are seasoned respondents, indeed they are people whose lives importantly revolve around talking over and over again about their experience on Star Trek. As a result, it is very easy for them to slip into well-established stories and self-narrations. But Shatner doesn't give in, 'probing' (as we say in the business) for real answers in a way that is boorish but often gets results.

Normally, of course, you can't expect to get much fieldwork done when you ask blunt questions about people's divorces or act like a raging misogynist. But it is the wider psychodrama of these interviews that is so interesting: clearly, each of the people interviewed pretty much had no choice but to participate. I'm not sure why, but I have this strange sense that in the world of Trek, when Bill wants to make a documentary about the captains, you pretty much have to talk to him. As a result, the interviews have a strong flavor of captive respondents doing their best to contain the interviewer, knowing that their throwaway 90-minute meeting will eventually appear on the big screen and, like what they had for breakfast, be canonized in the Trekverse forever. Talk about prolepsis.

And contain him they do, largely because each of the people being interviewed is obviously amazing. Especially — and I don't mean to be cruel here, but it's true — especially when compared to Shatner. I had never watched Voyager before, but I was simply amazed by Kate Mulgrew's charisma, articulateness, and intelligence as she attempts to deal with Shatner at what is probably his worst. Although perhaps that award goes to the interview with Avery Brooks, who when not being a Starfleet officer is apparently a combination of Miles Davis, Paul Robeson, and Wittgenstein. Brooks is so gnomic that it is difficult to say, but he appears to be a total genius and also the only respondent who really seems to be trying to teach Shatner, to draw him out of himself. But what we get instead is a bizarre improvised jazz crooning session between the two of them reminiscent of the beatnik scenes that appeared in sixties surf films.


What say you all?
rfmcdonald: (Default)
Matt Thompson, at Savage Minds, reintroduced me to the singular animated film Sita Sings the Blues. Thompson summarizes the film as "an incomparably unique animated feature that combines ancient Hindu mythology, a 1920s blues singer, and one artist's failed marriage to tell the story of every woman who lets a man walk all over her".



I prefer to think of Sita Sings the Blues as the product of play, in the sense of being an uninhibited comparison/contrasting of different sorts of narratives in different genres from different time periods, Ramayana with blues music with globalization-era autobiography.

The story unfolds in multiple layers, each taking place at divergent moments in history and represented with its own animated style. We begin in present-day San Francisco, portrayed here in squigglevision, with the couple, Nina and Dave, in domestic bliss. Dave's sudden departure for a new job in India foreshadows the impending end of their relationship. Paley juxtaposes this with the epic myth of Sita and Rama, presented as gouache paintings come alive. Interrupting or narrating the story is a third form, a trio of shadow puppets commenting on the myth. These characters exist out of time. Finally, the signature sequences are done with computer animation as a cartoonish Rama and Sita act out their story with Sita singing the words of Annette Hanshaw's blues. Although visually set in the myth, the audience is experiencing creative expressions from early-twentieth-century America and is encouraged to note the similarities between the two.


The film has a complicated history. Although released in 2008, issues over copyright for its 1920s songs prevented a conventional DVD release--the film circulated instead as a Creative Commons production--until recently, while some critics call the narrative a colonial-style appropriation of Hindu narrative. (Me, I think that copyright laws shouldn't inhibit this sort of creative production, and I don't think that the story dishonours or seizes unfairly upon the narratives of Hindu religions.)

I give it a 9 out of 10. You?
rfmcdonald: (cats)
Over at Une heure de peine, Denis Colombi has a fantastic post up: "Eléments pour une sociologie du lolcat", or in English translation, "Elements for a sociology of Lolcats". Lolcats civilize us.

We have long known it: the Internet has not benefited students nearly so much as it has cats. Whether in photos or in videos, with stuff on their heads or looking like Hitler, taken seriously or ironically, no other species has taken such advantage of the digital world. This strange elective affinity between cats and the Internet has not yet received the attention it deserves from the social sciences: more than ever, interdisciplinary work is needed to shed light on this major transformation of our societies. Without claiming in any way to fully clarify these questions, let me outline a few possibilities here.

[. . .]

The history of cats has been an eventful one, to say the least. It is tempting to draw parallels between the status they have acquired today and that which they once possessed in the Egypt of the Pharaohs: treated like gods, revered, admired. The lolcat could be interpreted as a continuation of mummification, a way to treat the cat as its owner's equal by giving it a place and a life on the web, the equivalent of the afterlife. [. . .]

However, this interpretation could be wrong. A reading of Norbert Elias suggests another, less pleasant but deeper interpretation. Here is what he wrote in The History of Manners:

In the sixteenth century, one of the popular festivities of St. John's Day consisted of burning alive one or two dozen cats. [...]

This spectacle was certainly no worse than the execution of heretics by fire, or tortures and killings of all kinds. What makes it particularly distasteful is that it embodies, in a direct and unadulterated way, the pleasure some people take in tormenting living beings without any rational excuse. [...] Many things that once aroused reflexes of pleasure have become objects of displeasure. In both cases, we are not dealing exclusively with individual sensations. Burning cats on St. John's Day was a social institution, in the same way as today's boxing matches or horse races. In both cases, the pleasures organized by society are the epitome of its emotional standards [...]. Today we treat as "abnormal" a person who would seek to satisfy his desire for amusement by burning cats alive, because the normal conditioning of our stage of civilization has replaced the pleasure of the sight of such acts with a fear instilled in the form of self-restraint. (Norbert Elias, The Civilizing Process, 1939)


The suffering of the cat was used, once upon a time, in popular festivities: the pleasure it yielded was not, as Elias indicates, proof of some psychological deviance, but rather a collective phenomenon, in the same way that watching people joyfully punch each other in the face in a ring or on a movie screen is a shared and "normal" experience in our societies.

"The pleasures organized by society are the epitome of its emotional standards": this is the lesson that can be learned from Elias. Laughing at lolcats is not an individual experience but a taught one: it tells us something about the degree of civilization at which we have arrived.
[. . .]

And what of our cats in all this? The lolcat reflects a new stage in the progress of civilization, a new form of habitus: a total submission to the cute, the kawaii. Our habitus no longer supports even the suggestion of hurting something cute.


Funny, but I think he has a point. And you?
rfmcdonald: (Default)
Liza Gross at PLoS Science Blogs has an interview with leading primatologist Frans de Waal, "Should Chimpanzees Have Moral Standing? An Interview with Frans de Waal". There, de Waal argues that the similarities in nature and capacity between human beings and chimpanzees are such that, from an ethical standpoint, experimentation on chimps should be limited to the sorts of experimentation that would be ethical on human beings. This goes further than a recent report from the United States' Institute of Medicine of the National Academies, which allowed for the possibility of experimenting on chimpanzees to develop a hepatitis C vaccine.

Gross: What would you say to those who argue that there are huge gaps in cognition between monkeys and apes and humans?

De Waal: Over the years the dividing line between humans, certainly between humans and the apes, has sort of become fuzzy under the influence of field work, such as the work by Jane Goodall, Toshisada Nishida, and others, and under the influence of experimental work on cognition, which has shown all sorts of capacities that we had not suspected in the apes.

Also, neuroscience has not really helped maintain the dividing line because the brain of a human doesn’t contain any parts that the brain of an ape doesn’t have. The human brain is much bigger than, let’s say, the chimpanzee brain. It’s three times bigger. But there’s nothing in there as far as we can tell that is not in a chimpanzee brain. At the microscopic level there are a few differences and they’re probably interesting, but you would think if humans are so dramatically different, as different as the philosophers have often assumed, that you would find something in the human brain that is absolutely unique and that you would say, “Well, there’s a part there that no one else has,” but we have never found it.

Gross: What are some of the seminal experiments that revealed similarities in cognitive or behavioral traits between apes and humans, suggesting we’re not in fact unique, as many like to think?

De Waal: There are many. For example, tool use used to be considered uniquely human. And then when it was found in captivity by Köhler, this is in the 1920s, people would say, “Well, but at least in the wild they never do it.” And then it was found in the wild, and then they would say, “Well, at least they don’t make tools.” And then it was found that they actually also make tools.

So tool use was one of those dividing lines. Mirror self-recognition is a key experiment that was first conducted on the apes. The language experiments, even though we now doubt what the apes do is actually what we would call “language,” they certainly put a dent in that whole claim that symbolic communication is uniquely human.

My own studies on, let’s call it “politics,” and reconciliation behavior and pro-social behavior have put a dent in things. And so I think over the years every postulate of difference between humans and apes has been at least questioned, if not knocked over. As a result, we are now in a situation that most of the differences are considered gradual rather than qualitative.

And the same is true, let’s say, between a chimp and a monkey. There are many differences between chimps and monkeys in cognitive capacities, but we consider them mostly gradual differences.

The more we look at it, even if you take the difference between, let’s say, a human and a snake or a fish, yes, between those species the differences are very radical and huge, but even these species rely on some of the learning processes and reactions that we also know of in humans.

[. . .]

Gross: What in your view is the most compelling reason to stop invasive research on chimpanzees?

De Waal: The most compelling reason would be an ethical one. I myself have never done any invasive studies in chimps for exactly that reason. I don’t want to do that kind of thing on the chimpanzee because they are so mentally and psychologically close to us. Most people of my generation and younger who work with this species share this feeling. It’s almost like you’re working with humans, you know, they are very closely related to us.

It’s very easy to extend the moral qualms we would have with experiments on humans to chimpanzees. It’s much easier to extend them to chimpanzees than to, let’s say, rats or mice which are so much more distant from us.

Gross: What criteria should we use to decide what type of research on chimpanzees would be morally acceptable?

De Waal: I think we should keep doing non-invasive studies on chimpanzees, such as behavioral studies or comparative genomics, maybe non-invasive neuroscience. It’s hard to do the same imaging studies as we do on humans at the moment, but it’s going to happen, I think, one day.

For me, non-invasive would be defined as research that I would not mind doing on a human. And it does require a different mindset at NIH and maybe other funding agencies because sometimes if you submit proposals to them that include chimpanzees, they still will argue, “Well, you’re using animals, why don’t you go into the brain and manipulate it this way or that to enhance your study?”

The science community needs to change that mindset and treat chimpanzee studies basically the way they treat human studies. There’s a lot of things we cannot do on humans, and that we will not do on humans, and that will be the situation for chimpanzee research, I think, where we say, “Well, we can do all the same things that we do on humans, but that’s about it.”

Gross: In your commentary, you point out that the United States shares the distinction with Gabon of being the only nations in the world to hold chimpanzees in biomedical facilities. That’s surprising.

De Waal: The movement to remove chimpanzees out of research laboratories started to get teeth about ten years ago. The movement existed probably earlier but at least ten years ago certain countries like Japan and the Netherlands had chimpanzees in labs and said they stopped this kind of research for ethical reasons, it was very explicitly for ethical reasons.

And I think the U.S. is going to join the other countries, maybe not today, maybe not tomorrow, but it will happen because the whole trajectory – and that’s what’s pointed out in the IOM report – is in this direction. And my argument is why not get ahead of that trajectory, and why not do it now rather than wait a couple of years.


De Waal argues the case at greater length in a commentary elsewhere on the PLoS site.
rfmcdonald: (Default)
Liam Heneghan's 3 Quarks Daily essay takes a look at the connection between urban ecologies and urban societies. It's a long essay, worth considering in full; some representative samples are below.

Urban ecology, the environmental sciences' youngest and most rambunctious cousin, is in a position to influence the design of the cities of the future. Its clout comes from its willingness to think big, to think about the ecology of entire cities as if they were just any other ecosystem. Urban ecologists call this big-picture view the "ecology of the city".

From this disciplinary perspective, Chicago is just another savannah, one where admittedly the commonest species is the human animal.

However, by taking this bird's-eye view of cities, is urban ecology losing sight of the bird-on-the-ground? I mean this quite literally. Is urban ecology losing its roots in natural history? Will the successful cultivation of relationships with decision makers, municipal authorities, city planners and other governmental powers-that-be come at the expense of urban ecologists' knowledge about birds, wildlife, beetles and the other creeping things inhabiting the city?

[. . .]

Urban ecology is not the first discipline to encounter the tensions accompanying distinctions between the bird's-eye view and the bird-on-the-ground view of the city. An instructive example is found in the work of Michel deCerteau (1925-1986), who makes of this tension a theory of the everyday interactions of people who both conform to and resist the strictures of the culture to which they belong.

[. . .]

An influential chapter in deCerteau’s book The Practice of Everyday Life is aptly entitled Walking in the City. In it deCerteau illustrated his broader thesis concerning the differences between tactics and strategies. Strategies are concerned with “force-relationships” that can be exercised when an entity can be separated from an environment. A city, a proprietor, a scientific institution serve as deCerteau’s examples here. Each can be held up and inspected as separate analyzable units, each has its own distinct place – its headquarters, or at the very least it occupies lines on a map. Tactics, on the other hand, are not so easily localized. Tactics are usually deployed on the sly, “poached” to use deCerteau’s term, on someone else’s territory.

A simple way of understanding what deCerteau is arguing is to contrast those individuals, institutions, or governments who have grand conceptions of the city with the pedestrians, jaywalkers, and flâneurs who make their own plans and take up the business of living in the city in ways that are simultaneously constrained by and resistant to these grand designs. He dramatizes the distinction by opening the chapter's narrative from his perch on the 110th floor of the World Trade Center. From there one gets a bird's-eye view, indeed a planner's view, of the entire city. Peering down at the insect-like pedestrians swarming beneath him, he sees in their movement a type of pedestrian grammar.

The city may be produced by those with visions of the city, but it is consumed in a creative, one might say productive way, by those who walk in the city.

All this rarefied talk becomes concrete when one thinks about the everyday practices of pedestrians treading the city streets. The act of walking becomes strange, even takes on a slight revolutionary tinge, when one recalls how the manners of pedestrians can cut across the designs of city managers, proprietors, planners, and other strategic officers.

[. . .]
Urban ecology, which only emerged as a distinct subdiscipline of ecology in the 1970s, has made a great deal of use of the in and of distinction. The distinction is regarded as a significant conceptual leap forward. It places uni-disciplinary, small-scale ecological studies on one side, and multidisciplinary, multiscalar studies, especially those that examine the human and non-human aspects of nature simultaneously, on the other.

[. . .]

A study of the physical environment, the soil, or the biota of a city or a neighborhood would be considered ecology "in" the city. These studies can be aggregated to allow generalities to emerge. Cities tend, for instance, to have their own distinctive climates. Rain falls more frequently in cities than in the hinterlands. City temperatures tend to increase as population grows, up to a certain limit at least. These climatic differences have, in turn, implications for vegetation growing in the city. Spring comes earlier in urban areas. Decomposition of dead organic matter occurs a little faster. Tree cover changes as cities develop (decreasing in forested areas, increasing in desert areas). Urban vegetation is weedier, with more non-natives, but diversity can be high, since plant diversity often follows the money: the richer the human population, the lusher the vegetation. City mammals tend to be moderately sized carnivores. All the above insights emerge from within the "ecology in the city" paradigm.

"Ecology of the city" takes an explicitly systems view of things. By system here is meant a set of entities that interact to make a connected whole. In what manner do the elements of the city, the human and non-human aspects of nature, interact to contribute to an emergent whole city? Among the examples that Pickett and his colleagues give are studies of the amount of pollutants or carbon taken up ("sequestered" being the $100 term preferred by ecologists) by all the trees in Chicago. Studies of the flow of crucial nutrients like nitrogen (a key element for the growth of vegetation, but also a contributor to the fouling of water bodies) have been done on the scale of entire cities. For instance, fascinating whole-system evaluations of nutrient flow are conducted as part of the Baltimore Ecosystem Study, where Pickett is a project leader. The resource accounting tool of "ecological footprinting", developed by William Rees and Mathis Wackernagel at the University of British Columbia in Vancouver, Canada, provides another example. Footprinting is not only a way to make us feel miserable about our personal environmental impact; it can be used to make an entire metropolitan area hang its head in shame. A simple back-of-the-envelope calculation reveals that the footprint of Chicago is larger than the state in which it is located!

In another influential early review, Nancy Grimm, a professor at Arizona State University and a project leader at the Central Arizona-Phoenix Long Term Ecological Research project, and her colleagues also utilize the in and of distinction, and they identify similar systems-oriented hallmarks of the latter type of study. In particular, they call for the integration of social science approaches with more traditional approaches to ecology, and they illustrate what this looks like with a series of increasingly sophisticated conceptual models revealing the interaction of physical, ecological, and social variables. They conclude that without insight into the integration of the human and the ecological perspectives at local and global scales, urban ecology will be less effective in guiding public policy and management.
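
The essay's back-of-the-envelope footprint claim is easy to reproduce. A minimal sketch, using round illustrative values I've assumed (metro population, per-capita footprint, state area), not figures from the essay itself:

```python
# Rough check of the claim that Chicago's ecological footprint
# exceeds the area of Illinois. All inputs are assumed round values.

metro_population = 9.5e6       # Chicago metropolitan area residents (assumed)
footprint_per_person_ha = 7.0  # global hectares per US resident (assumed, rough)
illinois_area_km2 = 150_000    # approximate area of Illinois

# 1 hectare = 0.01 km^2, so total footprint in km^2:
footprint_km2 = metro_population * footprint_per_person_ha * 0.01

print(f"estimated footprint: {footprint_km2:,.0f} km^2")
print(f"ratio to Illinois:   {footprint_km2 / illinois_area_km2:.1f}x")
```

With these inputs the metropolitan footprint comes out several times the area of the state, so the essay's conclusion holds even if the individual numbers are off by a fair margin.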
rfmcdonald: (Default)
Neil deGrasse Tyson, the man who has the best claim to being the living heir to Carl Sagan as an astronomer and a popularizer of science, has an opinion piece up on Wired Science, "We Can Survive Killer Asteroids — But It Won’t Be Easy".

I'm with him on everything, until the last paragraph, reproduced below.

Every few decades, on average, house-sized impactors collide with Earth. Typically they explode in the atmosphere, leaving no trace of a crater. Once in about a hundred million years, though, Earth is visited by an impactor capable of annihilating all life-forms bigger than a carry-on suitcase.

One killer asteroid we’ve been monitoring is Apophis, which is large enough to fill the Rose Bowl. On Friday the 13th, April 2029, it will dip below the altitude of our communication satellites. If its trajectory on that day passes within a narrow range of altitudes called the “keyhole,” then the influence of Earth’s gravity on its orbit will guarantee that seven years later, in 2036, on its next trip around the Sun, the asteroid will hit Earth directly, likely slamming into the Pacific Ocean. The tsunami it creates will devastate all the coastlines of the Pacific Rim. If Apophis misses the keyhole in 2029, we’ll have nothing to worry about in 2036.

A more recent discovery, half the size of Apophis, which is expected to pass Earth at a distance of a million miles in 2023 and ten million miles in 2028, has been stirring up the scaremongers but rates only a 1 on the 1-10 scale of impact hazards. Unscarily named 2011 AG5, it will become much more visible and trackable during 2013. Earth's gravity could conceivably convince it to collide with us in 2040, but NASA deems that a remote chance.

Some people would like to blow potentially hazardous rocks out of the sky with a nuclear bomb. Others would deploy a radiation-intensive neutron bomb (the Cold War-era bomb that kills people but leaves buildings intact) to induce a recoil and alter the asteroid's orbit. A kinder, gentler approach would be to nudge it into a different orbit with slow but steady rockets that have somehow been attached to one side — or with a solar sail, which harnesses the pressure of sunlight for its propulsion.

The odds-on favorite solution, however, is the gravitational tractor. This involves parking a probe in space near the killer asteroid. As their mutual gravity draws the probe toward the asteroid, an array of retro rockets fires to hold the probe in place, so that the asteroid instead drifts slowly toward the probe and off its collision course with Earth.

Saving the planet requires commitment. First we have to catalogue every object whose orbit intersects Earth’s, then task our computers with carrying out the calculations necessary to predict a catastrophic collision hundreds or thousands of orbits into the future. Meanwhile, space missions would have to determine in great detail the structure and chemical composition of killer comets and asteroids.

If humans one day become extinct from a catastrophic collision, we would be the laughingstock of aliens in the galaxy, for having a large brain and a space program, yet meeting the same fate as the pea-brained, space-program-less dinosaurs that came before us.


Does the last paragraph hint at Tyson's boosterism for manned space travel as a way to jump-start growth of all kinds on Earth, irrespective of the evidence suggesting that it's inordinately expensive and dangerous and generally a waste of resources? Or am I reading too much into it?
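
As a postscript: the gravitational tractor Tyson describes is easy to sanity-check with a toy calculation. In this minimal sketch the probe mass, hover distance, and mission duration are round numbers I've assumed for illustration, not figures from his article:

```python
# Back-of-the-envelope gravitational tractor: how much velocity change
# can a hovering spacecraft impart to an asteroid through gravity alone?
# All numeric inputs below are illustrative assumptions.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2

probe_mass = 2.0e4  # kg, a ~20-tonne spacecraft (assumed)
hover_dist = 200.0  # m, probe held this far from the asteroid's centre (assumed)
years = 10          # how long the probe hovers (assumed)

# The retro rockets cancel the probe's fall, so the asteroid feels a steady
# acceleration toward the probe of G * m / r^2 (the asteroid's mass cancels).
accel = G * probe_mass / hover_dist**2        # m/s^2
delta_v = accel * years * 365.25 * 24 * 3600  # m/s accumulated over the mission

print(f"acceleration on asteroid: {accel:.2e} m/s^2")
print(f"delta-v after {years} years: {delta_v * 100:.2f} cm/s")
```

With these assumptions the nudge works out to roughly a centimetre per second, which sounds feeble but, applied years before a keyhole passage, shifts the asteroid's position along its orbit by on the order of a thousand kilometres.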