Saturday, July 15, 2006

Does ethics have anything to do with common sense?

Web mechanic's note:
There is a longer version of this post and there are three reasons you might want to read that one.
  1. This post tries to tie together a diverse set of research findings from psychology with some current blogging and current events centered on the ethical questions around gender bias in academia, but it is only me writing, so it rambles. The points lost in that ramble are simply listed in the footnotes.
  2. Already long by blogging standards, I shortened it by shortchanging readers of the supporting material on which I draw. Links, highlights and cursory descriptions are in the footnotes.
  3. Updates will dribble in as facts that add perspective turn up in news and blogs.

Lately, I amateurishly bumbled into, and became part of, a collection of posts and comments that could loosely be said to share a theme: the ethical considerations of doing the scientific spadework in areas where popular opinions are hostile to, or inclined to misappropriate, facts and plausible working theories. Gender roles, and biases regarding them, ever a cause over at Pandagon, were also in this mix. Dr. Freeride uses the term "cultural assumptions" in a post about how hopelessly mythical our claims of equality of the sexes in academia are.

For me, the outcome of all that discussion, greater than the lesson in manners that I constantly need, was to learn there is a more rigorous framework than comparison to myth for examining the origins of the conceptions and misconceptions that riddle or rule public debate. With the reservation that we could risk being cast off on a desert island of academics for indulging in the emperor's new detector for non-elite narishkeit, I recommend [as do Coturnix and Amanda] Chris's posts on essentialism. It is a fascinating tool for plumbing the sources of popular, man-on-the-street opinions. Until we cease to live in a democracy, we cannot ignore these opinions, the damp clay of politics and its cracked pottery, for they do much to determine the legal, financial and religious treatment of the putative victims of bias in courts, legislatures and churches. You all know I tilt constantly at sundry forms of self-delusion, so a new tool that can scrub away the shared forms of self-delusion is a treat. It is yet another reminder that I should not be surprised at sloppy thinking. As I once commented to Cul:
If you went to an insane asylum [you can call them mental hospitals nowadays: most of them have been emptied onto the streets. But let's pretend it's 1951...] wouldn't you be very surprised if all the patients had the same fantasy?
Important, or at least useful, as essentialist insights into popular notions about politically significant issues may be, it is still ironic that essentialism is, in a way, an analysis of how we don't think, or take shortcuts in our thinking. The study of how we avoid being deliberate in our thinking is not new. The subject of how humans often think in shallow if speedy ways, even for important decisions, gets turned over every time a new tool or technique to measure and analyze thought is developed. The first work I heard of that pointed to the biological/psychological, if amoral, advantages of stereotypical thinking is now behind a subscription paywall. In that paper, with a title that will remind you of a Lakoff title, Macrae and Bodenhausen reviewed studies of the nature of stereotypical thinking and particularly, thinking about other persons or groups:
"Given basic cognitive limitations and a challenging stimulus world, perceivers need some way to simplify and structure the person perception process. This they achieve through the activation and implementation of categorical thinking .... Rather than considering individuals in terms of their unique constellations of attributes and proclivities, perceivers prefer instead to construe them on the basis of the social categories (e.g., race, gender, age) to which they belong, categories for which a wealth of related material is believed to reside in long-term memory."
And they speculated on how our thought capacity came to function in that manner. The benefit, if I may summarize, was a way to rapidly reach decisions in the presence of information overload. Other studies on stereotypical thinking are not complimentary in their assessments of conservative and gender biases. But stereotypical thinking may be the dominant mode of decision making for most people. Gladwell wrote Blink to explain how we have always sized things up, to praise and finesse intuition, but not to offer a new kind of thinking.

Chris was explaining how essentialism is already being stretched into new areas:
An interesting [question] is whether we are psychological essentialists about concepts that might be considered as falling somewhere in between natural kinds and artifacts, like social concepts. Are we psychological essentialists about concepts such as gender, ethnicity, political orientation, or mental illness? Do we treat these concepts like natural kinds, or like artifacts, or as something in between? In this and subsequent posts, I'm going to discuss evidence indicating that we are, in fact, psychological essentialists about many social concepts,

Thus, it may be going too far, on my sketchy acquaintance with the idea of kinds and essences, to propose a more abstract kind, but here goes: goodness or rightness is a natural kind or category into which we casually and reflexively sort many things which we perceive to have the following "essence": they promote, protect or affirm our self or things with which we identify. And there is an opposite "kind" for which the essence is that it competes with, injures or denies our self or interests with which we identify.

I want you to consider the barriers that impede ethical thinking if we are not mindful at the moment of decision. Ignore for a moment that mindful and deliberate are not the same concept and that necessity does not often afford the luxury of long reflection on a decision. Just weigh against your own experience the quality and utility of your own decisions made in any of these modes: stereotypical thinking, essentialist thinking, first impressions, hunches and intuitions. We use the term "common sense" with a generally positive connotation of "what everybody should know and how everybody should think" about commonly experienced issues and events. If you stop to think about it, you'll probably agree with the Horace Greeley quip. Only a very technical investigation of common sense starts with the observation of how damn hard it is to derive, compute or instill. Do you have "common sense"? If what you consider to be your common sense appraisal of a situation is different from your peers', or some norm, how is it "common" sense? When you exercise it, do you ever revisit initial impressions and decisions [and impression IS decision: you get a feeling, you have taken a stand] or do you tend to later find more conscious reasoning that supports the snap judgment? The research says the average "you" does this backfilling of hunches. What commonly passes for thought is not only NOT common sense, it is more like autopilot.

If it gets any attention at all, my suggestion that "good and right" is a kind from the psychological essentialist viewpoint will probably be shredded or at least get marked down for sloppiness. What I find most appealing about it as a model of mental processing is that it fits with the way we either deemphasize the negatives or the positives about many things that require our judgment, particularly as those negatives and positives impact us personally. Nobody, well, nobody I know, intentionally works to do wrong, but lots of us have to deal with gray areas. The ease with which we binarize to black or white ought to give us pause. The way to avoid coming into the grip of harmful or selfish decision making, if indeed we operate on the basis of some self-interest essence, is to teach ourselves to draw the largest possible circle of beings and cultures with which we can identify.

That essentialist way of saying we think categorically about what/who is good and bad popped into my head as I was reading the front page, above-the-fold story in Saturday's Boston Globe about a fierce old alpha male in the tribe of science, Nobel laureate Susumu Tonegawa, who is accused by 11 colleagues at MIT of driving off a supremely qualified job applicant. This applicant just happens to be female and just happens to have areas of interest in neuroscience that heavily overlap his own. This is a rather different face of ethics in science than the questions of what is good to study and what is safe to report. Ethics among scientists has its own scale of sinfulness. I won't sort them for you but you might, as an exercise, consider ordering these examples:
  • faking data or experimental results
  • sabotaging someone else's experiments
  • withholding knowledge that would probably help a competing line of research
  • suppressing another's work by negative reviewing of a submitted paper
  • not reporting something fraudulent you detect in a colleague's work
  • concealing a conflict of interest or simple professional jealousy that affects cooperation with investigators whose work is related to yours

It seems that intra-scientist matters operate according to much the same human nature as we observe and complain about in other walks of life: office politics as usual. It's not that MIT does not have a few women who did or currently do rock their world with renown and budget power, but the culture of the institution, as the article hints, has had to redress a spotty record regarding equal recognition of women as researchers and non-harassment of women as students.

Women who are personal acquaintances of mine have filed grievances or written about the glass ceilings at their departments at MIT, and that colors my perception of this news. The Globe article is, and speaks carefully because it knows it is, an incomplete picture of what Tonegawa said to dissuade Karpova from working at MIT. The Globe article betrays no awareness that the Stanford researcher who wrote to MIT on behalf of Dr. Karpova, the transgender Ben Barres, has an unusual perspective on the effect of gender on the interactions of academic peers. Read Dr. Freeride's post and see. Departments more injured by Karpova's rejection and researchers more closely associated with Tonegawa come down on different sides of the controversy in their memos to the institute. Karpova's published thoughts are the elegant minimum: "No thank you MIT". What I am trying to illustrate here is that there are many perspectives on this affair, and none of them, except possibly Dr. Tonegawa's, has even a chance of being inclusive of all the facts. From my own limited perspective, I cannot imagine any benefit to MIT, to the program Dr. Tonegawa heads or to his own reputation that might result from discouraging a promising researcher from working there. And though it is remote, there is a slight diminution of my own interests. And, to be honest, I came to my conclusions about Tonegawa's problems two paragraphs into the Globe article Saturday morning and have struggled since to re-suspend judgment.

The work on essentialism, the studies of stereotypical thinking and "Blink" each counsel in their own way that we always have a ready-made perspective but seldom are aware of it or of how limited it is. What was the thought that passed through Dr. Tonegawa's mind in the minutes after Karpova's application first came to his attention? Even he might not remember. From dusty decades past I now recall my own reactions, and I wonder what man still has these thoughts upon a first meeting with a female boss or co-worker who seems largely interested in her own ideas and not his, or who speaks with a tone of undue confidence:
"If she were another guy, I'd know how to deal with this, but what if she plays one of her 'woman' cards? Would I come off as intimidating? Am I already guilty of bias and liable to blackmail on that account? I am caught perpetually off balance."

Trust me, gals: to some men an intimidating woman is that much more intimidating than an intimidating man.

The critical question, since our habits of thought are so firmly set and we don't always have time to expand our thinking before narrowing it back to a decision, is this: Can we revisit our snap judgments, and if not, what exercises of thought beforehand might broaden the set of factors that will come into play reflexively in the moment of decision?

The most ethical thought you can have is the one made with the fullest context consciously borne in mind. But omniscience is not available, and we simply don't know how to fully subtract our own wishes from the processing.

Deliberation will do if you can summon it. Research psychology finds scant evidence that we do this summoning. Mindfulness improves upon deliberation by greatly and gracefully speeding it up through an ironic freedom from the burdens of self. That freedom falls out of detecting the interferences of self so they can be disarmed. But if not by seeing through your self, then by whatever means are available to you, you should explore what positive connections and win-win dynamics may exist between you and all the fellow creatures who are not now members of "your" crowd of good guys. Deliberation may be ideal, but intuition is fast, and self-interest is the fastest part of it. I am suggesting that a liberalization of what an individual senses as his or her self-interest is the most effective way to move more of the "others" of the world into the beneficial categories when categorical thinking cannot be avoided.


Colin Caret said...

Interesting set of thoughts. I was not clear on all the connections you were trying to draw, but I take it that one of your central queries is whether we can change our reflexive moral decision making. I would have to stand firmly in the "we certainly can!" camp on that issue. I grew up, like most Americans, eating a meat and potatoes diet. At some point in my life the issue of animal cruelty came to my attention. I have since been vegetarian and, for a long time, it was by sheer force of will to do the right thing. That made me feel good in one respect (proud, etc.) but was a struggle in other respects. But now, when I see meat, I cannot help but associate it with thoughts of pain and suffering. I can no longer conceive of animals as a commodity to be treated how we want. I see them more like fellow humans than like the produce at the grocery store. Point being, of course we can change our reflexive moral decision making practices. But it isn't easy to do, and one thing that it requires is taking moral considerations seriously. Many people discuss moral issues and debate their relevance to politics, but few people seem to really feel the weight of moral deliberation. Few people seem to appreciate that moral deliberation should actually play a role in their behavior.

GreenSmile said...

Yes, Colin, you found the point lurking in my foggy writing: we can change in spite of strong natural tendencies towards inertia. I will try to edit the post to make this point clearer.

Many people discuss moral issues and debate their relevance to politics, but few people seem to really feel the weight of moral deliberation. Few people seem to appreciate that moral deliberation should actually play a role in their behavior.

Odd, isn't it? As if the two activities, making decisions with moral consequences as distinguished from expositions on morality as a topic, took place in two different minds or two different worlds.

Would you consider it fair to say of your progression away from our American beef habit that it was well under way once you broadened your context by including in your thinking an awareness of the process that puts that porterhouse on your plate, specifically, what we do to cows?

Thanks for the feedback,

Colin Caret said...

Good question. For me, taking seriously the suffering of animals was enough (at that time and place in my life) to motivate me to change my behavior. Since then I have gotten more involved in thinking about ethics and society generally, through my study of philosophy. One of the things that has occurred to me recently is that a side effect of studying philosophy has been a loss of hubris on my part. I am now more easily willing to concede that I might not know everything about everything, especially about morality! But something else I have come to realize recently is that moral deliberation is not enough to get me to change my behavior in every case. The extra oomph in the case of vegetarianism was a real emotional connection, a feeling of compassion for the animals that served to sufficiently motivate the change of diet. Thinking about ethics has helped me clarify some of my own considered judgments, but even on some of the moral issues where I have changed my mind I have yet to appropriately change my behavior. While moral deliberation isn't enough by itself to make me the best person I can be, at least it is a good starting point.

GreenSmile said...

I think it was in a not-too-long-past post of Dr. Freeride's that I saw a lament to the effect of "why can't ethicists behave better?"

Psychologists have studied, and neuroscientists will study, what goes into the makeup of the person more inclined to empathy... but they will not find the ingredient of will and the effect of observation that have worked on you.
And experience alone will not explain much. After all, the slaughterer has seen death every working day without becoming upset about it. How different a being is he from you?

I like to think that what happens in front of the eyes is as important as what happens behind them, but your experience argues that it's not a balance.

Unsane said...

Very nice write-up. I found the article you mentioned through my library, online.

B. Michael Payne said...

Don't you think a rule could be formulated a posteriori concerning the thing you want to call psychological categories?

Further, don't you think that even if everything in life lay in a moral gray area, as soon as one acted, the area in which he stood would have to be black or white--never gray?

GreenSmile said...

B. Michael:
I am not sure I understand your first question.

For the most part, life is all gray areas: side effects and unintended consequences are not always noticed, or only show up well after the action is taken. What I have been trying to say is that we mostly work under the illusion that things are black and white... even before we act. Action takes the perception out of the realm of thought and replaces it with consequences... which are also misperceived. In fact, that questionable perception of moral clarity beforehand both prevents paralysis and prompts harmful or less ethical decisions.

donna said...