Sensemaking in Organizations

In the fall of 2010, I was taking a seminar in organizational behavior. It was a morning class, in a much different format than I was used to. We read what seemed, at the time, like a ludicrous number of papers and then posed questions about each to the professor. The professor then spent 15-20 minutes per paper summarizing it and discussing its significance, answering our questions as he went along. It was a small and intimate class, which made the moments when I dozed off all the more embarrassing. The class was very interesting, but the lecture-like format was not engaging enough at 9 in the morning when I had stayed up until 1 or 2 a.m. reading all of the required papers.

One day we read a paper that deeply impacted my perception of how research can be done and explained in organizational behavior. The paper was "The Collapse of Sensemaking in Organizations: The Mann Gulch Disaster" by Karl Weick, an influential but controversial figure within the field. Van Maanen argued, in the article I mentioned yesterday, that the paper I am about to describe was extremely powerful but never would have seen the light of day under Pfeffer's system. Pfeffer shot back that Weick was not formally rigorous enough, which only stoked Van Maanen's dislike for Pfeffer.

The article is very, very different from what you typically see in academic literature. It is a narrative about the Mann Gulch disaster that holds some information close to its chest in order to make the revelation of Weick's theory all the more convincing. The article has nearly 2,500 citations according to Google Scholar. There are no formal hypotheses and no statistical analyses, but it's also not quite a theory paper. It is a kind of paper that I have only seen Karl Weick write. I mentioned the argument between Pfeffer and Van Maanen over Weick's style to a professor at my institution. I do not remember the specifics, but they were clear that pursuits like Weick's are only possible after tenure and that few besides him can write these narrative theoretic pieces.

The paper begins with a description of the Mann Gulch disaster. Weick relies on the book "Young Men and Fire" by Norman Maclean, who interviewed survivors of the event. As a very brief summary: a group of young firefighters parachuted into a forest where a fire had been reported. Their role was to act quickly to prevent the fire from spreading by digging fire lines and repairing damage from the fire. The men were unfortunately unprepared for a large, active, and fast-moving fire. They found themselves in a position where the fire was rapidly approaching and they needed to act fast to survive. Thirteen of the 16 men died that day. Of those who survived, two found a way through a rock crevice; the other survived by lighting a brush fire at his feet and lying down in the ashes. The actions of this last individual, Wagner Dodge, led Weick to begin his theorizing about the collapse of sensemaking within this group of men.

Sensemaking is the process by which organizations create order in their environment through actions grounded in their purpose and culture. The theory of sensemaking apparently arose as an alternative to focusing solely on the decision-making process itself (as proposed by March of the Carnegie School). In other words, the organization acts in response to the way it perceives reality, in order to maintain that perception of reality. Weick's primary argument is that the actions of the firefighters were in line with their incorrect perception of reality, and when they were faced with a new reality, they were unable to 'make sense' of the situation. Their training became useless because they were no longer in a situation they could understand. Dodge was able to make sense of the situation when the others could not, and essentially set a fire line where he stood. Because the ground around him had already burned, the fire could not come close to him. His command to the others to join him in the fire seemed to go against their identity as firefighters.

I don't want to get into the details of the paper, as it is extremely dense and certainly worth a read. This paper is particularly important to me because of the way it is presented. It is intuitive and rigorous within its setting. Even though there is no data, you can tell that an extraordinary amount of thought went into the construction of the paper. I don't use sensemaking in my research, and I'm not sure I accept it over the other concepts it somewhat collides with (like the Carnegie School), but damn does Weick make a good argument.

Coding (in the social science sense)

I often wish that science did not occasionally use the same word to mean different things. In this case, the word is coding. When the majority of people think of coding, they probably think of writing computer code. Those who write computer code are called coders, and all is right with the world. But there is another kind of coding that we use far more often in social science: qualitative coding. This essentially means that you take some output (communication, an interview, etc.) that cannot be directly translated into numbers, and you create a scheme to do that translation. It could mean that you create a list of possible topics and match each sentence in an interview to those topics. Going through this process can give you a quantitative idea of what a person was discussing.
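To make that concrete, here is a minimal sketch of what a coding scheme can look like once mechanized. The topics, keywords, and sentences are all hypothetical; real schemes are usually built inductively from the data and applied by trained human coders, but the basic translation from text to counts works the same way.

```python
import re
from collections import Counter

# Hypothetical coding scheme: each topic is signaled by a set of keywords
scheme = {
    "logistics": {"schedule", "deadline", "meeting"},
    "conflict":  {"disagree", "argue", "frustrated"},
    "praise":    {"great", "thanks", "helpful"},
}

def code_sentence(sentence: str) -> list[str]:
    """Return every topic whose keywords appear in the sentence."""
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    return [topic for topic, keys in scheme.items() if words & keys]

interview = [
    "I was frustrated that we kept missing the deadline.",
    "Thanks, that meeting was actually helpful.",
]

# Counting the coded sentences gives the 'quantitative idea' of the discussion
counts = Counter(topic for s in interview for topic in code_sentence(s))
print(counts)  # Counter({'logistics': 2, 'conflict': 1, 'praise': 1})
```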

I have already introduced one form of coding in a previous post. Liang, Moreland, and Argote developed a kind of coding scheme to measure transactive memory in 1995. In that case, two individuals watched a video and rated the overall level of coordination, credibility, and expertise within the group. This measure was not as taxing as some versions of coding, but it still required two individuals to watch videos of all the groups and make a judgment. Once the two coders have finished, their codes are compared using a formula like Cronbach's alpha, which measures how consistent the raters are at judging the same thing, in this case the groups in the videos.
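As a rough illustration, here is one way to compute Cronbach's alpha for a pair of coders, treating each rater as an "item." The ratings are made up, and in practice you would likely reach for a statistics package (and possibly a different agreement statistic, such as Cohen's kappa or an intraclass correlation), but the sketch shows what the comparison is doing.

```python
import numpy as np

def cronbachs_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (groups x raters) matrix of scores.

    Each rater is treated as an 'item'; alpha near 1 means the raters
    score the groups consistently.
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of raters
    item_vars = ratings.var(axis=0, ddof=1)       # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical coordination ratings from two coders for six groups
scores = np.array([[4, 5], [3, 3], [5, 5], [2, 3], [4, 4], [1, 2]])
print(cronbachs_alpha(scores))  # roughly 0.96: the coders largely agree
```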

Then we come to my personal difficulty with using coding in my work. I have used coding for several projects, including some where the thing we were measuring was, I felt, objective. In those cases, using multiple coders is useful for catching when one person missed a specific thing, not for comparing groups based on the coders' perceptions of their qualities. In my case, I say: count the number of times this thing occurs. One coder sees 5 occurrences but the other sees only 3. The one who sees 5 may objectively be correct, but something like Cronbach's alpha just registers an inconsistency between the coders, when the actual problem is either attention or an honest mistake.
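To make the complaint concrete, here is a hypothetical case. One coder systematically undercounts, yet a consistency-style statistic can still look healthy because the two coders rank the groups the same way; the statistic never notices that one set of counts is simply wrong.

```python
import numpy as np

coder_a = np.array([5, 4, 6, 3, 7])  # hypothetical counts; suppose these are correct
coder_b = np.array([3, 4, 5, 2, 5])  # the second coder misses some occurrences

# The correlation between the coders is high (~0.85) because they
# order the groups almost identically...
print(np.corrcoef(coder_a, coder_b)[0, 1])

# ...even though coder B undercounts by more than one event per group on average
print((coder_a - coder_b).mean())  # 1.2
```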

Whenever I start a coding process, I dread looking at the outcomes, because I feel that the mistakes are arbitrary, that the task should not require multiple coders to get right, or that the coders are simply not doing a good job. It's frustrating. But it's frustrating in a way that feels unnecessary.

Machine learning has started to be introduced as a more objective way of looking at hard-to-quantify data, though it may not do as good a job as a person can. I think that, in the near future, machine learning will begin supplementing human coding wherever large datasets are available. I think this is a good way forward, and I also like that it may reduce my personal reliance on other people. It is impersonal, removed from the imbued meaning of the words, and disconnected from theoretical constructs. But it is a tool that reduces the need for me to act mechanically, which is something, I suppose.
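As a sketch of what "supplementing" might look like, here is a toy supervised classifier that learns codes from human-labeled sentences and then applies them to new text. The sentences, the labels (borrowed from the transactive memory dimensions above), and the choice of a TF-IDF plus logistic-regression pipeline are all assumptions for illustration, not a recommendation of any particular model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical sentences that human coders have already labeled
sentences = [
    "Who knows how to run the statistics package?",
    "I trust your read on this data.",
    "Let's split the write-up between us.",
    "You handle the analysis; I'll handle the figures.",
]
codes = ["expertise", "credibility", "coordination", "coordination"]

# TF-IDF features plus a linear classifier: a crude stand-in for a human coder
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(sentences, codes)

# The trained model can now code new, unlabeled sentences
print(model.predict(["You handle the statistics package."]))
```

With a real dataset of thousands of coded sentences, the same pattern scales in a way that hand coding does not; the human coders shift to producing the training labels and auditing the model's output.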

Carnegie School of Thought - Bounded Rationality

When I began my PhD, I was introduced early on to a particular strain of work on management called the Carnegie School of thought. This work was done primarily in the 1950s and 60s by researchers at the Carnegie Institute of Technology (now Carnegie Mellon University). Herbert Simon and Jim March were the primary individuals involved, with March continuing the work alongside Richard Cyert.

Herb Simon was a very complex, analytical man. Though Simon began his career as a political scientist, publishing his dissertation as the book Administrative Behavior, he later became more interested in artificial intelligence. My first introduction to him was in a cognitive psychology course. The instructor described how Simon began the first lecture of the fall semester of one of his courses by asking his students what they had accomplished over the summer. After all of the students had described their summers, Simon said that, over his, he had designed a computer program that could think like a person: an early version of artificial intelligence that based its decision making on the same ways in which people make decisions. Simon was an extremely influential figure in computer science, artificial intelligence, cognitive psychology, and management. His influence in management is, for the most part, due to his encouragement of and collaboration with James March.

The ideas within the Carnegie School are quite diverse, so for this post I will focus on one concept: bounded rationality and its associated decision rule, satisficing. In economics, it is assumed that actors make the best choice in any given decision. Satisficing proposes that some decisions carry increased costs, or that the outcome is not that important to the actor, leading the actor to willingly make a suboptimal choice. An example that Simon used to give was about lunch [I have modified the story from the original, but the idea is the same]. If an actor is in their office and needs to get lunch, they could have multiple values that they desire to maximize: timeliness, cost, health, etc. An actor could determine the relative weight of those characteristics and make the optimal choice. But as Herb said, "I would instead just always go to [the student center]. For those who have been there, it is obviously a non-optimal choice." The humorous example has certain limitations but illustrates the concept well. Satisficing proposes that the act of making a choice is itself costly and that one's own desires are not always clear, so a "good enough" choice is much easier to identify than the best one.
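Here is a minimal sketch of the difference, using the lunch example. The options, weights, and aspiration thresholds are invented for illustration: the optimizer scores every option against explicit weights, while the satisficer walks the options in order and stops at the first one that clears a "good enough" bar on every criterion.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    minutes: int   # time away from the office
    cost: float    # dollars
    health: float  # 0 (bad) to 1 (good)

options = [
    Option("student center", 10, 6.0, 0.4),
    Option("salad place",    25, 11.0, 0.9),
    Option("food truck",     15, 8.0, 0.6),
]

def optimize(opts, w):
    """Weigh every criterion for every option and pick the maximum."""
    score = lambda o: -w["minutes"] * o.minutes - w["cost"] * o.cost + w["health"] * o.health
    return max(opts, key=score)

def satisfice(opts, bar):
    """Stop at the first option that is 'good enough' on every criterion."""
    for o in opts:
        if o.minutes <= bar["minutes"] and o.cost <= bar["cost"] and o.health >= bar["health"]:
            return o
    return None  # nothing clears the bar; in practice the aspiration level drops

# The optimizer weighs everything and lands on the salad place...
print(optimize(options, {"minutes": 0.02, "cost": 0.05, "health": 2.0}).name)
# ...while the satisficer stops at the first acceptable option: the student center
print(satisfice(options, {"minutes": 20, "cost": 10.0, "health": 0.3}).name)
```

The satisficer never even evaluates the later options, which is exactly the point: when choosing is itself costly, stopping early is the rational move.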

This concept, while somewhat of a refinement of the economic theory of optimization, was a revelation to the academic world. It is not without its detractors, though. A comment that I have heard from several critics of bounded rationality is that it is not testable, meaning it is not a proper theory: people may actually be making optimal choices but optimizing on unknown or unmeasured criteria. I personally think that satisficing is a very useful concept, though it does have an undercurrent of nondeterminism that also arises in March's Garbage Can Model of Organizational Choice (developed with Cohen and Olsen). That concept is a bit unsettling, but still interesting to me.