I often wish that science did not occasionally use the same word to mean different things. In this case, the word is coding. When most people think of coding, they probably think of writing computer code. Those who write computer code are called coders and all is right with the world. But there is another kind of code that we use in social science much more often: qualitative coding. This essentially means that you take some output, communication, interview, etc. that cannot be directly translated into numbers and you create a scheme to do that translation. It could mean that you create a list of possible topics and match each sentence in an interview to one of those topics. Going through this process can give you a quantitative picture of what a person was discussing.
I have already introduced one form of coding in a previous post. Liang, Moreland, and Argote developed a coding scheme to measure transactive memory in 1995. In that case, two individuals watched a video and rated the level of coordination, credibility, and expertise within the group overall. This measure was not as taxing as some versions of coding, but it still required two individuals to watch videos of all the groups and make judgments. Once the two coders have finished, their codes are compared using a statistic like Cronbach's alpha. Cronbach's alpha measures how consistent the raters are at judging the same thing, in this case the groups in the videos.
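To make the consistency check concrete, here is a minimal sketch of Cronbach's alpha in NumPy, treating the two raters as "items" and the groups as "subjects" (one common way to apply the formula to rater data; other inter-rater statistics exist). The ratings matrix below is invented for illustration.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (n_subjects, n_raters) matrix of scores.

    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of summed scores)
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of raters
    rater_vars = ratings.var(axis=0, ddof=1)      # each rater's variance across groups
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return (k / (k - 1)) * (1 - rater_vars.sum() / total_var)

# Hypothetical example: two coders rate three groups identically,
# so their codes are perfectly consistent and alpha comes out to 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

When the two coders' scores diverge, the summed-score variance shrinks relative to the individual variances and alpha falls below 1.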
Then we come to my personal difficulty with coding in my own work. I have used coding on several projects, including some where the thing we were measuring was, I felt, objective. In those cases, multiple coders are useful for catching when one person missed a specific thing, not for comparing groups based on the coders' perceptions of their qualities. Say the task is to count the number of times something occurs: one coder sees 5 but the other only sees 3. The coder who sees 5 may be objectively correct, but a statistic like Cronbach's alpha just registers this as an inconsistency between the coders, when the actual problem is a lapse of attention or an honest mistake.
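The scenario above can be demonstrated with invented numbers: suppose coder A counts 5, 2, 7, 4 occurrences across four videos, while coder B misses some and records 3, 2, 4, 4. Applying the standard alpha formula inline (the counts and the missed-occurrence story are hypothetical), alpha drops well below 1 even though A may simply be right:

```python
import numpy as np

# Hypothetical occurrence counts from two coders over four videos.
a = np.array([5.0, 2.0, 7.0, 4.0])  # coder A, possibly the objectively correct counts
b = np.array([3.0, 2.0, 4.0, 4.0])  # coder B, who missed some occurrences

ratings = np.column_stack([a, b])   # shape (4 videos, 2 coders)
k = ratings.shape[1]
alpha = (k / (k - 1)) * (1 - ratings.var(axis=0, ddof=1).sum()
                         / ratings.sum(axis=1).var(ddof=1))
print(round(alpha, 2))  # -> 0.73: alpha only reports "inconsistency", not who is right
```

The statistic cannot distinguish "coder B was inattentive" from "the two coders genuinely perceive different things," which is exactly the frustration when the quantity being coded is objective.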
Whenever I start a coding process, I dread looking at the outcomes, because I feel that the mistakes are arbitrary, that I can't get the coders to do this correctly, or that the coders are simply not doing a good job. It's frustrating. But it's frustrating in a way that feels unnecessary.
Machine learning has begun to be introduced as a more objective way of looking at hard-to-quantify data, though it may not do as good a job as a person can. I think that, in the near future, machine learning will begin supplementing human coding wherever large databases are available. I think this is a good way forward, and I also like that it may reduce my personal reliance on other people. It is impersonal, removed from the imbued meaning of the words, and disconnected from theoretical constructs. But it is a slave that reduces the need for me to act mechanically, which is something, I suppose.