I think this is week two, though I can't be certain. The journal below is the original version, but I should note two exceptions proffered by Professor Pavlovsky: 1. She did not call a halt to the discussion; she just encouraged us to chill out. 2. Despite my outrage at Julien and Duggan, the fact is that the vast majority of the lit =is= in English.
Prof. Belkin, a well-respected information scientist, will be dropping into our class in Week 3.
Journal, week two:
Apparently there was something of a commotion about the reading this week. I don’t know the details, but part-way through the week the professor waved the white flag and called a halt to both the reading and the discussion. I’ve been in graduate school in one way or another since the mid-80s, and this is a new one on me! One gathers the professor didn’t take this action on a whim, that either the quality of the discussion suffered or my colleagues were on the verge of rebellion. Or both.
Here’s the issue – the reading in question was a pair of literature reviews and one longitudinal analysis of information needs and literature. The reasoning for assigning this material now is sound; lit reviews pull double duty in familiarizing students with the literature and with the history and direction of the discipline. They can be a good shortcut to reading all the literature and coming to conclusions oneself. Lit reviews gather the scope of the debate into a digestible format that helps students, especially, understand the larger picture in a straightforward and relatively pain-free manner.
Not pain-free enough, it seems. Again, I am not familiar with the details, but some of my student colleagues must have found the articles difficult enough to get their arms around that the professor called a halt, for the time being anyway, to the discussion. There are likely many reasons for this, one of which was readily admitted by the professor: the information =is= difficult to get one’s arms around! Readers expecting to understand every tick and ninny of the articles were going to be in for a disappointment. I certainly didn’t, and I doubt many students, if any, did. That’s one problem: the expectation that every reading is going to be immediately accessible. It won’t be. I had a history grad professor who specialized in intellectual history. While most of us in the class understood the political and/or social historical underpinnings of the era under discussion, the majority of my student colleagues at the time lacked a background in intellectual history, and it was slow going for most of us. And of course we worried; all of us were accustomed to reading and understanding quickly and efficiently, and the intellectual history class proved a challenge. Of course Professor Elbert had been through this before and knew that we’d all do well provided we just concentrated on reading. Read enough and the understanding will come; maybe not immediately, but with time.
I associated her suggestion with my own realization that I didn’t understand everything my parents told me when I was a teenager, but that much of their advice took on new meaning in my adulthood. Though not all of their advice was accessible to my adolescent mind, I stored it up, then grew up, and suddenly much of it made a lot more sense. I took Sarah’s advice and just read with an eye for eventual understanding and, like my parents’ advice, some of the reading did eventually make sense to me and, unfortunately, some never will!
So that may well be reason number one for the problem this week: an unreasonable expectation of immediate accessibility of the reading material. Anyone expecting to take all this in on the first go-around was bound to be sorely disappointed. But I don’t think that was the only problem; no, part of the blame should be placed on the medium itself. Literature reviews have always struck me as self-referential at best, masturbatory at worst. The theory is good: short analyses of the ongoing debate with an eye to the development of the discipline and advice about future directions. The problem is not the theory but the practice (to paraphrase, in a way, Pettigrew et alia!). Authors of lit reviews take as read the idea that to be considered valid their studies must be expressed with metrics. So they set out categories of analysis, count the frequencies of articles which suit those categories, then report, for example, that “28% of . . . 165 articles supplied were theoretically grounded,” or that “Of the 95 information behavior papers examined, 58.9% used theory with 1.99 theory incidents per article” (Pettigrew et al., p. 45).
Metrics are great – I love metrics. I use them all the time myself. But metrics are only as good as the categories of analysis upon which they are based. Like computer programs, metrics are rules-based, meaning that a set of rules is established and the data are assessed by strict application of those rules. The best computer programs are those which start with a valid rule set, and the same goes for databases: the ground rules are all-important. The problem I find with lit reviews is that the datasets are highly subjective to begin with. Julien and Duggan’s (2000) longitudinal analysis limits itself to studies published in the English language (p. 293)! I can’t imagine a more arbitrarily established category of analysis.
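The rules-based counting I’m describing can be sketched in a few lines of Python. To be clear, the records and categories below are invented for illustration; they are not drawn from Pettigrew et al. or Julien and Duggan. The point is only that once the inclusion rules are set, the counting is mechanical, so the subjectivity lives entirely in the rules:

```python
# A minimal sketch of how a lit review's metrics get produced:
# first define the categories (the subjective step), then apply
# them mechanically to the dataset.

# Hypothetical article records -- not real articles from any review.
articles = [
    {"title": "A", "language": "English", "uses_theory": True},
    {"title": "B", "language": "English", "uses_theory": False},
    {"title": "C", "language": "French",  "uses_theory": True},
    {"title": "D", "language": "English", "uses_theory": True},
]

# Rule 1 (an inclusion criterion, chosen subjectively):
# English-language studies only.
included = [a for a in articles if a["language"] == "English"]

# Rule 2: count how many included articles are "theoretically grounded".
grounded = sum(a["uses_theory"] for a in included)
pct = 100 * grounded / len(included)

print(f"{grounded} of {len(included)} included articles "
      f"({pct:.1f}%) use theory")
```

Drop the language filter in Rule 1 and the reported figure changes (here, from 66.7% to 75%) without a single article being read differently, which is exactly my complaint: the headline metric is driven by the inclusion rules, not by the counting.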
So, in my humble opinion, lit reviews start out with subjective categories of analysis which both skew the eventual results and offer a glimpse into the biases of the authors (referring us back, of course, to last week’s discussion of cognitive authority – see how it all fits together?!). But it gets worse – in many cases lit reviews point to each other for validation! This is where my “self-referential” critique comes in. Evidence of this practice can be found in all three of our readings for this week, but none more humorous than Julien and Duggan’s division of the literature into three periods, the first and the last of which are analyzed by the authors and then =compared= to a separate analysis done in the middle period!
“Masturbatory” might seem a strong term, but I think it is the best description of what this genre of literature has become. Authors depend on metrics as a validation of their conclusions, but the categories of analysis upon which those metrics are predicated are composed subjectively, often, it seems, arbitrarily. Then a published lit review is taken as a source for =other= lit reviews, whose authors then use a =review of lit reviews= as an established source! This is crazy!
So one problem may be the density of the medium, combined with the expectation that articles should be immediately accessible to readers. But I think a more significant problem is the medium itself, which lacks the objectivity to make it a cognitive authority.