Why Groups Fail to Share Information Effectively

[Image: sharks_meeting — “No, leaks aren’t on the agenda…”]

In 1985 Stasser and Titus published the best sort of psychology study. Not only does it shine a new light on how groups communicate and make decisions, it also surprises, confuses and intrigues. Oddly, the results first look as if they can’t be right, then later it seems obvious they are right, then attention turns to what can be done about it.

The findings were relatively straightforward and, as is often the case with decision-making research, another blow for the fragile human ego. They found that people trying to make decisions in groups spend most of their time telling each other things that everyone already knows. By contrast, people are unlikely to bring up new information known only to themselves. The result: poor decisions.

As it happens Stasser and Titus’ (1985) participants were making a relatively trivial decision—who should be student body president—but subsequent research has tested all sorts of other scenarios. Experimenters have asked people to choose the best candidate for a job (Wittenbaum, 1998), the best type of investment (Hollingshead, 1996) and the guilty party in a homicide investigation (Stasser & Stewart, 1992).

Again and again the results have shown that people are unlikely to identify the best candidate, make the best investment or spot who really committed the crime. When asked to make a group decision, instead of sharing vital information known only to themselves, people tend to repeat information that everyone already knows.

The explanations

At first these results seem deeply counter-intuitive. Surely people should be highly motivated to bring new information to the discussion, not just repeat the same old stuff? After all, the group is scuppering itself by failing to share. Three solid explanations for this strange behaviour have emerged from the research (Wittenbaum et al., 2004):

  1. Memory. Shared information is likely to be more memorable in the first place, so more likely to be brought up by someone. Also, if more people in a group know a piece of information, whether because it’s memorable or for some other reason, there is a greater probability that at least one of them will recall it in the discussion (a small illustrative sketch follows this list).
  2. Pre-judgements. People make their minds up to varying degrees before they have a group discussion. The information on which they make their pre-judgement is likely to be shared information available to everyone. Then, when the group discussion starts, whether consciously or unconsciously, people tend to only bring up information that supports their pre-judgement. Surprise, surprise, it’s the same things everyone else is bringing up.
  3. Anxiety. Before a meeting people are unsure how important the information they know is, and are also anxious to be seen in a good light by others in the group. Information that emerges during a meeting as shared by the group comes to be viewed as more important and so people repeat it. People are seen as more capable when they talk about shared rather than unshared information (Wittenbaum & Bowman, 2004). To be on the safe side people prefer to stick to repeating things that everyone knows and, bizarrely, others like them better for it.
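
To make the memory point concrete, here is a minimal sketch of the arithmetic, assuming (purely for illustration) that each member who knows an item independently recalls it with some fixed probability. The recall figure and function name are hypothetical, not taken from the studies cited.

```python
# Illustrative only: probability that an item surfaces in discussion,
# assuming each informed member independently recalls it with probability
# p_recall (a hypothetical figure, not an estimate from the research).

def chance_item_surfaces(members_who_know_it: int, p_recall: float = 0.4) -> float:
    """Probability that at least one informed member recalls the item."""
    return 1 - (1 - p_recall) ** members_who_know_it

# An item known to all three members of a group is much more likely to come
# up than an item known to only one member:
print(chance_item_surfaces(1))  # 0.4   -- unshared information
print(chance_item_surfaces(3))  # ~0.78 -- shared information
```

So even before pre-judgements or anxiety come into play, the simple arithmetic of recall stacks the discussion in favour of what everyone already knows.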

Together these points begin to show why it is very likely that people will fail to share information known only to themselves.

Trained doctors do no better

But there is a fourth possible explanation for the experimental results. It could be that the participants have not been specifically trained to share relevant information with each other. They were, after all, largely college students—perhaps those with more experience and training can do better?

This is why Larson et al. (1998) tested a group of doctors who are professionally trained in pooling information from different sources in order to make a diagnosis. In their experiment, 25 physicians were recruited and asked to solve two hypothetical medical cases in groups of three. First, each participant watched a video on their own in which they saw a patient talking about their symptoms with their doctor (both parts played by actors). Participants were, however, shown slightly different videos, thereby imparting some information to all three members of the diagnostic team, and some information only to individuals.

The experiment was set up so that it was only possible to make an accurate diagnosis if the doctors shared the information about the patient that was known only to themselves. What the experimenters found, though, was the classic dynamic where participants spent more time discussing shared rather than unshared information. Groups that didn’t pool previously unshared information had less to go on, and consequently made less accurate diagnoses.

These findings are particularly dramatic because the participants were trained decision-makers.

How to encourage people to share

Naturally, then, ever since the first experimental demonstration of this phenomenon by Stasser and Titus (1985), the search has been on to find ways to encourage people to share the information that only they know. Here are some of the attributes of groups that do tend to share more of that critical unshared information with each other (from Wittenbaum et al., 2004):

  • Groups whose members disagree, and which show less groupthink, are more likely to surface otherwise unshared information.
  • When people are told to try and recall relevant information before the meeting, this makes them more likely to mention facts that only they know.
  • Members of a group should be made aware of each other’s expertise, so they know (broadly speaking) what everyone else knows.
  • The longer meetings go on, the more likely it is that people will recall previously unshared information (unfortunately!).
  • People are more likely to share if they have a higher status in the group. So to encourage lower status members to share, their expertise needs to be specifically acknowledged to the group.

Next time you’re in a decision-making meeting, try consciously noticing the extent to which the group is sharing information that everyone already knows. Then, if it seems that little new information is emerging, there’s a case for using some of these techniques.

And in reality?

Much as psychologists would like otherwise, experiments are only attempts to simulate real-world situations. In reality things are more complex. Wittenbaum et al. (2004) give us one reason to be pessimistic about real world group decision-making and two reasons to be optimistic.

First the bad news. Compared with an experimental situation, in the real world people have their own goals which may conflict with those of the group. This may actively stop them sharing information, or lead them to share it in such a way as to further their own goals. This is hard to counter.

Now the good news. Most of these studies assume that the information known only to a minority is important for the decision; in the real world this won’t always be the case. Also, people may pass on information they are unsure about outside the group meeting, directly to other individuals. This is more likely to happen when the information is sensitive or of unknown value.

So perhaps real-world group decision-making isn’t affected as badly as the experimental evidence suggests. Still, I can’t help being reminded of a classic episode of the British sitcom Yes, Prime Minister in which the fictional Prime Minister asks what it is he doesn’t know about Foreign Office secrets, to which his Principal Private Secretary, Bernard, replies:

“May I just clarify the question? You are asking who would know what it is that I don’t know and you don’t know but the Foreign Office know that they know that they are keeping from you so that you don’t know and they do know and, all we know, there is something we don’t know and we want to know. We don’t know what because we don’t know. Is that it?”

About the author

Psychologist Jeremy Dean, PhD, is the founder and author of PsyBlog. He holds a doctorate in psychology from University College London, along with two other advanced degrees in the subject.


He has been writing about scientific research on PsyBlog since 2004. He is also the author of the book “Making Habits, Breaking Habits” (Da Capo, 2013) and several ebooks.



SOURCE: PSYBLOG
