How Can Communication Technology Encourage Civility?


You’re reading a story on the web and your eyes accidentally drift down to the comments. Within moments, lost in a sea of atrocious behavior and even worse grammar, your view of humanity clicks down another few notches.

It’s an experience so common it’s spawned a mantra: Don’t Read The Comments. But why should this be so? The web is also filled with examples of altruism, kindness, and generosity. Creative, intimate communities form online all the time. What is it about online comments that makes us so awful?

In this essay I’ll explore why we behave the way we do online and suggest some ways to increase civility, drawing on as much social science as possible. Because this is a new area of research, some of the studies I reference come from other fields, but their results are apt. My central argument is that good people can behave poorly online, and that civil behavior can be encouraged by design.

Bad Is Louder Than Good

It’s a fact that bad experiences resonate louder and longer than good ones. That’s why you can read an inbox full of pleasant emails, but two hours later you’ll still be thinking about the single insulting one. In “Bad Is Stronger Than Good” (2001), Baumeister et al. conclude that “bad is stronger than good across a broad range of psychological phenomena.”

This relates to online behavior in two ways. First, it may be that comments online are not as bad as we think they are. We’re all subjectively experiencing online conversations, so we’re equally subject to the “bad is stronger than good” phenomenon. Of course, it’s still worthwhile to encourage the good. Second, the human propensity for paying attention to negative input at the expense of positive input shows what a tall order increasing civility online really is.

The Bad Apple

In his 2006 study published in Research in Organizational Behavior, Will Felps found that one bad participant can have a negative effect on an entire group. His research was about real-life, in-person meetings, but it’s entirely relevant to online communities.

He identified three types of negative participants: the Jerk, the Slacker, and the Depressive Pessimist. The Jerk insults others, the Slacker displays disinterest, and the Depressive Pessimist complains and says it’s all pointless. (Sounds like a typical comment thread to me.)

Felps conducted experiments where he put groups of volunteers into a room to work together on a task for a financial reward. Unbeknownst to the group, one of the members was an actor who embodied one of the three types of negative participants.

The conventional wisdom said that groups are more powerful than any one individual, so one bad apple should not have much of an impact. Felps found the opposite. Groups with the bad actor performed 30 to 40 percent worse than groups without. In addition, the bad actors caused team members to emulate their behavior. When the actor was a slacker, others would slack. In short, our behavior is like a virus. The behavior of one participant is replicated.

What this means online is that moderators should guard against negative participation, especially early in a conversation. I’ve found that the first comment effectively sets the tone for all that come after, so I recommend holding comments in a queue until there’s a good standout comment, and then ensuring that comment appears first. Moderators should be vigilant in looking out for bad apples, recognize how destructive their participation can be, and treat it accordingly.
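To make that hold-and-seed approach concrete, here’s a minimal sketch of how such a queue might work. The names and structure are my own illustration, not code from any particular platform:

```typescript
// A sketch of a hold-and-seed moderation queue. All names here are
// hypothetical, not drawn from any particular commenting system.

interface ThreadComment {
  id: string;
  author: string;
  body: string;
}

class ModerationQueue {
  private pending: ThreadComment[] = [];
  private published: ThreadComment[] = [];
  private seeded = false; // nothing goes live until a seed comment is chosen

  // New comments are held until the thread has its tone-setting first comment.
  submit(comment: ThreadComment): void {
    if (this.seeded) {
      this.published.push(comment);
    } else {
      this.pending.push(comment);
    }
  }

  // A moderator picks a standout comment to lead the thread: it is
  // published first, and the rest of the held queue follows it.
  approveAsSeed(commentId: string): void {
    const index = this.pending.findIndex((c) => c.id === commentId);
    if (index === -1) throw new Error(`No pending comment: ${commentId}`);
    const [seed] = this.pending.splice(index, 1);
    this.published = [seed, ...this.pending];
    this.pending = [];
    this.seeded = true;
  }

  thread(): ThreadComment[] {
    return this.published;
  }
}
```

The point is structural: nothing goes live until a human picks the tone-setting first comment, and after that the thread can flow normally.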

Private Eyes

One of the unique things about online conversation is that many can participate but each is relatively unseen. We can be together virtually and alone in reality. Online conversation lacks the human gaze.

Looking into another person’s eyes has a profound effect on the speaker. The feeling of being seen deeply influences how we communicate, and I believe its absence is one of the contributors to the lack of civility online. In a study published in Biology Letters in 2006, Melissa Bateson and colleagues showed that cues of being watched can enhance cooperation.

Imagine a refrigerator in a common room in a workplace. Inside are unsecured beverages and an “honesty box,” where people who take drinks are supposed to put in money. Contributions are anonymous and voluntary, but expected. Now imagine an experiment where the honesty box had one of two photographs on it. One group saw a photo of flowers; the other saw a photo of a pair of human eyes. After 10 weeks, the results were tallied: the people who saw the pair of human eyes paid 2.76 times more on average.

What this means for online community is that good behavior increases when people feel seen (or, put the opposite way, bad behavior increases when people feel invisible). The feeling of being watched is so powerful that just showing a photograph of eyes is enough to more than double positive participation.

I’m not saying that we should put images of eyes beside every comment form on the web (though I’d love to see a site try it). Instead, we should design these comment experiences to enhance the feeling of being seen by the community. Imagine a row of avatars, photos of members reading the same story right now, all looking at you as you type, right beside the comment box.
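As a sketch of how that might be built, imagine a presence strip that subscribes to a hypothetical “who’s reading this now” channel and renders readers’ avatars beside the comment form. The endpoint and message format below are assumptions for illustration:

```typescript
// A sketch of a "who's reading right now" avatar strip beside a comment
// box. The endpoint and message shape are hypothetical assumptions.

interface Reader {
  userId: string;
  avatarUrl: string;
}

function showReaders(storyId: string, container: HTMLElement): void {
  // Assumed endpoint: a channel broadcasting who is viewing this story.
  const socket = new WebSocket(`wss://example.com/presence/${storyId}`);

  socket.onmessage = (event: MessageEvent) => {
    const readers: Reader[] = JSON.parse(event.data);

    // Re-render the avatar row: a quiet reminder that real people are here.
    container.replaceChildren(
      ...readers.map((reader) => {
        const img = document.createElement("img");
        img.src = reader.avatarUrl;
        img.alt = `${reader.userId} is reading this story right now`;
        img.className = "presence-avatar";
        return img;
      })
    );
  };
}
```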

Color Commentary

It’s easy (and obvious) to focus on moderation when working on community issues, but how can we adjust behavior before it gets to the moderator? The visual design of conversational spaces online can have a huge impact on the tone of the conversation.

Using rounded corners in online design can go a long way toward making technology feel more approachable. That’s why the icons on your iPhone and the corners of Apple laptops are all rounded. Donald Norman called these “affordances”: the rounded corners look like something that would feel good in the hand, even if we’ll never hold them.

My favorite study in this area is Ravi Mehta’s investigation published in the journal Science in 2009. In the experiment, participants were given the same tasks to complete on a computer. The only difference was that one group had a red background and the other had a blue background.

The study showed that the red group did better at tasks that required attention to detail, while the blue group did better at tasks that required creativity and emotion. The reverse was also true – the red group did worse at creative tasks and the blue group did worse at attention to detail.

I love this study because it shows that there’s no one right way (or one right color) for every task. If your community task requires attention to detail, using red is a good choice. Indeed, many error messages are red for this reason. If your task requires creativity and emotion, blue is the better choice. Interestingly, blue was the default link color on the web and often still is.
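If you wanted to bake that rule of thumb into a design system, it could be as simple as the following sketch. The task categories and hex values are my own illustrative choices, not from Mehta’s study:

```typescript
// A sketch of mapping the kind of interaction to an accent color, per the
// red-for-detail, blue-for-creativity finding. The task categories and hex
// values are illustrative assumptions, not taken from the study.

type TaskKind = "detail" | "creative";

function accentColor(task: TaskKind): string {
  // Red cues vigilance (proofreading, error states); blue cues openness
  // (brainstorming, open-ended conversation).
  return task === "detail" ? "#c0392b" : "#2980b9";
}

// Example: an open-ended comment form gets the blue treatment.
document.body.style.setProperty("--accent", accentColor("creative"));
```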

In my experience, the visual language of a website can have a huge impact on the tone of the conversation it produces. This just scratches the surface of how color affects participation, but it’s a good start. The core lesson is to consider the kind of interaction you seek and make sure the visual design reinforces that experience. When the interaction demands one thing (say, creativity) and the design encourages the opposite (by using red, for example), people can have a negative reaction without being aware of the connection.

Patternicity

Our brains take in huge quantities of sensory data and create a coherent narrative from it. Think of it like a movie: the frames change so quickly that we interpret them as fluid motion. Interestingly, when the amount of input decreases, our brains do not respond with decreasing confidence. They do the opposite: they work harder to make sense of the limited input.

This makes sense on an evolutionary level. Evolution favors the ones that don’t get eaten, so seeing the grass move and assuming it’s a lion is a good thing. Our brains have developed not only to detect patterns, but to put our danger response on a hair trigger. It’s built into our DNA.

Online, where we have much less social information (no physical gestures, no direct gaze), our brains work much harder to intuit meaning, and as a result, we see patterns where there are none. And we tend to see danger even when there isn’t any. What this means for online communication is that we’re predisposed to make assumptions based on limited information, and respond in a “fight or flight” manner.

Jennifer Whitson did a fascinating set of experiments, published in Science in 2008, on patternicity and feelings of control. One experiment involved showing volunteers pictures of random static and asking them if they saw an image in it.

Some of the volunteers were put into an “out of control” state: they were quizzed about subjects they couldn’t have known anything about, or asked to recall a time in their lives when they felt out of control. The other volunteers were put into an “in control” state: their knowledge was rewarded, or they were asked to recall a moment when they were in control.

The people in the “out of control” state were more likely to engage in patternicity – to see patterns where there were none. This is relevant because we frequently feel out of control when we’re online – applications freeze, networks lag, computers crash. Is it any wonder, then, that we perceive personal slights where there are none?

Conversely, when people were induced into an “in control” state, they were less likely to engage in patternicity. Feeling in control allowed them to see that there was no image, only static.

This is an important step in understanding how design can encourage civility online. An experience that creates an in-control feeling will produce a user who is far more likely to stay calm and far less likely to see conspiracies or insults.

In the study, inducing an in-control experience was easy. All the researchers had to do was ask people to describe a moment when they felt in control, or to recall a personal story they were proud of. We should do the same online. Fast, reliable web servers and an understandable, consistent design all contribute to producing an in-control feeling.

Combine all these studies and a path toward a more civil online discourse emerges. Use community managers and software to weed out bad apples. Design features to show that people are watching. Make sure the visual design reinforces the interaction with color and shape. And do everything you can to make people feel in control.

There is no secret recipe to eliminate all bad community participation online, just as there’s no way to eliminate all bad behavior offline. But taken together, methods like these will counteract the bias toward bad behavior online. We don’t have to succumb to our basest tendencies just because we’re looking at computers, but it’s up to the creators of digital experiences to design for civility.

Discussion Questions:

When have you felt the most “in control” in your life? What made you feel that way?

Where do you like to participate online? How could it be improved?

Have you ever been one of the three bad apples?

Discussion Summary

In my original essay, “How Can Communication Technology Encourage Civility?” I went into some detail about why we’re so attuned to bad participation, and how we can set up systems to discourage it. As so often happens when discussing community management, I spent more time on the bad actors and not enough on the good ones.

It’s easy to see how this happens. The same “Bad Is Stronger Than Good” phenomenon that makes one negative comment seem louder than 100 positive ones in a comment thread leads community managers to spend more time and attention on bad actors as well. It’s an occupational hazard.

But rewarding good behavior is just as important as punishing bad behavior, and may be a more productive community management technique in the long run. These rewards can take many forms.

Positive behavior can be rewarded with special attention paid to members who are participating in exemplary ways. That special attention could take the form of a private thank you or public praise. In content-based communities, I encourage companies to create a featured area, where the best contributions are highlighted. These positive examples can be just as powerful as negative punishments.

Some companies are uncomfortable showing preference to some members over others, but without doing so, your community managers are left with only negative expressions of authority in their toolbox.

It’s also not just the purview of moderators to reward positive behavior – these encouragements can be doled out by community members directly. Look for places where positive reinforcements can be built into the structure of the site itself.

The seminal example of a peer-reputation system is eBay, where buyers can rate their sellers (and, originally, vice versa), but explicit ratings systems are easily manipulated. Instead, I encourage companies to monitor implicit feedback loops. Look for places where community members choose to interact with each other over time, or express a positive intent in the course of other interactions. For example, a member whose contributions are frequently favorited, bookmarked, or forwarded is probably doing something right.
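As one illustration of monitoring those implicit signals, here’s a hypothetical scoring sketch. Every field name and weight is an assumption for the sake of example, not a tested formula:

```typescript
// A sketch of combining implicit community signals with manual moderator
// input to decide whose contributions to feature. Every field name and
// weight here is an illustrative assumption, not a tested formula.

interface MemberSignals {
  favorites: number;             // times this member's posts were favorited
  bookmarks: number;             // times their posts were bookmarked
  forwards: number;              // times their posts were passed along
  repeatReplies: number;         // members who chose to interact with them again
  moderatorEndorsements: number; // manual thank-yous or featured picks
  flagsUpheld: number;           // moderator-confirmed complaints against them
}

function rewardScore(s: MemberSignals): number {
  // Implicit, hard-to-game signals carry the base weight...
  const implicit =
    s.favorites + s.bookmarks * 2 + s.forwards * 2 + s.repeatReplies * 3;
  // ...manual moderator judgment amplifies it...
  const manual = s.moderatorEndorsements * 5;
  // ...and upheld flags count against the total.
  return implicit + manual - s.flagsUpheld * 10;
}

// Members above some threshold become candidates for the featured area.
function featureCandidates(
  members: Map<string, MemberSignals>,
  threshold = 20
): string[] {
  return [...members.entries()]
    .filter(([, signals]) => rewardScore(signals) >= threshold)
    .map(([name]) => name);
}
```

The exact weights matter less than the shape: implicit signals are hard to game in bulk, manual endorsements add human judgment, and upheld flags pull the score back down.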

In the end, fully automated systems can always be gamed, and human moderators can be rapidly overwhelmed. As always, we need to create systems that combine manual human judgment with systematic automation, and use all that input to get a better picture of who to reward. The key point is to make sure that positive reinforcement is public and communicated to all members. Make sure that new members see the exemplar content before they’re invited to create their own, and the content they create will get better and better.

New Big Questions to continue the conversation:

  1. How have you personally been rewarded publicly in the past, online or off? Did the experience make you more or less interested in participating in that community? And how could the experience have been improved?
  2. Think of a time when a negative interaction really stung you. Were there positive interactions around the same time that you overlooked by focusing on the negative one? Now think about that interaction from the other side. Have you ever been the one interacting negatively? How could you have been dissuaded from interacting that way?

7 Responses

  1. Derek Powazek says:

    When have you felt the most “in control” in your life? What made you feel that way?

  2. Derek Powazek says:

    Where do you like to participate online? How could it be improved?

    • Adam_Rakunas says:

      Metafilter is still one of my favorite places for online discussion, and I think it’s because its moderation team does such excellent work. They keep threads from veering too far off-topic (or, at least, keep the derails that seem slightly relevant and very interesting going), clamp down on the personal attacks, and make it a civil place to hang out.

      On the flip side, I recently canceled my account with Patch, a local news site outfit, because their moderation was so poor. Granted, local politics, especially with issues like traffic and housing, are going to raise passions no matter who’s behind the controls, but the personal attacks, the semi-literate ramblings, and the pointless peanut-gallery commentary that did little to add to the value of the posts could have been nipped in the bud if the Patch editors had a clear mandate to keep the conversation productive. Part of that, I think, was an effort to appear objective, to make Patch a place for all kinds of points of view to have equal footing. I think that only leads to a race to the bottom as people spout their talking points, promote their agendas, and do little to talk to each other. I tried, man, and it just wasn’t worth it.

      And don’t get me started on the LA Times or YouTube. Whatever gold is hiding in the comments is buried under mountains of excrement.

  3. Derek Powazek says:

    I know I have.

  4. spacecalculus says:

    I prefer to think of our word as electric current as belonging to a higher source.

  5. twatkins says:

    What is missing from online communication are the visual and audio cues that are essential to convey emotion. Emoticons and character faces just do not work. What might work is to put a series of sliding bars along the top or bottom of an email or comment. Label these bars with various emotions (Love, Humor, Anger, Doubt, etc.), then allow the sender to slide the bar along each line to indicate their level of that emotion. These bars and their slide positions are then captured and put on the posted email or comment.

    To ensure the use of these emotion slides, a sender could select “Reply Requested” with or without a requirement for these slides. If the sender wants the reply to have these slides, then when the REPLY button is selected, the blank email, pre-addressed to the sender, pops up with these slides already entered and cannot be sent without them.

    You can easily imagine other ways to use these slides in routine personal as well as public communications.

  6. wondering14 says:

    General applause is given to the fact that many poor and lesser-educated people are now online. It is they who may lack the grammar and niceties of others online. Should we screen by grammar? By remarks that would be made in everyday discussion in various areas of society, rude as they may be? Ideas are clothed in varied ways.

    Moderator manipulation is chancy. Who is the judge sitting at the “do not post” button? What worthwhile posts are screened out not only because of grammar or rudeness, but for other subjective reasons that a site’s sponsor may not be aware of?

    Screwing down frankness or squashing ideas may be a high price for a desired level of gentility.

    How about cross-border considerations? An Italian’s way of commenting on America (level of perceived rudeness, grammar) may be quite different from that of an Argentinian or American.

    Have studies been made of parallel comment sites, one monitored and one unmonitored? How does the content differ?