You’re reading a story on the web and your eyes accidentally drift down to the comments. Within moments, lost in a sea of atrocious behavior and even worse grammar, your view of humanity clicks down another few notches.
It’s an experience so common it’s spawned a mantra: Don’t Read The Comments. But why should this be so? The web is also full of altruism, kindness, and generosity; creative, intimate communities form online all the time. What is it about online comments that makes us so awful?
This essay explores why we behave as we do online and suggests ways to increase civility, drawing on social science wherever possible. Because this is a new area of research, some of the studies I reference come from other fields, but their results are apt. My central argument is that good people can behave poorly online, but civil behavior can be encouraged by design.
Bad Is Louder Than Good
Bad experiences resonate louder and longer than good ones. That’s why you can read an inbox full of pleasant emails, but two hours later you’ll still be thinking about the single insulting one. In “Bad Is Stronger Than Good” (2001), Baumeister et al. conclude that “bad is stronger than good across a broad range of psychological phenomena.”
This relates to online behavior in two ways. First, it may be that comments online are not as bad as we think they are. We’re all subjectively experiencing online conversations, so we’re equally subject to the “bad is stronger than good” phenomenon. Of course, it’s still worthwhile to encourage the good. Second, the human propensity for paying attention to negative input at the expense of positive input shows what a tall order increasing civility online really is.
The Bad Apple
In his 2009 study published in Research in Organizational Behavior, Will Felps found that one bad participant can have a negative effect on an entire group. His research was about real-life, in-person meetings, but it’s entirely relevant to online community.
He identified three types of negative participants: the Jerk, the Slacker, and the Depressive Pessimist. The Jerk insults others, the Slacker withholds effort, and the Depressive Pessimist complains and says it’s all pointless. (Sounds like a typical comment thread to me.)
Felps conducted experiments where he put groups of volunteers into a room to work together on a task for a financial reward. Unbeknownst to the group, one of the members was an actor who embodied one of the three types of negative participants.
The conventional wisdom said that groups are more powerful than any one individual, so one bad apple should not have much of an impact. Felps found the opposite. Groups with the bad actor performed 30 to 40 percent worse than groups without. In addition, the bad actors caused team members to emulate their behavior. When the actor was a slacker, others would slack. In short, our behavior is like a virus. The behavior of one participant is replicated.
What this means online is that moderators should be in place to guard against negative participation, especially early in the conversation. I’ve found that the first comment effectively sets the tone for all that come after, so I recommend holding all comments in a queue until there’s a good standout comment, and then ensuring that comment appears first. Moderators should be vigilant about looking out for bad apples, recognize the destructiveness of their participation, and treat it accordingly.
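The first-comment queue described above can be sketched in code. This is a minimal model, not a real moderation system; the class and method names (`CommentThread`, `seed`) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    body: str

@dataclass
class CommentThread:
    """Holds every comment in a pending queue until a moderator
    picks a strong first comment to set the tone of the thread."""
    pending: list = field(default_factory=list)
    published: list = field(default_factory=list)
    seeded: bool = False  # True once the first comment is chosen

    def submit(self, comment: Comment) -> None:
        # Before the thread is seeded, everything waits in the queue.
        # (A real system would still run later comments past moderation.)
        if self.seeded:
            self.published.append(comment)
        else:
            self.pending.append(comment)

    def seed(self, comment: Comment) -> None:
        """Moderator promotes the standout comment to appear first,
        then releases the rest of the queue in arrival order."""
        self.published.append(comment)
        if comment in self.pending:
            self.pending.remove(comment)
        self.published.extend(self.pending)
        self.pending.clear()
        self.seeded = True
```

The point of the sketch is the ordering guarantee: nothing is visible until the tone-setting comment is chosen, and that comment is always first.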
The Human Gaze

One of the unique things about online conversation is that many can participate while each remains relatively unseen. We can be together virtually and alone in reality. Online conversation lacks the human gaze.
Looking into another person’s eyes has a profound effect on the speaker. The feeling of being seen deeply influences how we communicate, and I believe its absence is one of the contributors to the incivility we see online. In a study published in Biology Letters in 2006, Melissa Bateson et al. showed that cues of being watched can enhance cooperation.
Imagine a refrigerator in a common room in a workplace. Inside are unsecured beverages and an “honesty box,” where people who take drinks are supposed to put in money. Contributions are anonymous and voluntary, but expected. Now imagine an experiment where the honesty box had one of two photographs on it. One group saw a photo of flowers, the other saw a photo of a pair of human eyes. After 10 weeks, the results were calculated. The people who saw a pair of human eyes paid 2.76 times more on average.
What this means for online community is that good behavior increases when people feel seen (or, put the opposite way, bad behavior increases when people feel invisible). The feeling of being watched is so powerful that just showing a photograph of eyes is enough to more than double positive participation.
I’m not saying that we should put images of eyes beside every comment form on the web (though I’d love to see a site try it). Instead, we should design these comment experiences to enhance the feeling of being seen by the community. Imagine a row of avatars, photos of members reading the same story right now, all looking at you as you type, right beside the comment box.
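One way such a presence feature might work is a simple heartbeat model: each member’s page pings the server while they read, and anyone seen recently is shown beside the comment box. This is a sketch under that assumption; the class name and the 60-second window are illustrative, not from any real system:

```python
import time

class PresenceTracker:
    """Tracks which members are viewing a story right now, so their
    avatars can be displayed beside the comment form."""

    def __init__(self, ttl: float = 60.0, clock=time.monotonic):
        self.ttl = ttl          # seconds before a reader counts as gone
        self.clock = clock      # injectable for testing
        self.last_seen = {}     # member -> timestamp of last heartbeat

    def heartbeat(self, member: str) -> None:
        """Called each time a member's page pings the server."""
        self.last_seen[member] = self.clock()

    def current_readers(self) -> list:
        """Members seen within the last `ttl` seconds, sorted by name."""
        now = self.clock()
        return sorted(m for m, t in self.last_seen.items()
                      if now - t <= self.ttl)
```

The design choice worth noting: presence is inferred from recency rather than explicit “I’m here” state, so readers who close the tab simply age out.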
Design Sets the Tone

It’s easy (and obvious) to focus on moderation when working on community issues, but how can we adjust behavior before it ever reaches the moderator? The visual design of conversational spaces online can have a huge impact on the tone of the conversation.
Rounded corners can go a long way toward making technology feel more approachable. That’s why the icons on your iPhone and the edges of Apple laptops are rounded. Donald Norman called these “affordances” – the rounded corners look like something that would feel good in the hand, even if we’ll never hold them.
My favorite study in this area is Ravi Mehta’s investigation published in the journal Science in 2009. In the experiment, participants were given the same tasks to complete on a computer. The only difference was that one group had a red background and the other had a blue background.
The study showed that the red group did better at tasks that required attention to detail, while the blue group did better at tasks that required creativity and emotion. The reverse was also true – the red group did worse at creative tasks and the blue group did worse at attention to detail.
I love this study because it shows that there’s no one right way (or one right color) for every task. If your community task requires attention to detail, using red is a good choice. Indeed, many error messages are red for this reason. If your task requires creativity and emotion, blue is the better choice. Interestingly, blue was the default link color on the web and often still is.
In my experience, the visual language of a website can have a huge impact on the tone of the conversation it produces. This just scratches the surface of how color affects participation, but it’s a good start. The core lesson is to consider the kind of interaction you seek and make sure the visual design reinforces that experience. When the interaction demands one thing (say, creativity) and the design encourages the opposite (by using red, for example), people can have a negative reaction without being aware of the connection.
The Pattern-Seeking Brain

Our brains take in huge quantities of sensory data and create a coherent narrative from it. Think of it like a movie: the frames flash by so fast that we interpret them as fluid motion. Interestingly, when the amount of input decreases, our brains do not respond with decreasing confidence. They do the opposite – they work harder to make sense of the limited input.
This makes sense on an evolutionary level. Evolution favors the ones that don’t get eaten, so seeing the grass move and assuming it’s a lion is a good thing. Our brains have developed not only to detect patterns, but to put our danger response on a hair trigger. It’s built into our DNA.
Online, where we have much less social information (no physical gestures, no direct gaze), our brains work much harder to intuit meaning. As a result, we see patterns where there are none, and danger where there isn’t any. What this means for online communication is that we’re predisposed to make assumptions based on limited information, and to respond in a “fight or flight” manner.
Jennifer Whitson did a fascinating set of experiments, published in Science in 2008, on patternicity and feelings of control. One experiment involved showing volunteers pictures of random static and asking them if they saw an image in it.
Some of the volunteers were put into an “out of control” state: they were quizzed about subjects they couldn’t have known anything about, or asked to recall a time in their lives when they felt out of control. The other volunteers were put into an “in control” state: their knowledge was rewarded, or they were asked to recall a moment when they were in control.
The people in the “out of control” state were more likely to engage in patternicity – to see patterns where there were none. This is relevant because we frequently feel out of control when we’re online – applications freeze, networks lag, computers crash. Is it any wonder, then, that we perceive personal slights where there are none?
Conversely, people induced into an “in control” state were less likely to engage in patternicity. Feeling in control allowed them to see that there was no image, only static.
This is an important step in understanding how design can encourage civility online. An experience that creates an in-control feeling produces a user who is far more likely to stay calm and far less likely to see conspiracies or insults.
In the study, inducing an in-control experience was easy. All researchers had to do was ask people to describe a moment when they felt in control, or to recall a personal story they were proud of. We should do the same online. Ensuring that web servers are fast and reliable and that the design is understandable and consistent all contribute to producing an in-control feeling.
Combine all these studies and a path toward a more civil online discourse emerges. Use community managers and software to weed out bad apples. Design features to show that people are watching. Make sure the visual design reinforces the interaction with color and shape. And do everything you can to make people feel in control.
There is no secret recipe to eliminate all bad community participation online, just as there’s no way to eliminate all bad behavior offline. But taken together, methods like these will counteract the bias toward bad behavior online. We don’t have to succumb to our basest tendencies just because we’re looking at computers, but it’s up to the creators of digital experiences to design for civility.
Rewarding the Good

In my original essay, “How Can Communication Technology Encourage Civility?” I went into some detail about why we’re so attuned to bad participation, and how we can set up systems to discourage it. As so often happens when discussing community management, I spent more time on the bad actors and not enough on the good ones.
It’s easy to see how this happens. The same “Bad Is Stronger Than Good” phenomenon that makes one negative comment seem louder than 100 positive ones in a comment thread leads community managers to spend more time and attention on bad actors as well. It’s an occupational hazard.
But rewarding good behavior is just as important as punishing bad behavior, and may be a more productive community management technique in the long run. These rewards can take many forms.
Positive behavior can be rewarded with special attention paid to members who are participating in exemplary ways. That special attention could take the form of a private thank you or public praise. In content-based communities, I encourage companies to create a featured area, where the best contributions are highlighted. These positive examples can be just as powerful as negative punishments.
Some companies are uncomfortable showing preference to some members over others, but without doing so, your community managers are left with only negative expressions of authority in their toolbox.
It’s also not just the purview of moderators to reward positive behavior – these encouragements can be doled out by community members directly. Look for places where positive reinforcements can be built into the structure of the site itself.
The seminal example of a peer-reputation system is eBay, where buyers can rate their sellers (and, originally, vice versa), but explicit ratings systems are easily manipulated. Instead, I encourage companies to monitor implicit feedback loops. Look for places where community members choose to interact with each other over time, or express a positive intent in the course of other interactions. For example, a member whose contributions are frequently favorited, bookmarked, or forwarded is probably doing something right.
In the end, fully automated systems can always be gamed, and purely human moderation can be rapidly overwhelmed. As always, we need systems that combine human judgment with automation, merging all that input to get a better picture of whom to reward. The key is to make sure that positive reinforcement is public and communicated to all members. Ensure that new members see the exemplar content before they’re invited to create their own, and the content they create will get better and better.
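An implicit-reputation loop of this kind can be sketched briefly. The signal names and weights below are purely illustrative (a moderator’s explicit endorsement counting more than a favorite is an assumption, not a rule from the text), and the automated score only surfaces candidates for a community manager’s final call:

```python
from collections import Counter

# Hypothetical weights: implicit signals count, an explicit
# moderator endorsement counts more. All numbers are illustrative.
WEIGHTS = {
    "favorite": 1.0,
    "bookmark": 1.5,
    "forward": 2.0,
    "moderator_feature": 5.0,
}

def reputation_scores(events) -> Counter:
    """events is an iterable of (member, signal) pairs logged as
    members interact. Returns member -> weighted score."""
    scores = Counter()
    for member, signal in events:
        scores[member] += WEIGHTS.get(signal, 0.0)
    return scores

def members_to_feature(events, top_n: int = 3) -> list:
    """Automation surfaces the top candidates; a community manager
    still reviews the list before anyone is publicly featured."""
    return [m for m, _ in reputation_scores(events).most_common(top_n)]
```

Because the score is built from actions members take anyway, it is harder to game than an explicit rating box, while the human review step catches whatever gaming remains.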
New Big Questions to continue the conversation:
- How have you personally been rewarded publicly in the past, online or off? Did the experience make you more or less interested in participating in that community? And how could the experience have been improved?
- Think of a time when a negative interaction really stung you. Were there positive interactions around the same time that you overlooked by focusing on the negative one? Now think about that interaction from the other side. Have you ever been the one interacting negatively? How could you have been dissuaded from interacting that way?