Study, Build, Repeat: Using Online Communities as a Research Platform



Research on online communities raises a number of challenges. It is difficult to get access to usage data, to users (to interview), and to the system itself to introduce new features (e.g., participation incentive mechanisms). One solution is for researchers to create an online community themselves. Although this provides more control and access, it also requires additional resources (e.g., for staff to maintain the community) and consideration of the needs of the user community after the research is completed.

Introduction: Using Online Communities as a Research Platform

We do research on social computing and online communities. We begin from the premise that a deep understanding of online social interaction and adequate evaluation of new social interaction algorithms and interfaces requires access to real online communities. But what does “access” consist of? By reflecting on our own and others’ work, we can identify four levels of access, each of which enables additional research methods:
  1. Access to usage data: This enables behavioral analysis, modeling, simulation, and evaluation of algorithms.

  2. Access to users: This enables random assignment experiments, surveys, and interviews.

  3. Access to APIs/plug-ins: This enables the empirical evaluation of new social interaction algorithms and user interfaces, as long as they can be implemented within the available APIs; systematic methods of subject recruitment may or may not be possible.

  4. Access to software infrastructure: This allows for the introduction of arbitrary new features, full logging of behavioral data, systematic recruitment of subjects, and random assignment experiments.


In general, as one ascends the levels, more powerful methods can be deployed to answer research questions in more authentic contexts. However, the costs and risks also increase: assembling a team with the diverse skills required to design, build, and maintain online community software is expensive, and the effort may not pay off; if a community system does not attract users, it does not enable interesting research.

In the rest of this chapter, we draw heavily on our personal experience to elaborate on and exemplify this approach.

History and Evolution

We have been developing our approach since the mid-1990s. We describe a few key theories, projects, and technological developments that were important intellectual and practical influences on our approach.

Artifacts as psychological theories. We found Carroll and colleagues’ (Carroll & Campbell, 1989; Carroll & Kellogg, 1989) notion of “artifacts as psychological theories” conceptually inspiring, with its argument that designed artifacts embody claims about user behavior, that it is instructive to make these claims explicit, and that evaluating the use of an artifact also amounts to evaluating these behavioral claims. As we elaborate below, the features we include in our systems often have associated psychological claims, and we sometimes design features explicitly guided by psychological theory.

Project Athena and Andrew were 1980s projects to deploy networked workstations throughout the MIT and CMU campuses (respectively) to improve the quality of education and support research. Creating these networks required many design choices on issues that were then at the frontier of distributed computing and personal computing, such as reliability, security, scalability, interoperability, distributed file systems, name services, window managers, and user interfaces. By deploying these systems for real use, the designers were able to evaluate how well their design choices fared in practice. Given our computer science background, we found these examples of large-scale system building and deployment done by researchers (at least in part) for research purposes inspiring. However, since we have a quite different research emphasis—focusing on different research questions and using different methods—our work looks quite different. Notably, we focus on social interaction among users, we use psychological theory to guide our designs, and we use controlled field experiments to evaluate our designs.

The Web: do it yourself. For the authors, the advent of the World Wide Web was a direct gateway to developing our research approach. If you had an idea for a new interactive system, the Web was an environment where you could implement the idea and reach a potentially unlimited audience. The problem of information overload was attracting a lot of attention in the mid-1990s, and Riedl and Konstan (already at the University of Minnesota) and Terveen (then at AT&T Labs, now also at the University of Minnesota) explored their ideas in the emerging area of recommender systems as a means to address this problem. Riedl, Paul Resnick, and colleagues developed GroupLens, a system for collaborative filtering of Usenet news (Konstan et al., 1997; Resnick, Iacovou, Suchak, Bergstrom, & Riedl, 1994), and Konstan and Riedl followed this up by creating the MovieLens (Herlocker, Konstan, Borchers, & Riedl, 1999) movie recommendation site (more on MovieLens below). Terveen, Will Hill, and colleagues created PHOAKS (Hill & Terveen, 1996; Terveen, Hill, Amento, McDonald, & Creter, 1997), which used data mining to extract recommended web pages from Usenet newsgroups. The PHOAKS Web site contained “Top 10” lists of recommended web pages mined from thousands of newsgroups and attracted thousands of daily visitors during the late 1990s. These early efforts gave us our first taste of online community-centered research. We had built Web sites that attracted users because of the utility they found there, not because we recruited them to evaluate our ideas. Yet this authentic usage enabled us to evaluate the algorithms and user interfaces we created as part of our research program.

We next describe our approach in more detail. We first discuss the type of research questions and methods it enables. We then give an in-depth portrait of the approach by describing important online community sites that we use as vehicles for our research.

What Questions Is This Approach Suitable for Answering?

Since we are describing not a single research method, but rather a general approach to doing research, there is a very broad variety of questions that can be answered. Therefore, it is more appropriate to consider the methods that fit best with the approach, the skills required to follow the approach, and the benefits, challenges, and risks of doing so. We organize this discussion around the four levels of access to online communities we introduced in the “Introduction.” Note that each access level enables all the methods listed at all “lower” levels as well as the methods listed at the level itself (Table 1).
Table 1

Levels of access to online communities, with enabled methods

Access to                    Enabled methods
1. Usage data                Behavioral analysis, including statistical analysis and simulation; longitudinal data enables analysis of behavior change over time; development and testing of algorithms
2. Users                     Random assignment experiments, surveys, and interviews
3. APIs/plug-ins             Empirical evaluation of new social interaction algorithms, interfaces, and psychological theories, as long as they can be implemented within a published API
4. Software infrastructure   Empirical evaluation of arbitrary new social interaction algorithms, interfaces, and psychological theories; novel data logging; random assignment experiments

The primary benefit of this approach is that it enables good science. At all levels, it enables testing ideas and hypotheses with authentic data such as actual behavioral data and user responses based on their participation in an online community. Even better, at the fourth (and possibly third) level, we can perform field experiments. Field experiments are studies done in a natural setting where an intervention is made, e.g., a new system feature is introduced, and its effects are studied. According to McGrath’s (1984) taxonomy of research strategies for studying groups, field experiments maximize realism of the setting or the context while still affording some experimental control. A further benefit of this approach is that once you have put in the effort to develop a system (and if you are able to attract a user community), there is both the possibility and a natural inclination to do a sequence of studies that build upon earlier results. This too makes for good science.

This approach also creates the opportunity for productive collaborations: if you analyze data from an existing community, the owners of that community will be interested in the results, and if you build a system that attracts a real user community, organizations with an interest in the community’s topic will be interested in pursuing collaborative projects. Our experience with communities such as Wikipedia and Cyclopath (details below) illustrates this point.

However, there also are significant challenges and risks to the approach we advocate, including the following:
  • A research team needs expertise in a wide variety of disciplines and skills, including social science theory and methods, user interface design, algorithm development, and software engineering. This requires either individuals who have the time and capability to master these skills or a larger team of interdisciplinary specialists. Either approach raises challenges in an academic setting; for example, GroupLens students typically take courses ranging from highly technical computer science topics (e.g., data mining, machine learning) to advanced statistical methods to design. This can extend the time a student spends taking classes, and not every student is capable of mastering such diverse skills.

  • The system development and maintenance resources needed can be considerable. It is not enough just to build software; the software must be reliable and robust to support the needs of a user community. This requires at least some adherence to production software engineering practices, e.g., the use of version control software, code reviews, and project management and scheduling. Many researchers have neither training nor skills with these tools and practices, and in some cases, such resources simply may not be available. If they are available, they represent a significant investment. For example, our group at the University of Minnesota has supported a full-time project software engineer for over 10 years, as well as a dedicated Cyclopath software engineer for 3 years, with cumulative costs of over $1 million.

  • Research goals and the needs of the system and community members must be balanced. There are many potential trade-offs here.
    • Sometimes features must be introduced to keep a system attractive to users, even if there is no direct research benefit.

    • Sometimes a system must be redesigned and re-implemented simply to keep up with changing expectations for Web applications. For example, MovieLens had to be redesigned completely about 10 years ago to include Web 2.0 interactive features; however, given how long it has been since its last update, the MovieLens experience again is dated, and we are discussing whether the effort required to bring it up to date is worth the significant design and development costs this would entail.

    • Significant time may have to be spent working with collaborative partners and user groups; for example, our team members have spent considerable time with partners from the Wikimedia Foundation, the Everything2 community, and various Minnesota transportation agencies to define problems of mutual interest and define ways to address these problems that can produce both research results and practical benefits and that follow the ethical standards of all parties involved.

    • If a site does attract a user community, it becomes difficult (and perhaps unethical) for researchers to abandon it if their interests change or their resources become depleted.

    • Since in many cases graduate student researchers do a large part of the development work, research productivity measured in papers produced is almost necessarily lower. However, we believe that there is a corresponding advantage: the papers that are produced can answer questions in ways that otherwise would be impossible. The detailed discussion of our research sites and studies below is intended to support this claim.

  • Finally, the major risk is that if the system you create fails to attract or retain sufficient users, all your effort may be wasted. While failure is in principle a good teacher, many of these types of failures are rather boring: you did not pick a problem that people really cared about, your system was too slow and did not offer sufficient basic features, etc.

We next use work we have done on a variety of online community platforms to describe our approach in detail.

How to Follow This Approach/What Constitutes Good Work


Facebook

We began studying Facebook in 2005, shortly after the site was introduced to the majority of universities (Lampe, Ellison, & Steinfield, 2006). Early on, we had permission from Facebook to “scrape” data from the site using automated scripts, enabling us to conduct a study that compared behaviors captured in user profiles (like listing friends and interests) with site perceptions collected through user surveys (Lampe, Ellison, & Steinfield, 2007). Other work we did was based on surveys of college-aged users in the university system and was focused on the social capital outcomes of Facebook use in that population. We found that social capital, or the extent to which people perceived they had access to novel resources from the people in their networks, was associated with higher levels of Facebook use. That finding was confirmed in a study that looked at change in this population over time (Steinfield, Ellison, & Lampe, 2008) and has been confirmed by other researchers (Burke, Kraut, & Marlow, 2011; Burke, Marlow, & Lento, 2010; Valenzuela, Park, & Kee, 2009). This research has consistently found that people who use Facebook perceive themselves as having more access to resources from their social networks, particularly benefits from weaker ties that have often been associated with bridging social capital (Burt, 1992). This form of social capital has often been associated with novel information and expanded worldviews. Put more directly, this research has shown that people are using sites like Facebook to nurture their social networks and access the resources from them, using the features of the site to more efficiently manage large, distributed networks.

At the same time as we have examined the role of Facebook in people’s daily lives, we have continued to explore the relationships between the psychosocial characteristics of users, how they use Facebook, and the outcomes of that use. For example, we used survey research to study people’s different motivations for using Facebook and how those people used different tools to satisfy those motivations (Smock, Ellison, Lampe, & Wohn, 2011). We found that people motivated for social interaction (as opposed to entertainment or self-presentation) were more likely to use direct messaging features. In addition, in following up on work about the relationship between Facebook and bridging social capital, we found that it was not the total number of friends in a person’s Facebook network that was associated with social capital but rather the number of “actual” friends they felt were part of their articulated Facebook network (Ellison, Steinfield, & Lampe, 2011). This work has also been expanded to show that it is not simply the existence of connections that matter, but how users “groom” those connections (Ellison, Vitak, Gray, & Lampe, 2011) through actions like responding to comments, “Liking” posts, and sending birthday greetings.

The overall pattern in these studies of Facebook use highlights the complex interplay between personal characteristics of users, the types of tasks they are bringing to the system, and the behaviors they engage in as they interact with their networks.

In terms of our hierarchy of access levels, our early work was at Level 1, as we did have access to actual Facebook usage data. However, our later work instead relied on surveys of Facebook users. It is ambiguous whether to consider this work Level 2: we could not recruit users from within Facebook itself, but only through external means such as posting messages to university e-mail lists. This puts limits on research; it is impossible to accurately represent the population of Facebook users without being able to access a random sample of those users. After we began our work, Facebook created a public API that allowed anyone to create new add-on Facebook applications; for example, popular games like FarmVille and Words With Friends were built on this platform. Further, researchers have used the Facebook API to build Facebook apps to explore ideas such as patterns of collaboration around shared video watching (Weisz, 2010) and commitment in online groups (Dabbish, Farzan, Kraut, & Postmes, 2012). However, Facebook apps do not change the core Facebook experience, nor can they form the basis of true random assignment experiments. Most work that has used interviews, surveys, or experiments with Facebook users has relied on some other sampling frame, often drawn from registrar lists or convenience samples of university students.

Of course, researchers at companies such as Facebook and Google (including student interns) typically have access to their products’ software infrastructure, so they are not subject to these limits. For example, Bakshy, Rosenn, Marlow, and Adamic (2012), working for the Facebook Data Science team, conducted an experiment to investigate how information embedded as links to news articles was diffused through the network of Facebook users. They could experimentally manipulate what users saw in their Newsfeed and use system logs to measure network tie differences between conditions. Google researchers studied how Google+ users constructed user groups on the site using a combination of server-level data, surveys, and interviews (Kairam, Brzozowski, Huffaker, & Chi, 2012). Studies by researchers at these companies can offer interesting results but are necessarily limited in reproducibility due to a variety of legal and ethical hurdles that make sharing data between industry and academia complicated. Recently, Facebook has been establishing processes to enable partnerships with researchers based in academic settings, negotiating the legal and technical needs of these collaborations. These research partnerships could help provide Level 3 access to this important source of data.


Wikipedia

We began doing research on Wikipedia in 2006, leading to a paper that studied two research questions: which types of editors produce the value of Wikipedia articles, and what is the impact of damage on Wikipedia articles (Priedhorsky et al., 2007). This work was an early example of the now common research genre of “download and analyze the Wikipedia dump” (Level 1 access). However, there was one important addition: we also obtained data (including some provided by the Wikimedia Foundation) that let us estimate article views. View data gave us a way to formalize the notions of article value and damage. Intuitively, it is more valuable to contribute content to articles that are viewed more, and damage to articles that are viewed more is more harmful. With our formal definitions in hand, we found that a very small minority of active editors contributed a large proportion of the value of Wikipedia articles, that their domination was increasing over time, and that the probability of an article being viewed while damaged was small but increasing (over the time period we analyzed).
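The view-weighted formalization can be sketched in a few lines. The data structures and function names below are hypothetical simplifications for illustration, not the actual dump schema or measures used in Priedhorsky et al. (2007):

```python
# Illustrative sketch of view-weighted value and damage measures.
# The input shapes are invented; real analyses work over the full
# revision history of the Wikipedia dump plus view-count estimates.

def damaged_view_probability(revisions, total_views):
    """Estimate P(a page view sees a damaged revision).

    revisions: list of (views_while_current, is_damaged) pairs, where
    views_while_current approximates how many times the article was
    viewed while that revision was the live version.
    """
    damaged_views = sum(v for v, damaged in revisions if damaged)
    return damaged_views / total_views if total_views else 0.0

def editor_value(word_views_by_editor):
    """View-weighted value per editor: words contributed, weighted by
    how often each word was viewed (a 'persistent word view'-style
    measure)."""
    return {editor: sum(views) for editor, views in word_views_by_editor.items()}

# A damaged revision that was live for 50 of 1,000 total views:
revisions = [(900, False), (50, True), (50, False)]
print(damaged_view_probability(revisions, 1000))  # 0.05
```

Under this framing, an editor who adds text to a heavily viewed article accrues more value than one who adds the same text to an obscure article, which is what lets a small minority of editors account for most of the encyclopedia's delivered value.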

We have continued to do research involving analysis of Wikipedia data, studying topics such as the following:

How editors change their behavior as they gain experience (Panciera, Halfaker, & Terveen, 2009), how diversity of experience and interests affects editing success and member retention in WikiProjects (Chen, Ren, & Riedl, 2010), the effect of reverting edits on the quality and quantity of work and on editor retention (Halfaker, Kittur, & Riedl, 2011), and the gender gap in editing and its effects on Wikipedia content (Lam et al., 2011).

As with Facebook, there is a sense in which anyone does have access to Wikipedia users (where “users” here means editors). Various pages serve as public forums, and communication can be directed to individual users by editing their “user talk” pages; thus, in principle, a researcher could recruit subjects simply by inserting invitations to participate on their user talk pages. However, these techniques do not enable experimental control: crucially, there is no accepted way to randomly assign editors to experimental groups. Moreover, Wikipedia editors long have had a strong resistance to being treated as “experimental subjects.” We learned about these problems through bitter experience. Our first attempt to run an experiment with Wikipedia editors was with SuggestBot, our article recommendation tool (Cosley, Frankowski, Terveen, & Riedl, 2007). In the initial version of our experiment, SuggestBot automatically inserted recommendations of articles to edit on randomly selected editors’ user talk pages. However, this went against Wikipedia norms that participation in Wikipedia is “opt in,” and the reaction from editors was very negative. We therefore changed our model so that editors had to explicitly request recommendations from SuggestBot. In a subsequent project, we attempted to recruit editors for interviews but once again fell afoul of Wikipedia norms. This time one of our team members was accused of violating Wikipedia policies, there was a proposal to ban this person from Wikipedia, and we had to abandon the study. The root cause of these reactions was that Wikipedia editors were extremely averse to being treated as “guinea pigs,” and more generally, they objected to people using Wikipedia for any purpose other than building an encyclopedia. Thus, at this point in its development, Wikipedia did not support Level 2 access as we define it.

External researchers cannot introduce new features directly into Wikipedia (Level 4 access). Thus, we implemented SuggestBot as a purely external service running on our own servers, which Wikipedia editors could opt in to; if they do, it computes recommended articles for these users to edit and inserts them on their talk pages. However, note that this does not change the Wikipedia user experience per se. Subsequently, Wikipedia did provide a mechanism for developers to implement changes to the Wikipedia user experience: user scripts; this enables Level 3 access. Users must download and install these scripts themselves if they want the modified user experience. We used this mechanism to implement NICE, which embodies ideas about how reverts (undoing edits) can be made in a way that is less likely to de-motivate and drive away reverted editors, especially newcomers (Halfaker, Song, Stuart, Kittur, & Riedl, 2011b). NICE is implemented as a Wikipedia user script, which anyone can download and install to change their Wikipedia editing experience. While this approach does let us test new software features for Wikipedia editors “in the wild,” it still has a number of undesirable features, including selection bias (as noted above) and a software distribution problem: if we want to make changes, users have to explicitly download a new version, or else we will have multiple, perhaps inconsistent, versions running.

To address these specific problems, and more generally to enable responsible scientific experiments to be done in Wikipedia, members of our team (Riedl, along with one of our current graduate students, Aaron Halfaker) joined and became active participants in the Wikimedia Foundation Research Committee. The goal of this committee is “to help organize policies, practices and priorities around Wikimedia-related research.” In particular, they are in the process of defining acceptable protocols for recruiting subjects; more generally, they will review planned research projects to make sure that they are compatible with Wikipedia’s goals and community norms.

Transition: building our own communities. Now that we have seen both the power and the limits of doing research on third-party sites, we turn to sites we currently maintain to illustrate the additional types of research that Level 4 access enables. At Minnesota, we have created a number of online communities to serve as research sites for our studies in social computing. Some have failed to attract a lasting community (e.g., CHIplace; Kapoor, Konstan, & Terveen, 2005), and some have become useful for their intended user group but have not led to significant amounts of research. Two that have succeeded are MovieLens and Cyclopath. At Michigan State (and now Michigan), Lampe took responsibility for the already existing site Everything2, a user-generated encyclopedia formed in 1999, 2 years before Wikipedia. We discuss these three sites by examining a number of studies we have conducted with each.


MovieLens

Origin. In the mid-1990s, DEC Research ran a movie recommendation Web site called EachMovie. While the site was popular and well received, in 1997 DEC Research decided to take it down and solicited researchers who might be interested in the dataset or the site. The GroupLens Research group volunteered to take ownership of EachMovie; while legal issues prevented a handover of the site itself, DEC did make an anonymized dataset available for download. With this dataset as a basis, MovieLens was born.

Early algorithmic research. The initial use we made of MovieLens as a research platform was to explore the performance of different recommender system algorithms (Herlocker et al., 1999; Sarwar, Karypis, Konstan, & Riedl, 2000). This work is interestingly different from our later work in several key respects:
  • The focus was on algorithms rather than interaction techniques.

  • Social science theory was not used to inform the research.

  • We primarily used MovieLens usage data (Level 1), and the experiments we did were not field experiments (deployed in the site) but rather separate “online laboratory experiments” conducted with MovieLens users (who volunteered, and whose profiles were often not even used in the experiment). In this case, MovieLens was a source of research data and subjects but not yet a living laboratory.
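To make the algorithmic starting point concrete, here is a minimal sketch of user-based collaborative filtering in the GroupLens style: a user's unknown rating is predicted as a Pearson-similarity-weighted average of other users' mean-offset ratings. The rating data and implementation details are illustrative, not the production MovieLens code:

```python
import math

def pearson(u, v, ratings):
    """Pearson correlation between users u and v over co-rated items."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    mu = sum(ratings[u][i] for i in common) / len(common)
    mv = sum(ratings[v][i] for i in common) / len(common)
    num = sum((ratings[u][i] - mu) * (ratings[v][i] - mv) for i in common)
    den = (math.sqrt(sum((ratings[u][i] - mu) ** 2 for i in common)) *
           math.sqrt(sum((ratings[v][i] - mv) ** 2 for i in common)))
    return num / den if den else 0.0

def predict(user, item, ratings):
    """Predict user's rating for item from similarity-weighted,
    mean-offset ratings of the other users who rated it."""
    mean_u = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other in ratings:
        if other == user or item not in ratings[other]:
            continue
        w = pearson(user, other, ratings)
        mean_o = sum(ratings[other].values()) / len(ratings[other])
        num += w * (ratings[other][item] - mean_o)
        den += abs(w)
    return mean_u + num / den if den else mean_u

# Toy rating matrix: users -> {movie: rating on a 1-5 scale}
ratings = {
    "alice": {"m1": 5, "m2": 3, "m3": 4},
    "bob":   {"m1": 4, "m2": 2, "m3": 3, "m4": 4},
    "carol": {"m1": 2, "m2": 5, "m4": 1},
}
print(round(predict("alice", "m4", ratings), 2))
```

Note that with negatively correlated neighbors the raw prediction can fall outside the rating scale; production systems typically clamp it, one of many details that separate a sketch like this from a deployed recommender.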

Turning toward people, looking to theory. However, we soon began to move up the access level hierarchy. We did this because we wanted to evaluate our algorithms in actual usage and because we expanded our interests to include user interfaces for recommender systems. Three studies used a combination of field experiments and surveys to evaluate: algorithms and interfaces to explain why an item was recommended (Herlocker et al., 1999); algorithms to select initial sets of movies for users to rate (Rashid et al., 2002); and user interfaces to present initial sets of movies for users to rate (McNee, Lam, Konstan, & Riedl, 2003).

These three studies aimed to solve recommender system problems: which items to present to users and how to help users evaluate recommended items. However, at about the same time, we began to incorporate another perspective into our work: the use of social science theory to guide the design of our experiments and software features.

A notable early example of this work is presented in “Is Seeing Believing?” (Cosley, Lam, Albert, Konstan, & Riedl, 2003). This work used the psychological literature on conformity (Asch, 1951) to frame research questions concerning user rating behavior and rating displays in recommender systems. Most generally, there was a concern that the standard recommender system practice of displaying, for a movie that the user had not yet rated, the rating that the system predicted the user would give the movie could bias the user to rate according to the prediction. Specific research questions that were studied included the following:
  • Are users consistent in their ratings of items?

  • Do different rating scales affect user ratings?

  • If the system displays deliberately inaccurate predicted ratings, will users’ actual ratings follow these inaccurate predictions?

  • Will users notice when predicted ratings are manipulated?

The study had both practical results and theoretically interesting implications. First, we modified the MovieLens rating scale based on the findings. Second, while users were influenced by the predicted ratings they were shown, they seemed to sense when these ratings were inaccurate and to become less satisfied with the system.

From a methodological standpoint, it is worth noting that while an experiment was done with MovieLens users, it was explicitly presented to users as an experiment rather than being embedded in authentic ongoing use.

Theory-guided design. Our interests continued to evolve to include (in addition to algorithms and user interfaces) social community aspects of MovieLens, such as how to foster explicit interaction between users and how to motivate users to participate in the community. Thus, the GroupLens team began collaborating with HCI researchers trained in the social sciences, notably Robert Kraut and Sara Kiesler of CMU and Paul Resnick and Yan Chen from the University of Michigan. Through these collaborations social science theory came to play a central role in our research. We used theory to guide our designs, with the goal of creating new features that would achieve a desired effect, such as attracting more ratings for movies that had not received many. An additional benefit was that this enabled us to test theories that had been developed for face-to-face interaction in the new context of online interaction to see how they generalized. We and our collaborators used theories including the collective effort model (Cosley, Frankowski, Kiesler, Terveen, & Riedl, 2005; Cosley, Frankowski, Terveen, & Riedl, 2006; Karau & Williams, 1993; Ling et al., 2005), goal setting (Ling et al., 2005; Locke & Latham, 2002), social comparison theory (Chen, Ren, & Riedl, 2010; Suls, Martin, & Wheeler, 2002), and common identity and common bond theories of group attachment (Prentice, Miller, & Lightdale, 1994; Ren, Kraut, & Kiesler, 2007; Ren et al., 2012). One productive line of work within this approach is intelligent task routing, which extends recommender algorithms to suggest tasks for users in open content systems. This is very useful, as open content systems often suffer from under-contribution. We began this work in MovieLens (Cosley et al., 2006) but subsequently applied it to Wikipedia (Cosley et al., 2007) and Cyclopath (Priedhorsky, Masli, & Terveen, 2010).
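The core idea of intelligent task routing can be sketched as a scoring problem: rank candidate tasks for a user by blending the system's need for the task (e.g., a movie with few ratings) with the user's predicted interest. The blend weights, data, and function below are invented for illustration; they are not the actual routing strategies compared in Cosley et al. (2006):

```python
# Hypothetical intelligent-task-routing scorer: items needing work
# (few ratings) score high on "need"; a recommender supplies the
# user's predicted "interest"; alpha trades the two off.

def route_tasks(user_interest, item_rating_counts, top_n=2, alpha=0.5):
    """user_interest: item -> predicted interest in [0, 1].
    item_rating_counts: item -> how many ratings the item already has.
    Returns the top_n items to suggest to this user."""
    max_count = max(item_rating_counts.values()) or 1
    scores = {}
    for item, count in item_rating_counts.items():
        need = 1 - count / max_count              # under-rated items score high
        interest = user_interest.get(item, 0.0)   # from a recommender
        scores[item] = alpha * need + (1 - alpha) * interest
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

counts = {"m1": 100, "m2": 3, "m3": 40}
interest = {"m1": 0.9, "m2": 0.4, "m3": 0.8}
print(route_tasks(interest, counts))  # ['m3', 'm2']
```

With alpha = 0, this degenerates to pure recommendation; with alpha = 1, to pure need-based assignment. The empirical question the studies addressed is which point on this spectrum actually elicits contributions.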

We also have collaborated with Mark Snyder to apply his research on volunteerism (e.g., Clary et al., 1998; Omoto & Snyder, 1995; Snyder & Omoto, 2008) to study motivation for participation in online communities, using MovieLens as a research site. We surveyed thousands of new MovieLens users over a 5-month period, using several standard instruments to assess their motivations for trying the site, and then correlated their motivations with subsequent behavior on the site. As with Lampe and colleagues’ study of Facebook (Smock et al., 2011), we found that people who had different motivations for joining the community behaved differently: for example, people with more socially oriented motives engaged in more basic MovieLens behaviors (like rating movies) and connected more with other users (through the MovieLens Q&A forum). Notice that we were able to correlate attitudes and personality characteristics with behaviors only because we had both Level 1 (usage data) and Level 2 (users; experimental control) access to MovieLens.


Cyclopath, an interactive bicycle routing site and geographic wiki, was created by Priedhorsky, a former GroupLens PhD student, and Terveen. Users can get personalized bike-friendly routes. They also can edit the transportation map itself, monitor the changes of others, and revert those changes if necessary. Cyclopath has been available to cyclists in the Minneapolis/St. Paul metropolitan area since August 2008. As of Spring 2012, there were over 2,500 registered users who had entered about 80,000 ratings and made over 10,000 edits to the map; each day during riding season, several dozen registered users and a hundred or more anonymous users visit the site and request more than 150 routes (Fig. 1).
Fig. 1

The Cyclopath bicycle routing Web site, showing a bicycle route computed in response to a user request

Like MovieLens, Cyclopath was a “target of opportunity”: where MovieLens was created to take advantage of the EachMovie data, Priedhorsky was motivated to create Cyclopath because he was an avid cyclist with strong personal knowledge of the limits of existing methods for cyclists to obtain and share routing knowledge. Of course, he also had the intuition that other cyclists would find such a system useful; a basic tenet of HCI is that taking only your own preferences into account when designing a system may well produce a system of interest only to yourself. Accordingly, we did preliminary empirical work to verify our general design concepts as well as specific design ideas (Priedhorsky, Jordan, & Terveen, 2007). Also like MovieLens, Cyclopath has proved to be a productive research platform for us. However, there are a number of significant differences between the two platforms, some intrinsic to the technology and domain, and some historical, due to when they were developed. First, Cyclopath was created after GroupLens had 10 years’ experience running MovieLens as a research platform and had begun research on Wikipedia; thus, we were able to build on and generalize results and methods from these other platforms. Second, Cyclopath has served as a significant vehicle for collaboration between GroupLens and a number of local government agencies and nonprofits that focus on bicycling. This has created diverse opportunities as well as challenges.

We elaborate on both of these themes next.

Cyclopath: Generalizing Previous Research

Personalized route finding. From the beginning, we wanted to apply our long-standing expertise in recommender algorithms to the route finding problem. We wanted Cyclopath to compute routes personalized to the preferences of the requesting user (Priedhorsky & Terveen, 2008). Thus, users are able to enter personal bikeability ratings for road and trail segments. However, as of this writing, the Cyclopath ratings database is very sparse, an order of magnitude sparser than MovieLens, so traditional collaborative filtering recommender algorithms are not practical. Instead, we tried machine learning techniques that considered the features (such as speed limit, auto traffic, and lane width) of the segments rated by users to develop predictors of users’ bikeability preferences; these predictors proved both accurate and practical (Priedhorsky, Pitchford, Sen, & Terveen, 2012).
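The feature-based approach just described can be made concrete with a minimal sketch. This is an illustration only, not the published Cyclopath model: it fits a per-user linear predictor of bikeability from segment features, and the feature names and values are invented for the example.

```python
# Sketch (not Cyclopath's actual model): predict a user's bikeability
# rating for an unrated segment from segment features. Feature names
# (speed limit, traffic volume, bike lane) are illustrative assumptions.

def fit_linear(X, y, ridge=1e-3):
    """Least squares with a small ridge term, via the normal equations."""
    n = len(X[0]) + 1  # +1 for the intercept term
    rows = [[1.0] + list(x) for x in X]
    # A = X^T X + ridge*I, b = X^T y
    A = [[sum(r[i] * r[j] for r in rows) + (ridge if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    # Solve A w = b by Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

def predict(w, x):
    """Predicted rating = intercept + weighted sum of features."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

# Segments one user has rated: (speed_limit_mph, traffic_volume, has_bike_lane)
rated = [((30, 2, 1), 4.0), ((45, 8, 0), 1.0), ((25, 1, 1), 5.0), ((40, 6, 0), 2.0)]
X, y = zip(*rated)
w = fit_linear(list(X), list(y))
print(round(predict(w, (35, 4, 1)), 1))  # a moderate street with a bike lane
```

Because the model needs only a handful of ratings from a user (plus segment features) rather than rating overlap with other users, it remains usable even when the ratings matrix is far too sparse for collaborative filtering.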

A geographic wiki. We needed to create analogues of the essential wiki mechanisms, porting them from a textual to a geographic context. Thus, we developed geographic editing tools, moved from watch lists to watch regions, and designed an interactive geographic “diff” visualization. We also were forced to modify the traditional wiki data model (as exemplified most notably in Wikipedia) in two major ways (Priedhorsky & Terveen, 2011). First, whereas (text) wiki pages have been treated as independent entities, geographic objects are interlinked. This forced us to devise a new definition and implementation of a revision as an operation on the entire database, not just a single object. Second, many applications of geographic wikis require fine-grained access control: for example, certain objects may be edited only by certain users or types of users (more on this below).
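To make the revised revision model concrete, here is a toy sketch. It is our own simplification, not Cyclopath’s actual schema: each commit is one atomic revision over the whole database, so edits to linked objects share a single revision id, and history and diff operate across the entire map.

```python
# Toy sketch (our invention, not Cyclopath's implementation) of a wiki
# data model where a revision is a global operation on the database,
# not an edit to one independent page.

class GeoWiki:
    def __init__(self):
        self.rev = 0
        self.versions = {}  # object id -> list of (revision, value)

    def commit(self, changes):
        """Apply {object_id: new_value} as one whole-database revision."""
        self.rev += 1
        for oid, value in changes.items():
            self.versions.setdefault(oid, []).append((self.rev, value))
        return self.rev

    def get(self, oid, rev=None):
        """Value of an object as of a given revision (default: latest)."""
        rev = self.rev if rev is None else rev
        value = None
        for r, v in self.versions.get(oid, []):
            if r <= rev:
                value = v
        return value

    def diff(self, rev_a, rev_b):
        """Objects whose value differs between two global revisions."""
        return {oid for oid in self.versions
                if self.get(oid, rev_a) != self.get(oid, rev_b)}

wiki = GeoWiki()
# One commit can touch linked objects (a segment and the node joining it).
wiki.commit({"seg:17": {"name": "River Trail"}, "node:3": {"links": ["seg:17"]}})
wiki.commit({"seg:17": {"name": "River Trail", "surface": "paved"}})
print(wiki.diff(1, 2))  # {'seg:17'}
```

Reverting is then also a whole-database operation: committing the values each changed object had at an earlier revision, rather than rolling back pages one at a time.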

Theory-based design: intelligent task routing. We identified a major problem in Cyclopath where user input was required: in the datasets we used to populate our map initially, there were thousands of cases where a road and a bike trail intersected geometrically, but we had no automated way to tell whether they intersected topologically (rather than, for example, a trail crossing a road via a bridge, with no access from one to the other). We thus developed a mechanism to send users a request about an area of the map, asking them to determine whether an intersection existed. This mechanism was inspired by those we had used in MovieLens and Wikipedia. However, this study extended our previous results in several interesting ways. First, we developed a visual interface that drew users’ attention to potential intersections, and this interface seemed to be attractive enough to motivate participation. Second, we found that some tasks required user knowledge and some did not. For example, to rate the bikeability of a road segment, a user has to have knowledge of that segment. However, in many cases, a user could determine whether an intersection existed just by zooming in on the map and looking at the aerial photo. This has obvious implications for task routing algorithms: some tasks may require users with specific knowledge, while others require only users who are motivated to perform them.
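The distinction between knowledge-dependent and knowledge-free tasks suggests a simple routing rule, sketched below. The field names and scoring here are our assumptions for illustration, not the deployed Cyclopath or SuggestBot algorithms: tasks requiring local knowledge go only to users active near the task, while knowledge-free tasks go to whichever users have been most responsive.

```python
# Illustrative task router (assumed fields, not the deployed algorithm):
# knowledge-dependent tasks are restricted to users who know the area;
# knowledge-free tasks may go to any motivated (responsive) user.

def route_tasks(tasks, users):
    """tasks: dicts with 'id', 'needs_knowledge', 'location'.
    users: dicts with 'id', 'locations' (areas they know), and
    'responsiveness' in [0, 1]. Returns {task_id: user_id}."""
    assignment = {}
    for task in tasks:
        if task["needs_knowledge"]:
            # Only users with activity near the task's location qualify.
            candidates = [u for u in users if task["location"] in u["locations"]]
        else:
            candidates = users
        if candidates:
            best = max(candidates, key=lambda u: u["responsiveness"])
            assignment[task["id"]] = best["id"]
    return assignment

tasks = [
    {"id": "rate-seg-9", "needs_knowledge": True, "location": "uptown"},
    {"id": "check-intersection-4", "needs_knowledge": False, "location": "suburb"},
]
users = [
    {"id": "ann", "locations": {"uptown"}, "responsiveness": 0.4},
    {"id": "bo", "locations": {"suburb"}, "responsiveness": 0.9},
]
print(route_tasks(tasks, users))
# {'rate-seg-9': 'ann', 'check-intersection-4': 'bo'}
```

Note that the intersection-checking task is routed to the most responsive user regardless of location, mirroring the finding that such tasks can be completed from the aerial photo alone.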

Theory-based design: User life cycles. When we analyzed Wikipedia data to investigate whether and how editors changed over time (Panciera et al., 2009), we found little evidence for development over time. However, limits of the Wikipedia data available for analysis raised several issues concerning our conclusions. In particular, we wondered whether Wikipedia editors might have learned by doing anonymous editing before creating accounts; we also wondered how viewing behavior might have influenced their development as editors. Since we have access to all Cyclopath data, we were able to study these issues in this context. In particular, for a number of editors, we were able to associate at least some of the actions they took before creating an account (and while logged out) with the actions they took after creating an account (and while logged in). Our results were analogous to our Wikipedia results: again, we saw little evidence for “becoming,” at least in terms of quantity of editing (Panciera, Priedhorsky, Erickson, & Terveen, 2010). However, in subsequent work, we looked at the type of editing users did, and here we did observe some transitions over time (Masli, Priedhorsky, & Terveen, 2011), for example, beginning by adding textual annotations to road segments and transitioning to adding new road and trail segments and linking them into the rest of the map.

Cyclopath: Catalyzing Collaboration

Cyclopath has attracted significant attention and support from several Minnesota local government agencies and nonprofits. This has led to projects to add functionality supporting analysis by bicycle transportation planners and to extend Cyclopath to cover the entire state of Minnesota. These projects have produced significant technical and conceptual developments, including the following:
  • Extending the wiki data model to allow fine-grained access control (Priedhorsky & Terveen, 2011): Transportation planners consider this necessary in order to retain the strengths of open content while still allowing certain information to be treated as authoritative.

  • A “what if” analysis feature that uses the library of all route requests ever issued by Cyclopath users: This enables transportation planners to determine where new bicycle facilities are most needed, estimate the impact of a new facility, and get quick and focused feedback from the bicycling public.

In other collaborative projects, we extended Cyclopath to do multimodal (bike + public transit) routing, which required changes to our basic routing algorithm, and are extending Cyclopath to cover the entire state of Minnesota. Both projects were funded by Minnesota state and local government agencies.


Chi (2009) defined three ways to create what he called “living laboratories.” One involves building one’s own sites; another is to “adopt” an existing system and study it in the field. Several years ago, Lampe adopted the existing site Everything2, a user-generated encyclopedia founded in 1999, 2 years before Wikipedia. Everything2 was created by the same group that established the news and discussion site Slashdot but struggled for commercial success after the first dot-com bubble. In exchange for hosting services at the university, the site’s owners agreed to participate in Lampe’s research in many ways, including providing server logs, giving access to users for interviews and surveys, and granting rights to add new features to the site for field experiments (Level 3 access in our framework above).

Although Everything2 never achieved the widespread use of Wikipedia, it has an active population of several thousand users and receives over 300,000 unique visits per month. This activity not only provided ample behavioral data to examine but also supplied a stable population from which to draw survey samples. This has allowed the Michigan team to study the motivations of both anonymous and registered users of the site (Lampe, Wash, Velasquez, & Ozkaya, 2010), finding that both groups had heterogeneous motivations for using the site and that motivations like being entertained were not associated with contributing to the site in the same way as motivations like providing information. We also looked at how habit interacts with those motivations as a predictor of online community participation (Wohn, Velasquez, Bjornrud, & Lampe, 2012), finding that habits are better predictors of less cognitively involved tasks like voting and tagging and less associated with more involved tasks like contributing articles. Our team also researched which types of initial use and feedback are associated with long-term participation (Sarkar, Wohn, Lampe, & DeMaagd, 2012), finding that users who had their first two articles deleted were very unlikely to participate in the site again.

Adopting a site in this fashion can have many benefits for both the researcher and the site. For sites that are active but perhaps not commercially viable on their own, the arrangement provides the community with stability and some measure of security. For the researcher, it provides access to a community successful enough to study without the difficulty of trying to create a self-sustaining, viable online community. For example, Lampe tried to create several online communities related to public sector interests, none of which achieved the critical mass necessary to conduct the type of research described here (Lampe & Roth, 2012). Adopting an active online community with a sustainable user population short-circuits some of the major risks and costs of building one’s own community.

However, the adoption path has some problems, too. For example, Everything2’s ownership changed, and the original arrangement had to be renegotiated. The researchers’ location also changed, requiring yet more renegotiation. In addition, some users of the site did not appreciate the research agreement and either left the site to avoid “being mice in a maze” or demanded more active agency in the type of research being conducted (similar to Wikipedia editors’ reactions described above; for a more general treatment of this topic, see Bruckman’s chapter on “Research and Ethics in HCI”). This regular interaction with the community is an additional cost of managing the research project. Also, just because site owners gave permission to interview and survey members of the community, it did not guarantee that those users would respond to our requests for data.

Sidebar: How Many Users?

We sometimes are asked how many members and how much activity a community must have before it serves as a viable vehicle for research. The answer is that it depends. It depends significantly on your research questions, methods, participation rate of community members, and (if appropriate) effect sizes. If one uses qualitative methods, the ability to interview even ten or so people may suffice. On the other hand, we often assign users to different experimental conditions and then do quantitative analysis of user behaviors, some of which might be rare. In such cases, hundreds of users may be required. MovieLens and Everything2 both enable this. For example, in the work reported by Fuglestad et al. (2012), nearly 4,000 MovieLens users filled out at least part of a survey. On the other hand, in Cyclopath we can obtain responses from at most several hundred users, but 50–70 is more typical. Oddly enough, it can be difficult to obtain sufficient users for our Wikipedia research as well, since, as we mentioned above, Wikipedia editors have to take some explicit action to opt in to a study. In Everything2, even though there are several hundred active users, we have found that only 150–200 users will respond to surveys during a study period.
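The sidebar’s point that small effects demand many users can be made concrete with a standard power calculation. The formula below is the textbook normal approximation for a two-group comparison, not an analysis taken from our studies:

```python
# Standard power calculation (normal approximation, two independent
# groups): users needed per condition to detect a given effect size.

from math import sqrt, erf

def z_quantile(p):
    """Inverse standard-normal CDF via bisection (adequate for a sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if (1 + erf(mid / sqrt(2))) / 2 < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Subjects per condition to detect a standardized (Cohen's d) effect."""
    z_a = z_quantile(1 - alpha / 2)  # two-sided significance threshold
    z_b = z_quantile(power)          # desired power
    return int(2 * ((z_a + z_b) / effect_size) ** 2) + 1

print(n_per_group(0.8))  # large effect: a few dozen users per condition
print(n_per_group(0.2))  # small effect: several hundred per condition
```

With the small effect sizes typical of field experiments, this arithmetic is why communities the size of Cyclopath support only some study designs, while MovieLens and Everything2 support many more.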

Related Work

Since our approach is not a well-defined standard method (yet), we found it appropriate to illustrate with examples from our own research. However, other researchers in social computing and other areas of human–computer interaction have built systems as vehicles for their research and have at least sought to obtain an authentic user community for their systems. While space does not allow a detailed treatment, we wanted to direct the reader to some other noteworthy examples of researchers who have taken similar approaches to ours:
  • Alice (Pausch et al., 1995) is a 3D programming environment for creating simple animated games or stories. It is a teaching tool, designed to make the concepts of programming accessible to all, including children. Alice has been used in introductory programming classes, and there has been extensive evaluation of its effectiveness as a teaching tool (Cooper, Dann, & Pausch, 2003; Moskal, Lurie, & Cooper, 2004). This led, for example, to the development of Storytelling Alice, which is tailored specifically for middle school children, particularly girls (Kelleher, Pausch, & Kiesler, 2007).

  • Beehive (since renamed SocialBlue) is a social networking and content sharing system created by IBM researchers and deployed to IBM employees worldwide. It has been used as a platform to research issues such as participation incentives and friend recommendation algorithms (Chen, Geyer, Dugan, Muller, & Guy, 2009; Daly, Geyer, & Millen, 2010; DiMicco et al., 2008; Farzan et al., 2008; Steinfield, DiMicco, Ellison, & Lampe, 2009).

  • The International Children’s Digital Library is a Web site, created by researchers at the University of Maryland, that makes children’s books from many languages and cultures available online for free. It has served as a platform for research on topics such as how children search for information online, effective search interfaces for children, design studies, and crowdsourced translation. See the project Web site (retrieved April 9, 2012) for a lengthy list of references.

  • Von Ahn created the ESP Game and followed it up with other “games with a purpose.” These systems were used by hundreds of thousands of people on the Web, pioneered the area of human computation, and were evaluated in a number of studies, including investigations of the effectiveness of human computation and how to organize human computation (von Ahn, 2006; von Ahn & Dabbish, 2008).

  • PARC researchers created Mr. Taggy to explore tag-based Web search and WikiDashboard (Suh, Chi, Kittur, & Pendleton, 2008) to investigate how people make sense of Wikipedia.

  • Dan Cosley and his students at Cornell created Pensieve as a tool to support and investigate the process of reminiscing (Peesapati et al., 2010).

  • Eric Gilbert created We Meddle to evaluate models of predicting tie strength from social media interaction (Gilbert, 2012).

  • Brent Hecht and the CollabLab at Northwestern University have created Omnipedia, a tool that allows a user to search for the same word or term across multiple language versions of Wikipedia and see the prevalence of that entry in each. This is an example of a tool that adds value to an online community by providing a layer of analysis that increases the opportunities to participate across groups (Bao et al., 2012).

Summary and Future Directions

We outlined our approach to doing research on online communities, defined four levels of access researchers can have to a community, and gave a number of in-depth examples of research done at the various levels. We specifically sought to illustrate the limits of conducting research where one does not have full access to a community and the benefits—but also risks and costs—of building and maintaining one’s own community as a research platform. Most notably, our communities have enabled us to carry out numerous studies where we introduced new—and often theory-based—algorithms and user interfaces and where we were able to evaluate their effects on users’ actual behavior as well as users’ subjective reactions.

To elaborate on the final point, we are interested in ways to make the benefits of full access to an online community—crucially, the ability to introduce new software features and to conduct random assignment field experiments—widely available to the research community. Several routes to this already exist or are emerging, and new directions are possible.

First, there are few (if any) technical barriers to sharing datasets: groups that maintain online communities can make (suitably anonymized) datasets available for other researchers to use. Indeed, our group at Minnesota makes several MovieLens datasets available, and these datasets have been used in over 300 published papers. It would be helpful if more large-scale communities followed the lead of Wikipedia and made datasets available for analysis.

Second, researchers should work with commercial sites to try to increase the access researchers have to these sites while ensuring that the values of the community and the desires of its members are respected. The work of Riedl and Halfaker on the Wikimedia Foundation Research Committee is a model here; the results will give all researchers the chance to do controlled experiments and test interventions at a large scale.

Third, we encourage researchers who do maintain successful online communities to make it possible for other researchers to run experiments in their communities. One requirement would be to define APIs that let others write programs designed to run on a site. Another would be to create some sort of management structure to approve proposed experiments, e.g., to ensure that they do not consume too many resources or violate user expectations. The GroupLens research group at the University of Minnesota has developed a proposal to turn MovieLens into this sort of open laboratory, but the development and administrative costs are nontrivial, so dedicated funding is required.


Exercises

  1. Online communities pose special problems to both technical research and field deployments. What are those problems, and what do researchers have to overcome to be successful?

  2. Summarize the various ways of motivating people to contribute to a community and their pros and cons.



Notes

  1. API stands for Application Programming Interface, a published protocol that defines a set of functionality that a software component makes available to programmers who may want to use that component. A plug-in is a piece of software that is added to a larger software application to extend or customize its functionality.

  2. We defined “damage” to an article through a combination of algorithmic detection and manual coding to evaluate the accuracy of our algorithm.

  3. As detailed in the paper, our analysis relied on Wikipedia editors’ self-reported gender.

  4. When we began doing these live studies, we realized that we had to obtain Institutional Review Board (IRB) approval, which we did and which has become routine across all our communities and experiments. Note that our “terms of use” say that we have the right to log and analyze behavioral data for research purposes; we also guarantee that we will not disclose any personal or identifying data in our published research. However, we do obtain IRB approval when we do surveys and interviews or introduce new features explicitly to evaluate for research purposes.

  5. Many of our effect sizes were small, although still significant. Note that we achieved these results with thousands of users. This illustrates that size does matter: the number of users in a community limits the number and types of experiments it can support. For example, as of this writing, we typically can get 50–80 subjects for Cyclopath experiments, while we can get an order of magnitude more subjects in MovieLens. Nonetheless, we sometimes have to schedule several MovieLens experiments in sequence because there are not enough users (or at least not enough of the desired type, say new users) for both experiments to run in parallel.



Acknowledgments

This work has been supported by the National Science Foundation under grants IIS 08-08692, IIS 10-17697, IIS 09-68483, IIS 08-12148, and IIS 09-64695.


  1. Asch, S. E. (1951). Effects of group pressure upon the modification and distortion of judgments. Groups, Leadership, and Men, 27(3), 177–190.Google Scholar
  2. Bakshy, E., Rosenn, I., Marlow, C., & Adamic, L. A. (2012). The role of social networks in information diffusion. In International World Wide Web Conference (WWW) ’12, Rio de Janeiro, Brazil.Google Scholar
  3. Bao, P., Hecht, B., Carton, S., Quaderi, M., Horn, M., & Gergle, D. (2012). Omnipedia: bridging the wikipedia language gap. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (pp. 1075–1084). ACM.Google Scholar
  4. Burke, M., Kraut, R., & Marlow, C. (2011). Social capital on facebook: Differentiating uses and users. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems (pp. 571–580). New York, NY: ACM Press.Google Scholar
  5. Burke, M., Marlow, C., & Lento, T. (2010). Social network activity and social well-being. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (pp. 1909–1912). New York, NY: ACM Press.Google Scholar
  6. Burt, R. S. (1992). Structural holes. Cambridge, MA: Harvard University Press.Google Scholar
  7. Carroll, J. M., & Campbell, R. L. (1989). Artifacts as psychological theories: The case of human-computer interaction. Behaviour and Information Technology, 8(4), 247–256.CrossRefGoogle Scholar
  8. Carroll, J. M., & Kellogg, W. (1989). Artifact as theory-nexus: Hermeneutics meets theory-based design. In Proceedings of Human Factors in Computing Systems (pp. 7–14). New York, NY: ACM Press.Google Scholar
  9. Chen, J., Geyer, W., Dugan, D., Muller, M., & Guy, I. (2009). Make new friends, but keep the old: Recommending people on social networking sites. In Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI ’09) (pp. 201–210). New York, NY: ACM Press.Google Scholar
  10. Chen, J., Ren, Y., & Riedl, J. (2010). The effects of diversity on group productivity and member withdrawal in online volunteer groups. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (pp. 821–830). New York, NY: ACM Press.Google Scholar
  11. Chi, E. H. (2009). A position paper on ‘living laboratories’: Rethinking ecological designs and experimentation in human-computer interaction. In Presented at the Proceedings of the 13th International Conference on Human-Computer Interaction. Part I: New Trends, San Diego, CA.Google Scholar
  12. Clary, E. G., Snyder, M., Ridge, R. D., Copeland, J., Stukas, A. A., Haugen, J., et al. (1998). Understanding and assessing the motivations of volunteers: A functional approach. Journal of Personality and Social Psychology, 74(6), 1516–1530.CrossRefGoogle Scholar
  13. Cooper, S., Dann, W., & Pausch, R. (2003). Teaching objects-first in introductory computer science. In Proceedings of the 34th SIGCSE Technical Symposium on Computer Science Education (SIGCSE ’03) (pp. 191–195). New York, NY: ACM Press.Google Scholar
  14. Cosley, D., Frankowski, D., Kiesler, S., Terveen, L., & Riedl, J. (2005). How oversight improves member-maintained communities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’05) (pp. 11–20). New York, NY: ACM Press.Google Scholar
  15. Cosley, D., Frankowski, D., Terveen, L., & Riedl, J. (2006). Using intelligent task routing and contribution review to help communities build artifacts of lasting value. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (pp. 1037–1046). New York, NY: ACM Press.Google Scholar
  16. Cosley, D., Frankowski, D., Terveen, L., & Riedl, J. (2007). SuggestBot: Using intelligent task routing to help people find work in Wikipedia. In Proceedings of the International Conference on Intelligent User Interfaces (pp. 32–41). New York, NY: ACM Press.Google Scholar
  17. Cosley, D., Lam, S. K., Albert, I., Konstan, J. A, & Riedl, J. (2003). Is seeing believing?: How recommender system interfaces affect users’ opinions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’03) (pp. 585–592). New York, NY: ACM Press.Google Scholar
  18. Dabbish, L., Farzan, R., Kraut, R., & Postmes, T. (2012). Fresh faces in the crowd: Turnover, identity, and commitment in online groups. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (pp. 245–248). New York, NY: ACM Press.Google Scholar
  19. Daly, E. M., Geyer, W., & Millen, D. R. (2010). The network effects of recommending social connections. In Proceedings of the Fourth ACM Conference on Recommender Systems (RecSys ’10) (pp. 301–304). New York, NY: ACM Press.Google Scholar
  20. DiMicco, J., Millen, D. R., Geyer, W., Duganm, C., Brownholtz, B., & Muller, M. (2008). Motivations for social networking at work. In Proceedings of the 2008 ACM Conference on Computer Supported Cooperative Work (CSCW ’08) (pp. 711–720). New York, NY: ACM Press.Google Scholar
  21. Ellison, N. B., Steinfield, C., & Lampe, C. (2011). Connection strategies: Social capital implications of facebook-enabled communication practices. New Media & Society, 13, 873–892.CrossRefGoogle Scholar
  22. Ellison, N., Vitak, J., Gray, R., & Lampe, C. (2011). Cultivating social resources on facebook: Signals of relational investment and their role in social capital processes. In Proceedings of the Sixth International AAAI Conference on Weblogs and Social Media (pp. 330–337). Menlo Park, CA: AAAI Press.Google Scholar
  23. Farzan, R., DiMicco, J. M., Millen, D. R., Dugan, C., Geyer, W., & Brownholtz, E. A. (2008). Results from deploying a participation incentive mechanism within the enterprise. In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (CHI ’08) (pp. 563–572). New York, NY: ACM Press.Google Scholar
  24. Fuglestad, P. T., Dwyer, P. C., Filson Moses, J., Kim, J. S., Mannino, C. A., Terveen, L., et al. (2012). What makes users rate (share, tag, edit…)? Predicting patterns of participation in online communities. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (pp. 969–978). New York, NY: ACM Press.Google Scholar
  25. Gilbert, E. (2012). Predicting tie strength in a new medium. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW ’12) (pp. 1047–1056). New York, NY: ACM Press.Google Scholar
  26. Halfaker, A., Kittur, A., & Riedl, J. (2011). Don’t bite the newbies: How reverts affect the quantity and quality of Wikipedia work. In Proceedings of the 7th International Symposium on Wikis and Open Collaboration (pp. 163–172). New York, NY: ACM Press.Google Scholar
  27. Halfaker, A., Song, B., Stuart, D. A., Kittur, A., & Riedl, J. (2011). NICE: Social translucence through UI intervention. In Proceedings of the 7th International Symposium on Wikis and Open Collaboration (pp. 101–104). New York, NY: ACM Press.Google Scholar
  28. Herlocker, J. L., Konstan, J. A., Borchers, A., & Riedl, J. (1999). An algorithmic framework for performing collaborative filtering. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’99) (pp. 230–237). New York, NY: ACM Press.Google Scholar
  29. Hill, W. C., & Terveen, L. G. (1996) Using frequency-of-mention in public conversations for social filtering. In Proceedings of the ACM 1996 Conference on Computer Supported Cooperative Work (CSCW ’96) (pp. 106–112). New York, NY: ACM Press.Google Scholar
  30. Kairam, S., Brzozowski, M. J., Huffaker, D., & Chi, E. H. (2012) Talking in circles: Selective sharing in Google+. In Conference on Human Factors in Computing Systems (CHI), Austin, TX.Google Scholar
  31. Kapoor, N., Konstan, J. A., & Terveen, L. G. (2005). How peer photos influence member participation in online communities. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’05) (pp. 1525–1528). New York, NY: ACM Press.Google Scholar
  32. Karau, S. J., & Williams, K. D. (1993). Social loafing: A meta-analytic review and theoretical integration. Journal of Personality and Social Psychology, 65(4), 681–706.
  33. Kelleher, C., Pausch, R., & Kiesler, S. (2007). Storytelling Alice motivates middle school girls to learn computer programming. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’07) (pp. 1455–1467). New York, NY: ACM Press.
  34. Konstan, J. A., Miller, B., Maltz, D., Herlocker, J., Gordon, L., & Riedl, J. (1997). GroupLens: Applying collaborative filtering to Usenet news. Communications of the ACM, 40(3), 77–87.
  35. Lam, S. K., Uduwage, A., Dong, Z., Sen, S., Musicant, D. R., Terveen, L., et al. (2011). WP:Clubhouse? An exploration of Wikipedia’s gender imbalance. In Proceedings of the 7th International Symposium on Wikis and Open Collaboration (pp. 1–10). New York, NY: ACM Press.
  36. Lampe, C., Ellison, N., & Steinfield, C. (2006). A face(book) in the crowd: Social searching vs. social browsing. In Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work (pp. 167–170). New York, NY: ACM Press.
  37. Lampe, C., Ellison, N., & Steinfield, C. (2007). Profile elements as signals in an online social network. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’07) (pp. 435–444). New York, NY: ACM Press.
  38. Lampe, C., & Roth, B. (2012). Implementing social media in public sector organizations. Paper presented at the iConference ’12, Toronto.
  39. Lampe, C., Wash, R., Velasquez, A., & Ozkaya, E. (2010). Motivations to participate in online communities. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (pp. 1927–1936). New York, NY: ACM Press.
  40. Ling, K., Beenen, G., Ludford, P., Wang, X., Chang, K., Li, X., et al. (2005). Using social psychology to motivate contributions to online communities. Journal of Computer-Mediated Communication, 10(4), article 10.
  41. Locke, E. A., & Latham, G. P. (2002). Building a practically useful theory of goal setting and task motivation: A 35-year odyssey. American Psychologist, 57(9), 705–717.
  42. Masli, M., Priedhorsky, R., & Terveen, L. (2011). Task specialization in social production communities: The case of geographic volunteer work. In Proceedings of the 5th International AAAI Conference on Weblogs and Social Media (ICWSM 2011) (pp. 217–224). Palo Alto, CA: AAAI Press.
  43. McGrath, J. E. (1984). Groups: Interaction and performance. Englewood Cliffs, NJ: Prentice Hall.
  44. McNee, S. M., Lam, S. K., Konstan, J. A., & Riedl, J. (2003). Interfaces for eliciting new user preferences in recommender systems. In Proceedings of the 9th International Conference on User Modeling (pp. 178–188). Berlin, Heidelberg: Springer-Verlag.
  45. Moskal, B., Lurie, D., & Cooper, S. (2004). Evaluating the effectiveness of a new instructional approach. SIGCSE Bulletin, 36(1), 75–79.
  46. Omoto, A. M., & Snyder, M. (1995). Sustained helping without obligation: Motivation, longevity of service, and perceived attitude change among AIDS volunteers. Journal of Personality and Social Psychology, 68(4), 671–686.
  47. Panciera, K., Halfaker, A., & Terveen, L. (2009). Wikipedians are born, not made: A study of power editors on Wikipedia. In Proceedings of the ACM 2009 International Conference on Supporting Group Work (pp. 51–60). New York, NY: ACM Press.
  48. Panciera, K., Priedhorsky, R., Erickson, T., & Terveen, L. (2010). Lurking? Cyclopaths?: A quantitative lifecycle analysis of user behavior in a geowiki. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI ’10) (pp. 1917–1926). New York, NY: ACM Press.
  49. Pausch, R., Burnette, T., Capeheart, A. C., Conway, M., Cosgrove, D., DeLine, R., et al. (1995). Alice: Rapid prototyping system for virtual reality. IEEE Computer Graphics and Applications, 15(3), 8–11.
  50. Peesapati, S. T., Schwanda, V., Schultz, J., Lepage, M., Jeong, S. Y., & Cosley, D. (2010). Pensieve: Supporting everyday reminiscence. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI ’10) (pp. 2027–2036). New York, NY: ACM Press.
  51. Prentice, D. A., Miller, D. T., & Lightdale, J. R. (1994). Asymmetries in attachments to groups and to their members: Distinguishing between common-identity and common-bond groups. Personality and Social Psychology Bulletin, 20(5), 484–493.
  52. Priedhorsky, R., Chen, J., Lam, S. K., Panciera, K., Terveen, L., & Riedl, J. (2007). Creating, destroying, and restoring value in Wikipedia. In Proceedings of the 2007 International ACM Conference on Supporting Group Work (pp. 259–268). New York, NY: ACM Press.
  53. Priedhorsky, R., Jordan, B., & Terveen, L. (2007). How a personalized geowiki can help bicyclists share information more effectively. In Proceedings of the 2007 International Symposium on Wikis (WikiSym ’07) (pp. 93–98). New York, NY: ACM Press.
  54. Priedhorsky, R., Masli, M., & Terveen, L. (2010). Eliciting and focusing geographic volunteer work. In Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work (CSCW ’10) (pp. 61–70). New York, NY: ACM Press.
  55. Priedhorsky, R., Pitchford, D., Sen, S., & Terveen, L. (2012). Recommending routes in the context of bicycling: Algorithms, evaluation, and the value of personalization. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW ’12) (pp. 979–988). New York, NY: ACM Press.
  56. Priedhorsky, R., & Terveen, L. (2008). The computational geowiki: What, why, and how. In Proceedings of the 2008 ACM Conference on Computer Supported Cooperative Work (CSCW ’08) (pp. 267–276). New York, NY: ACM Press.
  57. Priedhorsky, R., & Terveen, L. (2011). Wiki grows up: Arbitrary data models, access control, and beyond. In Proceedings of the 7th International Symposium on Wikis and Open Collaboration (WikiSym ’11) (pp. 63–71). New York, NY: ACM Press.
  58. Rashid, A. M., Albert, I., Cosley, D., Lam, S. K., McNee, S., Konstan, J. A., et al. (2002). Getting to know you: Learning new user preferences in recommender systems. In Proceedings of the 2002 International Conference on Intelligent User Interfaces (IUI 2002) (pp. 127–134). New York, NY: ACM Press.
  59. Ren, Y., Harper, F. M., Drenner, S., Terveen, L., Kiesler, S., Riedl, J., & Kraut, R. E. (2012). Building member attachment in online communities: Applying theories of group identity and interpersonal bonds. MIS Quarterly, 36(3).
  60. Ren, Y., Kraut, R., & Kiesler, S. (2007). Applying common identity and bond theory to design of online communities. Organization Studies, 28(3), 377–408.
  61. Resnick, P., Iacovou, N., Suchak, M., Bergstrom, P., & Riedl, J. (1994). GroupLens: An open architecture for collaborative filtering of Netnews. In R. Furuta & C. Neuwirth (Eds.), Proceedings of the International Conference on Computer Supported Cooperative Work (CSCW ’94) (pp. 175–186). New York, NY: ACM Press.
  62. Sarkar, C., Wohn, D. Y., Lampe, C., & DeMaagd, K. (2012). A quantitative explanation of governance in an online peer-production community. Paper presented at the 30th International Conference on Human Factors in Computing Systems, Austin, TX.
  63. Sarwar, B. M., Karypis, G., Konstan, J. A., & Riedl, J. (2000). Analysis of recommender algorithms for e-commerce. In Proceedings of the 2nd ACM Conference on Electronic Commerce (pp. 158–167). New York, NY: ACM Press.
  64. Smock, A. D., Ellison, N. B., Lampe, C., & Wohn, D. Y. (2011). Facebook as a toolkit: A uses and gratification approach to unbundling feature use. Computers in Human Behavior, 27(6), 2322–2329. doi:10.1016/j.chb.2011.07.011.
  65. Snyder, M., & Omoto, A. M. (2008). Volunteerism: Social issues perspectives and social policy implications. Social Issues and Policy Review, 2(1), 1–36.
  66. Steinfield, C., DiMicco, J. M., Ellison, N. B., & Lampe, C. (2009). Bowling online: Social networking and social capital within the organization. In Proceedings of the Fourth International Conference on Communities and Technologies (pp. 245–254). New York, NY: ACM Press.
  67. Steinfield, C., Ellison, N., & Lampe, C. (2008). Social capital, self-esteem, and use of online social network sites: A longitudinal analysis. Journal of Applied Developmental Psychology, 29(6), 434–445.
  68. Suh, B., Chi, E. H., Kittur, A., & Pendleton, B. A. (2008). Lifting the veil: Improving accountability and social transparency in Wikipedia with WikiDashboard. In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (CHI ’08) (pp. 1037–1040). New York, NY: ACM Press.
  69. Suls, J., Martin, R., & Wheeler, L. (2002). Social comparison: Why, with whom, and with what effect? Current Directions in Psychological Science, 11(5), 159–163.
  70. Terveen, L., Hill, W., Amento, B., McDonald, D., & Creter, J. (1997). PHOAKS: A system for sharing recommendations. Communications of the ACM, 40(3), 59–62.
  71. Valenzuela, S., Park, N., & Kee, K. F. (2009). Is there social capital in a social network site?: Facebook use and college students’ life satisfaction, trust and participation. Journal of Computer-Mediated Communication, 14(4), 875–901.
  72. von Ahn, L. (2006). Games with a purpose. IEEE Computer, 39(6), 92–94.
  73. von Ahn, L., & Dabbish, L. (2008). General techniques for designing games with a purpose. Communications of the ACM, 51(8), 58–67.
  74. Weisz, J. D. (2010). Collaborative online video watching. Doctoral dissertation, Carnegie Mellon University.
  75. Wohn, D. Y., Velasquez, A., Bjornrud, T., & Lampe, C. (2012). Habit as an explanation of participation in an online peer-production community. In Proceedings of the 30th International Conference on Human Factors in Computing Systems (CHI ’12) (pp. 2905–2914). New York, NY: ACM Press.

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Loren Terveen (1)
  • Joseph A. Konstan (1)
  • Cliff Lampe (2)
  1. Department of Computer Science and Engineering, University of Minnesota, Minneapolis, USA
  2. School of Information, University of Michigan, Ann Arbor, USA