
Why crowdsourcing fails

Abstract

Crowdsourcing—asking an undefined group of external contributors to work on tasks—allows organizations to tap into the expertise of people around the world. Crowdsourcing is known to increase innovation and loyalty to brands, but many organizations struggle to leverage its potential, as our research shows. Most often this is because organizations fail to properly plan for all the different stages of crowd engagement. In this paper, we use several examples to explain these challenges and offer advice for how organizations can overcome them.

Introduction

We have studied the crowdsourcing of knowledge—defined as inviting an undefined group of external contributors to work on tasks—for several years. When used to its full potential, crowdsourcing provides access to knowledge beyond an organization’s local base and can help organizations generate and select ideas (Afuah and Tucci 2012; Jeppesen and Lakhani 2010; Dahlander and Piezunka 2014; Felin et al. 2017; West and Bogers 2014). It does so by tapping into the expertise of externals—people these organizations might never have heard of before and would not know how to reach otherwise.

Take the case of the Canadian mining company GoldCorp. Its mines seemed to have run dry, and its CEO decided to pursue a novel path in the form of crowdsourcing. In the process, he reshaped the company’s approach to discovery. “Traditionally, mining companies have worried about how strong your back is, not how big your brain is. We wanted to do something that no one in the industry had done, to tap into the intellectual capital of the world”, the CEO said (Taylor and LaBarre 2006). They started a crowdsourcing initiative asking external contributors where to drill, and the crowd detected new ways of finding gold previously unknown to GoldCorp. The end result was a skyrocketing share price and the additional discovery of a different kind of gold—previously unknown talent that was subsequently hired.

While anecdotal cases like GoldCorp illustrate the potential of crowdsourcing, our research has shown that a large and often silent majority of organizations that try to engage in crowdsourcing fail (Dahlander and Piezunka 2014). The crowdsourcing platform Quirky raised US$185 million in venture capital and went bust in 2015. Despite initial interest, many of its crowdsourced products had limited appeal. BP experienced a similar disappointment in the wake of the Deepwater Horizon accident, when the company reached out to the crowd for ideas about how to tackle the catastrophe in the Gulf of Mexico. More than 100,000 ideas were submitted and evaluated by more than 100 experts, and yet no “silver bullet technology” was discovered, as one of BP’s engineers told The Guardian.

Organizations often fail to crowdsource successfully because crowds differ in how they are organized compared to traditional sourcing (Afuah and Tucci 2012; Felin et al. 2017; Ghezzi et al. 2018; Jeppesen and Lakhani 2010; Puranam et al. 2014). Instead of managing employees or suppliers, organizations work with external contributors who self-select into the process. Crowds must be managed in a different way in order to fully tap into the potential of crowdsourcing.

Research in the last decade has identified some of the key challenges that managers face when implementing crowdsourcing, and how these challenges can be overcome. Early research on crowdsourcing documented its potential value and how it worked (Afuah and Tucci 2012; Jeppesen and Lakhani 2010), and more recent research has started to examine its challenges and trade-offs, and how they can be managed (Lifshitz-Assaf 2018; Piezunka and Dahlander 2015, 2019; Winsor et al. 2019). Knowing how to master these challenges has become particularly crucial as insights from crowdsourcing have begun to inform other forms of sourcing (Su, Levina, and Ross 2016). Although it is difficult to estimate the exact number of companies that are using or have used crowdsourcing, recent industry reports suggest that it is growing.

Overview of our research methods

Our findings stem from a larger research program on crowdsourcing that we have worked on for over a decade, focusing on the challenges organizations face when they engage in crowdsourcing. This article builds on two main streams of work:

First, an extensive literature review on crowdsourcing in which we developed a conceptual model of the major steps of crowdsourcing (Dahlander et al. 2019). In this process, we reviewed the academic literature published in the top management journals.

Second, multiple empirical examinations in which we collaborated with a company that supports organizations in their crowdsourcing by providing a software tool that allows them to gather and manage suggestions. The organizations use this software to collect ideas from and communicate with contributors. The software is also used to select which ideas to implement. We have studied these crowdsourcing initiatives and their contributions and discussions to gain a contextual understanding, visited and interviewed managers at the company, and analyzed the whole dataset to see what kinds of ideas get selected, rejected, and ignored (see Dahlander and Piezunka 2014; Piezunka and Dahlander 2015, 2019). It is worth noting that as we have deepened our collaboration with the company providing the data, we have expanded the dataset and improved our methods in developing this research stream. Our papers explain the methods in more detail and report the number of observations used. We structure this piece around four key challenges that we have identified via our own work, the associated academic literature, and our findings from working directly with managers.

Organizational design problems of crowdsourcing

Problem 1: managing crowds involves multiple steps

A problem we observed repeatedly is that companies want to use crowds to get ideas, but they fail to organize for crowdsourcing. A key challenge is that it involves various decisions with complex interdependencies. We have summarized the process into four main steps: (1) define the task to be completed; (2) broadcast to a pool of potential contributors; (3) attract a crowd of contributors; and (4) select among the input received.

Think of the interdependencies between these stages. If you define the task too broadly, then it has significant implications for the types of people who engage, the size of the crowd, and the challenge of sifting through the submitted ideas. This makes a simple task suddenly tricky, because interdependent decisions cannot be made in isolation (Levinthal 1997; Puranam et al. 2012; Rivkin and Siggelkow 2003).

A key challenge beyond interdependence is irreversibility. Once the crowdsourcing initiative is configured and has been broadcast, it is very difficult to adjust, as external contributors start engaging on the basis of the current configuration. Thus, when an organization configures one aspect of its crowdsourcing initiative in a particular way and afterward realizes that doing so requires adjusting another aspect, it may be unable to do so. This challenge of irreversibility is exemplified by the experience of the Natural Environment Research Council, which in March 2016 announced a plan to tap the crowd for a name for its next polar vessel. The task was poorly designed from the beginning: previous suggestions were visible to contributors, leading to a herding effect that eventually pushed the joke suggestion “Boaty McBoatface” to the number one spot. Once the media—traditional and social—got hold of it, it was difficult to withdraw.

An implication for organizational design is to outline the different steps in the process and the interdependencies between them. This sounds like common sense, but many organizations fail to appreciate the downstream implications of, for instance, vaguely defining the problem. Thompson’s (1967) classical notion of reciprocal interdependence illustrates this. Reciprocal interdependence is more complicated than sequential interdependence, where the output of one unit becomes the input of another, because it is cyclical. A similar form of reciprocal interdependence occurs in crowdsourcing, with the added difficulty that resolving the coordination through information sharing and mutual adjustment is harder when the involved parties transcend a single organization. It follows that organizations should outline the process carefully when initiating crowdsourcing and consider possible interdependencies to avoid future coordination problems.

Problem 2: building a crowd

Our research has used data from thousands of different organizations to document the challenge inherent in building a crowd (Dahlander and Piezunka 2014). Rather than seeking breakthrough innovations, these organizations turned to crowdsourcing to let contributors articulate concerns and ideas. We found that it is difficult to build a crowd even when participation barriers are low. For each successful crowdsourcing initiative such as Dell IdeaStorm or My Starbucks Idea, there are plenty of cases where an organization never manages to build a crowd: some 90% of the organizations in our sample collected fewer than one idea per month. In other words, success stories can be deceiving. Focusing exclusively on initiatives that reach a certain stage can lead to the erroneous conclusion that crowdsourcing poses few challenges, as well as to mistaken inferences about why some organizations manage to build a crowd while others fail.

Our research points to strategies that increase the odds of building a crowd. One reason contributors share suggestions is to get feedback from peers and firms (Jeppesen and Frederiksen 2006; Bogers et al. 2010). In our research setting, feedback was given publicly and varied in how specific it was. Our work shows that crowds are more likely to grow when companies use proactive attention (the extent to which the organization pushes the direction by posting its own suggestions) and reactive attention (whether posted ideas receive feedback from the company) (Dahlander and Piezunka 2014). In other words, organizations that give feedback are more likely to build a crowd. In addition, the effect is larger when an organization is active early, in the formative phase of the crowd, because it is important to weave in newcomers and get them attached to the crowd. Unfortunately, this stands in contrast to what many organizations do: they are only willing to invest time and attention once the crowd has reached a critical threshold. Our findings also suggest that organizations are well served by giving feedback to newcomers who show up for the first time rather than to people who are already well integrated. Companies often neglect newcomers because they are more concerned about their core members. This is unfortunate, as it is a missed opportunity to grow the crowd.

Problem 3: distant search, narrow attention

An oft-cited advantage of crowdsourcing is the ability to get ideas from people previously unknown to the organization (Afuah and Tucci 2012). Leveraged to its full potential, crowdsourcing creates an opportunity to obtain distant ideas; managers report this as their primary reason for engaging in crowdsourcing in the first place. Using a subset of the dataset described earlier (organizations that used crowdsourcing to collect ideas), we found that organizations, when choosing among the ideas that contributors submitted, often stick to what they know: they tend to choose ideas that seem familiar. We labeled this the distant search, narrow attention phenomenon (Piezunka and Dahlander 2015). This is in line with previous research suggesting that people often claim to appreciate novel ideas yet shy away when presented with them (Mueller et al. 2011; Boudreau et al. 2016). Novel ideas are associated with greater uncertainty, and managers often dismiss ideas that would disrupt their area of responsibility (Sethi et al. 2012).

We also found that the tendency to pick familiar over distant ideas increases when organizations have many ideas to choose from. In other words, organizations that generate more ideas eventually select more familiar ones. This finding may seem paradoxical, and it runs counter to common intuition about creative processes. People often strive to generate as many ideas as possible in order to be creative and to find distant ideas, but our research suggests that this very attempt may undermine the initial intent. While research on brainstorming has found a linkage between the number of ideas generated and the innovativeness of the best idea (Girotra et al. 2010), our research underscores the need to focus on the innovativeness of the best selected idea.

These findings raise a number of organizational design challenges concerning how to organize internally to avoid these issues. If internal evaluators tire when evaluating many ideas and are thus more likely to miss novel ones due to fatigue, then scaling internal capability by assigning more people to the job would help. This requires organizations to commit more resources—something they are often unwilling to do. Alternatively, organizations can nudge people to contribute fewer ideas of higher average quality. If an organization cares about extreme outcomes, such as the next big innovation, however, this strategy may come at the cost of missing ideas that do not fit the status quo.

One interesting observation is that organizations increasingly prioritize contributions from the crowd using more sophisticated methods than a simple count of likes, votes, or comments. Some organizations, for example, have experimented with Elo comparisons to prioritize the order in which ideas should be implemented. These comparisons, named for their inventor Arpad Elo, work as they do in chess: a player who beats a highly ranked opponent gains more rating points than one who beats a low-ranked opponent. Similarly, an idea that wins a comparison against an already popular idea rises to the top more quickly (Füller et al. 2009). Recent research also points to the potential role of artificial intelligence in sifting through a pool of ideas and selecting the most promising ones (Christensen et al. 2017).
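To make the mechanism concrete, here is a minimal sketch of Elo-style pairwise idea ranking. The idea names and comparison outcomes are invented for illustration; this is the standard Elo update rule, not code from any organization mentioned above.

```python
# Sketch of Elo-style pairwise idea ranking (illustrative; hypothetical data).
# Each idea carries a rating; after each crowd comparison ("which idea is better?"),
# the winner gains and the loser loses points, scaled by how surprising the outcome was.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that idea A beats idea B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Shift ratings after one comparison; beating a highly rated idea pays more."""
    exp_win = expected_score(ratings[winner], ratings[loser])
    delta = k * (1.0 - exp_win)  # small if the win was expected, large if an upset
    ratings[winner] += delta
    ratings[loser] -= delta

ratings = {"idea_a": 1000.0, "idea_b": 1000.0, "idea_c": 1000.0}
# Hypothetical comparison outcomes collected from crowd votes:
for winner, loser in [("idea_a", "idea_b"), ("idea_c", "idea_a"), ("idea_c", "idea_b")]:
    update(ratings, winner, loser)

ranking = sorted(ratings, key=ratings.get, reverse=True)
print(ranking)  # → ['idea_c', 'idea_a', 'idea_b']
```

Note how idea_c overtakes idea_a: its win against the then-leading idea was an upset, so it earned a larger rating gain, which is exactly why an idea that beats an already popular idea rises more quickly.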

A key insight of this work is that crowdsourcing differs from the wisdom of crowds (Surowiecki 2005; Csaszar 2018; Becker et al. 2019). In the wisdom of crowds, more participants are beneficial because they all provide standardized input that can be aggregated: one can simply take the average of people’s guesses. In crowdsourcing, by contrast, every single contribution may require attention. Companies must therefore be careful about how many ideas to source. Instead of simply scaling idea sourcing, organizations should seek contributors who provide novel, challenging ideas. Interestingly, even recent research on the wisdom of crowds (Piezunka et al. 2021) points out that once what organizations can learn becomes the central focus, size can be harmful and organizations need to find ways to integrate contrarian voices.
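The contrast can be made concrete with a toy example (the numbers and idea names are made up): standardized guesses can be aggregated mechanically, whereas free-form ideas each demand individual review.

```python
# Wisdom of crowds: standardized numeric guesses aggregate into a single estimate,
# and adding more guesses tends to sharpen that estimate.
guesses = [980, 1050, 1010, 995, 1023]  # e.g. hypothetical guesses of a jar's bean count
estimate = sum(guesses) / len(guesses)
print(estimate)  # → 1011.6

# Crowdsourcing: free-form ideas cannot be averaged; each must be read and judged,
# so evaluation cost grows with every submission instead of the estimate improving.
ideas = ["foldable packaging", "subscription refills", "in-store recycling kiosk"]
evaluation_cost = len(ideas)  # at minimum, one review per idea
print(evaluation_cost)  # → 3
```

This is why scale helps in one setting and hurts in the other: averaging dilutes noise, but attention does not aggregate.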

Problem 4: crowd involvement increases accountability

Our research illustrates that companies receive more ideas than they can actually use. With many ideas left unselected, the company needs to think about ways to provide feedback that increase engagement and encourage people to improve. This is a way to build a tie with the contributor beyond the first idea. We found that in the vast majority of cases (88%), organizations do not provide any type of feedback, positive or negative, to individual contributors. This is unfortunate, as even contributors who receive an explicit rejection are more likely to come up with a second idea, and are more likely to have it accepted: by receiving negative feedback with the rejection, contributors learn what the organization wants (Piezunka and Dahlander 2019). If rejections help contributors, this raises the question of what organizations can do to scale up rejections from what is usually a low number. Scholars in other fields have also noted that rejections make people stronger, as they learn from the experience and improve over time (Wang et al. 2019). Using a regression discontinuity design (comparing grant applicants just above and just below the threshold for winning), they find that those who are rejected improve more over time.
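The logic of a regression discontinuity can be sketched with synthetic data. Everything below is simulated and purely illustrative of the method; the data-generating assumption (near-miss rejects improve slightly more) is built in, and none of it comes from Wang et al. (2019).

```python
# Toy regression-discontinuity sketch on synthetic data (not the Wang et al. dataset).
# Applicants just below and just above a funding cutoff are nearly identical,
# so comparing their later "improvement" in a narrow band isolates the effect of rejection.
import random
import statistics

random.seed(0)
cutoff = 50.0
scores = [random.uniform(40, 60) for _ in range(2000)]  # simulated application scores

improvements = []
for s in scores:
    funded = s >= cutoff
    # Assumed data-generating process: rejected applicants improve a bit more later on.
    improvements.append((0.5 if funded else 1.5) + random.gauss(0, 1))

# Local comparison within a narrow bandwidth around the cutoff:
band = 2.0
below = [imp for s, imp in zip(scores, improvements) if cutoff - band <= s < cutoff]
above = [imp for s, imp in zip(scores, improvements) if cutoff <= s <= cutoff + band]
gap = statistics.mean(below) - statistics.mean(above)
print(round(gap, 2))  # positive gap: near-miss rejects improved more, by construction
```

The narrow bandwidth is the crux of the design: far from the cutoff, funded and unfunded applicants differ in quality, but right at the threshold the comparison is close to random assignment.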

Our research takes this a step further by analyzing the content of rejections. This matters because not all rejections are created equal. Our research suggests that echoing the content of an idea is less effective than matching the contributor’s style: if the contributor submits an informal idea, for instance, it makes sense for the organization to respond informally. Recent research on organizational culture shows that employees succeed in organizations when they are a good cultural fit (Srivastava et al. 2018), and our research suggests that it is also the organization’s role to customize its feedback so that the contributor feels like a good cultural fit. More research is needed along these lines to provide solid recommendations for managers about how to design feedback processes.

We lack empirical answers to many open questions about what organizations can do to help crowd members come up with new and better ideas that are more aligned with the firm’s strategy. Nevertheless, we have observed some other approaches used by companies. For instance, companies are increasingly using AI to provide feedback so that crowds are not left unattended: when the selection burden is too great and there are too many ideas to reject, a company can turn to an AI algorithm to provide negative feedback. This, however, can backfire and be interpreted as insincere for not devoting real people to providing feedback. In such cases, “online ostracism”, that is, simply ignoring others, may be preferable.

We also observe that companies use crowds as a “second filter”: the crowd not only generates ideas but also evaluates and filters them. Organizations should do so with caution, however; they must stay heavily involved in the evaluation and filtering process—the entire process cannot be outsourced. Consider the case of LEGO Ideas. LEGO invites people around the world to come up with ideas for new LEGO building sets. Ideas are shared with others, and the crowd votes on its favorites and provides comments. This helps LEGO ensure that newcomers get feedback and reduces demand uncertainty about which ideas have market potential. Once a suggested design crosses a certain threshold of votes, LEGO evaluates it internally and considers it for selection. Thus, we suggest that organizations listen carefully to the crowd, but not follow it blindly.

This, though, raises the possibility that a very popular idea is rejected for lack of fit with the company’s business model, which can have negative ramifications for the crowd. For instance, when the German grocery chain Spar announced the ‘Spar Bag Design Contest’, the crowd was ultimately not satisfied with the jury’s decision: ideas the crowd loved were rejected by the firm, and negative reactions spread throughout the community (Gebauer et al. 2013). Organizations must therefore carefully manage the crowd and its expectations.

Organizations need to look for ways to provide feedback that helps manage expectations and guide the crowd in the right direction. One case in point is MathWorks, which sometimes organizes its contests in independent steps in which intermediate solutions are shared for others to build upon. It has also become common to use comments to let crowd members interact. This can reduce the burden on the organization of providing feedback (Dahlander and Piezunka 2014; Piezunka and Dahlander 2019), but can also increase friction among crowd members. We summarize the four organizational design challenges in Table 1.

Table 1 Summary of organizational design challenges of crowdsourcing

Discussion

While using crowds has its benefits, a common thread in our observations above is that using crowds can also cause internal organizational design problems. An added challenge is that the typical problems of organizing, such as aggregating efforts, assigning tasks to people, and coordinating their actions (see, e.g., Puranam 2018), are more difficult when crowd members do not work in the same organization, self-select into tasks they enjoy working on, and face little central authority that can dictate what they need to work on. Managers often overlook these challenges, which results in wasted resources, disappointment, potential friction within the crowd, and, overall, lost opportunity. We have taught and consulted with companies on crowdsourcing for several years. Many managers leave the classroom excited, wanting to use crowdsourcing more often to find new ideas, but then struggle with implementation. A major reason is that engaging a crowd can put internal employees’ jobs at risk. As an organizational designer, you thus have to think about which tasks to give to the crowd. One can outsource boring, mundane, and nice-to-have tasks rather than those of the utmost strategic priority. This allows the organization to advance tasks that internal employees may not be interested in. Another solution we have observed is to use internal and external crowds sequentially: the organization first asks internally, through crowdsourcing, whether any employees are willing to contribute, and if unsuccessful, opens the task up to the external crowd.

Crowdsourcing has expanded beyond the aggregation of independent work. Puranam (2018) points out that crowds—typically depicted as an unorganized herd of actors—are, in an organizational context, anything but unorganized. In fact, crowdsourcing has mimicked some of the organizational principles of communities in which ideas are shared, discussed, and improved. The role of the organization is thus to cultivate these crowds. This, however, implies that the organization turning to the crowd must devote resources, time, and attention to managing it. In talking to many different companies, we have found that managers often do not fully appreciate these challenges. Having read some success stories, they believe managing crowds is easier than it actually is. They take a “wait and see” approach, exploring whether the crowd bears fruit before deciding to engage (Dahlander and Piezunka 2014). In reality, however, it is in the early, formative stage of the crowd that the organization is most needed to provide input, give feedback, and encourage creativity.

Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.

References

  1. Afuah A, Tucci C (2012) Crowdsourcing as a solution to distant search. Acad Manage Rev 37(3):355–375
  2. Becker J, Porter E, Centola D (2019) The wisdom of partisan crowds. Proc Natl Acad Sci USA 116(22):10717–10722
  3. Bogers M, Afuah A, Bastian B (2010) Users as innovators: a review, critique, and future research directions. J Manage 36(4):857–875
  4. Boudreau KJ, Guinan EC, Lakhani KR, Riedl C (2016) Looking across and looking beyond the knowledge frontier: intellectual distance, novelty, and resource allocation in science. Manage Sci 62(10):2765–2783
  5. Christensen K, Nørskov S, Frederiksen L, Scholderer J (2017) In search of new product ideas: identifying ideas in online communities by machine learning and text mining. Creat Innov Manage 26(1):17–30
  6. Csaszar FA (2018) Limits to the wisdom of the crowd in idea selection. In: Joseph J, Baumann O, Burton R, Srikanth K (eds) Organization design. Advances in strategic management. Emerald, Bingley
  7. Dahlander L, Piezunka H (2014) Open to suggestions: how organizations elicit suggestions through proactive and reactive attention. Res Policy 43(5):812–827
  8. Dahlander L, Jeppesen LB, Piezunka H (2019) How organizations manage crowds: define, broadcast, attract and select. Res Sociol Organ 64:239–270
  9. Felin T, Lakhani KR, Tushman ML (2017) Firms, crowds, and innovation. Strat Organ 15(2):119–140
  10. Füller J, Mühlbacher H, Matzler K, Jawecki G (2009) Consumer empowerment through Internet-based co-creation. J Manage Inf Syst 26(3):71–102
  11. Gebauer J, Füller J, Pezzei R (2013) The dark and the bright side of co-creation: triggers of member behavior in online innovation communities. J Bus Res 66(9):1516–1527
  12. Ghezzi A, Gabelloni D, Martini A, Natalicchio A (2018) Crowdsourcing: a review and suggestions for future research. Int J Manage Rev 20(2):343–363
  13. Girotra K, Terwiesch C, Ulrich KT (2010) Idea generation and the quality of the best idea. Manage Sci 56(4):591–605
  14. Jeppesen LB, Frederiksen L (2006) Why do users contribute to firm-hosted user communities? The case of computer-controlled music instruments. Organ Sci 17(1):45–63
  15. Jeppesen LB, Lakhani KR (2010) Marginality and problem-solving effectiveness in broadcast search. Organ Sci 21(5):1016–1033
  16. Levinthal DA (1997) Adaptation on rugged landscapes. Manage Sci 43(7):934–950
  17. Lifshitz-Assaf H (2018) Dismantling knowledge boundaries at NASA: the critical role of professional identity in open innovation. Adm Sci Q 63(4):746–782
  18. Mueller JS, Melwani S, Goncalo JA (2011) The bias against creativity: why people desire but reject creative ideas. Psychol Sci 23(1):13–17
  19. Pentland B, Vaast E, Ryan Wolf J (forthcoming) Measuring and explaining process dynamics with digital trace data. Manage Inf Syst Q
  20. Piezunka H, Dahlander L (2015) Distant search, narrow attention: how crowding alters organizations’ filtering of suggestions in crowdsourcing. Acad Manage J 58(3):856–880
  21. Piezunka H, Dahlander L (2019) Idea rejected, tie formed: organizations’ feedback on crowdsourced ideas. Acad Manage J 62(2):503–530
  22. Piezunka H, Aggarwal VA, Posen HE (2021) The aggregation-learning tradeoff. Organ Sci
  23. Puranam P (2018) The microstructure of organizations. Oxford University Press, Oxford
  24. Puranam P, Raveendran M, Knudsen T (2012) Organization design: the epistemic interdependence perspective. Acad Manage Rev 37(3):419–440
  25. Puranam P, Alexy O, Reitzig M (2014) What’s ‘new’ about new forms of organizing? Acad Manage Rev 39(2):162–180
  26. Rivkin JW, Siggelkow N (2003) Balancing search and stability: interdependencies among elements of organizational design. Manage Sci 49(3):255–350
  27. Sethi R, Iqbal Z, Sethi A (2012) Developing new-to-the-firm products: the role of micropolitical strategies. J Market 76(2):99–115
  28. Srivastava SB, Goldberg A, Manian VG, Potts C (2018) Enculturation trajectories: language, cultural adaptation, and individual outcomes in organizations. Manage Sci 64(3):1348–1364
  29. Su N, Levina N, Ross JW (2016) The long-tail strategy of IT outsourcing. MIT Sloan Manage Rev 57(2):81–89
  30. Surowiecki J (2005) The wisdom of crowds: why the many are smarter than the few and how collective wisdom shapes business, economies, societies and nations. Anchor Books, New York, NY
  31. Taylor WC, LaBarre PG (2006) Mavericks at work: why the most original minds in business win. Harper Collins, New York, NY
  32. Wang Y, Jones B, Wang D (2019) Early-career setback and future career impact. Nat Commun 10:4331
  33. West J, Bogers M (2014) Leveraging external sources of innovation: a review of research on open innovation. J Prod Innov Manage 31(4):814–831
  34. Winsor J, Paik J, Tushman M, Lakhani KR (2019) Overcoming cultural resistance to open source innovation. Strat Leadership 47(6):28–33


Acknowledgements

No acknowledgements. The authors crafted the manuscript without external support.

Funding

We did not receive any funding for this study.

Author information

Affiliations

Authors

Contributions

LD and HP worked jointly on the writing of the translational article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Henning Piezunka.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Dahlander, L., Piezunka, H. Why crowdsourcing fails. J Org Design 9, 24 (2020). https://doi.org/10.1186/s41469-020-00088-7


Keywords

  • Crowdsourcing
  • Organizational design
  • Innovation