When you think about it, you can probably name hundreds of examples of people working together to achieve a common goal. At its most basic, that is what crowdsourcing is: leveraging groups of people online to accomplish a task. More recently, crowds of online workers have begun using the cloud to help test software, and the approach has proven to be a successful way to thoroughly test a variety of software before launch.
Crowdsourcing software testing addresses many of the challenges of today's testing world.
Even though this model may seem a panacea for many software testing headaches, little information is available on how such a system works, how to sell your development team on the concept, and how to get started. The book "Leveraging the Wisdom of the Crowd in Software Testing," by Mukesh Sharma and Rajini Padmanaban, may give your team all the information it needs to find its own place in the crowd.
Sharma and Padmanaban break their crowdsourcing process into 10 chapters that cover nearly every aspect of the model, including its advantages, implementation, challenges and limitations, and even situations in which crowdsourcing your software testing is a bad idea.
In our IT Downloads area, you can download the Foreword, Overview and Chapter 1 of "Leveraging the Wisdom of the Crowd in Software Testing." The Overview includes brief explanatory excerpts from each chapter, giving you a more detailed view of the entire publication.
Chapter 1 begins by introducing the crowd; after all, people hold differing views of what makes up a crowdsourced group. Many may not realize that beta testing, which has been used for years in software development, is itself a successful form of crowdsourcing. The crowds used for testing can be composed of people already inside the enterprise, people external to the organization, or a mix of both, depending on the subject matter expertise a particular type of software demands.
Developers and CIOs alike will find this book more than useful. It puts a new spin on an old concept: gathering users to test new software. From a budgetary standpoint, it makes a lot of sense. And in terms of flexibility and scale, crowdsourced testing might be the perfect answer to the question of "Who do we get to test this before we launch?"