Ok, this is getting a little surreal. Three weeks ago, I wrote about G2 Crowd, an outfit that had undertaken what I thought was the novel approach of using crowdsourcing and data analytics to take on the likes of Gartner and Forrester to become what was being called the “Yelp for business software.” Then I learned of IT Central Station, which had taken largely the same approach, and was also being called the “Yelp for business software.” Now, yet another outfit has come to my attention that’s doing pretty much the same thing, and it, too, has received the same designation of … you guessed it.
That outfit is TrustRadius, the Austin, Texas version of Chicago-based G2 Crowd and New York-based IT Central Station. I found it pretty remarkable that these three sites launched within an eight-month span, beginning with IT Central Station in September of last year, and G2 Crowd and TrustRadius following in February and May of this year, respectively. I recently had the opportunity to speak with TrustRadius co-founder and CEO Vinay Bhagat, and I asked the obvious question: Had he seen what G2 Crowd and IT Central Station were doing and decided to climb on board, or was it all a colossal coincidence? Apparently, it was indeed happenstance:
As far as how I arrived at this idea, for the last decade or so, I had started and run a software company called Convio. We were a software-as-a-service platform in the non-profit sector, and became the leading provider in that marketplace. In the course of scaling that business, I had two epiphanies. One, as a buyer of technology, we bought about 30 different business apps ourselves, and it was really hard to get authentic intelligence. We were a mid-market company—by the time we sold the company we were doing about $80 million to $100 million in revenue. And truthfully, we were too small to access analysis from the likes of Gartner and Forrester—it was just cost-prohibitive for us. So for me, put aside whether you have problems with the Gartner/Forrester model, there’s a huge portion of the market that is underserved, that doesn’t use traditional technology market research because it’s too expensive.
And on the whole idea of purchasing technology, we made a lot of mistakes in our technology purchasing—we ended up switching marketing automation platforms a couple of times; we bought an HR system that was really expensive for us—$100,000 a year, and had huge human implications because we had to roll it out to a lot of people. Then we found it didn’t functionally work for us, even though it was in the upper right-hand quadrant of Gartner, ironically. In other cases, we bought financial systems that we just didn’t know how to deploy or implement properly, and spent two years implementing. We went down the wrong implementation path, and had to dig ourselves out of a hole and re-implement it. So to me, there was clearly an opportunity for shared learning to drive collaboration across enterprises.
Bhagat’s second epiphany had to do with transparency:
As a vendor ourselves, my ethos has always been—and maybe it was because of serving the non-profit sector—that we were better off being transparent. If you over-represent or oversell your product, it will come back to bite you. You may get the deal, but if you’re trying to drive a recurring revenue business where you want customers to renew, and you want positive word-of-mouth to occur, there are more negative consequences to misrepresentation, overselling, and stretching the truth, vs. just being transparent.
So whenever our company goes out on a sales call, I say, “Here are the things we don’t do, as well.” I found that authenticity was respected within our marketplace. So those two epiphanies led me to start thinking about this idea in January of last year. At the time, IT Central Station and G2 Crowd weren’t around. And then when Convio finally got acquired in May, my CTO and I started developing technology for this company in July. We didn’t launch our public site until May of this year.
I asked Bhagat how he would differentiate TrustRadius from G2 Crowd, and he responded with the example of the social media software category:
We’ve profiled 75 different products within that landscape, but what we try to do is understand the nuances of how tools are being used within a broader category. So within social media, there are about eight different use cases, ranging from listening to publishing to analytics. Rather than treat all of those tools in one mass group, we try to discern what those different use cases are, which tools fit those use cases, and then we source content—reviews and insights around those specific products. I think a key distinction between us and G2 Crowd is that I’m much more focused on in-depth, vetted, rich content, as opposed to the notion of a rating.
I think a rating can help you discern between something that’s very good and something that’s very bad in the eyes of its users, but it doesn’t really delineate between two products that may have somewhat similar ratings, and may be applicable to completely different use cases. And ratings can have other issues associated with them. We find that less expensive and simpler-to-use products tend to skew towards higher ratings, because it’s easier to be satisfied with something that basically meets your needs and is very cheap. The more complex enterprise products tend to have more critical ratings, but that’s not to say that they aren’t the right tool for the right circumstance.
So what we’re trying to do is to take a more finessed look at how to crowdsource user reviews, and really focus on trying to understand the nuances between different tools in the marketplace and the purposes they serve. And we try to source what we think of as true expert ratings. Some of the folks in G2 Crowd have actually rated 100 different products, and a large part of that is their incentive system, where they give people prizes for the number of ratings they do and so forth—they pay people for their reviews.
Bhagat said TrustRadius has a very different philosophy:
We try to find the true power user behind the tool. It can be different people in different departments. For HR tools, we source perspectives from the key HR managers as well as the key IT business analysts using the tools. The viewpoint is that it’s not about getting hundreds of ratings; it’s about getting a representative number of quality insights from people who have deep knowledge that you can really trust.

To drive that quality of insight, aside from our targeting approach in terms of how we source people, we put people through a questionnaire whereby they answer up to 40 questions about the product. It takes a fair amount of insight to do that. They have choices. If they’re an executive and they don’t have time to spend, they can do a short-form survey, as well. About two-thirds of the content on our site right now is what we call long-form, with answers to as many as 40 questions on a product. They share detailed insights about how they’re using the tool; you also understand what their demographics are—company size, vertical, and so forth—so you can find the reviewers who are most closely akin to you, which we believe is really important. If you’re an executive with a large, global corporation, you’re going to want to hear from people at other large, global corporations, not necessarily what SMBs think of a given product.
I asked Bhagat how he differentiates TrustRadius from IT Central Station, and he said TrustRadius takes a more structured approach than IT Central Station does:
I think they have richer content, as we do, compared with G2 Crowd. They have the notion of expert content on their site, as well as average content. And I think their move towards expert content was somewhat driven by seeing what we are doing on our site. So I think they’re using many of the same techniques. One key difference in our content is that we impose more structure. Their reviews tend to be open-format, where someone can write what they want.
The benefit of a structured approach is twofold. One is that it makes it easier for the consumer of the content to compare specific areas. So if I’m reading five reviews of a BI tool, and I’m curious about usability, I can look at five reviews on TrustRadius and immediately go to the section on usability and understand what five people think about its usability. It’s harder to do that with IT Central Station, just because the content is unstructured. The other thing we afford ourselves through structure is the ability to do curation, where we aggregate content from multiple reviews into different reports. The first incarnation of that curation right now is side-by-side comparisons. And we’re somewhat focused on different domains right now. IT Central Station is more focused on the pure IT audience—I think they are focused on BI, platform-as-a-service, and virtualization. We’ve initially focused more on business applications—80 percent of our audience are business professionals, and 20 percent are IT professionals. That will change with time as we do more core IT. But the reason we started with business apps first was we felt that that portion of the marketplace had less strong intelligence available right now.
With the move to SaaS, we’re seeing more and more technology being bought by line of business people. Those folks typically have less experience in doing procurement through a rigorous methodology—we felt there was more value to add there, initially. The other reason is we felt that for business applications, we could apply the same standard set of questions, whether someone was buying an HR, CRM, or Web analytics product. For core IT applications, we felt the question set would probably need to vary by product, and therefore we deferred on that for now while we scale our content in core business apps.
Bhagat went on to share his crowdsourcing strategy, to explain why power users bother to provide reviews in the first place, and to offer his assessment of the future of traditional IT market research and analysis. I’ll cover all of that in a forthcoming post.