Star Ratings and Review Comments in a Review System

Idea in Brief

The Promise

Review systems such as driver ratings for Uber and Lyft, product reviews on Amazon, and hotel recommendations on TripAdvisor increasingly inform consumers' decisions. Effective systems give buyers the confidence they need to make a purchase and yield higher sales (and more returning customers) for sellers.

The Trouble

Many systems don't live up to their promise—they have too few reviews or the reviews are misleading or unhelpful. Behind many review-system failures lies a common assumption: that building these systems represents a technological challenge rather than a managerial one.

The Solution

Those building and maintaining these systems must make design decisions that lead to better experiences for both consumers and reviewers.

Online reviews are transforming the way consumers choose products and services: We turn to TripAdvisor to plan a holiday, Zocdoc to find a doctor, and Yelp to find new restaurants. Review systems play a central part in online marketplaces such as Amazon and Airbnb as well. More broadly, a growing number of organizations—ranging from Stanford Health Care to nine of the 10 biggest U.S. retailers—now maintain review ecosystems to help customers learn about their offerings.

Managed well, a review system creates value for buyers and sellers alike. Trustworthy systems can give consumers the confidence they need to buy a relatively unknown product, whether a new book or dinner at a local restaurant. For instance, research by one of us (Mike) found that higher Yelp ratings lead to higher sales. This effect is greater for independent businesses, whose reputations are less well established. Reviews also create a feedback loop that provides suppliers with valuable information: For instance, ratings allow Uber to remove poorly performing drivers from its service, and they can give producers of consumer goods guidance for improving their offerings.

But for every thriving review system, many others are barren, attracting neither reviewers nor other users. And some amass many reviews but fail to build consumers' trust in their informativeness. If reviews on a platform are all positive, for instance, people may assume that the items being rated are all of high quality—or they may conclude that the system can't help them differentiate the good from the bad. Reviews can be misleading if they provide an incomplete snapshot of experiences. Fraudulent or self-serving reviews can hamper platforms' efforts to build trust. Research by Mike and Georgios Zervas has found that businesses are especially likely to engage in review fraud when their reputation is struggling or competition is particularly intense.

Behind many review-system failures lies a common assumption: that building these systems represents a technological challenge rather than a managerial one. Business leaders often invest heavily in the technology behind a system but fail to actively manage the content, leading to common issues. The implications of poor design choices can be severe: It's hard to imagine that travelers would trust Airbnb without a way for hosts to establish a reputation (which leans heavily on reviews), or that shoppers could navigate Amazon as seamlessly without reviews. As academics, Hyunjin and Mike have researched the design choices that lead some online platforms to succeed while others fail and have worked with Yelp and other companies to help them on this front (Hyunjin is also an economics research intern at Yelp). And as the COO of Yelp for more than a decade, Geoff helped its review ecosystem become one of the world's dominant sources of information about local services.

All-positive reviews on a platform don't help differentiate the good from the bad.

In recent years a growing body of research has explored the design choices that can lead to more-robust review and reputation systems. Drawing on our research, teaching, and work with companies, this article explores frameworks for managing a review ecosystem—shedding light on the problems that can arise and the incentives and design choices that can help avoid common pitfalls. We'll look at each of these issues in more detail and describe how to address them.

Not Enough Reviews

When Yelp began, it was by definition a new platform—a ghost town, with few reviewers or readers. Many review systems experience a shortage of reviews, particularly when they're starting out. While most people read reviews to inform a purchase, only a small fraction write reviews on any platform they use. This situation is exacerbated by the fact that review platforms have strong network effects: It is especially difficult to attract review writers in a world with few readers, and hard to attract readers in a world with few reviews.

We suggest three approaches that can help generate an adequate number of reviews: seeding the system, offering incentives, and pooling related products to display their reviews together. The right mix of approaches depends on factors such as where the system is on its growth trajectory, how many individual products will be included, and what the goals are for the system itself.

Seeding reviews.

Early-stage platforms can consider hiring reviewers or drawing in reviews from other platforms (through a partnership and with proper attribution). To create enough value for users in a new city to start visiting Yelp and contributing their own reviews, the company recruited paid teams of part-time "scouts" who would add personal photos and reviews for a few months until the platform caught on. For other businesses, partnering with platforms that specialize in reviews can also be valuable—both for those that want to create their own review ecosystem and for those that want to show reviews but don't intend to create their own platform. Companies such as Amazon and Microsoft pull in reviews from Yelp and other platforms to populate their sites.

For platforms looking to grow their own review ecosystem, seeding reviews can be especially useful in the early stages because it doesn't require an established brand to incentivize activity. However, a large number of products or services can make it costly, and the reviews that you get may differ from organically generated content, so some platforms—depending on their goals—may benefit from swiftly moving beyond seeding.

Offering incentives.

Motivating your platform's users to contribute reviews and ratings can be a scalable option and can also create a sense of community. The incentive you use might be financial: In 2014 Airbnb offered a $25 coupon in exchange for reviews and saw a 6.4% increase in review rates. However, nonfinancial incentives—such as in-kind gifts or status symbols—may also motivate reviewers, especially if your brand is well established. In Google's Local Guides program, users earn points any time they contribute something to the platform—writing a review, adding a photo, correcting content, or answering a question. They can convert those points into rewards ranging from early access to new Google products to a free 1TB upgrade of Google Drive storage. Yelp's "Elite Squad" of prolific, high-quality reviewers receives a special designation on the platform along with invitations to private parties and events, among other perks.

Financial incentives can become a challenge if you have a large product array. But a bigger concern may be that if they aren't designed well, both financial and nonfinancial incentives can backfire by inducing users to populate the system with fast but sloppy reviews that don't help other customers.

Pooling products.

By reconsidering the unit of review, you can make a single comment apply to multiple products. On Yelp, for example, hairstylists who share salon space are reviewed together under a single salon listing. This aggregation greatly increases the number of reviews Yelp can amass for a given business, because a review of any single stylist appears on the business's page. Furthermore, since many salons experience regular churn among their stylists, the salon's reputation is at least as important to the potential client as that of the stylist. Similarly, review platforms may be able to generate more-useful reviews by asking users to review sellers (as on eBay) rather than separating out every product sold.
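To make the unit-of-review trade-off concrete, here is a minimal sketch in Python; the salon, stylists, and ratings are invented for illustration:

```python
from collections import defaultdict

# Hypothetical review records: (salon, stylist, stars)
reviews = [
    ("Shear Genius", "Ana", 5),
    ("Shear Genius", "Ben", 4),
    ("Shear Genius", "Ana", 3),
]

def pool_by(key_index):
    """Aggregate the same underlying reviews under a chosen unit of review."""
    pools = defaultdict(list)
    for record in reviews:
        pools[record[key_index]].append(record[2])
    # For each unit: (review count, average star rating)
    return {key: (len(stars), sum(stars) / len(stars)) for key, stars in pools.items()}

print(pool_by(0))  # by salon:   {'Shear Genius': (3, 4.0)}
print(pool_by(1))  # by stylist: {'Ana': (2, 4.0), 'Ben': (1, 4.0)}
```

Pooling by salon triples the review count on the listing, at the cost of blending the individual stylists' reputations into one score.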

Deciding from the outset whether and how to pool products in a review system can be helpful, because it establishes what the platform is all about. (Is this a place to learn about stylists or about salons?) Pooling becomes particularly attractive as your product space broadens, because you have more items to pool in useful ways.


A risk to this approach, however, is that pooling products to achieve more reviews may fail to give your customers the information they need about any particular offering. Consider, for example, whether the experience of visiting each stylist in the salon is quite different and whether a review of one stylist would be relevant to potential customers of another.

Amazon's pooling of reviews in its bookstore takes into account the format of the book a reader wants to purchase. Reviews of the text editions of the same title (hardback, paperback, and Kindle) appear together, but the audiobook is reviewed separately, under the Audible brand. For customers who want to learn about the content of the books, pooling reviews for all audio and physical books would be beneficial. But because audio production quality and information about the narrator are significant factors for audiobook buyers, there may be a benefit to keeping those reviews separate.

All these strategies can help overcome a review shortage, allowing content development to become more self-sustaining as more readers benefit from and engage with the platform. Still, platforms have to consider not just the volume of reviews but also their informativeness—which can be affected by selection bias and gaming of the system.

Selection Bias

Have you ever written an online review? If so, what made you decide to comment on that particular occasion? Research has shown that users' decisions to leave a review often depend on the quality of their experience. On some sites, customers may be likelier to leave reviews if their experience was good; on others, only if it was very good or very bad. In either case the resulting ratings can suffer from selection bias: They might not accurately represent the full range of customers' experiences of the product. If only satisfied people leave reviews, for example, ratings will be artificially inflated. Selection bias can become even more pronounced when businesses nudge only happy customers to leave a review.

EBay encountered the challenge of selection bias in 2011, when it noticed that its sellers' scores were suspiciously high: Most sellers on the site had over 99% positive ratings. The company worked with the economists Chris Nosko and Steven Tadelis and found that users were much likelier to leave a review after a good experience: Of some 44 million transactions that had been completed on the site, only 0.39% had negative reviews or ratings, but more than twice as many (1%) had an actual "dispute ticket," and more than seven times as many (3%) had prompted buyers to exchange messages with sellers that implied a bad experience. Whether or not buyers decided to review a seller was in fact a better predictor of future complaints—and thus a better proxy for quality—than that seller's rating.

Some sites get reviews only if an experience was very good or very bad.

EBay hypothesized that it could improve buyers' experiences and thus sales by correcting for raters' selection bias and more clearly differentiating higher-quality sellers. It reformulated seller scores as the percentage of all of a seller's transactions that generated positive ratings (instead of the percentage of positive ratings). This new measure yielded a median of 67% with substantial spread in the distribution of scores—and potential customers who were exposed to the new scores were more likely than a control group to return and make another purchase on the site.
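To see why the reformulation spreads out the distribution, compare the two measures on a hypothetical seller (the numbers below are ours, not eBay's):

```python
def share_of_ratings(positive, negative):
    """Conventional score: share of submitted ratings that are positive.
    Ignores the silent majority of buyers, so scores cluster near 100%."""
    rated = positive + negative
    return positive / rated if rated else None

def share_of_transactions(positive, transactions):
    """Reformulated score in the spirit of eBay's change: share of ALL
    transactions that produced a positive rating, so silence counts too."""
    return positive / transactions if transactions else None

# Hypothetical seller: 5,000 transactions, 980 positive ratings, 8 negative
print(share_of_ratings(980, 8))          # ~0.992 -> looks near-perfect
print(share_of_transactions(980, 5000))  # 0.196 -> far more differentiating
```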

By plotting the scores on your platform in a similar way, you can investigate whether your ratings are skewed, how severe the problem may be, and whether additional data might help you fix it. Any review system can be crafted to mitigate the bias it is most likely to face. The entire review process—from the initial ask to the messages users get as they type their reviews—provides opportunities to nudge users to behave in less-biased ways. Experimenting with design choices can help show how to reduce the bias in reviewers' self-selection as well as any tendency users have to rate in a particular way.
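A minimal version of that diagnostic, assuming you can export one score per seller (matplotlib is our choice here, purely for illustration):

```python
import matplotlib.pyplot as plt

def plot_score_distribution(scores):
    """Histogram of per-seller scores on a 0-1 scale. A pile-up against
    1.0 with almost no spread is the classic signature of selection bias."""
    plt.hist(scores, bins=20, range=(0.0, 1.0), edgecolor="black")
    plt.xlabel("Seller score")
    plt.ylabel("Number of sellers")
    plt.title("Are scores informative, or piled up at the top?")
    plt.show()

# e.g., plot_score_distribution(scores_exported_from_your_ratings_database)
```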

Require reviews.

A more heavy-handed approach requires users to review a purchase before making another one. But tread carefully: This may drive some customers off the platform and can lead to a flood of noninformative ratings that customers use as a default—creating noise and a different kind of error in your reviews. For this reason, platforms often look for other ways to minimize selection bias.

Allow private comments.

The economists John Horton and Joseph Golden found that on the freelancer review site Upwork, employers were reluctant to leave public reviews after a negative experience with a freelancer but were open to leaving feedback that only Upwork could see. (Employers who reported bad experiences privately nonetheless gave the highest possible public feedback almost 20% of the time.) This provided Upwork with important information—about when users were or weren't willing to leave a review, and about problematic freelancers—that it could use either to modify the algorithm that suggested freelancer matches or to provide aggregate feedback about freelancers. Aggregate feedback shifted hiring decisions, indicating that it was relevant additional information.

Design prompts carefully.

More generally, the reviews people leave depend on how and when they are asked to leave them. Platforms can minimize bias in reviews by thoughtfully designing different aspects of the environment in which users decide whether to review. This approach, often referred to as choice architecture—a term coined by Cass Sunstein and Richard Thaler (the authors of Nudge: Improving Decisions About Health, Wealth, and Happiness)—applies to everything from how prompts are worded to how many options a user is given.

In one experiment we ran on Yelp, we varied the messages prompting users to leave a review. Some users saw the generic message "Next review awaits," while others were asked to help local businesses get discovered or to help other consumers find local businesses. We found that the latter group tended to write longer reviews.

Fraudulent and Strategic Reviews

Sellers sometimes try (unethically) to boost their ratings by leaving positive reviews for themselves or negative ones for their competitors while pretending that the reviews were left by real customers. This is known as astroturfing. The more influential the platform, the more people will try to astroturf.

Because of the damage to consumers that astroturfing can do, policymakers and regulators have gotten involved. In 2013 Eric Schneiderman, then the New York State attorney general, engaged in an operation to address it—citing our research as part of the motivation. Schneiderman's office announced an agreement with 19 companies that had helped write fake reviews on online platforms, requiring them to stop the practice and to pay a hefty fine for charges including false advertising and deceptive business practices. But, as with shoplifting, businesses cannot just rely on law enforcement; to avoid the pitfalls of fake reviews, they must set up their own protections as well. As discussed in a paper that Mike wrote with Georgios Zervas, some companies, including Yelp, run sting operations to identify and address companies trying to leave fake reviews.

A related challenge arises when buyers and sellers rate each other and craft their reviews to elicit higher ratings from the other party. Consider the last time you stayed in an Airbnb. Afterward, you were prompted to leave a review of the host, who was also asked to leave a review of you. Until 2014, if you left your review before the host did, he or she could read it before deciding what to write about you. The result? You might think twice before leaving a negative review.

Platform design choices and content moderation play an important role in reducing the number of fraudulent and strategic reviews.

Set rules for reviewers.

Design choices begin with deciding who can review and whose reviews to highlight. For example, Amazon displays an icon when a review is from a verified purchaser of the product, which can help consumers screen for potentially fraudulent reviews. Expedia goes further and allows only guests who have booked through its platform to leave a review there. Research by Dina Mayzlin, Yaniv Dover, and Judith Chevalier shows that such a policy can reduce the number of fraudulent reviews. At the same time, stricter rules about who may leave a review can be a blunt instrument that significantly diminishes the number of genuine reviews and reviewers. The platform must decide whether the benefit of reducing potential fakes exceeds the cost of having fewer legitimate reviews.

No matter how good your system's design, you need content moderators.

Platforms also decide when reviews may be submitted and displayed. After realizing that nonreviewers had systematically worse experiences than reviewers, Airbnb implemented a "simultaneous reveal" rule to deter reciprocal reviews between guests and hosts and allow for more-complete feedback. The platform no longer displays ratings until both the guest and the host have provided them and sets a deadline after which the ability to review expires. After the company made this change, research by Andrey Fradkin, Elena Grewal, and David Holtz found that the average rating for both guests and hosts declined, while review rates increased—suggesting that reviewers were less afraid to leave feedback after a bad experience when they were shielded from retribution.
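The rule itself is easy to state in code. Here is a minimal sketch of the reveal logic; the 14-day window and the class and field names are our assumptions for illustration, not Airbnb's implementation:

```python
from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(days=14)  # assumed deadline, for illustration

class StayReviews:
    """Simultaneous reveal: neither party's review is displayed until
    both have submitted, or the review window has expired."""

    def __init__(self, checkout: datetime):
        self.deadline = checkout + REVIEW_WINDOW
        self.reviews = {"guest": None, "host": None}

    def submit(self, author: str, text: str, now: datetime) -> None:
        if now > self.deadline:
            raise ValueError("the review window has closed")
        self.reviews[author] = text  # author is "guest" or "host"

    def visible(self, now: datetime) -> bool:
        # Reveal once both sides have written, or once no one can retaliate.
        both_in = all(r is not None for r in self.reviews.values())
        return both_in or now > self.deadline
```

Hiding each review until the other side commits (or the clock runs out) removes the incentive to write strategically nice feedback, which is consistent with the lower averages and higher review rates the researchers observed.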

Call in the moderators.

No matter how good your system's design choices are, you're bound to run into problems. Spam can slip in. Bad actors can try to game the system. Reviews that were extremely relevant two years ago may become obsolete. And some reviews are just more useful than others. Reviews from nonpurchasers can be ruled out, for example, but even some of those that remain may be misleading or less informative. Moderation can eliminate misleading reviews on the basis of their content, not just because of who wrote them or when they were written.

Content moderation comes in three flavors: employee, community, and algorithm. Employee moderators (often called community managers) can spend their days actively using the service, interacting online with other users, removing inappropriate content, and providing feedback to management. This option is the most costly, but it can help you quickly understand what's working and what's not and ensure that someone is managing what appears on the site at all times.

Community moderation allows all users to help spot and flag poor content, from artificially inflated reviews to spam and other kinds of abuse. Yelp has a simple icon that users can click to submit concerns about a review that harasses another reviewer or appears to be about another business. Amazon asks users whether each review is helpful or unhelpful and employs that information to choose which reviews are displayed first and to suppress particularly unhelpful ones. Often only a small fraction of users will flag the quality of content, however, so a critical mass of engaged users is needed to make community flagging systems work.
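One standard way to turn helpful/unhelpful votes into a ranking (a common technique, not necessarily Amazon's actual formula) is to sort by the lower bound of a confidence interval on the helpful share, so a review with 4 of 4 helpful votes doesn't outrank one with 90 of 100:

```python
import math

def wilson_lower_bound(helpful, total, z=1.96):
    """Lower bound of the Wilson score interval for the helpful share."""
    if total == 0:
        return 0.0
    p = helpful / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

reviews = [("4 of 4 helpful", 4, 4), ("90 of 100 helpful", 90, 100)]
reviews.sort(key=lambda r: wilson_lower_bound(r[1], r[2]), reverse=True)
print([label for label, _, _ in reviews])  # the 90-of-100 review ranks first
```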

The third approach to moderating content relies on algorithms. Yelp's recommendation software processes dozens of factors about each review daily and varies the reviews that are more prominently displayed as "recommended." In 2014 the company said that fewer than 75% of written reviews were recommended at any given time. Amazon, Google, and TripAdvisor have implemented review-quality algorithms that remove offending content from their platforms. Algorithms can of course go beyond a binary classification and instead assess how much weight to place on each rating. Mike has written a paper with Daisy Dai, Ginger Jin, and Jungmin Lee that explores the rating-aggregation problem, highlighting how assigning weights to each rating can help overcome challenges in the underlying review process.
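A toy illustration of weighted aggregation, with made-up weights (in practice a weight might come from a fraud classifier or a reviewer's history; the specifics here are our assumptions, not the paper's method):

```python
def weighted_average_rating(ratings):
    """Aggregate (stars, weight) pairs; down-weighting a rating is a
    gentler tool than a binary recommend/suppress decision."""
    total_weight = sum(weight for _, weight in ratings)
    if total_weight == 0:
        return None
    return sum(stars * weight for stars, weight in ratings) / total_weight

# A suspected fake 5-star rating gets weight 0.1; genuine ones get 1.0.
print(weighted_average_rating([(5, 0.1), (4, 1.0), (2, 1.0)]))  # ~3.10
# versus the unweighted mean of (5, 4, 2): ~3.67
```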

Putting It All Together

The experiences of others have always been an important source of information about product quality. The American Academy of Family Physicians, for example, suggests that people turn to friends and family to learn about physicians and get recommendations. Review platforms have accelerated and systematized this process, making it easier to tap into the wisdom of the crowd. Online reviews have been useful to customers, platforms, and policymakers alike. We have used Yelp data, for example, to look at issues ranging from understanding how neighborhoods change during periods of gentrification to estimating the impact of minimum-wage hikes on business outcomes. But for reviews to be helpful—to consumers, to sellers, and to the broader public—the people managing review systems must think carefully about the design choices they make and how to most accurately reflect users' experiences.

A version of this article appeared in the November–December 2019 issue of Harvard Business Review.

Source: https://hbr.org/2019/11/designing-better-online-review-systems
