Google Maps explains how it handles review bombing


03 February 2022

Google has explained how it handles reviews posted on Google Maps, describing the steps it takes to moderate and publish user-generated content.

The Google Maps team said in a video:

Our team is dedicated to keeping the user-created content on Maps reliable and based on real-world experience. That work helps to protect businesses from abuse and fraud and ensures reviews are beneficial for users. Our content policies were designed to keep misleading, false and abusive reviews off our platform.

The moderation systems, which Google calls its "first line of defense because they're good at identifying patterns," examine every review for possible policy violations. They look at, for example, the content of the review, the history of the account, and whether there has been any unusual activity associated with a particular place.

Ian Leader, product lead for user-generated content at Google Maps, said that the machines remove the "vast majority of fake and fraudulent content" before any user sees it. The moderation process takes only a few seconds, and if the systems find no problem with a review, it is published immediately.
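To picture how such a pipeline fits together, here is a minimal Python sketch. It is not Google's implementation; the signals, thresholds and field names (for example `account_prior_removals` and `place_recent_review_spike`) are invented for illustration, standing in for the content, account-history and place-activity checks the company describes.

```python
from dataclasses import dataclass


@dataclass
class Review:
    """Hypothetical signals attached to a submitted review."""
    text: str
    account_age_days: int
    account_prior_removals: int
    place_recent_review_spike: bool  # e.g. a sudden burst of 1-star reviews


# Illustrative blocklist only; a real system would use trained models.
BANNED_PHRASES = {"buy followers", "visit my site"}


def violates_policy(review: Review) -> bool:
    """Rough stand-in for the automated policy checks."""
    text = review.text.lower()
    if any(phrase in text for phrase in BANNED_PHRASES):
        return True
    # Brand-new accounts with prior removals are treated as higher risk.
    if review.account_age_days < 1 and review.account_prior_removals > 0:
        return True
    # Unusual activity around the place (e.g. review bombing) blocks posting.
    if review.place_recent_review_spike:
        return True
    return False


def moderate(review: Review) -> str:
    """Publish immediately if no violation is found, otherwise block."""
    return "blocked" if violates_policy(review) else "published"


if __name__ == "__main__":
    r = Review(text="Great coffee and friendly staff.",
               account_age_days=400,
               account_prior_removals=0,
               place_recent_review_spike=False)
    print(moderate(r))  # -> published
```

The key property mirrored here is the ordering: every review passes through the automated checks first, and only reviews that clear them become visible, which is why a clean review can appear within seconds.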

However, the systems are not perfect, Leader added, saying:

For example, sometimes the word 'gay' is used as a derogatory term, and that’s not something we tolerate in Google reviews. But if we teach our machine learning models that it’s only used in hate speech, we might erroneously remove reviews that promote a gay business owner or an LGBTQ+ safe space.

That's why the Google Maps team regularly runs quality tests and carries out additional training to teach its systems the different ways certain words and phrases are used, striking a balance between removing harmful content and keeping useful reviews on Maps.
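To make the trade-off concrete, here is a toy sketch (again not Google's actual approach; the blocklist, the positive-context heuristic and the example reviews are invented) of why a word-level filter over-removes and why context has to be taken into account:

```python
# Two reviews that contain the same word with very different intent.
hostile = "This place is so gay, avoid it."
supportive = "Proud to support this gay-owned bakery, a great LGBTQ+ safe space."

NAIVE_BLOCKLIST = {"gay"}
POSITIVE_CONTEXT = {"proud", "owned", "safe", "support"}


def tokens(text: str) -> set:
    """Crude tokenizer: lowercase and split on punctuation/whitespace."""
    for ch in ",.-":
        text = text.replace(ch, " ")
    return set(text.lower().split())


def naive_filter(text: str) -> bool:
    """Flags any review containing a listed word, regardless of context."""
    return bool(tokens(text) & NAIVE_BLOCKLIST)


def context_aware_filter(text: str) -> bool:
    """Only flags the word when it appears without clearly positive context."""
    words = tokens(text)
    if not (words & NAIVE_BLOCKLIST):
        return False
    return not (words & POSITIVE_CONTEXT)


# The naive filter flags both reviews, including the supportive one --
# exactly the over-removal problem Leader describes.
print(naive_filter(hostile), naive_filter(supportive))            # True True

# The context-aware check keeps the supportive review online.
print(context_aware_filter(hostile), context_aware_filter(supportive))  # True False
```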
