Last year we saw Google put the emphasis on pages that have a high level of expertise, authoritativeness and trustworthiness. That is still very important, but there is also a new emphasis in this version. Not surprisingly, it’s mobile, something Google has been pushing hard the past couple of years as mobile users overtake desktop users.
And not only is there an emphasis on mobile, Google is now having their quality raters evaluate sites not just from a desktop but also from mobile devices. So it is even more important for websites to ensure they are mobile friendly – aside from the mobile friendly tag and ranking boost – and that they are handling their mobile visitors correctly.
And we also learn the real reasoning behind why Google added featured snippets and knowledge cards to the search results.
E-A-T
As a refresher, Google is having their raters look specifically at a site’s E-A-T… That is, analyzing the page’s “expertise, authoritativeness and trustworthiness” or the lack of it.
From a general point of view this isn’t anything new. Google has always stressed that you want to make sure your content is not only quality content, but that it highlights any expertise, that the author of the content is an authority on the subject, and that the site as a whole is trustworthy.
This was first added into the Quality Rater’s Guidelines last year, and continues to be a major part of it. We go into more detail specifically on E-A-T in our coverage of the Google Quality Rater’s Guidelines last year.
Your Money, Your Life
The idea of pages that were considered part of “Your Money, Your Life”, or YMYL for short, was also something that was added last year when Google completely revised these guidelines.
Google hasn’t made any major changes to this section either, but if you aren’t familiar with it, we also cover this in much more detail here.
Mobile Quality Guidelines
Brand new to the quality guidelines is mobile. And not only is it new, it is one of the two major areas of emphasis in the rewrite. Most of the examples Google includes in the handbook now show mobile results rather than the desktop versions we have seen in previous versions.
Mobile Devices and Potential Problems on Mobile Websites
In the new section about mobile, Google details multiple problems that tend to crop up on websites when viewed from a mobile device. While we have often heard of these issues, and some are flagged in the Google mobile friendly test, here are the ones Google wants raters to consider when viewing a site from a mobile device (a rough way to spot-check a few of them yourself is sketched below).
- Entering data may be cumbersome
This isn’t much of a surprise, and it is why Google Chrome and other browsers offer auto-complete for forms. Even so, there are still mobile sites that offer a very poor form experience to their visitors.
- Small screen sizes
Some websites just aren’t great on mobile, even if they are technically “mobile friendly.” Raters are asked to be aware of how the site behaves on small-screen devices, which means those who haven’t ensured their mobile site works well could run into problems here.
- Some webpages are difficult to use on a mobile phone
Google mentions many of the usual pitfalls we see when trying to use a mobile site, such as side-scrolling, menus and navigation that are either too small or don’t work, images that don’t resize for mobile, and sites that use Flash or other elements that cannot be viewed on a mobile device.
- Internet connectivity can be slow and inconsistent
While page speed is not a part of the mobile friendly algorithm (although Google has hinted on multiple occasions it likely will be), Google talks about how things like switching networks, opening and switching apps, voice commands, and webpage load times can cause problems for mobile users.
Google also stresses that smartphones should make tasks easy, not problematic.
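A few of these pitfalls can be spotted without loading the page on a phone at all. The following is a minimal sketch of that kind of spot check – it is not Google’s mobile friendly test, and the specific checks, keywords and URL are assumptions for illustration only:

```python
# Rough heuristic checks for a few of the mobile pitfalls listed above.
# This is a sketch, not Google's mobile friendly test; the checks and
# keywords below are assumptions chosen for illustration.
import re
import requests

def quick_mobile_checks(url):
    html = requests.get(url, timeout=10).text.lower()
    issues = []

    # Small screens: a responsive page normally declares a viewport meta tag.
    if 'name="viewport"' not in html:
        issues.append("No viewport meta tag found (page may not adapt to small screens)")

    # Elements that cannot be viewed on mobile: Flash embeds.
    if "application/x-shockwave-flash" in html or ".swf" in html:
        issues.append("Flash content detected (not viewable on most mobile devices)")

    # Cumbersome data entry: text inputs without autocomplete hints.
    inputs = re.findall(r"<input[^>]*>", html)
    no_autocomplete = [i for i in inputs if "autocomplete=" not in i]
    if no_autocomplete:
        issues.append(f"{len(no_autocomplete)} input field(s) without an autocomplete attribute")

    return issues

if __name__ == "__main__":
    for problem in quick_mobile_checks("https://example.com"):
        print(problem)
```

None of this replaces actually using the page on a phone, which is exactly what Google is now asking its raters to do.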
Mobile & Android
While people use many different types of devices when looking at a website on mobile, Google specifically asks raters to consider pages as if they were being viewed on an Android device. And even if a rater is using a desktop, they are instructed to evaluate as if they were on a smartphone.
The Android-only recommendation should raise some concerns, especially amongst webmasters who are iOS users: you want to make sure that your website looks great on an Android device, even if you’ve only ever personally looked at it on an iOS one.
Fortunately, most mobile designs look nearly identical regardless of whether you’re using iOS, Android or a Windows phone, despite some SEOs feeling the need to micromanage separate site designs for iOS and Android.
ADDED: Google clarified the Android issue to say it only applies to result blocks that are of the app install variety. Update here.
Know Queries & Know Simple Queries
Last time we covered the search quality rater’s guidelines, Your Money, Your Life and E-A-T were the big new additions. This time, the new quality concepts being introduced are Know Queries and Know Simple Queries.
Know Simple Queries
Know Simple queries tend to be the type of queries that often show a featured snippet or other type of knowledge boxes. They are searches where someone is looking for a very specific answer and not looking for general broad information about the keyword. Another way to think of it is long tail versus short tail keywords.
The examples they use are things like “how tall is <someone>” or “what is the <company> stock price”, and these are often what we see trigger knowledge boxes.
They also talk about how Know Simple Queries don’t necessarily ask a question. For example, “how tall is <someone>” would have the same answer as “<someone>’s height”.
Know Simple Queries are ones that can be answered in a short list or in 1-2 sentences.
Know Simple Queries are the ones that most often trigger featured snippets.
Know Queries
Know Queries tend to be ones where the result can’t be answered in a short list or 1-2 sentences, because the answer would either be too broad or would need to be much more detailed. While “how tall is <someone>” would be a Know Simple Query, simply searching for “<someone>” would be a Know Query.
Google does say that mobile can influence whether something is considered a Know Query or a Know Simple one. Specifically, they say for searches like “weather” when done on mobile, it would be considered a Know Simple query since the assumption is the person is looking for current weather information for their current location. If they were looking for weather elsewhere, their query would be tailored for that by including the location keyword when searching.
Do and Device Action Queries
Google also talks about Do queries, which don’t just mean installing or switching to an app. They actually cover the types of searches where the user wants to “do” something, including purchasing, but also downloading, obtaining or interacting with something on a site.
The Device Action Queries, on the other hand, are the ones where we expect our phones to do something specific based on the query, whether that is phoning someone, sending a text, opening an app or using other features such as the calendar or alarm/timer.
This also shows the importance Google is placing on apps that do things on smart phones, since users on an Android device are technically “searching” when doing any of these Device Action Queries.
Local Search
Explicit Locations
A new concept Google has added is called “Explicit Locations”. This is the term Google uses when a searcher performing a local search also includes the location in the query.
For example, instead of just searching for “restaurant”, where Google would assume the searcher is looking for restaurants in their current location, this term refers to when the searcher specifies the location – often a signal that they are looking for a local result in a city other than their own. Searching for “Seattle restaurant” would be an example of an “Explicit Locations” search.
The “Explicit Locations” concept is also the focus of a Google patent, Inferring Geographic Locations for Entities Appearing in Search Queries.
Local Queries and User Locations
Local is stressed so much more in this version of the rater’s guidelines. And raters are told to consider local intent when they are rating searches which could be local queries. Essentially, searches that could lead to a local 3-pack result appearing are ones that Google is classifying as Local Queries.
They also state that local searches don’t necessarily include the specific location in the search; it can simply be implied. If someone searches for a Chinese restaurant, the assumption is the searcher wants a local one and not one 300 miles away.
However, some searches can have both local and non-local intent. For example, if someone searches for a big brand such as Best Buy or Walmart, it isn’t always clear whether the person is searching for the local store or the main company’s website to buy online.
There are also searches where, depending on the location, the results could differ. The specific example they use is a popular restaurant in Sunnyvale called Turmeric. If someone is searching for the keyword “turmeric” in Sunnyvale, chances are high they are looking for the restaurant. But someone searching for that keyword elsewhere in the US or the world is likely looking for the spice.
When Quality Raters are evaluating some results, they are told where the results were triggered from, so they are rating the query with potential geographical intent available to them.
Local Results and Nearby
When a searcher is looking for something nearby, Google interprets nearby based on the type of query. “Users might be willing to travel a little farther for certain kinds of local results: doctors’ offices, libraries, specific types of restaurants, public facilities like swimming pools, hiking trails in open spaces, etc.”
It goes further to say that nearby can be interpreted by Google to be further afield. “Sometimes users may accept results that are even farther away, such as a very specialized medical clinic.”
Multiple User Intent Queries
Sometimes a search can have multiple intents, which makes it harder to determine the intent, although again, geography could play a role.
For the example “Harvard”, there are multiple intents such as the website, directions or just more information about the school itself.
“Walmart” is a second example. While most searchers intend to either find their local store result or go to the homepage to shop online, there still could be a small percentage of people searching for information about the company, such as the leadership or stock prices.
Web Search Result Blocks
We have a new official name for the card-like results we see from Google on mobile: web search result blocks. These are just the standard individual organic search results, and Google considers them very user friendly on mobile.
Special Content Result Blocks
All those cards that show for Know Simple Queries – things like the weather box, movie carousels, food calories, sports scores, YouTube videos and featured snippets – are technically known as “Special Content Result Blocks”.
Google also gives the real reason why these types of results are included in the search results. The motivation wasn’t to cause fewer clicks on the organic search results or to push certain types of sites out of the search results; the actual reason is much simpler and shows that Google realized just how significant mobile would be.
They are added to “help users immediately get information or content” to help mobile phone users “accomplish their tasks very quickly” because “mobile phones can be difficult to use.” So it is really about giving mobile users a better user experience, something I have said before.
Device Action Result Blocks
Another type of result block merges mobile search, the mobile OS and its apps even more closely. These are the types of queries that prompt a result block that either offers to open an app or performs a device action such as setting an alarm, making a phone call or sending a text.
And yes, quality raters are now rating these types of “queries”.
Needs Met Rating Guideline
What is Needs Met?
The Needs Met rating is brand new to the guidelines, and on a basic level, it is almost like looking at a site from a Panda perspective. This is one of the new ratings website owners can also use to gauge whether or not a site is “quality”.
Needs Met refers to raters focusing on the mobile searcher’s needs and thinking about “how helpful and satisfying the result is for the mobile user?”
This also falls back on something Gary Illyes has been saying both on Twitter and at conferences this year, saying site owners need to look at their site from the perspective of “how many visitors have I helped today?” and not just “how many visitors did I get.”
As a side note, raters are also considering Needs Met from an Android mobile device perspective, not from any other mobile device. So this does place slightly more importance on how a site appears on Android rather than iOS or Windows Mobile.
What is Fully Meets?
Fully Meets is the highest score available for the Needs Met rating. It refers to a site – or special content block such as a featured snippet – that fully meets the query of the searcher.
Google considers Fully Meets a rating that “should be reserved for results that are the ‘complete and perfect response or answer’ so that no other results are necessary for all or almost all users to be fully satisfied.”
What type of sites & queries earn Fully Meets?
For starters, there are some kinds of queries where a searcher is clearly looking for a specific page on a specific site. For example, queries of “Amazon” or “Amazon.com” (yes, people still use Google to type in and search for a specific URL to go to) that lead to the Amazon homepage would get a Fully Meets rating.
Other searches that would lead to a specific site, such as “<movie name> imdb”, “<keyword> Wikipedia” and “<brand name> website” that all lead to the appropriate page on the specified site would all get Fully Meets.
Fully Meets ratings are also easier to give for queries that are more detailed than for shorter, more generic queries. This is because the searcher is more specific about what they are looking for, so Google can give them better results that end up earning a Fully Meets rating. But this also works the other way: a site might not get Fully Meets for the sole reason that the searcher wasn’t specific or detailed enough.
That said, Google considers this type of rating difficult to achieve for many queries and websites. They suggest that if raters aren’t fully clear on whether it truly is a Fully Meets result or not, to default to the lower rating.
Fully Meets & App Results
Google does show app results as being examples of Fully Meets ratings. Remember, Google has changed their guidelines to be almost exclusively for mobile results, so seeing app indexed results (such as “open in app” with the search result) can also get a Fully Meets rating. So another point into the “yes, your site should probably think about getting an app” column.
Fully Meets & Local Results
Google also cites many examples where local results are considered Fully Meets. This includes both the local knowledge panel type result as well as the local 3-pack results.
They include examples for “gas stations near me”, which results in a local 3-pack of the three nearest gas stations, “nearby coffee shops”, which also shows a local 3-pack, and “chevron at shoreline and middlefield”, which returns a knowledge panel result for a specific business at a specific location.
Because Google served the appropriate results, they are considered Fully Meets, even though they are only local results. For local intent, displaying local knowledge panels and 3-packs is considered better than just regular organic search results.
Highly Meets Results
Highly Meets is the next step down on the quality scale for meeting the needs of the searcher. These results are typically a “good fit” for the specified query but, for whatever reason, fail to achieve the Fully Meets rating – typically because the answer isn’t given in full, even though a searcher can generally get to the answer from the result.
Sometimes there is confusion in the query itself. For example, when someone searches for “Target”, are they looking for Target’s website? The location of the closest Target? Hours of a local Target?
Highly Meets & Disambiguation
Google had multiple examples where a query could have multiple meanings, so those results earned a Highly Meets instead of Fully Meets, even though the rating a site gets is based directly on the query. While Apple vs. apple is one example we often hear about, there are plenty of other queries where it is not quite clear which variation or meaning the person is looking for.
Apple by itself could mean the company or the fruit, while “granny smith apple” or “Apple watch” are much more specific and could merit a Fully Meets instead.
Product Pages & Highly Meets
There has been concern in the past about how much value Google places on product pages. But Google provides many examples of search results for products that do garner a Highly Meets rating – provided the search result delivers that product.
They use several examples, such as “kids backpacks” leading to a section of kids’ backpacks on a retailer’s site and “broadway tickets” leading to the Ticketmaster page for Broadway shows.
Moderately Meets
Now we start sliding down the scale from the “really awesome” results.
Moderately Meets refers to content that would be “helpful for many users or very helpful for some users.” So these are generally still quality results, but they don’t quite meet the needs of the searcher.
Moderately Meets results could be less up-to-date, less comprehensive or simply from sites that aren’t really an authority in that space.
Slightly Meets
Google judges content that gets a rating of Slightly Meets as results that are low quality, outdated, clearly neglected, or far too broad or specific. These are results that are helpful to only some or few searchers.
It also covers minor interpretations – alternative meanings of a query that don’t see a lot of interest.
One interesting note is that Google considers Wikipedia an example of Slightly Meets in this section, as it returns a result that is too broad to easily answer the searcher’s query.
We also see an ezinearticles.com result here (yes, they are still around) and Google considers this Slightly Meets as the content was “created by a person without expertise” and “even though the article is about the query, the page is low quality and untrustworthy.”
Fails to Meet
This is the lowest rating… but there is one major thing all webmasters and SEOs should know – ALL sites that are NOT mobile friendly will be rated as “Fails to Meet”.
Other than non-mobile friendly sites, Fails to Meet would include the usual crap content suspects like scraper sites, pages ranking for terms the page has nothing to do with, as well as really outdated sites.
This rating is also given to sites that are helpful to no users or very few of them.
Specificity of Queries & Landing Pages
Google considers “results for specific queries are easier to rate on the Needs Met scale because we know more about what the user is looking for.” This lines up with how they earlier stated that sometimes excellent pages can rate lower on the Needs Met scale simply because the query is too broad.
Featured Snippets, Special Content Boxes & Needs Met
Even featured snippets, answer boxes and the like can get a low Needs Met rating as well as the highest one. We have all seen examples of weird or odd results showing up in these boxes, and quality raters are rating these as well.
But this can also apply to featured snippets that don’t quite answer the question. For example, a query about doctor’s salary returning a featured snippet that talks about the cost of a doctor’s education would fail to meet the query.
Remember, you can always submit feedback from the link at the bottom of these types of odd or bad results.
Dated Content & Needs Met
Google sees results with content that is less up to date as a trigger for Moderately Meets, provided there isn’t a reason for the result to be rated lower.
Outdated content is a different story. This is content where it is outdated because there has been new updated information that makes the result inaccurate. This type of outdated content earns a Slightly Meets rating.
This also applies to news sites, in some instances. For example, searching for a celebrity should return very recent news as well as results such as Wikipedia and the celebrity’s own homepage. Bringing up a result about a decade-old divorce or movie would be considered dated content for just the celebrity’s name. That doesn’t mean these results as a whole are bad; many sites maintain a very useful archive of older news stories. But those are generally only useful if someone is searching for that specific event, such as “Britney Spears’ divorce” or “Kate Winslet Titanic filming”.
Google can also consider content that is merely a day old as being stale, which makes sense when you think about it. For example, traffic conditions for yesterday are useless for today’s commute.
Changing Dates
Google also makes mention of the fact that some webmasters will change an article’s date from the date it was originally published, and that the original publish date does not necessarily reflect when the content was last modified or updated.
Google suggests quality raters use the Wayback Machine to check for this.
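For anyone who wants to do the same kind of spot check, the Wayback Machine offers a public availability endpoint. Below is a minimal sketch of that kind of lookup; the page URL and date used are placeholders for illustration:

```python
# Minimal sketch: find the Wayback Machine snapshot closest to a given date,
# so you can compare what a page looked like against its stated publish date.
# The target URL and timestamp below are placeholders for illustration.
import requests

def closest_snapshot(page_url, timestamp):
    """timestamp is YYYYMMDD; returns the closest archived snapshot, if any."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": page_url, "timestamp": timestamp},
        timeout=10,
    )
    # 'closest' is a dict with 'url' and 'timestamp', or missing if never archived.
    return resp.json().get("archived_snapshots", {}).get("closest")

if __name__ == "__main__":
    snap = closest_snapshot("example.com/some-article", "20150101")
    if snap:
        print(f"Archived on {snap['timestamp']}: {snap['url']}")
    else:
        print("No archived copy found")
```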
Relationship between Needs Met & E-A-T
Needs Met ratings judge based on the query, the results and the quality of those results in answering the query when someone clicks through.
On the other hand, EAT has nothing to do with the query… it is simply the rating of the webpage or website itself, independent of the search that might lead someone there.
For this reason, a page can earn a rating lower than Fully Meets yet still have excellent EAT. The guide also quotes the “useless is useless” mantra we have heard from Google when it comes to this.
However, a result cannot get a Highly Meets rating if the landing page has a low EAT or “other undesirable characteristics.” Those undesirable characteristics? In one example, Google refers to the lack of contact information, lack of author information, lack of evidence of authority or expertise, as well as heavy monetization that distracts from the main content.
Medium EAT is reserved for sites with average expertise.
Random Tidbits
PDFs
Are your PDFs mobile friendly? If not, it might be something to consider doing. Not only would it make PDFs easier for mobile users to read, but quality raters are also being asked to evaluate PDFs on mobile devices when they are returned as a search result.
Classified Ad Sites
One omission worth noting is that Google removed a classified ads site as an example of a highest quality site. The example was “the leading website for classified ads” – presumably Craigslist. It is interesting that this was the only example removed from this section.
Like many sites that rely on user generated content, such a site can fall victim to less-than-stellar content, especially if it doesn’t have very active moderators policing spam. It could be that the content issue was becoming more obvious, making it harder for Google to hold it up as an example of a great site.
Porn Ads
If you have a non-porn site, yet run porn ads on the site, either self-placed ads or ones running through an ad network, those pages now automatically get rated as low quality. Google considers them “very distracting” with the potential to “provide a poor user experience.”
They do ask raters to reload the page a few times to see the range of ads on a site. This might help sites that accidentally run a rogue porn ad that has managed to slip through an ad network’s filters. But the majority of sites that run porn-related ads tend to run a ton of them on the page, enough that it would probably warrant a low rating simply from the sheer number of ads.
Sneaky Redirects & Whois
When a webpage goes through a couple of redirects, Google previously suggested that raters do a whois search to see if any of those pages are related, and detailed how to use a whois search to match up related domains. But Google has now removed this section.
Sneaky Redirects & Affiliate Links
While not new, it is worth pointing out that Google still classifies being redirected through affiliate links as a “sneaky redirect.” Pages with these should be classified as “Lowest”.
They specifically mention “clicking the same URL several times takes you to different landing pages on a rotating set of domains” and “redirected to well-known merchant websites, such as Amazon, eBay, Zappos, etc to complete a transaction.”
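If you want to see for yourself what a rater would see, a redirect chain is easy to inspect. Here is a minimal sketch using the requests library; the URL is a placeholder, and repeating the request a few times mirrors Google’s note about rotating destination domains:

```python
# Minimal sketch: print the redirect chain for a URL a few times in a row,
# since Google notes that sneaky redirects may rotate across destination
# domains. The URL below is a placeholder for illustration.
import requests

def print_redirect_chain(url, attempts=3):
    for attempt in range(1, attempts + 1):
        resp = requests.get(url, allow_redirects=True, timeout=10)
        # resp.history holds every intermediate redirect response, in order.
        hops = [r.url for r in resp.history] + [resp.url]
        print(f"Attempt {attempt}: " + " -> ".join(hops))

if __name__ == "__main__":
    print_redirect_chain("http://example.com/some-affiliate-link")
```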
Heavy Monetization
Google makes specific mention of sites with heavy monetization that distracts from the main content as deserving of a low rating, particularly for YMYL topics.
They also continue to stress that sites shouldn’t be disguising ads as the main part of the content on the page. “Ads that are designed to look like main content should be considered deceptive.”
Lyric Sites
Google drops two tidbits of information about lyrics in the search results.
Where do the lyrics come from that Google shows directly in the search results? There has been much speculation about it, but Google details it in the quality rater’s guidelines. Their lyrics are actually licensed through Google Play.
They talk specifically about lyrics on various popular lyric sites, and this could be the real reason why Google decided to show some lyrics on the search results page – many lyrics on the web are not 100% accurate. And if Google is licensing the lyrics themselves, they can verify their accuracy.
In terms of Needs Met rating, Google considers lyric pages on lyric sites to be Moderately Meets.
Coupon Sites
Surprisingly, Google does consider coupon sites valuable, noting they are “very popular in the US.” And they can earn Highly Meets on the Needs Met scale.
Aggregate Data
If you are still publishing aggregate-style pages – huge lists of items that lead to different pages – and still believe they work well, you should take note. Google considers aggregate-style landing pages less valuable. For example, a recipe for fried chicken is better than a page showing a listing of 25 different chicken recipes.
Misspelled Queries
Remember the good old days when you could rank well for misspelled variations of queries, before Google got better at “did you mean?” and began showing the corrected results instead? Google now asks raters to consider misspelled or mistyped queries as if they were spelled correctly.
Want Even More from Quality Rater’s Guidelines?
We covered multiple areas of the guidelines in depth last year – they are all still valid, as we didn’t see any changes Google made to those areas. So they are still very much worth reading to fully understand the quality guidelines.
- Our original deep dive into last year’s Google Search Quality Rater’s Guidelines.
- All About Supplementary Content in the Google Quality Rater’s Guidelines
- The Role of Reputation in the Quality Rater’s Guidelines
- How Google Views Advertising in Quality Rating Guidelines
Like last year, we are also going to be looking at some of the more important changes that affect certain segments of SEO, such as featured snippets, local, content, advertising/affiliates and more.
I will likely be active on Twitter about this as well, on @Jenstar and @TheSEMPost. And there could be random discussions and comments about it on our Facebook page.
Also, if you are in or near Dallas, Texas, I am speaking Tuesday at State of Search. I will be talking about what we learned from the new Quality Rater’s Guidelines and how you can use it to your advantage in the search results, including for things like featured snippets and the importance of mobile.
Added: Google has now released the guidelines.
Jennifer Slegg