Bloggers love to marvel at the wonders of #SEOscience because the guardians of the Google make us believe it’s the key to earning links.
The dirty secret is that anyone can be a scientist in this industry, and you don’t need any qualifications in computer science to stake your claim as an #SEOscience expert.
The SEO industry needs scientists because we must test everything Google says, as we can’t in good conscience take their Webmaster Guidelines at face value. You can find the secrets to top rankings within this impenetrable tome for webmasters, but only between the lines of every sentence.
Find those clues and the secrets to ranking will be yours. However, your ultimate test is actually as simple as trying to test anything at all. Once you do that, you’ll truly become an SEO rockstar/data prophet/marketing ninja/social media guru/you name it.
Or Not.
Could you tell I was being a bit facetious there? The reality is that #SEOscience is chock full of massive flaws. While the guidelines search engines have given us aren’t 100% clear, it’s completely unreasonable to expect them to be. Even Googlers can’t predict exactly how their own search engine is going to stack the deck of “search engine results pages” (SERPs). Google’s algorithm is as secret as the recipe for Coca-Cola or how Colonel Sanders makes his chicken so finger lickin’ good.
So-called SEO scientists obsess over the algorithm when all it really does is deal out a pack of cards, on the order of millions of mini-relevancy experiments per second. And because the algorithm is constantly trained on a live set of humans determining relevance for themselves, the truth is that Google is testing us, not the other way around. And the billions of click-throughs to websites displayed on billions of different search results comprise the real data Google analyzes to determine what’s relevant to us. After all, we’re the human users Google serves and the constituent parts of the audiences that drive the lion’s share of Google’s revenue.
The way those clicks are even occurring has changed massively over the last 10 years, and yet #SEOscience is still talking about Google’s output rather than search users’ input.
How users actually use search engines is the wild card that #SEOscience refuses to acknowledge. #SEOscience experiments hunt for causes that are, in practice, unmeasurable; at best, it’s chaos theory. In reality, SEO is little more than an attempt to correlate your site’s visibility with the needs of search users. Is it even reasonable to think our studies can reveal anything more than a correlation? No, it’s not.
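To make that concrete, here is a minimal sketch (in Python, using scipy and invented numbers) of what a typical #SEOscience study boils down to: a rank correlation between one observable factor, hypothetical backlink counts in this case, and SERP position.

```python
# A sketch of what a typical "#SEOscience" ranking study actually computes:
# a rank correlation between one observable factor and SERP position.
# The backlink counts and positions below are invented for illustration.
from scipy.stats import spearmanr

# Hypothetical sample: backlink count and SERP position for ten pages.
backlinks = [120, 95, 80, 60, 55, 40, 30, 22, 10, 5]
serp_position = [1, 2, 4, 3, 5, 6, 8, 7, 9, 10]

rho, p_value = spearmanr(backlinks, serp_position)
print(f"Spearman rho: {rho:.2f} (p = {p_value:.3f})")
# A strong negative rho (more links, better position) says nothing about
# why those pages rank or what the users clicking on them actually wanted.
```

A rho near -1 makes for an impressive study headline, but it is still just a correlation; it cannot tell you whether links caused the rankings, rankings attracted the links, or both simply track the same underlying user behavior.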
Moreover, should we really expect experiments to tell us more about what search engines deem relevant about a site, when the ultimate sign of relevance becomes ever more closely matched to the intent of the user? Of course not, given the simple truth that the won click is quite literally the deciding factor in the search algorithm equation.
The outcome is the result of a giant Turing test of a magnificent list-building engine that can play simultaneously against millions of people inputting billions of queries. Google’s dominance in search perpetuates a self-fulfilling prophecy: the algorithm can continue to perfect the relevance of its search results because it is constantly fed a trove of search and user data on which to train. As a result, Google search gets more human with time.
Google aims to democratically elect web pages in response to text-based queries, SEO be damned. That we can observe a correlation between the number of links pointing to a site and how well-recognized it is in the real world does not necessarily point to a cause-and-effect relationship, but it’s no coincidence either.
In fact, we know that links reflect the real world, as that is exactly the presumption upon which Google is based. The links pointing to a site are probably a stronger indicator of its utility than a machine’s understanding of the text on the page. We also know that once Google atomized the web according to the voting power of the link, people started using search engines more. Google simply organized the web in a way that made people better at using it. The initial experiment worked, and so the Grand Experiment began perpetuating itself.
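For anyone who has never seen the “voting power of the link” spelled out, it is essentially the PageRank idea: a page’s score is the sum of the votes it receives from the pages linking to it, with each vote split across that page’s outbound links. Below is a toy power-iteration sketch of that idea in Python, run on a made-up four-page web; it is the textbook formulation, not anything resembling Google’s production system.

```python
# Toy PageRank: each page "votes" for the pages it links to, and its vote
# is split evenly across its outbound links. Repeated iteration converges
# on a score that rewards pages many others point to.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # a dangling page shares its vote with everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Hypothetical four-page web: every page links to "home", so "home" wins.
toy_web = {
    "home":    ["about"],
    "about":   ["home"],
    "blog":    ["home", "about"],
    "contact": ["home"],
}
print(pagerank(toy_web))
```

Run it and “home” collects the highest score simply because everyone links to it; that is the entire presumption the original experiment rested on.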
Google is as guilty as us SEOs of relying on correlation. That Google was already counting on that correlation is evident from the fact that they built an entire concept of ‘authority’ around it — a concept which is intuitively measured against its similarity to the real world.
And it really is conceivable that search could be organized differently in the future. Search engine indexes always need to be reorganized, and ranking updates are a testament to that need. The major ranking factors that change with every Google dance are often based on the new training sets Google is introducing to search data. And these sets are everywhere: Image, News, and Video results; Google Instant feedback into search queries; human editors in Panda; spam data in Penguin; and Freebase data in the Knowledge Graph. It’s no great mystery how these data sets influence search engines!
#SEOscience experiments are often “janky” and unbalanced by their very nature. If you’re trying to play against the dealer (rather than with the user), you’re probably going to draw junk conclusions from junk data. #SEOscience experiments that don’t analyze user behavior will nearly always tend towards a certain degree of pointlessness.
That is not to point fingers either. It really isn’t anyone’s fault. After all, how can the tiny corner of the internet where you’re visible tell you anything conclusive about Google’s internal workings? The behavior of search end users has changed drastically over the last 10 years. And the entire structure of the web has changed from one of a giant archive to one of an always-on network. If Google has to adapt to the changing behavior of web users, then it should be no surprise that you will too.
Google’s model of clustering topics by authority has gone from a link graph built off the static backbone of DMOZ and the Yahoo! Directory to the Knowledge Graph built off the dynamic ontological net of Freebase data (ontologies describe the relationships between concepts). That alone shows how much Google’s understanding of the web has evolved. And Google’s design priorities for SERPs have shifted from a reflection of structural data about the web to a topic-centric reflection of the average end user’s search intentions.
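If the jargon there feels opaque, the shift is easy to sketch: a directory files a page under a static category path, while an ontology expresses typed relationships between concepts as subject-predicate-object triples that can be traversed like a graph. The entities and relations below are invented for illustration and do not reflect actual DMOZ categories or the real Freebase schema.

```python
# Directory era: a page lives at a fixed spot in a static category tree.
dmoz_style = {
    "example-coffee-blog.com": ["Recreation", "Food", "Drink", "Coffee"],
}

# Knowledge-graph era: concepts are nodes connected by typed relationships
# (subject, predicate, object), so "coffee" relates to other concepts
# instead of sitting in a folder.
knowledge_graph_style = [
    ("Coffee", "is_a", "Beverage"),
    ("Coffee", "made_from", "Coffee bean"),
    ("Espresso", "is_a", "Coffee"),
    ("Coffee bean", "grown_in", "Ethiopia"),
]

# Answering "what kind of thing is espresso?" becomes graph traversal
# rather than category lookup.
def is_a_chain(entity, triples):
    for subj, pred, obj in triples:
        if subj == entity and pred == "is_a":
            return [entity] + is_a_chain(obj, triples)
    return [entity]

print(is_a_chain("Espresso", knowledge_graph_style))  # ['Espresso', 'Coffee', 'Beverage']
```

The point is simply that relationships between concepts, not positions in a hierarchy, become the organizing unit.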
So, if the search index and associated rankings are a reflection of what works for the user, then what do we know about search engines and their ranking algorithms?
Well, not all that much if we’re being completely honest. Ultimately, relevance is in the eye of the beholder and at the end of a mouse click — it’s not in the algorithm.
What we do know is that following the pretty basic points made in the Google Guidelines does tend to yield improved rankings. Yes, some (such as the definition of quality) may be open to interpretation, but as Googlers say themselves, “Webmasters who spend their energies upholding the spirit of the basic principles will provide a much better user experience and subsequently enjoy better ranking than those who spend their time looking for loopholes they can exploit.”
In short, following the spirit of the Google Guidelines really will deliver specific and measurable results. You could test each point and you’d still have only wasted time understanding a computer when your goal should have been to understand people.
#SEOscience perpetuates stagnation. I implore SEOs to concentrate their efforts on the information seeker and relay those insights back to the community, rather than worrying about disavowal files or the like. #SEOscience as it currently stands is inherently useless when you consider that the actual mission of an SEO is to engage the search engine user.
Thus, it’s time to reframe our #SEOscience problem as not one of determining the whims of Google, but one of pinpointing the whims of users.
Follow the spirit rather than the letter of the Google Webmaster Guidelines. Who cares what the algorithm is? If you’re not betting on #creativeSEO solutions, you’re essentially betting against yourself. Without #creativeSEO remaining a stable part of the larger community discussion, we risk ignoring the most important ingredient in the question of relevance.
Jonathan Allen
steve plunkett says
How long have you been doing SEO? A. I have never used the hashtag #SEOScience. B. I am not an algo chaser; those people focus on the problem, not a solution. C. I study user behavior and Google machine learning. D. I am currently studying voice search and KBO. E. My title is Sr. Search Scientist. I’ve been doing SEO since 1993. I helped create the Yahoo! directory and DMOZ; that was the science of ontology.
But a lot of your article hits home. There are entirely too many algo chasers, but are they scientists? My degrees are in psychology and criminal justice; I’m looking at the user. Not sure where you got your science from. Maybe it is the hashtag? More than likely, because testing isn’t #SEOScience but #whiteCoatSEO. =)
Jonathan Allen says
I’ve been doing SEO since 2002, so 12 years now. The intention of the article was not to call out any specific activity on the #SEOscience hashtag. I can see how it might not have been clear, but I just intended to include the hashtag to invite discussion on social media – and there’s another hashtag #creativeSEO that was designed to stand in contrast.
If you use that hashtag, I wasn’t trying to claim you were an algo chaser. But I like the distinction you made of focusing on the problem and not the solution. I totally agree. By the same token, I have no beef with your job title either! I think #SearchScience is a far more sensible concept than #SEOscience, specifically because search science is trying to understand user needs, not the implicit needs of search engine algorithms.
And you make a great point that testing isn’t science. In many ways it summarizes in one sentence what took me an entire article to say! I just find it annoying that a lot of tests get reported as “science,” when most of the tests are not well constructed or fair.
And thanks for saying that much of it hits home. My main aim in writing the piece was to ask whether we’re really all absorbing the claim of science that is so frequently being made, and to find out if there were people out there having the same frustrations. And maybe if they were, we could talk about it more on the #SEOscience hashtag.