On my previous post on SEO interview questions, a commenter asked where they could find the answers. Most of the questions are purposely open-ended to gauge the level of experience and knowledge of your candidate, but I will take the more specific ones and provide some explanations. I have also added a question that was not on the original post. Whether you are looking to hire or want to dazzle prospective employers at an interview, this post may be helpful.
What areas do you think are currently the most important in organically ranking a site?
Obviously a subjective answer, but domain trust, inbound links/anchor text, and properly formatted title tags are a good start.
10) What kind of strategies do you normally implement for backlinks? What do you think about link buying, link bait, and other specific backlink strategies?
There are too many correct answers for this one, so let’s go with the wrong answer: “reciprocal link requests.”
42) What is the Ultimate Answer to Life, the Universe, and Everything?
42 of course. If your candidate doesn’t know this please shoot him with the Point-of-view gun.
22) What is page segmentation? (ever heard of VIPS?)
VIPS (Vision-based Page Segmentation) is a research paper from Microsoft on the broader topic of page segmentation. It analyzes how a user perceives a web page’s layout visually, independent of the underlying code and technologies. Each section of the page is segmented into blocks, and a different degree of relevance is assigned to each block. This explains one reason links in content areas are weighted more heavily than sidebar and navigational links (another reason is the use of shingling algorithms, which I’ll get into in another question). Since this is a visual topic, I’ll give you a visual example from the research paper:
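To make the block-weighting idea concrete, here is a toy sketch in Python. The block names, weights, and example links are all invented for illustration; a real engine would derive the blocks from visual layout analysis (as in VIPS) and its weights are not public.

```python
# Illustrative only: weighting links by the page block they appear in.
# Block names and weight values are invented for this sketch.
BLOCK_WEIGHTS = {
    "content": 1.0,     # links in the main content carry the most weight
    "sidebar": 0.3,
    "navigation": 0.2,
    "footer": 0.1,
}

def weighted_links(links_by_block):
    """Assign each link a weight based on which block of the page it sits in."""
    scored = []
    for block, links in links_by_block.items():
        weight = BLOCK_WEIGHTS.get(block, 0.5)  # default for unknown blocks
        for link in links:
            scored.append((link, weight))
    return scored

page = {
    "content": ["/great-article"],
    "sidebar": ["/ad-partner"],
    "navigation": ["/home", "/about"],
}
for link, weight in weighted_links(page):
    print(link, weight)
```

The point isn’t the specific numbers; it’s that two identical links can be valued very differently depending on where they live on the page.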
23) What’s the difference between PageRank and Toolbar PageRank?
Internally, PageRank is constantly updated, while toolbar PageRank is updated only every 2-3 months. Toolbar PageRank is a single-digit integer, while the internally calculated PageRank is more like a floating-point number. And the final answer: who cares?
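To see why the internal score is “more like a floating-point number,” here is a minimal power-iteration sketch of the classic PageRank formula (damping factor 0.85) on an invented three-page graph. This is the textbook algorithm, not Google’s production implementation.

```python
# Minimal PageRank power iteration on a tiny made-up link graph.
def pagerank(graph, damping=0.85, iterations=50):
    n = len(graph)
    ranks = {page: 1.0 / n for page in graph}  # start with uniform scores
    for _ in range(iterations):
        new_ranks = {}
        for page in graph:
            # Each inbound page splits its rank evenly among its outlinks.
            incoming = sum(
                ranks[p] / len(links)
                for p, links in graph.items() if page in links
            )
            new_ranks[page] = (1 - damping) / n + damping * incoming
        ranks = new_ranks
    return ranks

graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(graph)
for page, score in sorted(ranks.items()):
    print(page, round(score, 4))  # floating-point values, not 0-10 integers
```

The toolbar number was essentially a coarse, stale bucketing of values like these onto a 0-10 scale.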
24) What is Latent Semantic Analysis (LSI – Indexing)?
The process of analyzing the relationships between terms across sets of documents. The engine looks not only at the query itself but also for related terms in the document set. Documents that are semantically similar to the query carry more weight than those that are not. This is an often misunderstood concept.
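A pure-Python sketch of the groundwork behind this idea: build a term-document matrix and compare documents by cosine similarity. Full LSA would go one step further and apply a singular value decomposition to this matrix to surface latent “concept” dimensions; the documents here are made up for illustration.

```python
import math

# Toy term-document vectors plus cosine similarity (the matrix that
# LSA would then decompose with SVD). Example documents are invented.
docs = [
    "the cat sat on the mat",
    "a cat chased a mouse",
    "stock prices fell sharply today",
]

vocab = sorted({word for d in docs for word in d.split()})

def vectorize(doc):
    """Count how often each vocabulary term appears in the document."""
    words = doc.split()
    return [words.count(term) for term in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

vectors = [vectorize(d) for d in docs]
print(round(cosine(vectors[0], vectors[1]), 3))  # the two cat documents overlap
print(round(cosine(vectors[0], vectors[2]), 3))  # no shared terms: 0.0
```

Raw term counts only catch exact word overlap; the SVD step is what lets semantically related documents score as similar even when they share few literal terms.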
25) What is Phrase Based Indexing and Retrieval and what role does it play?
Phrase based indexing is used to classify good and bad phrases based on certain criteria across the entire document. The number and proximity of phrases are taken into account. It is also capable of predicting the presence of other phrases on the page and will assign a higher or lower value depending on whether those phrases are present or not.
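A toy sketch of the prediction side of this: extract two-word phrases from documents and record which phrases co-occur, so that the presence of one phrase predicts related ones. The documents and the phrase length are invented for illustration and are far simpler than what the actual patents describe.

```python
from collections import Counter
from itertools import combinations

# Toy phrase co-occurrence: which two-word phrases tend to appear
# in the same document? Example documents are invented.
def phrases(text, n=2):
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

docs = [
    "search engine optimization improves search rankings",
    "search engine marketing and search engine optimization",
    "baking bread requires patience",
]

# Count how often each pair of phrases occurs in the same document.
cooccurrence = Counter()
for doc in docs:
    for pair in combinations(sorted(phrases(doc)), 2):
        cooccurrence[pair] += 1

pair = tuple(sorted(("search engine", "engine optimization")))
print(cooccurrence[pair])  # these phrases co-occur in 2 documents
```

A document mentioning “search engine” but none of its statistically expected companion phrases would, under this model, look less topically authentic than one where the expected phrases cluster together.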
26) In Google Lore – what are ‘Hilltop’, ‘Florida’, and ‘Big Daddy’?
Hilltop: An old and often contested algorithm that scores pages based on expert documents and topical relevancy. The theory behind it was to decrease the possibility of manipulation through buying high-PR links from off-topic pages. It is believed to have been implemented during the Florida update, which is our next topic.
Florida: The highly controversial update implemented by Google in November of 2003, much to the chagrin of many seasonal retail properties. There were several theories as to what was included in this update: an over-optimization filter, a competitive-term filter, and the Hilltop algorithm. This update had catastrophic results for many web merchants.
Big Daddy: A test data center used by Google to preview algorithm changes. This information was made public around November of 2005 by Matt Cutts and allowed marketers to preview upcoming SERPs.
What is a shingling algorithm and how is it used?
A shingling algorithm is a page segmentation method similar to VIPS, but less resource-intensive and more likely to be used in search engine algorithms. Shingling looks for blocks of content that do not occur frequently across a website, as well as blocks with certain desired features. When the engine stores this information, navigational, advertisement, and other non-content areas are omitted. This increases speed, saves storage space, and theoretically makes the results more relevant because of the increase in unique content.
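The core mechanic can be sketched in a few lines of Python: break each block of text into overlapping word windows (shingles) and compare blocks with Jaccard similarity. Blocks that recur near-identically across pages (navigation, footers) score high and can be treated as boilerplate. The window size of 4 and the example text are illustrative choices, not known engine values.

```python
# Minimal w-shingling sketch: overlapping word windows + Jaccard similarity.
# Window size and sample text are invented for illustration.
def shingles(text, w=4):
    words = text.lower().split()
    return {tuple(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Overlap of two shingle sets: |intersection| / |union|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

nav_page1 = "home about products contact us sitemap"
nav_page2 = "home about products contact us sitemap"
article = "shingling finds duplicated blocks by comparing word windows"

print(jaccard(shingles(nav_page1), shingles(nav_page2)))  # identical boilerplate
print(jaccard(shingles(nav_page1), shingles(article)))    # unique content
```

A block whose shingle set matches blocks on many other pages of the site is almost certainly template chrome, and dropping it before indexing leaves only the content worth storing.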