This was the first organic track of the day, moderated by Danny with speakers from the four major engines: Vanessa Fox from Google, Amit Kumar from Yahoo, Peter Linsley from Ask (you should maybe do a few more posts, Peter), and Eytan Seidman from Micro$oft Live Search.
* Ask was up next, and it was mostly a reiteration of the Microsoft presentation. Again it was repeated that duplicate content is not a penalty. What is it with this “not a penalty” thing? It’s a damn penalty if your pages don’t rank because the algorithm can’t figure out which one is more important. One new thing he did mention: Ask only looks at indexable content for duplication, not other areas like templates and navigation. They’re basically running page segmentation and duplicate content analysis together.
* Here comes Yahoo. Same drivel. This time the term was “approximate duplication”. Please define. It’s very difficult to make a site with 50 thousand pages that doesn’t have some type of “approximate duplication” or “substantially identical” content. One of you engineers, give me a range or something. (On another note, all of the “engineers” had trouble working PowerPoint. Except the Microsoft guy.)
* Next up was Vanessa Fox. And no, unfortunately she wasn’t nude. I kind of glazed over during her presentation. She had duplicate pictures of Alyson Hannigan from Buffy the Vampire Slayer (and no, again unfortunately, she wasn’t nude either). It seemed to be a basic explanation of duplicate content that I thought could have been much more advanced. Google gets an F for info, and an A for mentioning Buffy.
* In Q&A time there weren’t too many questions that were interesting. I had my hand up to ask one but never got the chance. My question: if they’re using page segmentation to analyze for duplicate content, do they devalue certain sections of a page whose content areas are “substantially identical”? I also wanted them to define, or at least give some kind of range for, what they consider “substantially identical”, but I knew they wouldn’t answer that one. There were a few interesting ideas thrown around: certain variables that would automatically tell the bots a page is a duplicate, or reporting features inside the webmaster consoles showing what they consider duplicate. I seriously doubt they’ll ever build tools like that, though; they’d be too easily gamed.
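To make the Ask point concrete: the segmentation-plus-dedup idea is that only the indexable body text gets hashed and compared, so identical navigation or templates across pages don’t trigger duplication. None of the engines showed how this works internally, so this is just a minimal sketch under the assumption that pages arrive pre-segmented into labeled blocks (the `nav`/`body` labels and `extract_main_content` helper are hypothetical, not anything Ask described).

```python
import hashlib

def extract_main_content(page):
    """Keep only indexable body blocks, dropping template/navigation blocks.
    Assumes a hypothetical upstream segmenter already labeled each block."""
    return " ".join(text for label, text in page if label == "body")

def duplicate_groups(pages):
    """Group URLs whose main-content hash is byte-for-byte identical."""
    groups = {}
    for url, page in pages.items():
        digest = hashlib.sha1(extract_main_content(page).encode()).hexdigest()
        groups.setdefault(digest, []).append(url)
    return [urls for urls in groups.values() if len(urls) > 1]

pages = {
    "/a": [("nav", "Home | About"), ("body", "Unique article text.")],
    "/b": [("nav", "Home | Contact"), ("body", "Unique article text.")],
    "/c": [("nav", "Home | About"), ("body", "A different article.")],
}
# /a and /b collide despite having different navigation areas
print(duplicate_groups(pages))
```

Note this only catches exact duplicates of the main content; catching Yahoo’s “approximate duplication” needs a similarity measure, which is the next sketch.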
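And since neither Yahoo nor anyone on the panel would put a number on “approximate duplication” or “substantially identical”, here’s the standard primitive such a threshold would sit on top of: k-word shingling with Jaccard similarity. This is a textbook near-duplicate technique, not anything the engines confirmed they use; the texts and the k=3 choice are mine.

```python
def shingles(text, k=3):
    """The set of k-word shingles (overlapping word n-grams) of a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity of two texts' shingle sets, in [0, 1]."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

original  = "the quick brown fox jumps over the lazy dog near the river"
tweaked   = "the quick brown fox jumps over the lazy cat near the river"
unrelated = "completely different page about search engine conferences"

# A one-word edit still leaves most shingles shared; unrelated text shares none.
print(round(jaccard(original, tweaked), 2))
print(round(jaccard(original, unrelated), 2))
```

Whatever cutoff an engine picks on that score (0.8? 0.9?) is exactly the “range” I wanted the engineers to disclose and knew they never would.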
That was it for that session. I thought it could have been a little more advanced than it was. Up next: How to spam social media networks. Stay tuned.