So in my off/unemployed time I've been building up my SEO and online-reputation skills - a friend recommended a few good sites to me, so I figured I'd post one here for future reference :)
http://www.woorank.com/ - Great site - for *free* it will scan your site's code and point out overall weak spots (missing alt text, no meta tags or descriptions, etc.) - been using this to help speed up the site-optimization process :)
Obviously for a fee they'll dig into things like your individual pages, but still - a good site overall.
I'm working on an HTML/XML parser in Excel that will let me dump the raw code in and pull out "problem" areas, because while a site like Woorank is helpful, often enough I need to compile documents detailing exactly what needs to be changed, and where. Anything to speed it up :)
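Not the Excel version (that's still in progress), but here's a minimal sketch in Python of the kind of scan I mean - flagging images with no alt text and checking for a meta description. The class name and the sample HTML are made up purely for illustration:

```python
# Rough sketch of a "problem area" scan: flag <img> tags lacking alt
# text and note whether a meta description tag exists at all.
from html.parser import HTMLParser

class SEOAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = []            # src of each img with no alt text
        self.has_meta_description = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and not attrs.get("alt"):
            self.missing_alt.append(attrs.get("src", "(no src)"))
        if tag == "meta" and attrs.get("name", "").lower() == "description":
            self.has_meta_description = True

audit = SEOAudit()
audit.feed('<html><head></head><body><img src="logo.png"></body></html>')
print(audit.missing_alt)          # ['logo.png']
print(audit.has_meta_description) # False
```

The same walk-the-tags approach should translate to VBA once I pin down a decent parsing routine there.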
Monday, May 17, 2010
Tuesday, April 27, 2010
Back in action!
Great articles on search engines & their respective bots (why Yahoo named theirs "Slurp" is completely beyond me):
http://ghita.org/search-engines-dynamic-content-issues.html
http://en.wikipedia.org/wiki/Robots_exclusion_standard
Posting these as I'm assisting in some SEO/online reputation work now :)
More to come!
Still finding that Excel is the best top-level site-content parser, especially for search engines - they only have a few variable responses, probably tested and limited because they're trying to strike a balance between stopping automated querying of their services (much like the old API they scrapped, which bypassed ads and their revenue) and confusing their users with too many different cases.
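On the robots exclusion standard from the second link above: any polite crawler is supposed to check robots.txt before fetching. A minimal sketch using Python's standard urllib.robotparser - the rules here are made up for illustration; a real check would fetch the site's actual robots.txt:

```python
# Check what a robots.txt (see the robots exclusion standard link above)
# allows a given bot to crawl. Rules below are invented for illustration.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: Slurp",      # Yahoo's oddly-named crawler
    "Disallow: /private/",
    "",
    "User-agent: *",
    "Disallow: /",
])

print(rp.can_fetch("Slurp", "http://example.com/index.html"))  # True
print(rp.can_fetch("Slurp", "http://example.com/private/x"))   # False
```

Handy if you're ever debugging why a bot does (or doesn't) index a page.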