In my last article, Search Engine Optimization: What it is, and why you shouldn’t care, I suggested that Search Engine Optimization (the practice of designing your Web page so that it appears as high as possible in a list of search results) was not only misleading to both users and Web designers, but that it also opened the door to malicious practices. To make matters worse, current SEO “rules” suppress many genuine innovations even while they allow deceitful Web sites to gain high rankings.

In a comment on my last article, Frank summarized what I expect are pretty general feelings about Search Engine Optimization. Essentially, Frank suggests that Search Engine Optimization helps provide “good no-nonsense copy that helps a searcher get their question answered.” He adds that “search engines are not something to be fought but rather embraced.” In general terms, I agree completely. As I said, who could imagine life without search engines? And isn’t it obvious that Web designers should consider how their Web site will be placed in search engine results pages (SERPs) when they build their site? Of course it is. The basic ideas behind SEO are very valid, and are not to be fought.

That said, there are some significant flaws in the whole process that need to be addressed. This blog is primarily aimed at small business and personal Web site designers – people on typically low budgets who need to (or want to) build a Web site and attract visitors. In my last few posts, I’ve offered several different methods for building a content management system. The idea behind content management is that you write the HTML code for your page once, then make changes frequently and easily without having to delve back into your page’s code or re-upload your page or site every time. Unfortunately, every one of these easy-to-create content management systems shares one major drawback: it makes the page’s content completely invisible to search engines! This is, in my opinion, the most serious flaw in search engine technology. The robots and spiders used to index Web pages read only the text that appears in a page’s HTML source; they do not run JavaScript, so they never see any content delivered through it (which includes all of the content delivered by the content management systems I’ve described).

For example, consider a recent article I wrote and placed on my site using my favorite content management system, Texty. This lengthy article should show up (albeit far down in the list) in searches for “One Laptop per Child.” Unfortunately, search engines will never see that phrase, because it never appears in the page’s HTML source – it is written into the page by JavaScript after the browser loads it. Take a look at the article itself and compare it to this page, which shows what the search engines actually see. You’ll notice that not one word of the article appears to the search engines. This is bad enough, but consider a small business trying to build a manageable e-commerce system. Its entire catalog, including any product detail pages, would be completely invisible to search engines (see catalog and detail search engine views).
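
To make the problem concrete, here’s a rough sketch of the kind of page these script-based systems produce. The file name, element ID, and article text below are my own illustrative stand-ins, not Texty’s actual output – the point is simply that the words a search engine would need to index never appear in the HTML source; they exist only after a browser runs the script.

```html
<!-- index.html: a hypothetical page whose article body is injected by a script -->
<html>
  <head>
    <title>One Laptop per Child - My Thoughts</title>
  </head>
  <body>
    <h1>One Laptop per Child</h1>

    <!-- This empty container is all a text-only crawler ever sees. -->
    <div id="article"></div>

    <!-- In a hosted system this script would typically be served by the CMS;
         the article text exists only after a browser executes it. -->
    <script type="text/javascript">
      document.getElementById("article").innerHTML =
        "<p>The One Laptop per Child project aims to put a low-cost " +
        "laptop in the hands of every schoolchild...</p>";
    </script>
  </body>
</html>
```

Fetch that page with a text-only tool and all you get back is an empty container and a script tag – which is exactly what the search engine sees.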

What does this mean to small business and personal Web site owners? Essentially, it means that without using more advanced (and expensive) Web design and building tools, or spending much more time working with the raw HTML of their pages, their sites are effectively shut out of search engine results pages (SERPs). And if their pages are not included in SERPs, no one will ever see them. Search Engine Optimization is almost impossible for small, low-budget Web sites. Although many people may think this is OK (after all, they might say, I don’t want to see a bunch of low-budget Web pages when I do a search!), small business owners know that the information they provide on their Web sites is just as valid as the information larger Web sites offer. Don’t they deserve a chance to be recognized?

My question to the search engines is, why not index the Web the way people actually see it? Why ignore content just because it is not hard-coded into the HTML for the site? Most search engines would reply that they use text-only indexing because most braille-based browsers for the blind view Web pages the same way (they show only the text-based content). Unfortunately, that’s not an answer; it just deflects the question – why, then, do braille-based browsers show only the HTML-derived text, and not the “true” content of the page?

Search engines will also say that they index only the text-based portion of the page to prevent “tricks” that would display one type of content to the search engines and another to the viewer. In truth, however, by indexing only the text in the HTML source, search engines are allowing (possibly encouraging?) this practice rather than preventing it. It would be very easy for me to put some search engine-friendly text within the HTML code for my page, but then use the same content management tools I’m using to display different content to the user. This is actually a very common practice. Have you ever performed a search for something, clicked on one of the results, and found that the page had nothing to do with what you were searching for? It could be that the page exploits these search engine deficiencies to get indexed for one type of search while displaying a completely different type of content to the viewer.
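
Here’s a rough, hypothetical sketch of that bait-and-switch (the text and IDs are made up for illustration): the HTML source contains innocent, keyword-rich copy for the crawler, and a script swaps it out the moment a real visitor’s browser loads the page.

```html
<!-- trick.html: the HTML source carries one message for the crawler... -->
<html>
  <body>
    <div id="content">
      <p>Honest, helpful information about One Laptop per Child and
         educational charities.</p>
    </div>

    <!-- ...and a script immediately replaces it with something else for
         the human visitor. A text-only indexer never runs this. -->
    <script type="text/javascript">
      document.getElementById("content").innerHTML =
        "<p>Buy our discount widgets now! Completely unrelated to " +
        "what you searched for!</p>";
    </script>
  </body>
</html>
```

A text-only indexer files the page under the innocent copy; a human visitor never sees a word of it.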

Fortunately for me, the contents of my site are meant to serve as examples of the techniques I describe in this blog. So if my Web pages are not indexed by Google or other search engines, I’m not too upset about it. However, many of my readers will not be so lucky. They are actually trying to attract customers or visitors to their site. How can they do this if the search engines are ignoring their good content?

It’s time for search engines to update their methodology and start indexing the Web as people actually see it. Rather than dictating how Web pages should be built, search engines should be accommodating the way pages are really made. While I don’t have any problem with the way search engines weight results based on incoming links to a particular page or site, I do have a serious objection to the practice of ignoring good content just because of the way it is delivered to the viewer. Let’s face it: Google, Microsoft, and Yahoo all have plenty of money to invest in research to find new ways to index the “true” Web. So why not start indexing what people see, rather than what Web developers are able to fit into the extremely narrow limits of outdated search engine tools?

That’s all for today. In my next post(s), I’ll talk about creating an RSS feed for your Web site. I’m also working on an interesting process for building and delivering an e-newsletter using Zoho Creator to collect and manage subscriptions, build content, and deliver the e-mail. It’s been a pretty interesting project, and I’m looking forward to sharing it with you!
