The Beginning of the Internet
In its earliest days, the Internet was essentially a collection of FTP (File Transfer Protocol) sites that users could access to download (or upload) files.
The first search tool used on the Internet was Archie, created in 1990 by Alan Emtage. Archie wasn't a search engine in the sense we use the term today for Google.
Gopher, created by Mark McCahill at the University of Minnesota in 1991, was a program that indexed the plain-text documents that later became the first web sites on the public Internet.
Both of these programs worked in essentially the same way, enabling users to search the indexed information by keyword.
Wandex, developed by Matthew Gray, was the first program to both index and search the index of pages on the Web. It was also the first program to crawl the Web, and it later became the basis for all search crawlers. After that, search engines took on a life of their own. From 1993 to 1998, the major search engines that you're probably familiar with today were created.
A Search Engine
Type a word or phrase into a search box and click a button. Wait a few seconds, and references to thousands (or hundreds of thousands) of pages will appear.
So…what exactly is a search engine?
On the back end, a search engine is a piece of software that uses algorithms (computer programs that follow a list of well-defined instructions to complete a task) to find and collect information about web pages. The information collected usually includes keywords or phrases that are possible indicators of what is contained on the web page as a whole, the URL (Uniform Resource Locator, the web address that appears in your browser's address bar) of the page, the code that makes up the page, and links into and out of the page. That information is then indexed and stored in a database.
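To make the back end concrete, here is a minimal sketch in Python of the indexing idea, assuming a toy inverted index (the `index` dictionary, the `add_page` helper, and the sample URLs are invented for illustration; real search engines store far more data points per URL):

```python
from collections import defaultdict

# Toy "back end": an inverted index mapping each keyword to the
# set of URLs whose pages contain that keyword.
index = defaultdict(set)

def add_page(url, text):
    """Record every word on the page as a possible keyword for that URL."""
    for word in text.lower().split():
        index[word].add(url)

add_page("http://example.com/beans", "fresh coffee beans roasted daily")
add_page("http://example.com/guide", "coffee brewing guide for beginners")
```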
On the front end, the software has a user interface where the user enters a word or phrase in an attempt to find specific information. When the user clicks a Search button, the algorithm examines the information stored in the back-end database and retrieves links to web pages that appear to match the search term the user entered.
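Continuing the toy sketch above, the front end's job can be pictured as a lookup against that stored index; this `search` helper is illustrative only and keeps pages that match every query word:

```python
def search(query):
    """Return the URLs whose stored keywords match every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index[words[0]])
    for word in words[1:]:
        results &= index[word]  # keep only pages matching all terms so far
    return results

print(search("coffee"))          # both toy pages match
print(search("coffee brewing"))  # only the brewing guide matches
```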
Search Engine Results Pages (SERPs)
The part of a search engine that's visible to users is the search engine results page (SERP). This is the collection of pages returned with search results after a user enters a search term or phrase and clicks the Search button.
The SERP is where you ultimately want to end up; the higher you rank in the search results, the more traffic you can expect to generate from a search. Your goal is to land on the first page of the SERP, specifically in the top 10 or 20 results returned for a given search term or phrase.
Most people begin by reading the titles and descriptions of the top results.
Crawlers, Spiders, and Robots
The query interface and search results pages truly are the only parts of a search engine that the user ever sees. Every other part of the search engine is behind the scenes, out of view of the people who use it every day. That doesn’t mean it’s not important, however. In fact, what’s in the back end is the most important part of the search engine, and it’s what determines how you show up in the front end.
The back-end process crucial to your success is the collection of information about web pages by an agent called a crawler, spider, or robot. The crawler looks at every URL on the Web that isn't blocked from it; using an algorithm, it collects the keywords and phrases on each page, which are then included in the database that powers the search engine. In the most basic sense, crawlers, spiders, and robots are all essentially the same program: they collect information about every web URL they can reach.
This information is then cataloged according to the URL at which it's located and stored in a database. When a person uses a search engine to locate something on the Web, the references in the database are searched and the search results are returned.
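To illustrate that crawl-then-catalog loop, here is a minimal breadth-first crawler sketch using only Python's standard library. It is a toy under simplifying assumptions: a real crawler also honors robots.txt, rate limits, deduplicates aggressively, and handles far more edge cases:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkAndTextParser(HTMLParser):
    """Collects href links and visible text from a single HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.words.extend(data.lower().split())

def crawl(seed_url, max_pages=10):
    """Breadth-first crawl from seed_url; returns {url: set of keywords}."""
    catalog = {}
    queue = deque([seed_url])
    while queue and len(catalog) < max_pages:
        url = queue.popleft()
        if url in catalog:
            continue  # already cataloged this URL
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue  # unreachable or malformed URL: skip it
        parser = LinkAndTextParser()
        parser.feed(html)
        catalog[url] = set(parser.words)      # keywords, cataloged by URL
        for link in parser.links:
            queue.append(urljoin(url, link))  # resolve relative links
    return catalog
```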
Databases
Every search engine contains or is connected to a system of databases where data about each URL on the Web (collected by crawlers, spiders or robots) is stored. These databases are massive storage areas that contain multiple data points about each URL.
The data might be arranged in any number of different ways and is ranked according to a method of ranking and retrieval that is usually proprietary to the company that owns the search engine.
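As a stand-in for that proprietary ranking-and-retrieval step, the toy index from earlier could be searched and ranked by something as simple as the number of query terms each page matches; the `ranked_search` name is invented for this sketch:

```python
from collections import defaultdict

def ranked_search(query):
    """Rank URLs by how many of the query's words each page matches.

    Real engines blend hundreds of signals with secret weights; counting
    matching terms here is only a stand-in for that scoring step.
    """
    scores = defaultdict(int)
    for word in query.lower().split():
        for url in index.get(word, set()):
            scores[url] += 1
    return sorted(scores, key=scores.get, reverse=True)  # best match first

print(ranked_search("coffee brewing guide"))
```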
Quality Considerations
This method of ranking is called PageRank (for Google) or, more generically, quality scoring. This ranking or scoring determination is one of the most complex and secretive parts of SEO.
When you're considering the importance of databases, and by extension page quality measurements, in the mix of SEO, it might be helpful to compare them to something more familiar: customer service.
What comprises good customer service is not any one thing. It's a combination of different factors: greeting the customer, attitude, helpfulness, and knowledge, just to name a few. These factors come together to create a pleasant experience. A web page quality score works essentially the same way.
Some of the elements that are known to be weighted to develop a quality score are as follows:
- Domain names and URLs
- Page content
- Link structure
- Usability and accessibility
- Meta tags
- Page structure
A combination of these and other factors is used to create the quality score.
The better the quality score your site generates, the higher it will place in search engine results, which means more traffic coming to you from search engines.
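As a toy illustration of how weighted factors could combine into a single score, here is a sketch; the factor names mirror the list above, but the weights and scores are invented, since the real ones are proprietary:

```python
# Hypothetical weights -- the real ones are proprietary and secret.
WEIGHTS = {
    "url": 0.10,          # domain name and URL
    "content": 0.35,      # page content
    "links": 0.25,        # link structure
    "usability": 0.15,    # usability and accessibility
    "meta": 0.10,         # meta tags
    "structure": 0.05,    # page structure
}

def quality_score(factors):
    """Combine per-factor scores (0.0-1.0) into one weighted quality score."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

page = {"url": 0.8, "content": 0.9, "links": 0.4,
        "usability": 0.7, "meta": 0.6, "structure": 0.5}
print(f"quality score: {quality_score(page):.2f}")  # 0.69 for this toy page
```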
Putting Search Engines to Work for You
Search engine optimization is essentially the science of designing your web site to maximize your search engine rankings. This means that all of the elements of your web site are created with the goal of obtaining high search engine rankings.
Those elements include the following:
- Entry and exit pages
- Page titles
- Site content
- Web site structure
In addition to these elements, you also have to consider things such as keywords, links, and meta-tagging. All of this means that the concept of search engine optimization is not based on any single element. Instead, search engine optimization is based on a vast number of elements and strategies. It’s also an ongoing process that doesn’t end once your web site is live.
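Since meta tags come up repeatedly in SEO work, here is a hedged sketch of how a crawler-like program could read the title and meta tags from a page using Python's standard library; the `SeoTagParser` class and the sample HTML are invented for illustration:

```python
from html.parser import HTMLParser

class SeoTagParser(HTMLParser):
    """Pulls out the page title and the meta tags search engines read."""
    def __init__(self):
        super().__init__()
        self.title = ""
        self.metas = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and "name" in attrs:
            self.metas[attrs["name"]] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

parser = SeoTagParser()
parser.feed("""<html><head><title>Fresh Coffee Beans</title>
<meta name="description" content="Beans roasted daily.">
<meta name="keywords" content="coffee, beans, roast">
</head><body>...</body></html>""")
print(parser.title)   # Fresh Coffee Beans
print(parser.metas)   # {'description': ..., 'keywords': ...}
```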
Manipulating Search Engines
SEO is about manipulating search engines!
You Can:
- Create a web site that contains meta tags, content, graphics and keywords that help improve your site ranking.
- Use keywords liberally on your site, so long as they are used in the correct context of your site topic and content.
- Encourage web site traffic through many venues, including article directory submissions, social media, forum and blog participation, just to name a few.
- Submit your web site to search engines and directories manually, rather than wait for them to pick up your site in the natural course of cataloging web sites.
These are just a few basic rules for putting search engines to work for you.
PageRank
Google's PageRank actually started as part of a research project that Larry Page and Sergey Brin were working on at Stanford University.
“PageRank relies on the uniquely democratic nature of the Web by using its vast link structure as an indicator of an individual page’s value. Google interprets a link from page A to page B as a vote, by page A, for page B. But Google looks at more than the sheer volume of votes, or links a page receives; it also analyzes the page that casts the vote. Votes cast by pages that are themselves “important” weigh more heavily and help to make other pages “important.””
In other words, it's a mystery. A page that has many inbound links (each an equal vote) might rank lower than a page that has a single inbound link from a “more important” page. The lesson? Create pages for visitors, not for search engines.
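The formula behind that quoted description is public from Page and Brin's original paper, so a minimal sketch is possible: ranks start equal, and each iteration every page passes a damped share of its rank along its outgoing links. The tiny graph below is invented, and dangling pages are ignored for simplicity:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively compute PageRank for a {page: [pages it links to]} graph."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with equal "votes"
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling page: ignored in this sketch
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share  # each link passes on a weighted vote
        rank = new_rank
    return rank

# A links to B and C; B links to C; C links back to A.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
for page, score in sorted(pagerank(graph).items(), key=lambda x: -x[1]):
    print(page, round(score, 3))
```

On this toy graph, C (which receives votes from both A and B) ends up most important, and A, with only a single link from the important page C, still ranks well above B: exactly the “votes cast by important pages weigh more heavily” idea from the quote.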
A Tip for a Higher PageRank
There are many ways to raise your Google PageRank, but one of the most important factors is for the Google algorithm to notice backlinks from relevant, high-authority sites. The more relevant an inbound link pointing to your site is, the higher the PageRank you can potentially receive. If you aren't already doing it, put an SEO plan in motion that targets relevant content websites.
One more thing to keep in mind: a large number of inbound links that aren't related to your site's content can be detrimental to your Google PageRank.
Google Algorithm
The Google algorithm is the heart and soul of the ever-popular Google search engine. One element of online marketing that has people scratching their heads in confusion is the algorithm that actually determines what a page's Google PageRank should be.
Search engines as a whole are very complicated programs, even to the most brilliant mathematical minds.
The algorithm that the Google search engine uses establishes the baseline against which the web pages in its database are compared. For the Google algorithm, more than 200 factors are used to establish this baseline. Google makes about half a dozen changes to that algorithm each week to reflect changes in the way people actually perform searches.
This mysterious baseline varies from search engine to search engine. Some search engines look more closely at links than others do, some look at keywords and context, some look at meta data, but most combine more than one of those elements in some unknown ratio that is completely proprietary.
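To see why that ratio matters, here is a toy sketch: the same two invented pages swap places depending on which hypothetical weighting an engine uses, so the proprietary blend, not the raw signals, decides the ordering:

```python
# The same two invented pages scored under two hypothetical signal blends.
pages = {
    "page-1": {"links": 0.9, "keywords": 0.3, "meta": 0.5},
    "page-2": {"links": 0.4, "keywords": 0.9, "meta": 0.6},
}

def blend(signals, weights):
    """Mix signal scores in a given ratio -- the part engines keep secret."""
    return sum(weights[s] * signals[s] for s in weights)

link_heavy = {"links": 0.6, "keywords": 0.3, "meta": 0.1}
keyword_heavy = {"links": 0.2, "keywords": 0.6, "meta": 0.2}

for name, weights in [("link-heavy", link_heavy), ("keyword-heavy", keyword_heavy)]:
    order = sorted(pages, key=lambda p: blend(pages[p], weights), reverse=True)
    print(name, "->", order)  # page-1 wins one blend, page-2 the other
```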