Welcome to SEO Perfection

Meet Vaibhav

Hi friends, welcome to my SEO blog, SEO Perfection.


SEO Related Questions & Answers




Q. Link Structure


When it comes to SEO and search engines, cache is king. One of the simplest metrics for measuring the wealth and authority of a web property is how many pages are indexed and whether or not preferred landing pages are established.

However, before visitors can arrive, search engines need to find, spider, index and rank your pages for relevance. This means you should take every opportunity to increase indexation through creating viable link structures based on a hierarchy of importance for keywords or landing pages.
Visitors don’t always come in the front door of your website; they might enter through the contact page, about us page, privacy policy or lackluster / unintentional pages due to search engine algorithms attempting to sort the best possible match for their query.
This is why it is important to match what people think is important with what search engines see as important. This is accomplished through using a structured approach to navigation and internal linking.
You can have thousands of pages in a website, but if they are not connected by a common thread or are not linked properly, then chances are search engines may never discover them.
Using the metric of deep links and approaching SEO from the premise of inbound links arriving at a page and outbound links leaving it, what remains in between determines how much ranking weight that page has to pass along to other pages in the website.
Don’t squander links; make each link count by carefully mapping out a hierarchy of keywords for which you intend to create SERP positioning, then performing an internal site audit to identify pages that could leverage more internal link flow for a preferred landing page.
For example, if I have a website based on a particular product or niche and the content on the pages is a little light, then I could always add a blog and leverage the blog by creating content-rich pages to facilitate internal links to the languishing pages suffering from link attrition.
We often use a 65% / 35% ratio for deep linking within a site to promote the preferred page hierarchy within a web property, however what ratio you use is up to you. This means that 65% of the links should go to the homepage from other websites so the homepage can then act as a “catch all” to funnel link flow back to other vital areas of the website.
And 35% of the deep links (links to pages other than the homepage) should target specific pages with an array of anchor text (based on the ideal keywords those pages are to appear for).
We refer to this as “keyword clusters” which are semantically aligned keywords based on a root phrase that have synonymous or semantic shingles (groups of words) that overlap (based on the root keyword or phrase).
Once you map out the ranking objective, you can determine the needs of a page to calculate the thresholds needed to produce buoyancy.  The selected page must exhibit the proper proportion of internal links from other pages in the site in order to communicate to search engines that this page is important.
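As a rough sketch of the "shingles" idea described above, the following Python snippet (illustrative only; the keyword phrases are made up) breaks phrases into overlapping word groups and shows how two phrases built on the same root overlap:

```python
def shingles(phrase, size=2):
    """Return overlapping groups of `size` consecutive words (shingles)."""
    words = phrase.split()
    return [" ".join(words[i:i + size]) for i in range(len(words) - size + 1)]

# Two keyword phrases built on the same root share shingles, which is
# what places them in the same "keyword cluster".
a = set(shingles("cheap seo services online"))
b = set(shingles("affordable seo services company"))
overlap = a & b  # shared word groups between the two phrases
```

The shared shingle here is the root phrase both keywords are built around, which is what a keyword cluster groups together.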
Without Googlebot, Slurp and Bing’s web crawlers spidering your content, rankings are a moot point, unless you have other ways to promote those pages (such as using other websites with high traffic, rankings or authority).
The ratio of internal linking is a preference which depends on three things:
1. If all of your pages are intended to get indexed. 
2. If you have established a primary and secondary landing page for your targeted keywords. 
3. How many pages you are willing to create, edit or optimize to produce the most conducive internal link ratio. 
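The internal site audit described above can be sketched in a few lines of Python. The page names and link graph here are entirely hypothetical; the point is simply to count inbound internal links per page and flag the ones being starved of link flow:

```python
# Hypothetical internal link graph: page -> pages it links to.
site = {
    "home":         ["products", "blog", "about"],
    "blog":         ["products", "landing-page"],
    "about":        ["home"],
    "products":     ["home"],
    "landing-page": [],
}

# Count inbound internal links for every page in the site.
inbound = {page: 0 for page in site}
for links in site.values():
    for target in links:
        inbound[target] += 1

# Pages with few inbound links are candidates for more internal link flow.
starved = sorted(p for p, n in inbound.items() if n <= 1)
```

On this toy graph, the blog and landing page each receive only one internal link, so they would be the first candidates for extra links from content-rich pages.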

Q. Keyword Research


There's no getting around it. Keyword research is a vitally important aspect of your search engine optimization campaign. If your site is targeting the wrong keywords, the search engines and your customers may never find you, resulting in lost dollars and meaningless rankings. By targeting the wrong keywords, you not only put valuable advertising dollars at risk, you are also throwing away all the time and energy you put into getting your site to rank for those terms to begin with. If you want to stay competitive, you can't afford to do that.
The keyword research process can be broken down into the following phases:
Phase 0 - Demolishing Misconceptions
Phase 1 - Creating the list and checking it twice
Phase 2 - Befriending the keyword research tool
Phase 3 - Finalizing your list
Phase 4 - Plan your Attack
Phase 5 - Rinse, Wash, Repeat 
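Phases 1 and 3 above amount to collecting candidate phrases and then pruning the list. A minimal sketch (the seed list is made up for illustration) that normalizes case and whitespace and drops duplicates while keeping order:

```python
# Made-up seed list, as it might look after brainstorming (Phase 1).
seeds = ["SEO Tips", "seo tips ", "Keyword Research", "keyword  research", "SEO Tips"]

# Phase 3: normalize case and whitespace, then drop duplicates while
# preserving the original order of first appearance.
seen = set()
final_list = []
for kw in seeds:
    norm = " ".join(kw.lower().split())
    if norm not in seen:
        seen.add(norm)
        final_list.append(norm)
```

In practice the list would then go through a keyword research tool (Phase 2) to attach search volume and competition data before finalizing.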

Q. Difference between Google PageRank and Alexa ranking?

- Alexa rank is a traffic ranking, while PageRank measures how important your website is in Google's eyes.
- Google PageRank measures popularity by backlinks; Alexa measures popularity by traffic.


Q. What is the difference between a Directory and a Search Engine? 

Search Engines and Directories both allow you to run searches for web sites, but the results will likely be different due to the method they use to build their database of web sites.
Search Engines 
Search engines use computer programs called Robots to automatically go from page to page through the web, reading content, and adding it to their databases. To speed up the process of getting your site indexed, they usually have a way for you to submit your site for indexing. You usually only have to tell a Search Engine the URL (address) of your site and it takes care of the rest. 
Directories 
Directories are run by humans who review web sites and categorize them within their directories. This leads to a more abridged set of sites, which can be good or bad depending on what you're searching for. 
Yahoo and DMOZ are the two biggest directories on the web, but other important directories include local directories for your state, town, chamber of commerce, etc. 
Due to the human editing of directories, there is usually a charge for submitting your website for review. Yahoo charges $299/year to be in their directory but DMOZ is currently free.

Q.What is a Meta Tag?

A meta tag is a line of HTML coding that contains metadata about a webpage. Meta tag information doesn't change how the page looks; it won’t be seen by the website viewer, unless they are viewing your source code. There are two common types of meta tags — meta description tags and meta keywords tags.
Meta description tags describe, in some way, the webpage. For instance, for a page about meta tags, we might use “everything you want to know about meta tags.” The meta keywords tag lists other words a visitor might be searching for, such as meta tags, meta tag, HTML and meta tags, tags, SEO. 

Both types of tags are located in the heading section of your HTML code and usually below the title. You might have the following heading for your webpage: 
<HEAD>
<TITLE>How to Create a Meta Tag</TITLE> 
<META name="description" content="Everything you want to know about Meta Tags">
<META name="keywords" content="meta, meta tags, meta tag, HTML and meta tags, tags, SEO">
</HEAD>
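To see that these tags really are plain metadata a program can read, here is a small sketch using Python's standard-library HTMLParser to pull the name/content pairs out of a head section like the one above:

```python
from html.parser import HTMLParser

class MetaTagExtractor(HTMLParser):
    """Collects name/content pairs from <meta> tags."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag == "meta":
            d = dict(attrs)
            if "name" in d and "content" in d:
                self.meta[d["name"].lower()] = d["content"]

html = """<head>
<title>How to Create a Meta Tag</title>
<meta name="description" content="Everything you want to know about Meta Tags">
<meta name="keywords" content="meta, meta tags, meta tag, HTML and meta tags, tags, SEO">
</head>"""

parser = MetaTagExtractor()
parser.feed(html)
# parser.meta now maps "description" and "keywords" to their content values.
```

This is essentially what a search engine crawler does when it reads your page source: the tags never render, but the values are available to any program that parses the HTML.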

Q. Robot meta tag?

The Robots META Tag is meant to provide users who cannot upload or control the /robots.txt file at their websites, with a last chance to keep their content out of search engine indexes and services.
<meta name="robots" content="robots-terms">
Examples of the Robots META Tag
The content="robots-terms" value is a comma-separated list that may contain one or more of the following keywords, without regard to case: noindex, nofollow, all, index and follow.
noindex
Page may not be indexed by a search service.
<meta name="robots" content="noindex">
nofollow
Robots are not to follow links from this page.
<meta name="robots" content="nofollow">


Q. robots.txt?

Introduction to "robots.txt"
There is a hidden, relentless force that permeates the web and its billions of web pages and files, unbeknownst to the majority of us sentient beings. I'm talking about search engine crawlers and robots here. Every day hundreds of them go out and scour the web, whether it's Google trying to index the entire web, or a spam bot collecting any email address it can find for less than honorable intentions. As site owners, what little control we have over what robots are allowed to do when they visit our sites exists in a magical little file called "robots.txt."
"Robots.txt" is a regular text file that through its name, has special meaning to the majority of "honorable" robots on the web. By defining a few rules in this text file, you can instruct robots to not crawl and index certain files, directories within your site, or at all. For example, you may not want Google to crawl the /images directory of your site, as it's both meaningless to you and a waste of your site's bandwidth. "Robots.txt" lets you tell Google just that.
Creating your "robots.txt" file
So let's get moving. Create a regular text file called "robots.txt", and make sure it's named exactly that. This file must be uploaded to the root accessible directory of your site, not a subdirectory (i.e. http://www.mysite.com but NOT http://www.mysite.com/stuff/). Only by following these two rules will search engines interpret the instructions contained in the file. Deviate from this, and "robots.txt" becomes nothing more than a regular text file, like Cinderella after midnight.
Now that you know what to name your text file and where to upload it, you need to learn what to actually put in it to send commands off to search engines that follow this protocol (formally the "Robots Exclusion Protocol"). The format is simple enough for most intents and purposes: a USERAGENT line to identify the crawler in question followed by one or more DISALLOW: lines to disallow it from crawling certain parts of your site.
1) Here's a basic "robots.txt": 
User-agent: *
Disallow: /
With the above declared, all robots (indicated by "*") are instructed to not index any of your pages (indicated by "/"). Most likely not what you want, but you get the idea.
2) Let's get a little more discriminating now. While every webmaster loves Google, you may not want Google's Image bot crawling your site's images and making them searchable online, if just to save bandwidth. The below declaration will do the trick:
User-agent: Googlebot-Image
Disallow: /
3) The following disallows all search engines and robots from crawling select directories and pages:
User-agent: *
Disallow: /cgi-bin/
Disallow: /privatedir/
Disallow: /tutorials/blank.htm
4) You can conditionally target multiple robots in "robots.txt." Take a look at the below:
User-agent: *
Disallow: /
User-agent: Googlebot
Disallow: /cgi-bin/
Disallow: /privatedir/
This is interesting: here we declare that crawlers in general should not crawl any parts of our site, EXCEPT for Google, which is allowed to crawl the entire site apart from /cgi-bin/ and /privatedir/. So the rules of specificity apply, not inheritance.
5) There is a way to use Disallow: to essentially turn it into "Allow all", and that is by not entering a value after the colon (:): 
User-agent: *
Disallow: /
User-agent: ia_archiver
Disallow:
Here I'm saying all crawlers should be prohibited from crawling our site, except for Alexa, which is allowed.
6) Finally, some crawlers now support an additional field called "Allow:", most notably, Google. As its name implies, "Allow:" lets you explicitly dictate what files/folders can be crawled. However, this field is currently not part of the "robots.txt" protocol, so my recommendation is to use it only if absolutely needed, as it might confuse some less intelligent crawlers.
Per Google's FAQs for webmasters, the below is the preferred way to disallow all crawlers from your site EXCEPT Google:
User-agent: *
Disallow: /
User-agent: Googlebot
Allow: /
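You can check how a set of robots.txt rules will be interpreted without uploading anything, using Python's standard-library `urllib.robotparser`. Here it is applied to the rules from example 4 above:

```python
from urllib.robotparser import RobotFileParser

# The rules from example 4: block everything for all crawlers, but give
# Googlebot its own, more permissive group.
rules = """\
User-agent: *
Disallow: /

User-agent: Googlebot
Disallow: /cgi-bin/
Disallow: /privatedir/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Googlebot matches its own group, so only the two directories are off-limits.
print(rp.can_fetch("Googlebot", "/index.html"))   # True
print(rp.can_fetch("Googlebot", "/cgi-bin/x"))    # False
# Other crawlers fall back to the "*" group and are blocked everywhere.
print(rp.can_fetch("SomeBot", "/index.html"))     # False
```

This confirms the specificity behavior described above: once a crawler matches a named User-agent group, the generic "*" group no longer applies to it.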

Q. W3C Validation?

The World Wide Web Consortium (W3C) is the main international standards organisation for the World Wide Web and is headed by Sir Tim Berners-Lee (the man credited with inventing the World Wide Web).
W3C's goal is to ensure that the World Wide Web and websites all work as well as they possibly can. Their guidelines are extremely strict and are purely based on the concept of "accessibility for all". Whilst it is not official (Google keeps the criteria it uses for ranking websites a closely guarded secret), it is widely agreed amongst website developers that following these guidelines will help your website rank higher on the search engines. The W3C themselves state: "Following these guidelines will also help people find information on the Web more quickly."


Q. RSS Syntax?

How RSS Works
RSS is used to share content between websites.
With RSS, you register your content with companies called aggregators.
So, to be a part of it: First, create an RSS document and save it with an .xml extension. Then, upload the file to your website. Next, register with an RSS aggregator. Each day the aggregator searches the registered websites for RSS documents, verifies the link, and displays information about the feed so clients can link to documents that interest them.

RSS Example
RSS documents use a self-describing and simple syntax.
Here is a simple RSS document:
<?xml version="1.0" encoding="ISO-8859-1" ?>
<rss version="2.0">

<channel>
  <title>W3Schools Home Page</title>
  <link>http://www.w3schools.com</link>
  <description>Free web building tutorials</description>
  <item>
    <title>RSS Tutorial</title>
    <link>http://www.w3schools.com/rss</link>
    <description>New RSS tutorial on W3Schools</description>
  </item>
  <item>
    <title>XML Tutorial</title>
    <link>http://www.w3schools.com/xml</link>
    <description>New XML tutorial on W3Schools</description>
  </item>
</channel>

</rss>
This document represents the status of RSS as of the Fall of 2002, version 2.0.1. 
It incorporates all changes and additions, starting with the basic spec for RSS 0.91 (June 2000) and includes new features introduced in RSS 0.92 (December 2000) and RSS 0.94 (August 2002). 
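Because RSS is plain XML, any XML library can read it. Here is a sketch using Python's standard-library `xml.etree` on a feed like the one above (the XML declaration is omitted, since `fromstring` expects a plain string):

```python
import xml.etree.ElementTree as ET

rss = """<rss version="2.0">
<channel>
  <title>W3Schools Home Page</title>
  <link>http://www.w3schools.com</link>
  <description>Free web building tutorials</description>
  <item>
    <title>RSS Tutorial</title>
    <link>http://www.w3schools.com/rss</link>
    <description>New RSS tutorial on W3Schools</description>
  </item>
</channel>
</rss>"""

root = ET.fromstring(rss)
channel = root.find("channel")
# Collect (title, link) pairs for every item in the channel.
items = [(i.findtext("title"), i.findtext("link")) for i in channel.findall("item")]
```

This is roughly what an aggregator does each day: fetch the feed, parse out the channel and its items, and display the titles and links to subscribers.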


Q. What are Backlinks? 


Backlinks are incoming links to a website or web page, also known as inbound links. The number of backlinks is one indication of the popularity or importance of that website. Backlinks are important for SEO because some search engines, especially Google, give more credit to websites that have a good number of quality backlinks, and consider those websites more relevant than others.

Q. What is an instance?

An instance, in object-oriented programming (OOP), is a specific realization of any object. An object may be varied in a number of ways. Each realized variation of that object is an instance. The creation of a realized instance is called instantiation.
Each time a program runs, it is an instance of that program. In languages that create objects from classes, an object is an instantiation of a class. That is, it is a member of a given class that has specified values rather than variables. In a non-programming context, you could think of "dog" as a class and your particular dog as an instance of that class.
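The dog analogy above translates directly into code. A minimal sketch in Python, where `Dog` is the class (the blueprint) and each object created from it is an instance:

```python
class Dog:
    """A class is the blueprint; each object created from it is an instance."""
    def __init__(self, name):
        self.name = name

# Instantiation: creating realized instances of the Dog class.
rex = Dog("Rex")
fido = Dog("Fido")

# Both are members of the same class, but each carries its own
# specified values, and the two objects are distinct.
print(rex.name)              # Rex
print(isinstance(rex, Dog))  # True
print(rex is fido)           # False: two separate instances
```

Each call to `Dog(...)` is one instantiation, producing a new instance with its own values for the class's variables.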


Q. Word limit in meta tag for Title, Desc, Keywords?

Yes, there are limits to the number of words/letters you can put there. 
(character = letter, space, or comma) - (example = kw1, kw2, kw3, kw4. = 19 characters) 
Each search engine shows a different number of words/letters for the title and description. Google shows 60 characters in an IE browser window (including commas & spaces), so some people recommend putting only 60 characters into the title tag. 
Yahoo and MSN are different. 
The description tag also has a limit - Google shows the first 150 characters in the description tag. 
Other search engine vary between 150 and 200. 
Keywords also have a limit - most people say to use 1000 characters or up to 45 words. Google does not use the keywords meta tag for ranking. 
To use your example: 
<title>word1 word2 ... wordN(allowed 60 characters or 9 words)</title> 
<meta name="keywords" content="keyword1, kw2, kw3, .... kwN (allowed 1000 characters or 45 words)"> 
<meta name="description" content="word1 word2 ... wordN (allowed 150 - 200 characters or 25 words)"> 

Q. What is "XML sitemap"?

Placing an XML formatted file with all your site's links on your server enables search engine crawlers (like Google's) to find out what pages are present and which have recently changed, and to crawl your site accordingly.
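A minimal sitemap can be built with Python's standard-library `xml.etree`. The URLs below are placeholders for illustration:

```python
import xml.etree.ElementTree as ET

# Placeholder URLs; in practice this list would come from your site.
urls = ["http://www.mysite.com/", "http://www.mysite.com/about.html"]

# The sitemap format is a <urlset> of <url> entries, each with a <loc>.
urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for u in urls:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = u

sitemap_xml = ET.tostring(urlset, encoding="unicode")
```

The resulting string would be saved as sitemap.xml in the site root, where crawlers can fetch it.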

Q. How do I know whether the Google crawler has visited my site? 

Ans. Search Google for cache: followed by your site's URL (for example, cache:hidemyass.com). If a cached copy appears, Googlebot has crawled and indexed that page.
