Complete SEO Glossary for Beginners

A complete glossary of search engine optimization (SEO) and related terms for beginners.


200 –  Status OK – The file request was successful. For example, a page or image was found and loaded properly in a browser.


301 Redirect – Moved Permanently – A 301 redirect is used to send visitors and search engines from one webpage to another on a permanent basis.


302 Redirect – Found – The file has been found, but is temporarily located at another URI.

As far as SEO is concerned, it is usually best to avoid 302 redirects. Some search engines handle them poorly; in the past, weak processing of 302 redirects has even allowed competing businesses to hijack the search listings of their rivals.
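To make the difference concrete, here is a minimal Python sketch of the status line and headers a server sends in each case (the URLs are placeholders, not real addresses):

```python
def redirect_response(new_url, permanent=True):
    """Return the status line and headers a server would send for a redirect.

    A 301 tells browsers and search engines the move is permanent;
    a 302 says the content is only temporarily at the new URL.
    """
    status = "301 Moved Permanently" if permanent else "302 Found"
    return status, {"Location": new_url}

print(redirect_response("https://example.com/new-page"))
# → ('301 Moved Permanently', {'Location': 'https://example.com/new-page'})
```

A crawler that receives the 301 response updates its index to the new URL; with a 302 it keeps the old one.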


404 Error – Not Found – A 404 error is the message you receive when attempting to reach a webpage that does not exist, either because the address was misspelled or because the page has been deleted.





Algorithm – An algorithm is what a search engine such as Google uses to decide which results to display on its search results page. It takes into account various aspects of a site to determine how relevant the site is to what has been searched.


Alt Attributes

Alt attributes make it possible to enter an alternative description in the HTML code for every image on a website. This description appears if, for some reason, the image cannot be displayed. Search engines’ ability to recognize the contents of images is still limited, which is why they rely on the alternative description to determine what an image contains. The alt attribute can therefore influence the content for which search engines consider a website to be relevant. Ideally, the alternative descriptions should also contain the search terms for which the website is optimized.
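Since crawlers read alt text rather than pixels, the way a search engine sees images can be sketched with Python's standard html.parser (the file name and description below are made-up examples):

```python
from html.parser import HTMLParser

class AltTextCollector(HTMLParser):
    """Collects the alt attribute of every <img> tag, much as a crawler might."""
    def __init__(self):
        super().__init__()
        self.alts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # An image with no alt attribute is recorded as an empty string
            self.alts.append(dict(attrs).get("alt", ""))

collector = AltTextCollector()
collector.feed('<img src="cut.jpg" alt="balayage haircut in Boston">')
print(collector.alts)  # → ['balayage haircut in Boston']
```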


Anchor Texts

Anchor texts are the clickable texts displayed on a website for a given link. Users click on the anchor text in order to get to the linked website. Anchor texts like “here” or “next” are not suitable because they do not provide any information about the link’s destination, and a visitor is not very likely to click on them.






Backlinks

Backlinks (incoming links) are all the links on other websites that point to the website in question. Search engines sort search results according to the real-life concept of recommendations: the better you are, the more recommendations you receive. A link to a website (a backlink) is interpreted as a recommendation for it. The more backlinks a website has, and the better those links are, the further up it will appear in the search results. Links from topically relevant websites are an example of strong backlinks.


Bad Neighbourhoods

Websites can occupy a so-called Bad Neighbourhood. This refers to all websites that have been severely downgraded by search engines, for example because they violated a search engine’s guidelines. Bad Neighbourhood websites appear either far down in the search results or not at all. Websites built around terms like “gambling”, “casino”, “pornography” or “Viagra”, for instance, are often classified as Bad Neighbours. Be wary of links to or from Bad Neighbours: they will have a negative effect on your own position in the search results, because in evaluating a website, search engines often look at the status of the websites that link to it.


Black Hat – Black Hat is the term used for SEO practices that do not meet Google’s latest guidelines, for instance extensive links from websites built purposely for SEO, or buying links from webmasters.



Blocker

A so-called website blocker is anything that prevents search engines from accessing a site. As a result, the website cannot be included in the search engine index and will not appear in the search results. For example, if an entire website is password protected, search engines will be denied access.


Business directory

A business directory is like an online version of the Yellow Pages. Every business directory contains an index of companies listed alphabetically by industry. The individual entries are then often linked to the corresponding company’s website. The business directory helps users search for companies, services or products in their area in a more targeted way. From the point of view of search engine optimization an entry in a business directory is important; it makes the website in question easier to find and creates an additional backlink.






Cloaking

Cloaking is a method which gives search engines the impression that a website carries content that is different from what users actually see: visitors see a user-friendly, visually appealing website which may, for example, contain little text and plenty of graphic or multimedia elements.

Search engines are limited in their ability to recognize graphic and multimedia elements, which is why they are presented with a different website (that has the same URL).

Most notably, this website contains text that is optimized for search engines. Cloaking is a violation of search engine guidelines. If a search engine recognizes cloaking it will penalize the website by permanently removing it from the search engine index. The website will no longer appear in search results.

A note of advice: if a website has been constructed differently to fit computers and mobile devices, this does not count as cloaking. Search engines will make allowances for websites with a mobile version.


Citation / NAP – A citation or NAP listing for your business is anywhere your Name, Address and Phone number are listed on a website. These details should align exactly with your website and with other mentions of your business in order to provide the most benefit to your organic SEO.



Competitors

Businesses generally know who their competitors are on the open market. But are they the same companies you need to beat for the best placement of your website? Not necessarily. In search engine optimization, your competitors are any websites that can be found using the same search terms as yours. For example, a hairdresser in Boston offers, among other things, advice on beauty products, so their website will be found by typing in “Boston beauty products”. The same search term will also return a pharmacy that sells beauty products in Boston. Both businesses are competing for the same search term.



Crawler

A crawler is a program used by search engines to collect data from the internet. When a crawler visits a website, it reads the entire website’s content (i.e. the text) and stores it in a database. It also stores all of the website’s external and internal links. The crawler visits the stored links at a later point in time, which is how it moves from one website to the next. Through this process the crawler captures and indexes every website that is linked from at least one other website.






Defective Links

A defective link is a link that has no target or leads nowhere. Causes include programming errors, temporarily unavailable websites, or a change in the address of the page the link points to. Defective links diminish the quality of a website and make the crawler’s job more difficult. For these reasons a website with defective links will appear lower down in the search results.


Domain Popularity

Domain Popularity refers to the number of backlinks (incoming links) from other domains that point to a website. No more than one backlink is counted per domain. For example, if a blog about cars has 10 different entries that link to the website of a car dealer, only one backlink is counted for the blog.


Duplicate Content

Duplicate content refers to several websites with the same or very similar content. Search engines aim to give the user the best search results for a particular search term and consider it to be unhelpful when exactly the same content appears in several places. For this reason search engines check for duplicate content and give less consideration to websites that have duplicate information.






Frames

Frames can be defined in HTML code to create clear structures for a website’s content. However, search engines often encounter problems when trying to collect content from frames. To make it easier for search engines to find you, it is best to avoid using frames.






Google Places

Google Places is Google’s listing for local business search results. Google Places appears at the very top of the Google search results when a user is looking for local information. It works by displaying the location of businesses that have registered with Google Places and are relevant to the user’s search on a small map. Links to websites that belong to these businesses appear next to the map. Registration for Google Places is free. In addition to location and web address, businesses can also include opening times and photos of their business or products. Google Places is especially important for businesses that have a predominantly local customer base or whose customers are searching for something in a certain location (e.g. restaurants, flower shops, cafe, amusement parks, etc.).







Index

An index is another name for the database used by a search engine. It contains information on all the websites the search engine was able to find. If a website is not in a search engine’s index, users will not be able to find it using that search engine. Search engines regularly update their indexes.





Keyword Density

Keyword density tells you how often a search term appears in a text in relation to the total number of words it contains. For example: if a keyword appears three times in a 100-word text, the keyword density is 3%. From a search engine’s point of view, an excessively high keyword density is a strong indicator of search engine spam. If a keyword appears too often on a website, search engines will downgrade the website and it will appear lower down in the search results.
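The calculation can be sketched in a few lines of Python (the sample text is artificial):

```python
def keyword_density(text, keyword):
    """Percentage of words in `text` that are exactly `keyword` (case-insensitive)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return 100 * words.count(keyword.lower()) / len(words)

# 3 occurrences of "seo" in a 100-word text → 3% keyword density
sample = "seo " * 3 + "word " * 97
print(keyword_density(sample, "SEO"))  # → 3.0
```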


Keyword Proximity

A search term can be made up of a combination of keywords. Keyword proximity refers to the distance between the search term’s individual keywords. For example: a website contains the keywords that make up the search term “dentist bangalore implant” in the heading “Your professional dentist in Bangalore; dental practice for minimally invasive implants”. The keyword proximity between “dentist” and “Bangalore” is one word; between “Bangalore” and “implant” it is five words. The smaller the distance between a search term’s individual keywords, the more relevant it will be from a search engine’s point of view.
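A rough sketch of this measurement in Python (punctuation is stripped so that “Bangalore;” still matches):

```python
def words_between(text, first, second):
    """Number of words separating the first occurrence of `first`
    from the next occurrence of `second` (case-insensitive)."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    i = words.index(first.lower())
    j = words.index(second.lower(), i + 1)
    return j - i - 1

heading = "Your professional dentist in Bangalore; dental practice"
print(words_between(heading, "dentist", "Bangalore"))  # → 1
```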


Keyword Stuffing

Keyword stuffing is when someone attempts to manipulate their position in the search results by concentrating relevant keywords. Search engines can tell when keywords are abnormally distributed throughout the text or in a website’s meta tags. If the same keywords follow one another too closely, the search engine will downgrade the website and it will then appear lower down in search results.





Link Popularity

Link popularity refers to the number of backlinks (incoming links) that point to a given website. In contrast to domain popularity, every backlink is counted separately. For example, in a blog about cars there are 10 different entries that have links to the website of a car dealer. In this case you would count a total of 10 backlinks. Link popularity used to be an important figure for search engines. These days many search engines have switched to focus on domain popularity, which in addition to quantity provides information about the quality of the backlinks. For this reason it is important to get as many high-quality backlinks as possible, since everyone who clicks on a backlink will be directed to your website.
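The difference between the two counts can be sketched in Python; the blog and news URLs below are hypothetical:

```python
from urllib.parse import urlparse

# Ten backlinks from the same (hypothetical) car blog, one from another site
backlinks = [f"https://carblog.example/post-{n}" for n in range(10)]
backlinks.append("https://news.example/review")

link_popularity = len(backlinks)                                  # every link counts
domain_popularity = len({urlparse(u).netloc for u in backlinks})  # one per domain

print(link_popularity, domain_popularity)  # → 11 2
```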





Meta Search Term

Search terms can be included in the meta tags of any website’s HTML code. These meta search terms do not appear on the website, rather they inform search engines about the search terms that the website is optimized for. In this way you can increase the probability that a website will be found by using these search terms. However, many search engines (e.g. Google) do not take meta search terms into account when evaluating websites.


Meta Tag

Meta tags are areas in HTML code that contain information about a website. The information cannot be seen on the website itself. Search engines access certain meta tags so they can, for instance, display a page title and description in the search results.
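As a sketch of how a program can read a meta tag that never appears on the rendered page, here is a small example using Python's standard html.parser (the description text is invented):

```python
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Reads the description meta tag the way a search engine might."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "description":
            self.description = a.get("content")

p = MetaDescriptionParser()
p.feed('<head><meta name="description" content="Fresh flowers in Boston"></head>')
print(p.description)  # → Fresh flowers in Boston
```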





Nofollow Attributes

Links can be given a nofollow attribute in HTML code. This attribute tells search engines not to follow the link, and therefore:


  • the link will not be classed as a backlink for the linked website


  • if nofollow backlinks are used exclusively, the linked website will not be included in the index


Nofollow backlinks will not improve a website’s position in the search rankings. Nevertheless, they are useful: they help search engines perceive a site’s link-building profile as ‘natural’, and they still bring visitors from other websites to your site.


For example: in forums and blogs, a lot of entries and comments are published for the sole purpose of gaining new backlinks for a website. This is why forums and blogs frequently use the nofollow attribute.
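A sketch of how a crawler might separate followed links from nofollow links, using Python's standard html.parser (the URLs are made up):

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Splits the links on a page into followed and nofollow, as a crawler would."""
    def __init__(self):
        super().__init__()
        self.followed, self.nofollow = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        href = a.get("href")
        if href is None:
            return
        if "nofollow" in a.get("rel", "").split():
            self.nofollow.append(href)
        else:
            self.followed.append(href)

auditor = LinkAuditor()
auditor.feed('<a href="/about">About</a>'
             '<a href="http://forum.example/post" rel="nofollow">comment link</a>')
print(auditor.followed, auditor.nofollow)
```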





Offpage Optimization

Offpage optimization refers to all the measures that can be taken outside of the actual website in order to improve its position in search rankings. These are measures that help create as many high-quality backlinks (incoming links) as possible.


Onpage Optimization

Onpage optimization refers to all measures that can be taken directly within the website in order to improve its position in the search rankings. Examples of this include measures to optimize the content or improve the meta tags.





Panda – Google Panda is one of Google’s algorithms; it targets low-quality and ‘thin’ sites. These are usually blogs that have been built purely for SEO and carry generic, poorly written blog posts that are often republished on multiple websites.

Penguin – Google Penguin is another Google algorithm; it targets a website’s links and other off-site factors, identifying spam links that have been built on poor, spammy websites.

Page Content

Page content refers to all the information contained in a webpage. Page content can be displayed as text, links, images, audio, animation or videos among other things. Search engines have a limited ability to recognize images, animation, video and audio. In these instances, search engines use file names or alt attributes to determine the contents of a page. Therefore, important information needs to be given in text-form to make it accessible to search engines.


Page Description

It is possible to give a short description of the content of any given webpage (e.g. homepage, subpage). This page description is laid down in HTML code and does not appear on the website itself. Search engines display the page description in their search results, directly underneath the page title; if a webpage does not have a page description, text taken from the page will often appear instead. In the search results, any words matching the search term are emphasized in bold. Ideally the page description should contain the search term for which the website has been optimized.


Page Title

Every webpage (e.g. homepage, subpage) has its own title. The page title is laid out in HTML code and appears in the title bar of the browser. Search engines display page titles in their search results. In addition, search engines use page titles in order to recognize what information the website contains. Ideally page titles should include the search term for which the website has been optimized.






Ranking

Ranking refers to a website’s position in the search engine results. Rankings depend partly on the search term that is entered into the search engine. Various factors influence whether a website appears far up in the search results and therefore has a high ranking: how relevant its content is, for example, or the quality of its backlinks. At the same time, every search engine gives different weight to these factors, so entering the same search term in different search engines will generally give different results.



Relevance

When dealing with search engines, the term ‘relevance’ describes the extent to which the content of a website corresponds to the search term used. The relevance of a website’s content is particularly important for search engines; it affects how high a website will appear in the search results for a given search term.



Robots.txt

The robots.txt file is a text file that can be saved to a website’s server. It determines if and when search engine crawlers may visit a website’s subpages and include them in their index. In this way, certain subpages can be excluded from the search results.


For example: using a robots.txt file you can keep a website’s archives from being included in the search results. Some search engines, however, choose to ignore robots.txt files. If a subpage really needs to be hidden from search engines, it should be password protected.
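Python's standard library includes a robots.txt parser, which makes it easy to sketch how a crawler decides whether it may fetch a page (the rules and URLs below are examples):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Feed the parser the lines a server would serve at /robots.txt
rp.parse([
    "User-agent: *",
    "Disallow: /archive/",
])

print(rp.can_fetch("*", "https://example.com/archive/2015"))  # → False
print(rp.can_fetch("*", "https://example.com/services"))      # → True
```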





Search Engine

A search engine is a website through which users can search internet content. To do this, users enter the desired search term into the search field. The search engine then looks through its index for relevant websites and displays them in the form of a list. The search engine’s internal evaluation algorithm determines which position a website will get in the search results. Google, Bing and Yahoo are examples of popular search engines.



Search Engine Advertising

It is possible to pay a search engine for a placement in certain search results. These advertisements do not appear in natural search results. Instead, they appear in the sponsored results (usually on the right-hand side of search engine’s results page) in response to a corresponding search term. The amount spent on advertising can be adjusted for a variety of factors. In addition, you can determine whether the advertisement appears regionally or nationwide. The fee for these ads is calculated using the number of clicks the advertisement attracts. The higher the advertising budget the better the position the website will get in a given search result. Search engine advertising can be particularly useful if a website is not appearing at the top of the search results because it is not yet well known or if there is a lot of competition for the search term being used.


Search Engine Guidelines

The majority of search engines have guidelines in place. These guidelines tell website operators how to make it easier for a search engine to find and index their site and how to get it as high up in the search results as possible. These guidelines also provide information about prohibited activities. Prohibited activities will lead to penalties for a website, including permanent exclusion from the search engine’s index.


Search Engine Marketing

Search engine marketing refers to measures that are designed to better position a website in the search results and in doing so to direct visitors to the website. These include search engine optimization and search engine advertising.


Search Engine Optimization

Search engine optimization refers to measures that aim to improve a website’s position in a search engine’s natural search results. This requires experience and ongoing work. Even if you manage to get your website to the top of the search results, you will likely need to keep monitoring the results and adjusting your measures, especially if your competition is also making use of search engine optimization. If your competition improves its position in the search results, it could be at your expense.


Search Engine Registration

You can register a website directly with a search engine. After registration, it can take several days or weeks for the search engine’s crawler to visit the website and decide whether to include it in the index.

Frequently a website will only be included in a search engine’s index when other websites that are already in the index have linked to it. In this case, because the crawler will visit the website and include it in the index anyway, it may not be necessary to register with the search engine.


Search Engine Spam

Search engine spam refers to measures that try to influence the position a website has in search engines. One example is an abnormally high number of keywords within a website’s content and meta tags. When search engines discover search engine spam on a website, that site is penalized. It may, for example, be removed from the search engine’s index so it no longer appears in search results.


Search Result

Search results refer to the list created by search engines in response to a query. Search results can be broken down as follows:

Natural search results (usually on the left-hand side of search engine’s results page)

Here you will see all the websites in a search engine’s index which are relevant to the query. The search engine’s internal evaluation algorithm determines which position a website will get in the search results. It is not possible to pay to move a website higher up in the natural search results. In addition, natural search results often display entries from Google Places, images or videos which are relevant to a search term. These entries usually appear at the top of the search results.


Sponsored search results (usually on the right-hand side of a search engine’s results page) Here you will see websites that have placed ads with the search engine. Sponsored search results are determined by factors like the price per click, click-through rates and search term competition.


Search Term

A search term is what users key into a search engine when they want to find something specific. A search term can be a single keyword or a combination of words, e.g. “dentist” or “dentist Bangalore implant”.



SERP

SERP is short for ‘Search Engine Results Page’. This is the page you are taken to after searching for something in Google.

Structured Data / Schema

Structured Data / Schema is specific markup you can add to your website to highlight certain data and make it more identifiable to Google. Common schema types include markup for the business address and logo.
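Schema markup is commonly embedded as JSON-LD. Here is a sketch of a LocalBusiness object built with Python's json module (all business details below are invented):

```python
import json

# Hypothetical example data, for illustration only
local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Hair Salon",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Boston",
    },
    "telephone": "+1-555-0100",
}

# The resulting JSON-LD string would go inside a
# <script type="application/ld+json"> tag in the page's HTML
print(json.dumps(local_business, indent=2))
```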



Sitemap.xml

The sitemap.xml file is an XML file that is saved to a website’s server. It contains a list of all the subpages belonging to the website. These files help search engines learn more about the structure of a website, which speeds up the crawl process and reduces the likelihood that the crawler will overlook subpages. In addition, the file can provide extra information about certain content, e.g.:


Information about images or videos that can be found on a website or the duration of a video and its subject.


General information about the website, e.g. when it was last updated.
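A minimal sitemap.xml can be generated with Python's standard xml.etree module; the pages and dates below are placeholders:

```python
import xml.etree.ElementTree as ET

def build_sitemap(pages):
    """Build a minimal sitemap.xml string from (url, last-modified) pairs."""
    root = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for loc, lastmod in pages:
        entry = ET.SubElement(root, "url")
        ET.SubElement(entry, "loc").text = loc
        ET.SubElement(entry, "lastmod").text = lastmod
    return ET.tostring(root, encoding="unicode")

xml = build_sitemap([("https://example.com/", "2020-01-15"),
                     ("https://example.com/services", "2020-01-10")])
print(xml)
```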





Topical Relevance

With search engines, topical relevance is mainly used in conjunction with backlinks (incoming links). Websites that carry similar content are said to have topical relevance. Backlinks from websites that are topically relevant have more impact on a website’s position in search results than backlinks from sites that are not related. Search engines assume that topically relevant links are used to offer users additional information that could be helpful. With unrelated links there is a high probability that they have been paid for or included for the purpose of improving a site’s position in search results.





Web Catalogues

Web catalogues contain a collection of linked internet addresses which are mostly sorted according to specific criteria e.g. by industry. They help users search for information in a more targeted way. Before the emergence of search engines, web catalogues were the only way to search for a website on the internet. From a search engine optimization point of view, an entry in a web catalogue can be very useful since it makes it easier to find your website as well as creating an additional backlink.



Webpage

When you enter an internet address (a URL) into a web browser, a webpage appears. A webpage is an individual page on the internet. The webpage that first appears when you go to a website is the start page (frequently the homepage). The website’s other pages are referred to as subpages.


Website Structure

A website’s structure refers to how the website is set up, i.e. how the individual subpages are linked to one another. It is particularly important that crawlers can find all subpages quickly and easily when websites have a large number of subpages. For this reason, a website’s homepage needs to have links to the most important subpages. Files such as sitemap.xml and robots.txt also help the crawler do its job in this regard.



Website

A website is made up of several webpages that are linked to one another. A website can, for example, be a company’s entire internet presence.


White hat

White Hat SEO is the term given to SEO that follows Google’s latest guidelines. Typically these are strategies that focus on user engagement and the audience rather than on manipulating keyword rankings.



This is a complete SEO glossary for beginners. If you have any doubts about SEO, or any additional terms I forgot to mention, kindly leave a comment and I will be glad to add them to this post.

Caroline Jessy