SEO analysis of the web page


This page shows an overview of the key metrics of your website. Use the step-by-step list below to systematically improve your search engine rankings and attract more customers. Follow the advice and solutions created especially for you and bring every parameter to perfection.

SEO analyzer

.:: InfoCotidiano ::.

Blog para os amantes de informática e tecnologia. dicas, truques, tutoriais, para windows, android, linux e mac os x

Icon legend:
  • Error
  • Good
  • Reminders
  • Read the tips

Meta tags optimization

Title .:: InfoCotidiano ::.
Titles are critical to giving users a quick insight into the content of a result and why it’s relevant to their query. It's often the primary piece of information used to decide which result to click on, so it's important to use high-quality titles on your web pages.

Here are a few tips for managing your titles:
  • Make sure every page on your site has a title specified in the <title> tag. If you’ve got a large site and are concerned you may have forgotten a title somewhere, you can also check the HTML suggestions page in Search Console, which lists missing or potentially problematic <title> tags on your site.
  • Page titles should be descriptive and concise. Avoid vague descriptors like "Home" for your home page, or "Profile" for a specific person's profile. Also avoid unnecessarily long or verbose titles, which are likely to get truncated when they show up in the search results.
  • Avoid keyword stuffing. It's sometimes helpful to have a few descriptive terms in the title, but there’s no reason to have the same words or phrases appear multiple times. A title like "Foobar, foo bar, foobars, foo bars" doesn't help the user, and this kind of keyword stuffing can make your results look spammy to Google and to users.
  • Avoid repeated or boilerplate titles. It’s important to have distinct, descriptive titles for each page on your site. Titling every page on a commerce site "Cheap products for sale", for example, makes it impossible for users to distinguish one page from another. Long titles that vary by only a single piece of information ("boilerplate" titles) are also bad; for example, a standardized title like "<band name> - See videos, lyrics, posters, albums, reviews and concerts" contains a lot of uninformative text. One solution is to dynamically update the title to better reflect the actual content of the page: for example, include the words "video", "lyrics", etc., only if that particular page contains video or lyrics. Another option is to just use "<band name>" as a concise title and use the meta description (see below) to describe your site's content.
  • Brand your titles, but concisely. The title of your site’s home page is a reasonable place to include some additional information about your site—for instance, "ExampleSocialSite, a place for people to meet and mingle." But displaying that text in the title of every single page on your site hurts readability and will look particularly repetitive if several pages from your site are returned for the same query. In this case, consider including just your site name at the beginning or end of each page title, separated from the rest of the title with a delimiter such as a hyphen, colon, or pipe, like this:

    <title>ExampleSocialSite: Sign up for a new account.</title>

  • Be careful about disallowing search engines from crawling your pages. Using the robots.txt protocol on your site can stop Google from crawling your pages, but it may not always prevent them from being indexed. For example, Google may index your page if we discover it by following a link from someone else's site. To display it in search results, Google will need to display a title of some kind, and because we won't have access to any of your page content, we will rely on off-page content such as anchor text from other sites. (To truly block a URL from being indexed, you can use a noindex robots meta tag, as in the example below.)
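
An illustrative sketch of such a tag is shown below; placed in a page's <head>, it asks crawlers not to index that page. Note that the page must remain crawlable (not blocked in robots.txt) so the tag can actually be read:

    <head>
      <!-- Ask all crawlers not to add this page to their index -->
      <meta name="robots" content="noindex">
    </head>
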
Title length 21 characters (Recommended: 35-65 characters)
Description Blog para os amantes de informática e tecnologia. dicas, truques, tutoriais, para windows, android, linux e mac os x
The description attribute within the <meta> tag is a good way to provide a concise, human-readable summary of each page’s content. Google will sometimes use the meta description of a page in search results snippets, if we think it gives users a more accurate description than would be possible purely from the on-page content. Accurate meta descriptions can help improve your clickthrough rate; here are some guidelines for properly using the meta description.
  • Make sure that every page on your site has a meta description. The HTML suggestions page in Search Console lists pages where Google has detected missing or problematic meta descriptions.
  • Differentiate the descriptions for different pages. Identical or similar descriptions on every page of a site aren't helpful when individual pages appear in the web results. In these cases we're less likely to display the boilerplate text. Wherever possible, create descriptions that accurately describe the specific page. Use site-level descriptions on the main home page or other aggregation pages, and use page-level descriptions everywhere else. If you don't have time to create a description for every single page, try to prioritize your content: At the very least, create a description for the critical URLs like your home page and popular pages.
  • Include clearly tagged facts in the description. The meta description doesn't just have to be in sentence format; it's also a great place to include structured data about the page. For example, news or blog postings can list the author, date of publication, or byline information. This can give potential visitors very relevant information that might not be displayed in the snippet otherwise. Similarly, product pages might have the key bits of information—price, age, manufacturer—scattered throughout a page. A good meta description can bring all this data together.
  • Programmatically generate descriptions. For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions can be impossible. In the latter case, however, programmatic generation of the descriptions can be appropriate and is encouraged. Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation. Keep in mind that meta descriptions comprised of long strings of keywords don't give users a clear idea of the page's content, and are less likely to be displayed in place of a regular snippet.
  • Use quality descriptions. Finally, make sure your descriptions are truly descriptive. Because the meta descriptions aren't displayed in the pages the user sees, it's easy to let this content slide. But high-quality descriptions can be displayed in Google's search results, and can go a long way to improving the quality and quantity of your search traffic.
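
For illustration only (the description wording below is hypothetical, not taken from the site), a page-specific title and meta description for one of the blog's tutorial posts might look like this:

    <head>
      <title>InfoCotidiano: Instalando ZeosLib e Rx no Lazarus 2.0.2</title>
      <!-- Hypothetical page-specific summary; search engines may show it as the snippet -->
      <meta name="description" content="Step-by-step tutorial on installing the ZeosLib and Rx component packages in Lazarus 2.0.2.">
    </head>
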
Description length 117 characters (Recommended: 70-320 characters)
Keywords none
H1 .:: InfoCotidiano ::.
Count of H1 tags: 1
H1 length 23 characters (Recommended: 5-70 characters)
H1 equals Title: H1 is not equal to Title
Count of all heading tags: H2: 1, H3: 16, H4: 0, H5: 0, H6: 0
Content length 27225 characters (Recommended length: more than 15000 characters)

Content to code ratio: 15% (Recommended ratio: more than 25%)

Count of external links: 232
Count of internal links: 2

Relevance of meta tags

Title relevancy The title of the page seems optimized. Title relevancy to page content is excellent. .:: InfoCotidiano ::.
Description relevancy The description of the page seems optimized. Description relevancy to page content is excellent. Blog para os amantes de informática e tecnologia. dicas, truques, tutoriais, para windows, android, linux e mac os x
H1 relevancy The H1 tags of your page seem optimized. The H1 relevancy to page content is 100%. Count of H1 tags: 1
H2 relevancy The H2 tags of your page seem optimized. The H2 relevancy to page content is 100%. Count of H2 tags: 1
H3 relevancy The H3 tags of your page seem optimized. The H3 relevancy to page content is 100%. Count of H3 tags: 16
H4 relevancy The H4 tags of your page seem not optimized. Count of H4 tags: 0
H5 relevancy (Use of this tag is optional) The H5 tag was not found. Count of H5 tags: 0
H6 relevancy (Use of this tag is optional) The H6 tag was not found. Count of H6 tags: 0


<noindex> (Directive) Content in noindex tags not found
URL length 1 symbol. (Recommended URL length limit: 115 symbols)
Protocol redirect HTTP to HTTPS redirect not working
HTTPS (Hypertext Transfer Protocol Secure) is an internet communication protocol that protects the integrity and confidentiality of data between the user's computer and the site. Users expect a secure and private online experience when using a website. Google encourages you to adopt HTTPS in order to protect your users' connection to your website, regardless of the content on the site.

Data sent using HTTPS is secured via Transport Layer Security protocol (TLS), which provides three key layers of protection:
  • Encryption—encrypting the exchanged data to keep it secure from eavesdroppers. That means that while the user is browsing a website, nobody can "listen" to their conversations, track their activities across multiple pages, or steal their information.
  • Data integrity—data cannot be modified or corrupted during transfer, intentionally or otherwise, without being detected.
  • Authentication—proves that your users communicate with the intended website. It protects against man-in-the-middle attacks and builds user trust, which translates into other business benefits.

If you migrate your site from HTTP to HTTPS, Google treats this as a site move with a URL change. This can temporarily affect some of your traffic numbers.
Add the HTTPS property to Search Console; Search Console treats HTTP and HTTPS separately, and data for these properties is not shared between them. So if you have pages in both protocols, you must have a separate Search Console property for each one.
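
If your site is flagged as above ("HTTP to HTTPS redirect not working"), the usual fix is a permanent redirect at the web server. A minimal sketch, assuming an Apache server with mod_rewrite available (adapt the rule to your actual hosting platform):

    RewriteEngine On
    # If the request did not arrive over HTTPS...
    RewriteCond %{HTTPS} off
    # ...send a permanent (301) redirect to the same URL on HTTPS
    RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]
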
404 Page Incorrect response
Robots.txt ok
A robots.txt file is a file at the root of your site that indicates those parts of your site you don’t want accessed by search engine crawlers. The file uses the Robots Exclusion Standard, which is a protocol with a small set of commands that can be used to indicate access to your site by section and by specific kinds of web crawlers (such as mobile crawlers vs desktop crawlers).

The simplest robots.txt file uses two keywords, User-agent and Disallow. User-agents are search engine robots (or web crawler software); most user-agents are listed in the Web Robots Database. Disallow is a command for the user-agent that tells it not to access a particular URL. On the other hand, to give Google access to a particular URL that is a child directory within a disallowed parent directory, you can use a third keyword, Allow.

Google uses several user-agents, such as Googlebot for Google Search and Googlebot-Image for Google Image Search. Most Google user-agents follow the rules you set up for Googlebot, but you can override this option and make specific rules for only certain Google user-agents as well.

The syntax for using the keywords is as follows:

User-agent: [the name of the robot the following rule applies to]

Disallow: [the URL path you want to block]

Allow: [the URL path of a subdirectory, within a blocked parent directory, that you want to unblock]

These two lines are together considered a single entry in the file, where the Disallow rule only applies to the user-agent(s) specified above it. You can include as many entries as you want, and multiple Disallow lines can apply to multiple user-agents, all in one entry. You can set the User-agent command to apply to all web crawlers by listing an asterisk (*) as in the example below:

User-agent: *
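
Putting these keywords together, an illustrative robots.txt (the paths are hypothetical, for the example only) that blocks one directory for all crawlers while re-allowing a subdirectory inside it could look like this:

    # Hypothetical example paths, for illustration only
    User-agent: *
    Disallow: /private/
    Allow: /private/public-docs/
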

You must apply the following saving conventions so that Googlebot and other web crawlers can find and identify your robots.txt file:
  • You must save your robots.txt code as a text file,
  • You must place the file in the highest-level directory of your site (or the root of your domain), and
  • The robots.txt file must be named robots.txt

As an example, a robots.txt file saved at the root of the domain can be discovered by web crawlers, but a robots.txt file placed in a subdirectory cannot be found by any web crawler.
SiteMap.xml ok
A sitemap is a file where you can list the web pages of your site to tell Google and other search engines about the organization of your site content. Search engine web crawlers like Googlebot read this file to more intelligently crawl your site.

Also, your sitemap can provide valuable metadata associated with the pages you list in that sitemap: Metadata is information about a webpage, such as when the page was last updated, how often the page is changed, and the importance of the page relative to other URLs in the site.

You can use a sitemap to provide Google with metadata about specific types of content on your pages, including video and image content. For example:

  • A sitemap video entry can specify the video running time, category, and age appropriateness rating.
  • A sitemap image entry can include the image subject matter, type, and license.

Build and submit a sitemap:
  • Decide which pages on your site should be crawled by Google, and determine the canonical version of each page.
  • Decide which sitemap format you want to use. You can create your sitemap manually or choose from a number of third-party tools to generate your sitemap for you.
  • Test your sitemap using the Search Console Sitemaps testing tool.
  • Make your sitemap available to Google by adding it to your robots.txt file and submitting it to Search Console.
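
For reference, a minimal sitemap.xml is sketched below; the URL and values are placeholders rather than data from this site, and the optional <lastmod>, <changefreq>, and <priority> elements carry the kind of metadata described above:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <!-- Placeholder entry, for illustration only -->
        <loc>https://www.example.com/</loc>
        <lastmod>2019-07-21</lastmod>
        <changefreq>weekly</changefreq>
        <priority>1.0</priority>
      </url>
    </urlset>
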

Backlink analysis / Overview

Referring domains
Unique domains: 12316232
Domain Zone
Domains .edu: 2321616
Domains .gov: 23223
Referring pages
Unique links: 152161. Link quality: 15% (Recommended: more than 75%)

Follow/Nofollow links
Follow: 23215123 Nofollow: 23216111

Alexa rank

Alexa global rank 3783590. Statistics updated daily. Analysis date: Saturday, August 17th, 2019

Virus check

Viruses and malware: none. Detection ratio: 0 / 66. Analysis date: Saturday, August 17th, 2019

Domain information

Domain register date 1970-01-01 00:00:00.000000
Registry expire date 1970-01-01 00:00:00.000000

IP information

Country United States
IP city Ashburn
ISP Google LLC
Organization Google LLC
Blacklist none


Images without description
Title Alt URL
none Imagem
none Imagem
none Imagem
none Imagem
none Imagem
none Imagem
none Imagem
none Minha foto //
The alt attribute is used to describe the contents of an image file.

It provides Google with useful information about the subject matter of the image. Google uses this information to help determine the best image to return for a user's query. Many people (for example, users with visual impairments, or people using screen readers or who have low-bandwidth connections) may not be able to see images on web pages. Descriptive alt text provides these users with important information.

Not so good:
<img src="puppy.jpg" alt=""/>

Better:
<img src="puppy.jpg" alt="puppy"/>

Best:
<img src="puppy.jpg" alt="Dalmatian puppy playing fetch">

To be avoided:
<img src="puppy.jpg" alt="puppy dog baby dog pup pups puppies doggies pups litter puppies dog retriever labrador wolfhound setter pointer puppy jack russell terrier puppies dog food cheap dogfood puppy food"/>

Filling alt attributes with keywords ("keyword stuffing") results in a negative user experience, and may cause your site to be perceived as spam. Instead, focus on creating useful, information-rich content that uses keywords appropriately and in context.


External Links

Qty Anchors URL
1 Página inicial
5 Atualizar o Certificado do Cadastro de Software House julho 21, 2019 Leia mais Atualizar o Certificado do Cadastro de Software Ho...
1 Postar um comentário
5 Instalando ZeosLib e Rx no Lazarus 2.0.2 julho 16, 2019 Leia mais Instalando ZeosLib e Rx no Lazarus 2.0.2
1 Postar um comentário
5 Instalando ACBr no LAZARUS 2.0.2 julho 16, 2019 Leia mais Instalando ACBr no LAZARUS 2.0.2
1 Postar um comentário
4 MySQL Server #4 - Foreign Key junho 11, 2019 Leia mais
1 Postar um comentário
4 MySQL Server #3 - Primary Key maio 27, 2019 Leia mais
1 Postar um comentário
4 MySQL Server #2 - Tables maio 26, 2019 Leia mais
1 Postar um comentário
4 MySQL Server #1 - Databases maio 11, 2019 Leia mais
1 Postar um comentário
1 Mais postagens
1 Tecnologia do Blogger
1 rion819
3 Daniel Morais (InfoCotidiano) Visitar perfil
1 2019 14
1 Julho 2019 3
1 Junho 2019 1
1 Maio 2019 4
1 Março 2019 1
1 Fevereiro 2019 2
1 Janeiro 2019 3
1 2018 18
1 Dezembro 2018 1
1 Novembro 2018 1
1 Setembro 2018 2
1 Agosto 2018 3
1 Julho 2018 4
1 Maio 2018 1
1 Abril 2018 2
1 Março 2018 1
1 Fevereiro 2018 1
1 Janeiro 2018 2
1 2017 64
1 Outubro 2017 3
1 Setembro 2017 1
1 Julho 2017 2
1 Junho 2017 5
1 Maio 2017 6
1 Abril 2017 12
1 Março 2017 7
1 Fevereiro 2017 4
1 Janeiro 2017 24
1 2016 33
1 Dezembro 2016 1
1 Novembro 2016 1
1 Outubro 2016 1
1 Setembro 2016 5
1 Agosto 2016 5
1 Junho 2016 2
1 Maio 2016 1
1 Abril 2016 7
1 Março 2016 3
1 Fevereiro 2016 2
1 Janeiro 2016 5
1 2015 57
1 Dezembro 2015 3
1 Novembro 2015 7
1 Outubro 2015 3
1 Setembro 2015 10
1 Agosto 2015 6
1 Julho 2015 5
1 Junho 2015 4
1 Maio 2015 2
1 Abril 2015 5
1 Março 2015 8
1 Fevereiro 2015 3
1 Janeiro 2015 1
1 2014 42
1 Dezembro 2014 4
1 Novembro 2014 2
1 Outubro 2014 3
1 Setembro 2014 2
1 Agosto 2014 3
1 Julho 2014 8
1 Junho 2014 1
1 Maio 2014 3
1 Abril 2014 2
1 Março 2014 7
1 Fevereiro 2014 3
1 Janeiro 2014 4
1 2013 47
1 Novembro 2013 6
1 Outubro 2013 3
1 Setembro 2013 2
1 Agosto 2013 1
1 Julho 2013 3
1 Junho 2013 1
1 Maio 2013 4
1 Abril 2013 8
1 Março 2013 4
1 Fevereiro 2013 7
1 Janeiro 2013 8
1 2012 69
1 Dezembro 2012 7
1 Novembro 2012 6
1 Outubro 2012 4
1 Setembro 2012 4
1 Agosto 2012 5
1 Julho 2012 6
1 Junho 2012 3
1 Maio 2012 14
1 Abril 2012 7
1 Março 2012 2
1 Fevereiro 2012 3
1 Janeiro 2012 8
1 2011 90
1 Dezembro 2011 8
1 Novembro 2011 6
1 Outubro 2011 6
1 Setembro 2011 10
1 Agosto 2011 4
1 Julho 2011 12
1 Junho 2011 6
1 Maio 2011 9
1 Abril 2011 8
1 Março 2011 10
1 Fevereiro 2011 7
1 Janeiro 2011 4
1 2010 100
1 Dezembro 2010 5
1 Novembro 2010 8
1 Outubro 2010 21
1 Setembro 2010 13
1 Agosto 2010 9
1 Julho 2010 10
1 Junho 2010 5
1 Maio 2010 12
1 Abril 2010 3
1 Março 2010 5
1 Fevereiro 2010 7
1 Janeiro 2010 2
1 2009 108
1 Dezembro 2009 4
1 Novembro 2009 4
1 Outubro 2009 5
1 Setembro 2009 11
1 Agosto 2009 12
1 Julho 2009 9
1 Junho 2009 8
1 Maio 2009 11
1 Abril 2009 5
1 Março 2009 17
1 Fevereiro 2009 9
1 Janeiro 2009 13
1 2008 41
1 Dezembro 2008 1
1 Novembro 2008 15
1 Outubro 2008 7
1 Setembro 2008 18
1 ACBr
1 Android
1 Antivirus e Segurança
1 app inventor
1 Argox
1 backup
1 Banco de Dados
1 Base de Dados
1 Clipper
1 Cloud
1 CodeTyphon
1 Componentes
1 Curso de Programação
1 Curso Estoque Vendas
1 Curso Firebird SQL
1 Curso MySQL Server
1 Debian
1 Deepin
1 Delphi
1 Desktop Virtual
1 Dicas
1 Dicas Windows
1 Diversos
1 Drive Virtual
1 Drivers
1 Elgin
1 Entretenimento
1 Firebird
1 Funções
1 Impressão
1 Informativos e Matérias
1 Internet
1 iOS
1 java
1 Jogos
1 Lazarus
1 LazReport
1 Linux
1 Mac OS X
1 Manuais de Serviço
1 Manutenção
1 MariaDB
1 Modem ADSL
1 offline
1 OnGuard
1 Oracle
1 parceiros
1 PostGreSQL
1 Programação
1 Redes e Manutenção
1 Replicação de Dados
1 Roteadores
1 sat
1 Serviços On-Line
1 Sites/Blog
1 Slider
1 Soft Portatil (pendrive)
1 Ubuntu
1 UserControl
1 Utilitários
1 Video-Aula
1 VirtualBox
1 Zebra
1 ZeosLib

Internal Links

Qty Anchors URL
1 Pular para o conteúdo principal
7 Mais…