| Title | Home - Comunicação automatizada e Busca das informações \| Hubblefy |
Titles are critical to giving users a quick insight into the content of a result and why it’s relevant to their query. It's often the primary piece of information used to decide which result to click on, so it's important to use high-quality titles on your web pages.
Here are a few tips for managing your titles:
| Title length | 66 characters (Recommended: 35-65 characters) |
The description attribute within the <meta name="description"> tag gives search engines a short summary of what the page is about.
| Description length | 0 characters (Recommended: 70-320 characters) |
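Both tags live in the document head. A minimal sketch of how they could look for this page (the description text below is hypothetical, written to match the page's H1; Hubblefy's actual copy would replace it):

<head>
  <title>Home - Comunicação automatizada e Busca das informações | Hubblefy</title>
  <meta name="description" content="Automatize a comunicação entre as áreas da sua empresa e ganhe produtividade com a Hubblefy.">
</head>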
| H1 | Automatize a comunicação entre as áreas na empresa e ganhe produtividade |
| Count of H1 tags | 2 |
| H1 length | 72 characters (Recommended: 5-70 characters) |
| H1 equals Title | H1 does not equal Title |
| Count of all tags | |
| Content length | 4343 characters (Recommended: more than 500 characters) |
| Content to code ratio | 12% (Recommended: more than 10%) |
| Domain register date | 2017-03-24 14:40:42 |
| Registry expire date | 2019-03-24 14:40:42 |
| <noindex> (Yandex directive) | Content in noindex tags not found |
| URL length | 15 characters (Recommended maximum: 115 characters) |
HTTP to HTTPS redirect not working
HTTPS (Hypertext Transfer Protocol Secure) is an internet communication protocol that protects the integrity and confidentiality of data between the user's computer and the site. Users expect a secure and private online experience when using a website. Google encourages you to adopt HTTPS in order to protect your users' connection to your website, regardless of the content on the site.
Data sent using HTTPS is secured via the Transport Layer Security protocol (TLS), which provides three key layers of protection:
Encryption: the exchanged data is encrypted to keep it secure from eavesdroppers.
Data integrity: data cannot be modified or corrupted during transfer without being detected.
Authentication: proves that your users communicate with the intended website.
If you migrate your site from HTTP to HTTPS, Google treats this as a site move with a URL change. This can temporarily affect some of your traffic numbers.
Add the HTTPS property to Search Console. Search Console treats HTTP and HTTPS separately, and data for these properties is not shared between them, so if you have pages on both protocols you must have a separate Search Console property for each one.
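The redirect itself is web-server configuration, not page markup. As a hedged sketch, assuming an Apache server with mod_rewrite enabled, the following .htaccess rules send every HTTP request to its HTTPS equivalent; on other servers (nginx, IIS) the equivalent directive differs:

RewriteEngine On
# Only rewrite when the request did not arrive over HTTPS
RewriteCond %{HTTPS} off
# Redirect permanently (301) so search engines transfer signals to the HTTPS URLs
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]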
|404 Page||404 - Correct response|
A robots.txt file is a file at the root of your site that indicates those parts of your site you don’t want accessed by search engine crawlers. The file uses the Robots Exclusion Standard, which is a protocol with a small set of commands that can be used to indicate access to your site by section and by specific kinds of web crawlers (such as mobile crawlers vs desktop crawlers).
The simplest robots.txt file uses two keywords, User-agent and Disallow. User-agents are search engine robots (or web crawler software); most user-agents are listed in the Web Robots Database. Disallow is a command that tells the user-agent not to access a particular URL. Conversely, to give Google access to a particular URL that is a child directory within a disallowed parent directory, you can use a third keyword, Allow.
Google uses several user-agents, such as Googlebot for Google Search and Googlebot-Image for Google Image Search. Most Google user-agents follow the rules you set up for Googlebot, but you can override this option and make specific rules for only certain Google user-agents as well.
The syntax for using the keywords is as follows:
User-agent: [the name of the robot the following rule applies to]
Disallow: [the URL path you want to block]
Allow: [the URL path of a subdirectory, within a blocked parent directory, that you want to unblock]
These two lines are together considered a single entry in the file, where the Disallow rule only applies to the user-agent(s) specified above it. You can include as many entries as you want, and multiple Disallow lines can apply to multiple user-agents, all in one entry. You can set the User-agent command to apply to all web crawlers by listing an asterisk (*) as in the example below:
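A minimal sketch of such a file (the directory names are illustrative, not taken from the audited site): the first entry applies to all crawlers and uses Allow to unblock a subdirectory of a blocked directory, while the second entry overrides the general rule for Googlebot-Image only, as described above:

User-agent: *
Disallow: /private/
Allow: /private/public-docs/

User-agent: Googlebot-Image
Disallow: /images/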
You must apply the following saving conventions so that Googlebot and other web crawlers can find and identify your robots.txt file:
The file must be named robots.txt.
It must be placed at the root of your site host.
As an example, a robots.txt file saved at the root of example.com, at the URL address http://www.example.com/robots.txt, can be discovered by web crawlers, but a robots.txt file at http://www.example.com/not_root/robots.txt cannot be found by any web crawler.
A sitemap is a file where you can list the web pages of your site to tell Google and other search engines about the organization of your site content. Search engine web crawlers like Googlebot read this file to more intelligently crawl your site.
Also, your sitemap can provide valuable metadata associated with the pages you list in that sitemap: Metadata is information about a webpage, such as when the page was last updated, how often the page is changed, and the importance of the page relative to other URLs in the site.
You can use a sitemap to provide Google with metadata about specific types of content on your pages, including video and image content. For example, you can give Google information about your video and image content:
A sitemap video entry can specify the video running time, category, and age appropriateness rating.
A sitemap image entry can include the image subject matter, type, and license.
To build and submit a sitemap, create an XML file that lists your URLs, place it on your site, and tell Google where it is, either by submitting it in Search Console or by adding a Sitemap: line to your robots.txt file. See the sketch below.
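As a hedged sketch, a minimal sitemap for this site could look like the following; the URL, date, and image details are placeholders, and the image entry uses Google's image sitemap namespace to carry the metadata described above:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2019-03-01</lastmod>
    <image:image>
      <image:loc>https://www.example.com/photo.jpg</image:loc>
      <image:caption>Illustrative caption describing the image subject</image:caption>
    </image:image>
  </url>
</urlset>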
| Images without description | |
The alt attribute is used to describe the contents of an image file. The examples below range from unhelpful to ideal, with the last one showing keyword stuffing, which should be avoided:
<img src="puppy.jpg" alt=""/>
<img src="puppy.jpg" alt="puppy"/>
<img src="puppy.jpg" alt="Dalmatian puppy playing fetch">
<img src="puppy.jpg" alt="puppy dog baby dog pup pups puppies doggies pups litter puppies dog retriever labrador wolfhound setter pointer puppy jack russell terrier puppies dog food cheap dogfood puppy food"/>