SEO analysis of the web page


This page shows an overview of the key metrics of your website. Use the step-by-step list below to systematically improve your search engine rankings and attract more customers. Follow the advice and solutions created especially for you to bring every parameter to perfection.

SEO analyzer

Лучшие публикации за сутки / Хабр ("Best publications of the last day / Habr", the analyzed page's title)

Лучшие публикации за последние 24 часа ("Best publications of the last 24 hours", the analyzed page's meta description)

Icon legend (each check below is marked with one of these icons):
  • Error
  • Good
  • Reminders
  • Read the tips

Meta tags optimization

Title: Лучшие публикации за сутки / Хабр
Titles are critical to giving users a quick insight into the content of a result and why it’s relevant to their query. It's often the primary piece of information used to decide which result to click on, so it's important to use high-quality titles on your web pages.

Here are a few tips for managing your titles:
  • Make sure every page on your site has a title specified in the <title> tag. If you’ve got a large site and are concerned you may have forgotten a title somewhere, you can also check the HTML suggestions page in Search Console, which lists missing or potentially problematic <title> tags on your site.
  • Page titles should be descriptive and concise. Avoid vague descriptors like "Home" for your home page, or "Profile" for a specific person's profile. Also avoid unnecessarily long or verbose titles, which are likely to get truncated when they show up in the search results.
  • Avoid keyword stuffing. It's sometimes helpful to have a few descriptive terms in the title, but there’s no reason to have the same words or phrases appear multiple times. A title like "Foobar, foo bar, foobars, foo bars" doesn't help the user, and this kind of keyword stuffing can make your results look spammy to Google and to users.
  • Avoid repeated or boilerplate titles. It’s important to have distinct, descriptive titles for each page on your site. Titling every page on a commerce site "Cheap products for sale", for example, makes it impossible for users to distinguish one page from another. Long titles that vary by only a single piece of information ("boilerplate" titles) are also bad; for example, a standardized title like "<band name> - See videos, lyrics, posters, albums, reviews and concerts" contains a lot of uninformative text. One solution is to dynamically update the title to better reflect the actual content of the page: for example, include the words "video", "lyrics", etc., only if that particular page contains video or lyrics. Another option is to just use " " as a concise title and use the meta description (see below) to describe your site's content.
  • Brand your titles, but concisely. The title of your site’s home page is a reasonable place to include some additional information about your site—for instance, "ExampleSocialSite, a place for people to meet and mingle." But displaying that text in the title of every single page on your site hurts readability and will look particularly repetitive if several pages from your site are returned for the same query. In this case, consider including just your site name at the beginning or end of each page title, separated from the rest of the title with a delimiter such as a hyphen, colon, or pipe, like this:

    <title>ExampleSocialSite: Sign up for a new account.</title>

  • Be careful about disallowing search engines from crawling your pages. Using the robots.txt protocol on your site can stop Google from crawling your pages, but it may not always prevent them from being indexed. For example, Google may index your page if we discover it by following a link from someone else's site. To display it in search results, Google will need to display a title of some kind and, because we won't have access to any of your page content, we will rely on off-page content such as anchor text from other sites. (To truly block a URL from being indexed, you can use the noindex meta tag.)
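As an illustrative sketch, the noindex directive mentioned above goes in a robots meta tag inside the page's <head> (the page itself is hypothetical):

```html
<!DOCTYPE html>
<html>
<head>
  <title>Members-only area</title>
  <!-- Tells compliant crawlers not to index this page, even if
       they discover it through a link from another site -->
  <meta name="robots" content="noindex">
</head>
<body>...</body>
</html>
```

Note that for the tag to work, the page must not be blocked in robots.txt: the crawler has to fetch the page in order to see the directive.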
Title length: 33 characters (Recommended: 35-65 characters)
Description: Лучшие публикации за последние 24 часа
The description attribute within the <meta> tag is a good way to provide a concise, human-readable summary of each page’s content. Google will sometimes use the meta description of a page in search results snippets, if we think it gives users a more accurate description than would be possible purely from the on-page content. Accurate meta descriptions can help improve your clickthrough; here are some guidelines for properly using the meta description.
  • Make sure that every page on your site has a meta description. The HTML suggestions page in Search Console lists pages where Google has detected missing or problematic meta descriptions.
  • Differentiate the descriptions for different pages. Identical or similar descriptions on every page of a site aren't helpful when individual pages appear in the web results. In these cases we're less likely to display the boilerplate text. Wherever possible, create descriptions that accurately describe the specific page. Use site-level descriptions on the main home page or other aggregation pages, and use page-level descriptions everywhere else. If you don't have time to create a description for every single page, try to prioritize your content: At the very least, create a description for the critical URLs like your home page and popular pages.
  • Include clearly tagged facts in the description. The meta description doesn't just have to be in sentence format; it's also a great place to include structured data about the page. For example, news or blog postings can list the author, date of publication, or byline information. This can give potential visitors very relevant information that might not be displayed in the snippet otherwise. Similarly, product pages might have the key bits of information—price, age, manufacturer—scattered throughout a page. A good meta description can bring all this data together.
  • Programmatically generate descriptions. For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions can be impossible. In the latter case, however, programmatic generation of the descriptions can be appropriate and is encouraged. Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation. Keep in mind that meta descriptions made up of long strings of keywords don't give users a clear idea of the page's content, and are less likely to be displayed in place of a regular snippet.
  • Use quality descriptions. Finally, make sure your descriptions are truly descriptive. Because the meta descriptions aren't displayed in the pages the user sees, it's easy to let this content slide. But high-quality descriptions can be displayed in Google's search results, and can go a long way to improving the quality and quantity of your search traffic.
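Tying these tips together, here is a sketch of a page-level description for the hypothetical ExampleSocialSite used earlier:

```html
<head>
  <title>ExampleSocialSite: Upcoming events</title>
  <!-- Page-specific, human-readable summary with clearly tagged facts;
       Google may show it as the snippet for this page -->
  <meta name="description"
        content="Upcoming meetups on ExampleSocialSite: dates, venues, and ticket prices, updated daily.">
</head>
```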
Description length: 38 characters (Recommended: 70-320 characters)
Keywords: none
Count of H1 tags: 0
H1 length: n/a (Recommended: 5-70 characters)
H1 equals Title: no
Heading tag counts: H2: 26, H3: 21, H4: 1, H5: 0, H6: 0
Content length: 25346 characters (Recommended: more than 15000 characters)
Content to code ratio: 9% (Recommended: more than 25%)
External links: 56
Internal links: 213

Relevance of meta tags

Title relevancy: The title of the page seems optimized. Title relevancy to page content is excellent. (Лучшие публикации за сутки / Хабр)
Description relevancy: The description of the page seems optimized. Description relevancy to page content is excellent. (Лучшие публикации за последние 24 часа)
H1 relevancy: The H1 tags of your page seem unoptimized. Count of H1 tags: 1
H2 relevancy: The H2 tags of your page seem optimized. H2 relevancy to page content is 100%. Count of H2 tags: 26
H3 relevancy: The H3 tags of your page seem optimized. H3 relevancy to page content is 100%. Count of H3 tags: 21
H4 relevancy: The H4 tags of your page seem optimized. H4 relevancy to page content is 100%. Count of H4 tags: 1
H5 relevancy (use of this tag is optional): no H5 tags found. Count of H5 tags: 0
H6 relevancy (use of this tag is optional): no H6 tags found. Count of H6 tags: 0


<noindex> (directive): Content in noindex tags not found
URL length: 39 characters (Recommended maximum: 115 characters)
Protocol redirect: HTTP to HTTPS redirect is not working
HTTPS (Hypertext Transfer Protocol Secure) is an internet communication protocol that protects the integrity and confidentiality of data between the user's computer and the site. Users expect a secure and private online experience when using a website. Google encourages you to adopt HTTPS in order to protect your users' connection to your website, regardless of the content on the site.

Data sent using HTTPS is secured via the Transport Layer Security (TLS) protocol, which provides three key layers of protection:
  • Encryption—encrypting the exchanged data to keep it secure from eavesdroppers. That means that while the user is browsing a website, nobody can "listen" to their conversations, track their activities across multiple pages, or steal their information.
  • Data integrity—data cannot be modified or corrupted during transfer, intentionally or otherwise, without being detected.
  • Authentication—proves that your users communicate with the intended website. It protects against man-in-the-middle attacks and builds user trust, which translates into other business benefits.

If you migrate your site from HTTP to HTTPS, Google treats this as a site move with a URL change. This can temporarily affect some of your traffic numbers.
Add the HTTPS property to Search Console; Search Console treats HTTP and HTTPS separately; data for these properties is not shared in Search Console. So if you have pages in both protocols, you must have a separate Search Console property for each one.
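The missing redirect flagged above is usually fixed at the web server level. A minimal sketch for nginx, assuming the hypothetical domain example.com and an already-configured TLS server block:

```nginx
# Answer plain-HTTP requests with a permanent (301) redirect to HTTPS,
# preserving the requested host and path
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}
```

A 301 (rather than a temporary 302) tells search engines the move is permanent, so ranking signals are consolidated on the HTTPS URLs.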
404 page: correct response
Robots.txt: ok
A robots.txt file is a file at the root of your site that indicates those parts of your site you don’t want accessed by search engine crawlers. The file uses the Robots Exclusion Standard, which is a protocol with a small set of commands that can be used to indicate access to your site by section and by specific kinds of web crawlers (such as mobile crawlers vs desktop crawlers).

The simplest robots.txt file uses two keywords, User-agent and Disallow. User-agents are search engine robots (or web crawler software); most user-agents are listed in the Web Robots Database. Disallow is a command that tells the user-agent not to access a particular URL. Conversely, to give Google access to a particular URL that is a child directory of a disallowed parent directory, you can use a third keyword, Allow.

Google uses several user-agents, such as Googlebot for Google Search and Googlebot-Image for Google Image Search. Most Google user-agents follow the rules you set up for Googlebot, but you can override this option and make specific rules for only certain Google user-agents as well.

The syntax for using the keywords is as follows:

User-agent: [the name of the robot the following rule applies to]

Disallow: [the URL path you want to block]

Allow: [the URL path of a subdirectory, within a blocked parent directory, that you want to unblock]

These two lines are together considered a single entry in the file, where the Disallow rule only applies to the user-agent(s) specified above it. You can include as many entries as you want, and multiple Disallow lines can apply to multiple user-agents, all in one entry. You can set the User-agent command to apply to all web crawlers by listing an asterisk (*) as in the example below:

User-agent: *
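Putting the keywords together, a minimal robots.txt sketch (the directory names are hypothetical):

```text
# Rules for all crawlers
User-agent: *
Disallow: /private/            # block this directory...
Allow: /private/public-docs/   # ...except this subdirectory

# Stricter rules for one specific crawler
User-agent: Googlebot-Image
Disallow: /photos/
```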

You must apply the following saving conventions so that Googlebot and other web crawlers can find and identify your robots.txt file:
  • You must save your robots.txt code as a text file,
  • You must place the file in the highest-level directory of your site (or the root of your domain), and
  • The robots.txt file must be named robots.txt

As an example, a robots.txt file saved at the root of the domain (at the URL address http://www.) can be discovered by web crawlers, but a robots.txt file placed in a subdirectory of that domain cannot be found by any web crawler.
Sitemap.xml: ok
A sitemap is a file where you can list the web pages of your site to tell Google and other search engines about the organization of your site content. Search engine web crawlers like Googlebot read this file to more intelligently crawl your site.

Also, your sitemap can provide valuable metadata associated with the pages you list in that sitemap: Metadata is information about a webpage, such as when the page was last updated, how often the page is changed, and the importance of the page relative to other URLs in the site.

You can use a sitemap to provide Google with metadata about specific types of content on your pages, including video and image content. For example:

  • A sitemap video entry can specify the video running time, category, and age-appropriateness rating.
  • A sitemap image entry can include the image subject matter, type, and license.

Build and submit a sitemap:
  • Decide which pages on your site should be crawled by Google, and determine the canonical version of each page.
  • Decide which sitemap format you want to use. You can create your sitemap manually or choose from a number of third-party tools to generate your sitemap for you.
  • Test your sitemap using the Search Console Sitemaps testing tool.
  • Make your sitemap available to Google by adding it to your robots.txt file and submitting it to Search Console.
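As a sketch, here is a minimal sitemap carrying the optional metadata described above (the URL and dates are hypothetical):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <!-- optional metadata hints for crawlers -->
    <lastmod>2019-02-19</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```

Save the file at the site root (for example, /sitemap.xml), reference it from robots.txt with a Sitemap: line, and submit it in Search Console.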

Alexa rank

Alexa global rank: 1360. Statistics updated daily. Analysis date: Tuesday, February 19th, 2019

Virus check

Viruses and malware: none. Detection ratio: 0 / 66. Analysis date: Tuesday, February 19th, 2019

Domain information

Domain register date: 2003-03-11 18:04:56
Registry expire date: 2023-03-11 17:04:56

IP information

Country: Russia
IP city: Moscow
Organization: Habr LLC
Blacklist: none


Images without description
Title | Alt | URL
60 images were found with missing or uninformative title and alt attributes (most values reported as "none", a few as "image" or "tag image"); the URL column is truncated in this report.
The alt attribute is used to describe the contents of an image file.

It provides Google with useful information about the subject matter of the image. Google uses this information to help determine the best image to return for a user's query. Many people (for example, users with visual impairments, people using screen readers, or people on low-bandwidth connections) may not be able to see images on web pages. Descriptive alt text provides these users with important information.

Not so good (empty alt attribute):
<img src="puppy.jpg" alt=""/>

Better:
<img src="puppy.jpg" alt="puppy"/>

Best:
<img src="puppy.jpg" alt="Dalmatian puppy playing fetch"/>

To be avoided:
<img src="puppy.jpg" alt="puppy dog baby dog pup pups puppies doggies pups litter puppies dog retriever labrador wolfhound setter pointer puppy jack russell terrier puppies dog food cheap dogfood puppy food"/>

Filling alt attributes with keywords ("keyword stuffing") results in a negative user experience, and may cause your site to be perceived as spam. Instead, focus on creating useful, information-rich content that uses keywords appropriately and in context.


External Links

Qty Anchors URL
1 Тостер
1 Мой круг
1 Фрилансим
1 InoThings++
2 Разместить Разместить
4 Криптонит для мозга: IT-ребус от Криптонит Startup Challenge Криптонит для мозга: IT-ребус от Криптонит Startup Challenge
2 Объявляем конкурс статей от РТЛабс и Хабра
4 Несколько интересных кейсов с SAP HANA: потенциал big data и машинного обучения Несколько интересных кейсов с SAP HANA: потенциал big data и машинного обучения
1 Вакансии
1 Lead Java+NodeJS Software Engineer TestRigor AI Возможна удаленная работа от 2000 до 4000
1 ASP.NET/MVC разработчик (remote) 3dEYE Inc. Возможна удаленная работа от 2000
1 Аналитик (e-commerce, part time, remote) RevelTime Возможна удаленная работа от 100000
1 Rock-Star (Senior) Full-Stack Software Engineer Collectly Возможна удаленная работа от 5000 до 8500
1 Интернет-маркетолог группы компаний в области здравоохранения Биомедиа Санкт-Петербург от 30000 до 60000
1 Все вакансии
2 Вопросы и ответы
1 Пайка Простой Почему ВСЕ мои паяльники - обретают нагар и не паяют? 2 ответа
1 Assembler Средний Как запустить risc-v бинарник собранный с newlib в QEMU? 0 ответов
1 Arduino Средний Сможете помочь реализовать плату с Arduino? 2 ответа
1 Наушники Простой Как отремонтировать наушники? 2 ответа
1 Пайка Простой Нужна ли вытяжка для пайки? 2 ответа
1 Все вопросы
1 Задать вопрос
1 провели
1 игре
2 Заказы
1 Python dev / трассировка растровых изображений (срочно) 0 откликов 6 просмотров 20000 за проект
1 Добавить задания в планировщик Win, но не руками, а програмно 1 отклик 20 просмотров 3500 за проект
1 Разработка дизайна сайта для краудфандинговой площадки 12 откликов 36 просмотров 50000 за проект
1 Оптимизировать скорость работы сайта на Битрикс 2 отклика 19 просмотров 2000 за час
1 Собрать Android/IOS приложение 2 отклика 17 просмотров 20000 за проект
1 Все заказы
1 Разместить заказ
1 лекцию Тьюринга
1 JetBrains Night
2 Принять участие РТЛабс Системный интегратор, радеет за коллаборацию государства и IT и помогает талантливым техноавторам
2 Реклама
1 Криптонит Startup Challenge Конкурс технологических стартапов, снабжает IT-отрасль перспективными технологиями.
1 RUVDS Облачный провайдер, поддерживает нас во всех начинаниях, но часто втягивает в авантюры
1 Соглашение
1 Конфиденциальность
1 Тарифы
1 Контент
1 Семинары
1 TM
1 Мобильная версия

Internal Links

Qty Anchors URL
1 Хабр
1 Geektimes
2 Публикации
2 Пользователи Пользователи
2 Хабы Хабы
2 Компании Компании
3 Песочница Из песочницы Песочница
2 Войти Войти
2 Регистрация Регистрация
2 Разработка Разработка&plus;39
2 Администрирование Администрирование&plus;11
2 Дизайн Дизайн&plus;2
2 Управление Управление&plus;9
2 Маркетинг Маркетинг&plus;7
2 Гиктаймс Гиктаймс&plus;21
2 Разное Разное&plus;5
2 Лучшие Сутки
1 Все подряд +60
1 Неделя
1 Месяц
1 Год
1 jar_ohty
3 Сказ о сплаве Розе и отвалившейся КРЕНке Сказ о сплаве Розе и отвалившейся КРЕНке Сказ о сплаве Розе и отвалившейся КРЕНке
2 DIY или Сделай сам DIY или Сделай сам
5 Научно-популярное Научно-популярное Научно-популярное Научно-популярное Научно-популярное
1 Химия
1 Электроника для начинающих
1 Читать дальше →
3 55 55 55
1 digitalsibur
3 — А вы там в нефтехимии бензин делаете, да? — А вы там в нефтехимии бензин делаете, да?
1 Блог компании Цифровой СИБУР
1 Экология
1 Читать дальше →
2 91 91
1 Zangasta
2 Методы рационального мышления и Магрибский молитвенный коврик Методы рационального мышления и Магрибский молитвенный коврик
1 Научная фантастика
1 Читать дальше →
2 93 93
1 olartamonov
1 Профессиональная IoT-конференция InoThings++ — что было и что будет
1 Блог компании Конференции Олега Бунина (Онтико)
1 Интернет вещей
2 Производство и разработка электроники Производство и разработка электроники
1 Разработка для интернета вещей
1 Читать дальше →
1 7
1 antonpoderechin
1 AudioKit и синтезирование звука в iOS/OSX
1 Блог компании FunCorp
3 Программирование Программирование Программирование
1 Разработка под MacOS
1 Разработка под iOS
1 Читать дальше →
1 2
1 serjmd
1 Хоббийный CNC-роутер своими руками. Гуманитарий для гуманитариев. Часть 2
1 Читать дальше →
1 23
1 MagisterLudi
2 Андрей Гейм: Бойтесь технологического кризиса Андрей Гейм: Бойтесь технологического кризиса
2 Будущее здесь Будущее здесь
2 Физика Физика
1 Читальный зал
1 Читать дальше →
2 31 31
1 LukaSafonov
1 OWASP Proactive Controls: cписок обязательных требований для разработчиков ПО
1 Блог компании Инфосистемы Джет
2 Информационная безопасность Информационная безопасность
2 Разработка веб-сайтов Разработка веб-сайтов
1 Тестирование веб-сервисов
1 Читать дальше →
1 6
1 lozga
1 NASA покупает еще два места на «Союзах», испытывает RS-25 и не отказывается от околоземной станции
1 Космонавтика
1 Читать дальше →
1 59
2 ru_vds ru_vds
2 Дата-центры на выбор: Лондон, Москва, Цюрих, Санкт-Петербург
2 Блог компании Блог компании
1 IT-инфраструктура
1 Хостинг
1 Хранение данных
1 Хранилища данных
1 Читать дальше →
1 10
2 Изучаем Python: модуль argparse
1 Python
1 Читать дальше →
1 9
1 OlegKopaev
1 Численное моделирование – история одного проекта
1 ссылка
1 Читать дальше →
1 1
1 lol_wat
2 Подборка: 4 полезных сервиса для потенциальных иммигрантов в США, Европу и другие страны
1 IT-эмиграция
1 Читать дальше →
1 Комментировать
1 driusha
1 Docker и Kubernetes в требовательных к безопасности окружениях
1 Блог компании Флант
1 DevOps
1 Kubernetes
1 Системное администрирование
1 Читать дальше →
1 7
1 Leono
1 Классификация рукописных рисунков. Доклад в Яндексе
1 Блог компании Яндекс
1 Машинное обучение
1 Спортивное программирование
1 Читать дальше →
1 Комментировать
1 MrCheater
2 Размыкаем замыкания и внедряем Dependency Injection в JavaScript
1 Блог компании Developer Soft
1 JavaScript
1 Node.JS
1 Проектирование и рефакторинг
1 Читать дальше →
1 15
1 m1rko
1 Новый золотой век для компьютерной архитектуры
1 Open source
1 Компьютерное железо
1 Процессоры
1 Читать дальше →
1 4
1 Maxcube
1 Пиар в айти: как жить, куда идти?
1 Интернет-маркетинг
1 Контент-маркетинг
1 Повышение конверсии
1 Читать дальше →
1 5
1 capitannemo
2 1С и Яндекс.Облако Compute Cloud. Вдоль и поперек
1 1С-Битрикс
1 Администрирование баз данных
1 Облачные вычисления
1 Облачные сервисы
1 AlexandrSurkov
1 Яндекс открывает Облако. Архитектура новой платформы
1 Читать дальше →
1 21
1 advertka
1 JetBrains Night в Москве, 13 апреля
1 Блог компании JetBrains
1 Java
1 Kotlin
1 Конференции
1 Читать дальше →
1 5
2 туда → 2
1 3
1 Как стать спонсором?
2 Group
2 Group
2 ТechMedia
2 Конференции Олега Бунина (Онтико)
2 Яндекс
2 PVS-Studio
2 «Лаборатория Касперского»
2 Мосигра
1 Все компании
1 Состоялся релиз Kali Linux 2019.1
1 3
1 Серьёзные математические ошибки NHTSA позволили Tesla заявить о безопасности автопилота
1 14
1 Сутки
1 Неделя
1 Месяц
1 Безумие дотфайлов
1 247
2 Что не так с Raspberry Pi Что не так с Raspberry Pi
2 124 124
1 Сколько доменных имён .com не используется?
1 28
1 Корпоративный туалет
1 67
1 Увеличь это! Современное увеличение разрешения
1 209
1 Яндекс! Спасибо за Uber
1 235
1 InterNyet — как в Советском Союзе изобрели интернет и почему он не заработал
1 236
1 Собеседуем работодателя или как не уволиться в первый месяц
1 161
1 Как я год не работал в Сбербанке
1 577
1 Выброшенные на помойку умные лампочки — ценный источник личной информации
1 147
1 Учёные нашли самое старое живое позвоночное на Земле
1 211
1 Хотите вечных светодиодов? Расчехляйте паяльники и напильники. Или домашнее освещение самодельщика
1 262
1 Про одного парня
1 245
1 Публикации
1 Правила
1 Помощь
1 Документация
2 Настройка языка
1 О сайте
1 Служба поддержки