Let’s look at a few SEO techniques to help users find your content. Every page should have a descriptive, helpful title that states what the page is about in a few words. On recipe pages, for example, avoid a generic title such as “Barbara’s baking blog.”
Instead, include the name of the recipe in each page’s title, so it’s clear what the page is about. You should also provide a description of what the page contains, for example, what makes this recipe special or what its main characteristics are, so people have something helping them identify the best page to fulfill their goal. Both of these can be done by adding title and meta tags to your markup.
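For a recipe page, that markup might look something like this (the recipe name and description are made-up placeholders):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A short, specific title naming the actual recipe -->
    <title>Barbara's quick sourdough bread recipe</title>
    <!-- A meta description summarizing what makes this page special -->
    <meta name="description"
          content="A beginner-friendly sourdough bread recipe that needs only four ingredients and no overnight proofing.">
  </head>
  <body>
    <!-- page content -->
  </body>
</html>
```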
You can check your pages for those tags by right-clicking the page, choosing “Inspect”, and then searching for //title and //meta to find them. If you do not see all of your content in the markup, you are probably using JavaScript to render your page in the browser. This is called client-side rendering, and it is not a problem per se. Rendering is the process of populating templates with data from APIs or databases.
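Those two search strings are XPath expressions, which the search box in the DevTools Elements panel understands; pasting either one jumps straight to the matching tag:

```
//title
//meta
```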
This can happen either on the server side or on the client side. When it happens on the server, crawlers as well as your users get all the content as HTML markup immediately. In single-page apps, the server often sends the templates and JavaScript to the client, and the JavaScript then fetches the data from the backend, populating the templates as the data arrives. As explained in the first episode, indexing for JavaScript sites happens in two waves.
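Here is a minimal sketch of that client-side rendering pattern: the server ships an empty template, and a script fills it in (the /api/recipe/42 endpoint and its field names are assumptions for illustration):

```html
<div id="recipe">
  <h1 class="recipe-title"></h1>
  <p class="recipe-description"></p>
</div>
<script>
  // Hypothetical endpoint: a crawler that doesn't execute JavaScript
  // sees only the empty template above.
  fetch('/api/recipe/42')
    .then(response => response.json())
    .then(recipe => {
      document.querySelector('.recipe-title').textContent = recipe.name;
      document.querySelector('.recipe-description').textContent = recipe.description;
    });
</script>
```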
Content that requires JavaScript to render will only be indexed in the second wave, which might take some time. In later episodes we will cover how to overcome this, and often also improve the user experience and loading performance, by using techniques such as dynamic rendering, hybrid rendering, or server-side rendering for single-page apps. Another important detail is to allow Googlebot to crawl the pages of your website by linking between your pages properly. Make sure to include useful link anchor text, and use the HTML anchor tag with the destination URL of the link in the href attribute.
Do not rely on other HTML elements, such as div or span, or on JavaScript event handlers for this. Not only will crawlers have trouble finding and following these pseudo-links, they also cause issues with assistive technology. Links are an essential feature of the web and help search engines and users find and understand the relationships between pages.
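The difference in markup looks like this (the URL and the goTo function are placeholders):

```html
<!-- Good: a real link that crawlers and screen readers can find and follow -->
<a href="/recipes/sourdough-bread">Quick sourdough bread</a>

<!-- Bad: a pseudo-link that only works when JavaScript runs -->
<span onclick="goTo('/recipes/sourdough-bread')">Quick sourdough bread</span>
```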
If you are using JavaScript to enhance the transition between individual pages, use the History API with normal URLs instead of hash-based routing. Using hashes, also called fragment identifiers, to distinguish between different pages is a hack that crawlers ignore; the JavaScript History API with normal URLs, on the other hand, provides a clean solution for the same purpose. Remember to test your pages and server configuration when using JavaScript to do the routing on the client side.
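A minimal sketch of client-side routing with the History API, assuming a hypothetical renderRoute function that swaps the page content:

```html
<script>
  // Navigate to a clean URL like /recipes/bread instead of /#/recipes/bread.
  function navigate(path) {
    history.pushState({}, '', path);   // update the address bar without a reload
    renderRoute(path);                 // hypothetical function rendering the new view
  }

  // Handle the browser's back and forward buttons as well.
  window.addEventListener('popstate', () => renderRoute(location.pathname));
</script>
```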
Googlebot will be visiting your pages individually, so it uses neither a service worker nor the JavaScript History API to navigate between pages. Test what a user would see by opening your URLs in a new incognito window: the page should load with an HTTP 200 status code, and all the expected content should be visible. Using semantic HTML markup properly helps users better understand your content as well as navigate it quicker.
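One quick way to check the status code yourself is a small fetch in the browser console (the URL is a placeholder):

```js
// Expect status 200 for every page you want indexed; response.url also
// reveals whether the request was silently redirected somewhere else.
fetch('https://example.com/recipes/bread')
  .then(response => console.log(response.status, response.url));
```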
Assistive technologies like screen readers, as well as crawlers, also rely on the semantics of your content. Use headings, sections, and paragraphs to outline the structure of your content. Using HTML image and video tags with captions and alt text for your visuals allows crawlers and assistive technology to find this content and surface it to your users. In contrast, if you use JavaScript to generate your markup dynamically, make sure you aren’t accidentally blocking Googlebot in your initial markup.
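A sketch of what such semantic markup can look like (all names and paths are placeholders):

```html
<article>
  <h1>Quick sourdough bread</h1>
  <section>
    <h2>Ingredients</h2>
    <p>Flour, water, salt, and sourdough starter.</p>
  </section>
  <figure>
    <!-- Alt text lets crawlers and screen readers understand the image -->
    <img src="/images/bread.jpg" alt="A loaf of sourdough bread on a cutting board">
    <figcaption>The finished loaf after 40 minutes in the oven.</figcaption>
  </figure>
</article>
```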
As explained in the previous episode, the first round of indexing does not execute JavaScript, so markup such as a noindex meta tag in the initial payload can prevent Googlebot from running the second stage with JavaScript at all. Following these steps will help Googlebot understand your content better and make your content more discoverable in Google Search.
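For example, a pattern like this can keep the JavaScript wave from ever happening, even if a script later removes the tag:

```html
<head>
  <!-- Googlebot reads this before running any JavaScript,
       so the page may never reach the second indexing wave. -->
  <meta name="robots" content="noindex">
  <script>
    // Removing the tag at runtime is too late for the first wave.
    document.querySelector('meta[name="robots"]').remove();
  </script>
</head>
```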
Hi Googlebot! Hi Googlebot! Did you see the new Webmasters article series? No, I did not. Oh, you missed out on so much stuff! Really? Yes! Oh no, how could we have prevented that? Well, subscribe and follow our articles!