Robots.txt is a file that tells search engine crawlers which pages or sections of a website they should and should not access. The file is located in the root directory of a website and follows a standard format.
Major search engines, such as Google, Bing, and Yahoo, recognize and comply with the directives specified in the Robots.txt file.
Having a Robots.txt file is not mandatory, but it is a good practice to help you manage how search engines interact with your website.
A Robots.txt file consists of two essential parts: user-agents and directives.
User-agents: This line specifies which web robots the file applies to. For example,
User-agent: Googlebot
This means that the file applies only to the Googlebot web robot. The * symbol can be used to represent all web robots.
Directives: These lines specify which parts of the website web robots may or may not access. The most common directives are Disallow, which blocks access to a path, and Allow, which explicitly permits it.
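For example, the following minimal file (the /private/ paths are placeholders for illustration, to be replaced with the sections of your own site) blocks all web robots from a folder while still allowing one page inside it:

User-agent: *
Disallow: /private/
Allow: /private/welcome.html

Here, Disallow blocks crawling of everything under /private/, while Allow carves out an exception for a single page. A Disallow: line with no path after it permits access to the entire site.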
You can check Google Search Central to learn more about how to write a Robots.txt file.
Once you have written your Robots.txt file, follow the steps below to add it to your website:
1. Go to the Settings tab from your dashboard.
2. Navigate to the General settings subtab.
3. Scroll down to the bottom of the page to find the Robots.txt section.
4. Paste your Robots.txt content into the box and click the Save changes button.
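After saving, you can verify that the file is live by opening it in your browser at your site's root, for example (yourwebsite.com is a placeholder for your own domain):

https://yourwebsite.com/robots.txt

If the rules you pasted appear there, search engine crawlers should be able to read and follow them.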