In the context of websites, the disallow directive is closely tied to the robots.txt file. It instructs search engine crawlers not to crawl certain pages or parts of a website, which generally keeps them out of the search index. Although it seems like a simple task, managing what search engines may or may not index is an essential part of an effective SEO strategy.
If you want to keep certain information on your website private, or if you think some pages are not useful in search results (such as administration pages or duplicate content), you can use the disallow directive in your robots.txt file. This tells the search engine's crawler to skip those specific pages during crawling. However, not all crawlers respect these rules, so it is wise to consider additional measures as well, such as nofollow tags.
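To illustrate, a minimal robots.txt could look like the sketch below. The paths are hypothetical examples, not recommendations for any specific site:

```
User-agent: *
Disallow: /admin/
Disallow: /search-results/
```

The User-agent line states which crawlers the rules apply to (here, all of them), and each Disallow line names a path prefix those crawlers should skip.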
Although using disallow in the robots.txt file can be useful, it is not without risk. A misplaced disallow rule can keep important parts of your website from being indexed, so they never appear in search results. It is therefore essential to check and test your robots.txt file regularly, to ensure that only the intended parts of your website are excluded from indexing.
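One practical way to test this is with Python's built-in urllib.robotparser module, which can parse a robots.txt file and report whether a given URL is blocked. The rules and URLs below are hypothetical and purely for illustration:

```python
# Minimal sketch: checking which URLs a set of Disallow rules blocks.
# The domain and paths are made-up examples.
import urllib.robotparser

robots_txt = """\
User-agent: *
Disallow: /admin/
Disallow: /search-results/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# can_fetch() reports whether a crawler with the given user agent
# may request the URL under these rules.
for url in ("https://example.com/admin/login",
            "https://example.com/blog/seo-tips"):
    print(url, "->", "allowed" if rp.can_fetch("*", url) else "disallowed")
```

Running a quick check like this before deploying a new robots.txt helps catch a misplaced disallow before it hides important pages from search engines.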