If there’s a meta robots tag on your page with “noindex” in its content attribute, or an X-Robots-Tag HTTP header containing “noindex,” Google won’t index it.
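To illustrate, here’s a minimal sketch of how you might check a page for both signals yourself. It assumes you already have the page’s HTML and response headers in hand (fetching them is left out), and it uses only Python’s standard library:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags on the page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", "").lower())


def is_noindexed(html, headers):
    """Return True if the page blocks indexing via either mechanism."""
    # X-Robots-Tag HTTP header, e.g. "noindex, nofollow"
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        return True
    # Meta robots tag, e.g. <meta name="robots" content="noindex">
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)


# Example: a page with a noindex meta tag
page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindexed(page, {}))  # True
```

This is only a quick spot-check for a single URL; for anything site-wide, a proper crawler is the right tool.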
If Google has crawled your website already, you can check for pages excluded because of noindexing in the Coverage report. Just toggle the “Error” and “Excluded” tabs, then check for these two issues:
Submitted URL marked ‘noindex’
Excluded by ‘noindex’ tag
If Google hasn’t crawled your website yet, or you just want to keep an eye out for rogue “noindex” tags in the future, sign up for Ahrefs Webmaster Tools (AWT) and run a free website crawl using Site Audit. This checks your site for 100+ common SEO issues, including noindexed pages.
3. You have low-value pages
Google is unlikely to index pages that don’t hold much value for searchers. In a tweet from 2018, Google’s John Mueller suggested that your website and content should be “awesome and inspiring” for it to be indexed.
If you’ve ruled out technical issues that would prevent indexing, it’s worth asking yourself if that page is truly valuable. If the answer is no, that’s probably why it’s not indexed.
If you feel that the page is low-value and you’re concerned you might have other similar pages, run a free website crawl using Site Audit in Ahrefs Webmaster Tools. This flags up two issues often associated with low-value content:
Pages with low word counts
Pages that are exact or near-duplicates
You can see the number of URLs with low word counts in the All issues report.
Although content doesn’t need to be lengthy to be valuable, pages with super low word counts often aren’t that valuable for search engine users. So it’s worth reviewing these pages manually and making them more useful where necessary.
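If you want a rough first pass at this yourself, you can approximate a page’s word count by stripping the markup and counting what’s left. The sketch below uses only Python’s standard library; any “low word count” threshold you apply to the result is your own assumption, and a crawler like Site Audit may count differently:

```python
import re
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> blocks."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.parts.append(data)


def word_count(html):
    """Count whitespace-delimited word tokens in the page's visible text."""
    parser = TextExtractor()
    parser.feed(html)
    return len(re.findall(r"\w+", " ".join(parser.parts)))


page = "<html><body><p>Just a few words here.</p></body></html>"
print(word_count(page))  # 5
```

Word counts alone don’t tell you whether a page is valuable, so treat the numbers as a way to prioritize manual review, not as a verdict.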
You can see pages that are exact or near-duplicates in the Duplicate Content report:
Here’s a good example of two low-value pages that are near-duplicates: