Add additional filters to robots.txt to avoid crawler traps #335

Open
PGijsbers opened this issue Jul 10, 2024 · 0 comments
Labels
bug Something isn't working

Comments

@PGijsbers
Contributor

I updated the robots.txt in #334. Unfortunately, we still see a sizable number of crawlers getting stuck, for two reasons (see also #336). One issue is that most pages support filters (and sorting), which means there is a near-limitless number of URLs to crawl. We should disallow these in our robots.txt. However, we probably should not do this right away, because the entity pages (e.g., the dataset page https://www.openml.org/search?type=data&sort=runs&id=151&status=active) currently also carry filter/sort parameters, and I do think we want crawlers to visit the dataset pages. So we must first create entity pages whose URLs do not contain query strings. Then we can disallow crawling of the remaining pages that do support queries.
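
As a rough sketch of what the eventual rule could look like once query-free entity pages exist: the paths below are assumptions (the real frontend routes may differ), and the rule relies only on standard prefix matching against the URL path plus query string.

```
# Sketch of possible robots.txt additions — not the final rules.
# Assumes entity pages move to query-free URLs (the example path below
# is hypothetical), while filter/sort views stay under /search?... .

User-agent: *
# Block the (near) limitless filter/sort combinations: any /search URL
# that carries a query string matches this prefix rule.
Disallow: /search?

# A query-free entity page such as /datasets/151 (hypothetical path)
# matches no Disallow rule and so remains crawlable.
```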

@PGijsbers added the bug label Jul 10, 2024