@fcarlo93 Filters and Disallow Directives
Filters in the context of robots.txt are expressed through Disallow directives, which tell compliant crawlers not to access the specified paths on your site. Keep in mind that robots.txt is advisory: well-behaved crawlers honor it, but it is not an access-control mechanism. Here's how to use Disallow effectively:

Be specific: Disallow rules match URL paths by prefix, so Disallow: /tmp/ blocks every URL under the /tmp/ directory. Major crawlers such as Googlebot also support * as a wildcard (so Disallow: /tmp/* behaves the same for them), but wildcards are not part of the original robots.txt standard, so prefer plain prefixes where a prefix is all you need.
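To make that concrete, here is a minimal robots.txt sketch (the paths and user agents are hypothetical, just for illustration):

```
# Block all crawlers from everything under /tmp/ (prefix match)
User-agent: *
Disallow: /tmp/

# Additionally keep Googlebot out of /private/
User-agent: Googlebot
Disallow: /private/
```

Note that a crawler obeys the most specific User-agent group that matches it, so Googlebot would follow its own group here rather than the * group.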
Check for errors: use Google Search Console's robots.txt report and page indexing reports to identify crawl errors caused by your directives and to confirm you're not inadvertently blocking important content from being crawled.
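You can also sanity-check your rules locally before deploying them. This sketch uses Python's standard urllib.robotparser to test whether specific URLs are allowed; note that robotparser implements the original prefix-matching standard and does not understand the * wildcard extension, which is why the rule below uses a plain prefix. The URLs are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; Disallow rules are prefix matches.
ROBOTS_TXT = """\
User-agent: *
Disallow: /tmp/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Everything under /tmp/ is blocked for all user agents...
print(parser.can_fetch("*", "https://example.com/tmp/cache.html"))  # False
# ...while other paths remain crawlable.
print(parser.can_fetch("*", "https://example.com/index.html"))  # True
```

For a live site you could instead call parser.set_url("https://example.com/robots.txt") followed by parser.read() to fetch and test the deployed file.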