Cover the full pipeline (scrapers, merge, map generation), all 6 data
sources with their parsing methods, filter criteria, CLI arguments,
Docker setup, caching, rate limiting, and project structure.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Replace print() with the Python logging module across all 6 scrapers
to enable configurable log levels (DEBUG/INFO/WARNING/ERROR)
- Add --max-pages, --max-properties, and --log-level CLI arguments
to each scraper via argparse for limiting scrape scope
- Add validation Make targets (validation, validation-local,
validation-local-debug) for quick test runs with limited data
- Update run_all.sh to parse and forward CLI args to all scrapers
- Update mapa_bytu.html with latest scrape results
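The validation targets might look like the following Makefile fragment. Only the three target names are from this commit; the recipes and limit values are hypothetical placeholders:

```make
# Quick validation runs with limited data; flag values are illustrative.
validation:
	docker compose run scraper --max-pages 2 --max-properties 20

validation-local:
	./run_all.sh --max-pages 2 --max-properties 20

validation-local-debug:
	./run_all.sh --max-pages 2 --max-properties 20 --log-level DEBUG
```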
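The logging and argparse changes above can be sketched as a shared CLI helper. The flag names and level choices come from this commit; the parser defaults, log format, and logger name are illustrative assumptions, not the actual scraper code:

```python
import argparse
import logging

logger = logging.getLogger("scraper")

def parse_args(argv=None):
    """Shared scraper CLI; flag names match the commit, defaults are assumed."""
    parser = argparse.ArgumentParser(description="Scrape property listings")
    parser.add_argument("--max-pages", type=int, default=None,
                        help="stop after this many listing pages")
    parser.add_argument("--max-properties", type=int, default=None,
                        help="stop after this many scraped properties")
    parser.add_argument("--log-level", default="INFO",
                        choices=["DEBUG", "INFO", "WARNING", "ERROR"],
                        help="logging verbosity")
    return parser.parse_args(argv)

def configure_logging(level_name):
    """Replaces scattered print() calls with one logging configuration."""
    logging.basicConfig(
        level=getattr(logging, level_name),
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
        force=True,  # reconfigure even if logging was already set up
    )

# Example: parse defaults and log the configured limits.
args = parse_args([])
configure_logging(args.log_level)
logger.info("limits: max_pages=%s max_properties=%s",
            args.max_pages, args.max_properties)
```

Passing `argv` explicitly keeps the parser testable; each scraper would call `parse_args()` with no arguments to read `sys.argv`.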
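Argument forwarding in run_all.sh could be sketched as below. This is a dry-run illustration (it echoes the commands rather than running them), and the scraper names are placeholders, not the project's real files:

```shell
#!/usr/bin/env bash
# Sketch of run_all.sh argument forwarding; scraper names are placeholders.

run_scrapers() {
    # Forward every CLI argument (e.g. --max-pages, --log-level) to each scraper.
    for scraper in scraper_a scraper_b; do
        # The real script would execute: python "${scraper}.py" "$@"
        echo "python ${scraper}.py $*"
    done
}

run_scrapers --max-pages 2 --log-level DEBUG
```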
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>