CLI Commands
RapidRabbit CLI provides several commands for website analysis.
Available Commands
- --scan <url> - Crawl a website and analyze all pages
- --detect <url> - Detect CMS and technologies
- --sitemap <url> - Find and parse sitemaps
- --list - List all saved sessions
- --export <session-id> - Export session data
- --mcp - Start MCP server mode
- --help - Show help information
Scan Options
Customize your website scans with these CLI options.
Common Options
- --max-pages <n> - Maximum pages to crawl (default: 100)
- --depth <n> - Maximum crawl depth
- --delay <ms> - Delay between requests in milliseconds
- --screenshots - Enable screenshot capture
- --no-external - Skip external links
Example
rapidrabbit --scan https://example.com --max-pages 50 --screenshots
Output Formats
Export your crawl results in multiple formats for different use cases.
Supported Formats
- JSON - Complete data for programmatic processing
  rapidrabbit --export <id> --format json
- CSV - Spreadsheet-compatible format
  rapidrabbit --export <id> --format csv
- XLSX - Excel format with formatting
  rapidrabbit --export <id> --format xlsx
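To produce all three formats in one pass, the export command can be wrapped in a short loop. This is a minimal sketch: export_all is a helper name introduced here (not part of RapidRabbit), and the session id it takes is whatever rapidrabbit --list reports.

```shell
# Export a single session in every supported format.
# export_all is a hypothetical helper; pass it a session id
# taken from the output of `rapidrabbit --list`.
export_all() {
  sid="$1"
  for fmt in json csv xlsx; do
    rapidrabbit --export "$sid" --format "$fmt"
  done
}
```

Call it after a scan, e.g. export_all followed by the session id shown by --list.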
Scripting
Integrate RapidRabbit into your automation workflows and scripts.
Bash Script Example
#!/bin/bash
# Read one URL per line; quoting and read -r avoid word-splitting issues.
while IFS= read -r url; do
  rapidrabbit --scan "$url" --max-pages 50
done < urls.txt
CI/CD Integration
RapidRabbit can be integrated into CI/CD pipelines for automated website auditing. Use the CLI in your build scripts to scan staging environments before deployment.
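As a sketch, a CI step might gate deployment on a successful staging scan. The ci_scan wrapper below is illustrative, not part of RapidRabbit; it uses only the flags documented above and relies on the CLI returning a nonzero exit status on failure.

```shell
# Illustrative CI gate: scan a staging URL and fail the step on error.
# ci_scan is a hypothetical wrapper around the documented flags.
ci_scan() {
  url="$1"
  if rapidrabbit --scan "$url" --max-pages 50 --no-external; then
    echo "scan passed: $url"
  else
    echo "scan failed: $url" >&2
    return 1
  fi
}
```

In a pipeline script, calling ci_scan with the staging URL before the deploy step aborts the job when the scan fails, since CI runners treat a nonzero exit as a failed step.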
Common Workflows
Here are some practical examples for everyday use:
Full Site Audit
Scan a website and export the results to JSON for further analysis:
rapidrabbit --scan https://example.com
rapidrabbit --list
rapidrabbit --export <session-id> --format json
Quick CMS Check
Identify the CMS powering a website without a full crawl:
rapidrabbit --detect https://example.com
The output includes the detected platform name, confidence score, and the detection signals that were matched (headers, meta tags, URLs, or body content patterns).
Sitemap Discovery
Find all sitemaps for a website by checking robots.txt and common sitemap locations:
rapidrabbit --sitemap https://example.com
Exit Codes
The CLI returns standard exit codes for scripting integration: 0 for success, 1 for general errors, and 2 for invalid arguments or missing parameters. You can use these in shell scripts to conditionally handle scan results.
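Those codes can be branched on directly via $? after a command. A minimal sketch (the messages themselves are illustrative, not RapidRabbit output):

```shell
# Map the documented exit codes (0, 1, 2) to human-readable messages.
report_exit() {
  case "$1" in
    0) echo "scan succeeded" ;;
    1) echo "general error" ;;
    2) echo "invalid arguments or missing parameters" ;;
    *) echo "unexpected exit code: $1" ;;
  esac
}

# Typical use after a scan:
# rapidrabbit --scan https://example.com
# report_exit "$?"
```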