Atlas — on-demand crawls (Premium)

Last updated: April 2026

Atlas is Techfleet's market intelligence module. It tracks competitor pricing across the major US refurbished-device storefronts and surfaces the data in /dashboard/market-intel.

Atlas Free vs Atlas Premium

  • Atlas Free — Every signed-in merchant sees yesterday's snapshot. We run one curated scrape at midnight US time covering Reebelo, Gazelle, Plug.tech, REFURB, TekReplay, Cellmigo, and BuyBackWorld, so the data refreshes once every 24 hours. (A scheduling sketch follows this list.)
  • Atlas Premium — Includes everything in Free, plus on-demand custom-URL crawls. Paste any storefront URL and Techfleet's scraper pulls live pricing within minutes. Price-drop alerts and the recent-drops feed are also Premium-only.
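
For the curious, the Free-tier refresh is just a scheduled batch over a fixed source list. Below is a minimal sketch of how a nightly pass like that could be wired up, assuming a node-cron-style scheduler; `refreshSource` is a hypothetical stand-in, and only the source names come from this article.

```ts
import cron from "node-cron";

// Curated Free-tier sources, as listed above.
const CURATED_SOURCES = [
  "Reebelo", "Gazelle", "Plug.tech", "REFURB",
  "TekReplay", "Cellmigo", "BuyBackWorld",
];

// Hypothetical stand-in for the real scrape-and-upsert step.
async function refreshSource(source: string): Promise<void> {
  console.log(`refreshing ${source}...`);
}

// "0 0 * * *" fires at midnight in the scheduler's configured zone;
// the article only says "midnight US time", so the exact zone is
// left to deployment config here.
cron.schedule("0 0 * * *", async () => {
  for (const source of CURATED_SOURCES) {
    await refreshSource(source);
  }
});
```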

How on-demand crawls work

  1. You paste a competitor storefront URL into the "Crawl now" panel on /dashboard/market-intel.
  2. The job lands in a queue. The Mac mini scraper (with our residential proxy) picks it up within 30 seconds.
  3. Snapshots get inserted into market_intel_snapshots with your merchant id attached. The recent-crawls table updates in real time as the job moves from Queued -> Running -> Done.
  4. Once the job is Done, the new prices roll into the regular Atlas dashboards alongside the curated sources. (A sketch of the worker side follows this list.)
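
Under the hood this is a standard queue-worker pattern. The sketch below shows what the worker side might look like; the queue and DB interfaces and the `scrape` function are hypothetical, and only the table name, the status names, and the merchant-id tagging come from the steps above.

```ts
type CrawlStatus = "Queued" | "Running" | "Done" | "Failed";

type CrawlJob = {
  id: string;
  merchantId: string; // attached to every snapshot row (step 3)
  url: string;
};

interface JobQueue { dequeue(): Promise<CrawlJob>; }
interface SnapshotDb {
  setStatus(jobId: string, status: CrawlStatus, reason?: string): Promise<void>;
  insertMany(table: string, rows: object[]): Promise<void>;
}
declare function scrape(url: string): Promise<Array<{ sku: string; priceCents: number }>>;

// Hypothetical worker loop; the real scraper runs on the Mac mini behind
// the residential proxy and picks jobs up within ~30 seconds (step 2).
async function runWorker(queue: JobQueue, db: SnapshotDb): Promise<void> {
  for (;;) {
    const job = await queue.dequeue();
    await db.setStatus(job.id, "Running"); // recent-crawls table updates live
    try {
      const snapshots = await scrape(job.url); // live pricing pull
      await db.insertMany(
        "market_intel_snapshots",
        snapshots.map((s) => ({ ...s, merchant_id: job.merchantId })),
      );
      await db.setStatus(job.id, "Done"); // prices roll into the dashboards
    } catch (err) {
      await db.setStatus(job.id, "Failed", String(err));
    }
  }
}
```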

Supported storefronts

  • Shopify — Auto-detects condition, storage, and color option slots. Works on the vast majority of refurb stores.
  • WooCommerce — Best-effort auto-detect (v2 — limited coverage today).
  • Generic — If neither Shopify nor WooCommerce is detected, the job fails with "Not a recognized platform". The merchant id and URL are still recorded so we can iterate on coverage. (A sketch of the detection probes follows this list.)
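
Detection itself usually comes down to a couple of cheap probes. The sketch below is a guess at that heuristic, not a copy of our production detector: it assumes Shopify stores expose the public /products.json endpoint and WooCommerce stores expose the Store API at /wp-json/wc/store/v1/products.

```ts
type Platform = "shopify" | "woocommerce";

// Hypothetical detector: probe well-known endpoints each platform tends
// to expose. Most Shopify storefronts serve /products.json; WooCommerce
// stores with the Store API enabled serve /wp-json/wc/store/v1/products.
async function detectPlatform(storeUrl: string): Promise<Platform> {
  const origin = new URL(storeUrl).origin;

  const shopify = await fetch(`${origin}/products.json?limit=1`);
  if (shopify.ok) return "shopify";

  const woo = await fetch(`${origin}/wp-json/wc/store/v1/products?per_page=1`);
  if (woo.ok) return "woocommerce";

  // Matches the inline failure shown in the dashboard; the merchant id
  // and URL would still be recorded server-side so coverage can improve.
  throw new Error("Not a recognized platform");
}
```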

Limits

  • 5 enqueues per merchant per hour.
  • 10 jobs queued at any one time per merchant.
  • Each crawl pulls up to ~250 variants per storefront page, and there is no per-job cap, so a large catalog still completes in a single run. (A paging sketch follows this list.)
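
Both limits are simple per-merchant counters, and the no-cap behavior is just pagination that runs until the source is exhausted. A minimal sketch under those assumptions; `QuotaStore` and `fetchPage` are hypothetical, and only the numbers come from the list above.

```ts
const MAX_ENQUEUES_PER_HOUR = 5;
const MAX_QUEUED_JOBS = 10;
const PAGE_SIZE = 250; // ~250 variants per storefront page

interface QuotaStore {
  enqueuesInLastHour(merchantId: string): Promise<number>;
  queuedJobs(merchantId: string): Promise<number>;
}
type Variant = { sku: string; priceCents: number };
declare function fetchPage(url: string, page: number, limit: number): Promise<Variant[]>;

// Hypothetical pre-enqueue guard enforcing both per-merchant limits.
async function assertCanEnqueue(store: QuotaStore, merchantId: string): Promise<void> {
  if ((await store.enqueuesInLastHour(merchantId)) >= MAX_ENQUEUES_PER_HOUR)
    throw new Error("Hourly enqueue limit reached (5/hour)");
  if ((await store.queuedJobs(merchantId)) >= MAX_QUEUED_JOBS)
    throw new Error("Too many queued jobs (10 max)");
}

// Paging sketch: pull up to 250 variants per page until the source runs
// dry. There is no per-job cap, so a large catalog finishes in one run.
async function crawlAll(url: string): Promise<Variant[]> {
  const all: Variant[] = [];
  for (let page = 1; ; page++) {
    const batch = await fetchPage(url, page, PAGE_SIZE);
    all.push(...batch);
    if (batch.length < PAGE_SIZE) break; // short page means last page
  }
  return all;
}
```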

When a crawl fails

A Failed status surfaces the reason inline (e.g. "Not a Shopify store", "fetch 403", "rate-limited by source"). Failed crawls do NOT count against your hourly quota, so re-submit once you've fixed the URL or rotated the proxy. If a domain is consistently returning 403s, contact support: it usually means the source has banned our scraper IP and needs a Decodo session refresh.
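
In code terms, a failure just records a reason string and hands the attempt back. The sketch below assumes the hourly quota is charged at enqueue time and refunded on failure, and guesses at how raw errors map to the inline reasons; only the reason strings themselves come from this article.

```ts
type FailedJobInfo = { id: string; merchantId: string };

interface JobStore {
  markFailed(jobId: string, reason: string): Promise<void>;
  refundHourlyQuota(merchantId: string): Promise<void>; // hypothetical helper
}

// Map a raw scrape error onto one of the inline reasons quoted above.
// The HTTP-status mapping is an assumption about how reasons are derived.
function failureReason(err: { status?: number; message: string }): string {
  if (err.status === 403) return "fetch 403";
  if (err.status === 429) return "rate-limited by source";
  return err.message; // e.g. "Not a Shopify store" from the detector
}

// Record the failure and hand the attempt back, since failed crawls do
// not count against the hourly quota.
async function handleFailure(
  store: JobStore,
  job: FailedJobInfo,
  err: { status?: number; message: string },
): Promise<void> {
  await store.markFailed(job.id, failureReason(err));
  await store.refundHourlyQuota(job.merchantId);
}
```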
