We collect data from just about any source, including dynamic web pages and subscription services. We'll run your crawler every day, every hour, or as often as you need.
We'll merge and clean data from multiple sources and design a data structure that fits your workflow. Download reports in CSV, JSON, or Excel format, or have us set up a database for you.
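As a minimal sketch of the delivery step, the same cleaned dataset can be written out in any of these formats; this example assumes pandas, and the sample rows and column names are placeholders:

```python
import pandas as pd

# Sample rows standing in for a cleaned crawl result.
records = [
    {"name": "Acme Corp", "city": "Dayton", "price": 125000},
    {"name": "Globex LLC", "city": "Akron", "price": 98000},
]

df = pd.DataFrame(records)
df.to_csv("report.csv", index=False)         # CSV
df.to_json("report.json", orient="records")  # JSON
df.to_excel("report.xlsx", index=False)      # Excel (requires openpyxl)
```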
Dig into your dataset with a custom web dashboard. We'll design and build an application to your specifications, complete with search and visualization tools.
I'm Steve McLaughlin, a programmer with years of web scraping experience. I specialize in building custom tools to collect and explore datasets of any size.
Whether a website exposes a public API or only a web interface, I can make a near-perfect copy of the underlying database. I also create backups of websites and social media accounts, and I can recover many long-lost sites.
A salesperson needed contact information for several thousand members of a professional association. We analyzed the association's website, built a custom crawler script, and delivered the final dataset as a CSV file.
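A simplified sketch of what such a crawler can look like; the directory URL, page count, and CSS selectors below are hypothetical placeholders:

```python
import csv
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example-association.org/members?page={}"  # hypothetical

def scrape_members(last_page):
    for page in range(1, last_page + 1):
        html = requests.get(BASE_URL.format(page)).text
        soup = BeautifulSoup(html, "html.parser")
        for card in soup.select(".member-card"):  # assumed selector
            yield {
                "name": card.select_one(".name").get_text(strip=True),
                "email": card.select_one(".email").get_text(strip=True),
                "phone": card.select_one(".phone").get_text(strip=True),
            }

with open("members.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "email", "phone"])
    writer.writeheader()
    writer.writerows(scrape_members(last_page=120))
```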
A real estate agent needed price data on foreclosure properties in his region, but it was spread across several hard-to-navigate county websites. We built a crawler script that finds upcoming foreclosure auctions across five counties, assembling each property's record from five separate sources.
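The merge step might look something like this sketch, which assumes each source shares a parcel number that can serve as the join key:

```python
from collections import defaultdict

def merge_sources(*source_lists):
    """Fold per-county record lists into one combined row per property."""
    properties = defaultdict(dict)
    for records in source_lists:
        for record in records:
            # `parcel_id` is a hypothetical key shared by all five sources.
            properties[record["parcel_id"]].update(record)
    return list(properties.values())

# e.g. merge_sources(auction_listings, assessor_rows, tax_records, ...)
```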
An online publisher needed to recover thousands of blog posts from a website that had gone offline years earlier. We downloaded the site's archived history from the Wayback Machine, extracted the posts and their metadata, and loaded everything into a new WordPress instance.
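A recovery like this can lean on the Wayback Machine's public CDX API. Here is a simplified sketch of the lookup step; the domain is a placeholder, and the post-extraction and WordPress-import steps are omitted:

```python
import requests

CDX = "https://web.archive.org/cdx/search/cdx"

def archived_snapshots(domain):
    params = {
        "url": f"{domain}/*",
        "output": "json",
        "filter": "statuscode:200",
        "collapse": "urlkey",  # one capture per unique URL
    }
    rows = requests.get(CDX, params=params).json()
    if not rows:
        return  # nothing archived for this domain
    header, entries = rows[0], rows[1:]
    for entry in entries:
        record = dict(zip(header, entry))
        # Rebuild the playback URL for this capture.
        yield f"https://web.archive.org/web/{record['timestamp']}/{record['original']}"

for snapshot_url in archived_snapshots("defunct-blog.example.com"):
    html = requests.get(snapshot_url).text
    # ... extract the post title, body, and metadata from `html` ...
```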
A small business owner needed customer data from a CRM platform so she could run her own analysis in Excel. We built a script that exhaustively queries the platform's API while staying under its daily request cap.
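A sketch of that approach, assuming a hypothetical paginated endpoint and a 1,000-request daily cap; progress is checkpointed to disk so the script can pick up where it left off the next day:

```python
import json
import time
import requests

API_URL = "https://api.example-crm.com/v1/customers"  # hypothetical endpoint
DAILY_CAP = 1000  # assumed request limit per day

def fetch_all(api_key, state_file="progress.json"):
    try:
        with open(state_file) as f:
            state = json.load(f)
    except FileNotFoundError:
        state = {"page": 1, "records": []}

    for _ in range(DAILY_CAP):
        resp = requests.get(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            params={"page": state["page"]},
        )
        batch = resp.json().get("results", [])  # assumed response shape
        if not batch:
            break  # no more pages: the export is complete
        state["records"].extend(batch)
        state["page"] += 1
        with open(state_file, "w") as f:
            json.dump(state, f)  # checkpoint after every request
        time.sleep(1)  # stay polite between calls
    return state["records"]
```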