Build an Automated Lead Enrichment and Scoring Pipeline
A tactical playbook for using Phantombuster, Clay, Customers.ai, Zapier, and AI-based scoring to enrich, rank, and route every lead before it hits your CRM.
Most B2B teams waste 30-40% of their sales capacity chasing leads that will never convert. The root cause isn't bad salespeople—it's bad data and no systematic way to prioritize. Raw lead lists from LinkedIn scrapes, webinar signups, or inbound forms are just names until you layer on firmographic, technographic, and intent data. Without enrichment and scoring, your SDRs are essentially guessing which leads to call first, and your marketing team can't segment campaigns with any real precision.
This playbook walks you through building a fully automated lead enrichment and scoring pipeline using Phantombuster for lead scraping, Clay for multi-source enrichment, Customers.ai for website visitor identification, Zapier for orchestration, and AI-based scoring models to rank every lead before it hits your CRM. When built correctly, this system runs 24/7, processes hundreds of leads per day, and ensures your sales team only sees prospects with a realistic chance of closing. Teams running pipelines like this typically see a 2-3x improvement in SQL-to-opportunity conversion rates within the first 60 days.
Before diving into the steps, understand the data flow: Phantombuster scrapes leads from LinkedIn (Sales Navigator searches, group members, post engagers) and pushes them to a Google Sheet or webhook. Customers.ai identifies anonymous website visitors and exports them to the same pipeline. Clay ingests both streams, enriches each lead against 75+ data providers (Clearbit, Apollo, BuiltWith, Crunchbase, etc.), and runs AI-powered scoring formulas. Zapier monitors Clay for leads that cross a score threshold and routes them into your CRM (HubSpot, Salesforce, Pipedrive) with full enrichment data, assigns them to reps, and triggers Slack alerts for hot leads. Think of it as a conveyor belt: raw leads go in one end, sales-ready scored opportunities come out the other.
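One way to keep those two streams consistent is to map everything into a single normalized lead record before enrichment. Here is a minimal sketch in Python; the field names are illustrative, not the exact columns Phantombuster or Customers.ai emit:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Lead:
    """One normalized record flowing through the pipeline."""
    full_name: str
    company: str
    source: str                                   # "phantombuster" or "website_visitor"
    linkedin_url: Optional[str] = None
    email: Optional[str] = None
    title: Optional[str] = None
    country: Optional[str] = None
    pages_viewed: list[str] = field(default_factory=list)   # intent signal from Customers.ai
    # Fields populated later by Clay's waterfall enrichment
    employee_count: Optional[int] = None
    industry: Optional[str] = None
    tech_stack: list[str] = field(default_factory=list)
    funding_stage: Optional[str] = None
    score: Optional[int] = None
```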
Your entire pipeline is only as good as your ICP definition. Before configuring a single scraper, document the specific attributes of your best customers. Pull data from your CRM on your last 50 closed-won deals and identify patterns: What's the average company size (employees and revenue)? What industries? What tech stack do they run? What funding stage are they at? What job titles signed the deal? Write these down as hard filters, not vague personas. For example: "SaaS companies, 50-500 employees, Series A-C, using HubSpot or Salesforce, Director+ title in Marketing or RevOps, based in US/Canada."
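To keep the ICP from sliding back into a vague persona, it helps to express it as executable pass/fail filters. Here is a sketch built on the example ICP above, reusing the Lead record from the earlier sketch; every threshold and allowed value is an assumption you would replace with your own closed-won analysis:

```python
ICP_FILTERS = {
    "min_employees": 50,
    "max_employees": 500,
    "funding_stages": {"Series A", "Series B", "Series C"},
    "required_tech": {"HubSpot", "Salesforce"},            # must run at least one
    "title_keywords": ("Director", "VP", "Head", "CMO"),   # Director+ in Marketing/RevOps
    "countries": {"US", "Canada"},
}

def matches_icp(lead: Lead) -> bool:
    """Hard pass/fail check against the documented ICP, with no fuzzy persona matching."""
    headcount_ok = ICP_FILTERS["min_employees"] <= (lead.employee_count or 0) <= ICP_FILTERS["max_employees"]
    funding_ok = lead.funding_stage in ICP_FILTERS["funding_stages"]
    tech_ok = bool(ICP_FILTERS["required_tech"] & set(lead.tech_stack))
    title_ok = any(kw in (lead.title or "") for kw in ICP_FILTERS["title_keywords"])
    geo_ok = lead.country in ICP_FILTERS["countries"]
    return all([headcount_ok, funding_ok, tech_ok, title_ok, geo_ok])
```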
Pro Tip: Don't skip the negative ICP. Explicitly define who you do NOT want—agencies under 10 people, bootstrapped companies with no budget authority, industries with long procurement cycles you can't support. This saves enormous downstream waste.
Use Phantombuster to extract leads that match your ICP from LinkedIn. Start with the LinkedIn Sales Navigator Search Export phantom—paste in a Sales Navigator search URL filtered by your ICP criteria (industry, headcount, geography, seniority level). Phantombuster will extract names, titles, company names, LinkedIn URLs, and sometimes emails. Run a second phantom—LinkedIn Profile Scraper—to pull richer data from each profile URL, including current company, job duration, and headline keywords.
Set up Phantombuster to run on a schedule (e.g., every Monday and Thursday) so your pipeline continuously ingests fresh leads. Export results to a Google Sheet or configure the webhook output to send data directly to Clay. A typical Sales Navigator search yields 500-2,500 leads per run depending on your filters. Aim for precision over volume—tighter filters mean higher downstream conversion.
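If you prefer scripting the handoff over the native integration, Clay tables can also ingest rows via a webhook (as noted in the enrichment step below). A rough sketch of forwarding a Phantombuster CSV export; the webhook URL and column names are placeholders to swap for your own:

```python
import csv
import requests

# Placeholder: paste the webhook URL from your Clay table's webhook source.
CLAY_WEBHOOK_URL = "https://..."

def push_phantom_export(csv_path: str) -> None:
    """Forward each scraped row to Clay as a JSON payload."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            payload = {
                "full_name": row.get("fullName"),      # column names vary by phantom; check your export
                "title": row.get("title"),
                "company": row.get("companyName"),
                "linkedin_url": row.get("profileUrl"),
                "source": "phantombuster",
            }
            requests.post(CLAY_WEBHOOK_URL, json=payload, timeout=10)
```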
Common Mistake: Scraping too broadly. If your Sales Navigator search returns 50,000+ results, your filters are too loose. Narrow by headcount range, specific technologies, or recent job changes. Phantombuster has daily LinkedIn action limits (around 80-150 profiles per day per account), so every scrape should count.
Your website traffic is a goldmine of intent data that most companies ignore. Customers.ai installs a tracking pixel on your site and uses identity resolution to de-anonymize visitors—matching IP addresses and device fingerprints to real contacts. Configure Customers.ai to capture visitors who view high-intent pages: your pricing page, case studies, integration docs, or demo request page. The tool returns names, email addresses, company names, and LinkedIn URLs for a meaningful percentage of your traffic (typically 15-30% identification rates on B2B sites).
Set up a daily or real-time export from Customers.ai to the same Google Sheet or Clay table that receives your Phantombuster leads. Tag these leads with a source label like "website_visitor" and include the pages they viewed—this intent signal will be critical for scoring. A visitor who viewed your pricing page three times in a week is fundamentally different from someone who bounced from a blog post.
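If you want to compute that intent signal explicitly rather than rely on page tags alone, a small sketch like the following works; the high-intent paths are assumptions to adjust for your own site:

```python
# Adjust these path prefixes to match your site's high-intent pages.
HIGH_INTENT_PREFIXES = ("/pricing", "/demo", "/case-studies", "/docs/integrations")

def intent_level(pages_viewed: list[str]) -> str:
    """Rough intent label for an identified visitor, derived from pages viewed."""
    high_intent_views = sum(1 for page in pages_viewed if page.startswith(HIGH_INTENT_PREFIXES))
    if high_intent_views >= 2:
        return "high"      # e.g. pricing page viewed repeatedly this week
    if high_intent_views == 1:
        return "medium"
    return "low"           # blog-only traffic
```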
Pro Tip: Create separate Customers.ai audiences for different page categories. A "pricing page visitor" audience and a "blog-only visitor" audience should be scored very differently. Use Customers.ai's built-in filtering to only export visitors from companies with 50+ employees to avoid noise from students and freelancers.
Clay is the enrichment engine at the heart of this pipeline. Import your leads from Phantombuster and Customers.ai into a Clay table (via Google Sheets integration, webhook, or direct API). Then configure Clay's waterfall enrichment to pull data from multiple providers in sequence. For each lead, enrich the following fields:
- Verified work email
- Company size (employee count and revenue)
- Industry
- Current tech stack
- Funding stage and most recent round
- Job title and seniority
- Location (country or region)
Clay's waterfall enrichment is the key differentiator here: if Clearbit doesn't have the company's revenue data, Clay automatically falls through to Apollo, then to Crunchbase. This gives you 85-95% fill rates on critical fields instead of the 40-60% you'd get from a single provider. Configure the waterfall order based on which providers have the best data for your target market.
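Clay handles the fallthrough for you, but the logic it implements is simple enough to sketch; the provider functions below are hypothetical stand-ins for the Clearbit, Apollo, and Crunchbase lookups:

```python
from typing import Callable, Optional

# Each provider function takes a company domain and returns a value or None.
Provider = Callable[[str], Optional[str]]

def waterfall(domain: str, providers: list[Provider]) -> Optional[str]:
    """Try each provider in order and keep the first non-empty answer."""
    for lookup in providers:
        value = lookup(domain)
        if value:
            return value
    return None

# Usage sketch: revenue = waterfall("acme.com", [clearbit_revenue, apollo_revenue, crunchbase_revenue])
```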
Pro Tip: Use Clay's AI column feature to generate custom enrichment. For example, add a column with the prompt: "Based on this company's description and tech stack, rate on a scale of 1-5 how likely they are to need [your product category]. Explain your reasoning in one sentence." This adds an AI-generated fit assessment to every lead automatically.
With enriched data in Clay, create a scoring formula that assigns points across four dimensions: firmographic fit, technographic fit, persona and seniority fit, and intent signals. Split the available points across those dimensions based on the closed-won patterns you documented when defining your ICP.
Total possible score: 100. Define your tiers: Hot (75-100) routes to SDR immediately, Warm (50-74) enters a nurture sequence, Cold (below 50) goes to a long-term drip or is excluded. Implement this in Clay using a formula column or an AI-powered column that evaluates all enrichment fields and outputs a numeric score plus a one-line justification. Test the model against your last 30 closed-won and 30 closed-lost deals to validate it predicts outcomes correctly.
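If you later want the same logic outside Clay, for backtesting or a custom pipeline, the rubric and tiers might look like the sketch below, reusing the Lead record and intent_level helper from earlier; every weight is a placeholder to tune against your own data, not a benchmark:

```python
def score_lead(lead: Lead) -> int:
    """Assign 0-100 across four dimensions; the weights here are placeholders to tune."""
    score = 0
    # Firmographic fit (max 30)
    if lead.employee_count and 50 <= lead.employee_count <= 500:
        score += 20
    if lead.funding_stage in {"Series A", "Series B", "Series C"}:
        score += 10
    # Technographic fit (max 20)
    if {"HubSpot", "Salesforce"} & set(lead.tech_stack):
        score += 20
    # Persona and seniority fit (max 20)
    if any(kw in (lead.title or "") for kw in ("Director", "VP", "Head", "CMO")):
        score += 20
    # Intent signals (max 30)
    score += {"high": 30, "medium": 15, "low": 0}[intent_level(lead.pages_viewed)]
    return score

def tier(score: int) -> str:
    """Map a 0-100 score to the routing tiers described above."""
    if score >= 75:
        return "Hot"       # route to an SDR immediately
    if score >= 50:
        return "Warm"      # nurture sequence
    return "Cold"          # long-term drip or exclude
```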
Common Mistake: Weighting firmographic data too heavily and ignoring intent signals. A 50-person company where the VP of Marketing visited your pricing page twice this week is almost always a better lead than a 5,000-person company that hasn't shown any engagement. Behavior beats demographics every time.
Use Zapier to connect Clay to your CRM and communication tools. Create three Zaps based on score tiers:
- Hot (75-100): create or update the CRM contact, assign it to an SDR, and fire a Slack alert.
- Warm (50-74): create or update the contact and enroll it in a nurture sequence.
- Cold (below 50): tag the contact for a long-term drip, or exclude it from the CRM entirely.
Configure Zapier to run on a 15-minute polling cycle or use Clay's webhook triggers for near-real-time routing. Include all enrichment fields in the CRM record so reps never have to research a lead manually—company size, tech stack, funding, intent signals, and the AI-generated fit score should all be visible in the contact record.
Pro Tip: Add a Zapier step that checks for duplicates in your CRM before creating a new contact. Use a "Find Record" step in Zapier to search by email or LinkedIn URL. If the lead already exists, update the record with new enrichment data and adjust the score instead of creating a duplicate.
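The find-then-update pattern behind that Zapier step looks like this in code; the InMemoryCRM class below is a toy stand-in for illustration, not a real HubSpot, Salesforce, or Pipedrive client:

```python
from typing import Optional

class InMemoryCRM:
    """Toy stand-in for your real CRM client, used to illustrate dedupe-then-upsert."""

    def __init__(self) -> None:
        self.contacts: dict[str, dict] = {}   # keyed by email and LinkedIn URL

    def find(self, key: Optional[str]) -> Optional[dict]:
        return self.contacts.get(key) if key else None

    def upsert(self, lead: Lead) -> dict:
        """Update the existing record on an email/LinkedIn match; otherwise create one."""
        existing = self.find(lead.email) or self.find(lead.linkedin_url)
        if existing:
            existing.update(score=lead.score, tech_stack=lead.tech_stack)   # refresh enrichment
            return existing
        record = {"name": lead.full_name, "company": lead.company, "score": lead.score}
        for key in (lead.email, lead.linkedin_url):
            if key:
                self.contacts[key] = record
        return record
```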
Track these metrics weekly to evaluate pipeline health:
- Lead volume per source (Phantombuster vs. Customers.ai vs. other)
- Enrichment fill rate: percentage of leads with all critical fields populated (target 85%+)
- Score distribution: what percentage of leads land in Hot, Warm, and Cold
- Hot-lead-to-meeting-booked rate (target 25%+ for leads scored 75+)
- Cost per enriched lead: sum of all tool costs divided by total leads processed (target under $0.50)
Build a simple dashboard in Google Sheets or your BI tool that pulls from your CRM.
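Two of those metrics are easy to script directly from the lead records if you would rather not build the formulas in Sheets; a sketch reusing the Lead record from earlier:

```python
CRITICAL_FIELDS = ("email", "employee_count", "industry", "funding_stage", "title")

def fill_rate(leads: list[Lead]) -> float:
    """Share of leads with every critical field populated (target: 0.85+)."""
    complete = sum(
        1 for lead in leads
        if all(getattr(lead, f) not in (None, "", []) for f in CRITICAL_FIELDS)
    )
    return complete / len(leads) if leads else 0.0

def cost_per_enriched_lead(total_tool_cost: float, leads_processed: int) -> float:
    """All tool costs divided by total leads processed (target: under $0.50)."""
    return total_tool_cost / leads_processed if leads_processed else 0.0
```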
Review your scoring model monthly. Pull all leads that became opportunities and all that were disqualified. If your model consistently scores eventual opportunities below 75, you need to reweight. If 50+ scored leads are converting at the same rate as 75+ leads, your threshold is too high. The scoring rubric is a living document—treat it like a machine learning model that needs retraining with fresh data.
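A quick way to run that monthly check is to compare conversion rates by score band; in the sketch below, the (score, outcome) pairs would come from your CRM export:

```python
def conversion_by_band(rows: list[tuple[int, bool]]) -> dict[str, float]:
    """rows: (score, became_opportunity) pairs pulled from your CRM export."""
    bands: dict[str, list[bool]] = {"75+": [], "50-74": [], "<50": []}
    for score, converted in rows:
        key = "75+" if score >= 75 else "50-74" if score >= 50 else "<50"
        bands[key].append(converted)
    return {band: (sum(outcomes) / len(outcomes) if outcomes else 0.0)
            for band, outcomes in bands.items()}

# If the 50-74 band converts at roughly the same rate as 75+, the threshold is too high;
# if eventual opportunities consistently score below 75, reweight the rubric.
```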
Pro Tip: Use Clay's AI column to run a monthly analysis. Feed it a CSV of your scored leads alongside their actual outcomes and prompt: "Analyze which enrichment fields are most predictive of conversion. Recommend scoring weight adjustments." This turns your pipeline into a self-improving system.
Once the core pipeline is running, add advanced automation layers. Configure Customers.ai to trigger a Zapier workflow when a previously scored "Warm" lead returns to your pricing page—automatically upgrading them to "Hot" and alerting the assigned SDR. Set up Phantombuster to scrape new leads who engage with your LinkedIn posts (using the Post Likers phantom) and feed them directly into Clay for enrichment and scoring. Create a "re-enrichment" loop where Clay re-checks funding data on Warm leads monthly—if a company closes a new round, their score jumps and they get re-routed to sales.
Build a Zapier workflow that monitors your CRM for closed-lost reasons. If "timing" or "no budget" are frequent, automatically re-enqueue those leads into a 90-day nurture drip and re-score them after the waiting period. The goal is a pipeline that doesn't just process leads once, but continuously re-evaluates and resurfaces opportunities as signals change.
Common Mistake: Building the pipeline and forgetting it. Automation doesn't mean zero maintenance. Phantom scrapers break when LinkedIn changes its DOM, enrichment APIs hit rate limits, and scoring models decay. Assign one team member as the pipeline owner who audits it bi-weekly.