Week 4 Deep Research Prompt :: The Dealership Intelligence Investigation
Week 4 Deep Research: The Dealership Intelligence Investigation
You've done the hard work. You've locked down your budget, chosen new or CPO, and secured financing. Now comes the moment that terrifies 52% of car buyers: walking onto a dealership lot. The data is stark — dealership visits average nearly 3 hours, 55% of buyers wait just to get a test drive, and the experience is controlled entirely by people whose pay depends on separating you from as much cash as possible. But here's what most buyers miss: the dealership visit isn't the problem. The test drive is the emotional peak of the entire car-buying journey — 78% of buyers said the test drive sold them. The problem is everything around it — the wasted time, the pressure, the routes designed to hide mechanical issues, the complete information asymmetry about which dealers are trustworthy and which are the reason the FTC sent warning letters to 97 dealership groups in March 2026. Capital One's 2025 Car Buying Outlook shows that 69% of buyers now view dealers as trustworthy — up from just 44% in 2023 — but that trust is unevenly distributed, and the buyers who can tell the difference walk into the right showrooms with the right scripts. This week's Deep Research prompt is a systematic investigation into dealer selection and test drive engineering — an eight-thread inquiry into ownership structures, reputation forensics, digital dark patterns, fee architecture, CPO verification, pre-visit audits, structured test drive routes, and dealer-experience benchmarking that transforms a passive sales event into a controlled diagnostic evaluation.
Why Deep Research?
Deep Research mode is fundamentally different from a standard chat conversation. Instead of asking an AI a quick question and getting a confident surface answer, Deep Research lets you ask an AI to investigate a topic by searching across multiple sources, synthesizing patterns, identifying conflicts, and building a structured analysis from the ground up. It's the difference between "What should I look for in a dealership?" (which produces generic advice) and "Investigate the current dealership enforcement landscape by ownership type, analyze reputation sources with temporal weighting, synthesize fee structures by state and dealer group, and build a dealer intelligence dossier matrix for my geographic area that rank-orders dealers by composite score with visit-sequence recommendations" (which produces an investment-grade competitive intelligence brief with source citations and quantified rankings).
This week, Deep Research matters because dealership selection is an information-asymmetry battleground. The dealer knows their enforcement history; you do not. The dealer knows whether their online inventory photos are real or ghost cars; you do not. The dealer knows their standard accessory packages and fee architecture; you do not. The dealer has designed the test drive route to hide problems; you don't know what a diagnostic route should look like. Only systematic, multi-source research — comparing FTC enforcement data, state Attorney General actions, DealerRater and Google review patterns with temporal analysis, CARFAX historical data, ghost car detection methodology, CPO certification verification protocols, test drive route engineering principles from automotive diagnostics, and multi-dealer comparison frameworks — gives you symmetric information before you set foot on a lot. That's what this prompt is built to produce.
The Deep Research Prompt
Prompt Breakdown — How AI Reads the Deep Research Prompt
The Deep Research prompt above is dense by design. Every section does specific work, and understanding how the AI parses each block lets you adapt the architecture to any high-stakes decision where information asymmetry favors the seller.
"You are an automotive consumer strategist. I need a comprehensive dealer research and vehicle validation system that eliminates information asymmetry and transforms the dealership visit from a sales event into a structured diagnostic evaluation." — The opening positions the AI as a specialist with a specific role (consumer strategist, not generalist) and anchors the mission to your specific need (eliminate asymmetry, transform the visit). This tells the AI to produce competitive intelligence, not general advice. The phrase "structured diagnostic evaluation" signals that you're treating the dealership as a source of data, not a sales environment. This is the framing that disciplines the entire output.
Transferable principle: Open research prompts by defining the AI's specialist role and the specific problem you're solving. "Strategist producing competitive intelligence" produces different work than "advisor offering tips." The framing creates the tone.
"CONTEXT — MY SITUATION: [Target vehicle(s), New or CPO, Budget ceiling, Geographic search radius, Pre-approved financing, Trade-in, Must-have features, Deal-breaker conditions, Comfort with pressure tactics, Plan to visit X dealers]" — The context block provides all the variables the AI needs to personalize findings and produce a dealer dossier for your specific situation. Unlike generic advice ("visit a dealer"), this context enables specificity ("visit Dealer A before B because Dealer A has lower pressure-tactic frequency in reviews and a transparent internet sales department"). Each parameter is a lever. The ordering matters — it mirrors the stages of car buying (vehicle choice, financing, geography), which helps the AI understand dependencies.
Transferable principle: Provide every context variable upfront, ordered to mirror decision stages. Don't make the AI infer your situation — give it the levers to pull. Precise inputs enable location-specific, vehicle-category-specific, and risk-tolerance-specific recommendations.
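If it helps to see the context block as structured data, here's a minimal sketch in Python of the levers you're handing the AI, ordered by decision stage. The field names and example values are illustrative placeholders, not part of the original prompt:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class BuyerContext:
    """The context variables the prompt asks for, ordered to mirror decision stages."""
    target_vehicles: list[str]        # vehicle choice
    new_or_cpo: str                   # "new", "CPO", or "either"
    budget_ceiling_usd: int           # out-the-door ceiling, not sticker price
    search_radius_miles: int          # geography
    preapproved_apr: float | None     # None if you haven't secured financing yet
    trade_in: str | None              # year/make/model/mileage, or None
    must_have_features: list[str] = field(default_factory=list)
    deal_breakers: list[str] = field(default_factory=list)
    pressure_tolerance: str = "low"   # "low", "medium", "high"
    dealers_to_visit: int = 3

# Hypothetical example -- substitute your own situation before pasting into the prompt.
me = BuyerContext(
    target_vehicles=["2023 Honda CR-V EX-L"],
    new_or_cpo="CPO",
    budget_ceiling_usd=34000,
    search_radius_miles=50,
    preapproved_apr=6.4,
    trade_in="2015 Mazda3 hatchback, 98,000 miles",
    must_have_features=["AWD", "blind-spot monitoring"],
    deal_breakers=["open recall", "branded title"],
)
```

Filling in something like this before you run the prompt makes it obvious which levers you've left blank.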
"RESEARCH THREAD 1 — DEALER OWNERSHIP STRUCTURE & ENFORCEMENT LANDSCAPE: What percentage of dealerships are independent, single-point franchises, or part of publicly traded groups? Which ownership structures have the highest concentration of FTC enforcement actions? For each dealership in my search radius, classify by ownership type and cross-reference against known enforcement actions." — Each thread opens with a research question about market structure (Thread 1: ownership patterns and enforcement), then pivots to your specific situation (classify your local dealers). This two-layer approach prevents generic answers. The thread doesn't ask "Are there bad dealers?" (vague) but "Which ownership types and specific dealerships in my area have documented enforcement actions?" (quantifiable). The structure—title, five to seven sub-questions, named sources—is the same pattern across all eight threads, which trains the AI on the expected architecture.
Transferable principle: Structure research threads as market-level research (general patterns, statistics, trends) followed by your-situation-specific application (how the pattern applies to your location, your numbers, your decision). This prevents surface-level generic advice and forces specificity.
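To make the two-layer pattern concrete, here's a minimal sketch of a helper that assembles a research thread from market-level questions, situation-specific questions, and named sources. The function and the example text are a hypothetical paraphrase of the structure, not the original prompt wording:

```python
def build_thread(number: int, title: str,
                 market_questions: list[str],
                 situation_questions: list[str],
                 sources: list[str]) -> str:
    """Assemble one research thread: market-level research first,
    then your-situation application, then the sources to prioritize."""
    lines = [f"RESEARCH THREAD {number} - {title}:"]
    lines += [f"- Market level: {q}" for q in market_questions]
    lines += [f"- My situation: {q}" for q in situation_questions]
    lines.append("Prioritize these sources: " + ", ".join(sources))
    return "\n".join(lines)

# Hypothetical paraphrase of Thread 1.
print(build_thread(
    1, "DEALER OWNERSHIP STRUCTURE & ENFORCEMENT LANDSCAPE",
    ["What share of dealerships are independent, single-point franchises, or publicly traded groups?",
     "Which ownership structures show the highest concentration of FTC enforcement actions?"],
    ["Classify each dealership in my search radius by ownership type.",
     "Cross-reference each against known enforcement actions."],
    ["FTC press releases", "state Attorney General filings"],
))
```

The same helper covers all eight threads; only the questions and sources change per domain.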
"RESEARCH THREAD 3 — DIGITAL DARK PATTERNS & PRICING TRANSPARENCY: What is 'drip pricing'? What is a 'ghost car listing'? For each dealership's website, audit: (a) How many inventory items show 'Call for Price' vs. transparent OTD pricing? (b) Do photos match standard stock images? (c) Is pricing conditional on financing or trade-ins not disclosed upfront?" — Thread 3 includes both market research (what drip pricing is, how common it is) and a specific methodology (the four-point audit you perform on each dealer's website). Notice the methodology is concrete and actionable — you could execute the audit yourself without AI. This is the pattern that disciplines the output: not just "beware of dark patterns" but "here's the exact audit template and what each flag means."
Transferable principle: When research involves methodology (audits, checklists, protocols), embed the methodology in the thread as a numbered or lettered procedure. Make it concrete and executable. This prevents hand-wavy guidance and produces operational frameworks you can deploy yourself.
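If you want to tally the audit as you browse, here's a minimal sketch of a hand-scored version. The field names, weights, and example numbers are assumptions for illustration, not part of the prompt:

```python
from dataclasses import dataclass

@dataclass
class SiteAudit:
    """Hand-recorded results of the pricing-transparency audit for one dealer's website."""
    dealer: str
    listings_checked: int
    call_for_price_count: int    # (a) listings that hide the price
    stock_photo_count: int       # (b) listings using manufacturer stock images
    conditional_pricing: bool    # (c) advertised price assumes dealer financing or a trade-in

    def transparency_score(self) -> float:
        """0-100, higher is more transparent; weights are illustrative, not calibrated."""
        if self.listings_checked == 0:
            return 0.0
        hidden_share = self.call_for_price_count / self.listings_checked
        stock_share = self.stock_photo_count / self.listings_checked
        score = 100.0 - 50 * hidden_share - 25 * stock_share
        if self.conditional_pricing:
            score -= 20
        return max(0.0, score)

# Hypothetical example.
audit = SiteAudit("Dealer A", listings_checked=40, call_for_price_count=12,
                  stock_photo_count=5, conditional_pricing=True)
print(f"{audit.dealer}: transparency {audit.transparency_score():.0f}/100")
```

A tally like this feeds directly into the Digital Transparency column of the dossier matrix.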
"DELIVERABLE 3 — DEALER INTELLIGENCE DOSSIER MATRIX: A comparison table showing, for each dealership in my search radius: [Dealer Name | Ownership Type | Composite Reputation Score | Digital Transparency | Pricing Risk | Internet Sales Capability | Overall Safety Score | Visit Recommendation]" — The deliverables section specifies the exact output format using a table template. You've pre-defined the columns, so the AI cannot improvise a different structure. This is "output format specification by template." It prevents the AI from producing a narrative summary when you need a sortable, comparable matrix. The columns themselves encode your decision criteria — reputation, transparency, pricing risk — which trains the AI on what matters for dealer selection.
Transferable principle: Specify deliverables using pre-defined structure templates (tables, matrices, numbered lists, week-by-week calendars). Templates force conversion of research into operationalized outputs. A template is an implicit rubric that shapes the AI's synthesis.
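To see how a template like that turns into a ranking, here's a minimal sketch of a composite score over a subset of the matrix columns. The weights and the dealer rows are assumptions for illustration; your Deep Research output supplies the real inputs:

```python
from dataclasses import dataclass

@dataclass
class DealerRow:
    name: str
    ownership: str        # "independent", "franchise", "public group"
    reputation: float     # 0-100 composite from the reputation-forensics thread
    transparency: float   # 0-100 from the website audit
    pricing_risk: float   # 0-100, higher means fees run further above state averages
    internet_sales: bool  # transparent internet sales department?

def safety_score(d: DealerRow) -> float:
    """Illustrative weighting: reputation and transparency count up, pricing risk counts down."""
    score = 0.40 * d.reputation + 0.35 * d.transparency + 0.25 * (100 - d.pricing_risk)
    return score + (5 if d.internet_sales else 0)

dealers = [  # hypothetical rows
    DealerRow("Dealer A", "franchise", reputation=72, transparency=62, pricing_risk=40, internet_sales=True),
    DealerRow("Dealer B", "public group", reputation=85, transparency=80, pricing_risk=25, internet_sales=True),
    DealerRow("Dealer C", "independent", reputation=60, transparency=55, pricing_risk=65, internet_sales=False),
]
for d in sorted(dealers, key=safety_score, reverse=True):
    print(f"{d.name}: overall safety {safety_score(d):.1f}/105")
```

The sorted output is your visit order; the lowest-scoring row is the dealer you skip unless inventory forces your hand.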
"CONSTRAINTS FOR THIS RESEARCH: Every claim about a specific dealership must be attributed to a publicly available source. Do not assume the dealer's quoted price is fair — benchmark every dealer's fees against state averages. For CPO vehicles, distinguish between OEM CPO and dealer-certified throughout." — The constraints section preempts failure modes. The attribution requirement blocks hallucinated sources. The skepticism constraint ("do not assume the dealer's quoted price is fair") tells the AI to treat dealer-provided data as potentially adversarial. The domain-specific constraint ("distinguish OEM vs. dealer-certified") is an intellectual integrity requirement that ensures the output is precise in areas where precision matters. These constraints are patches over ways the AI would otherwise disappoint you.
Transferable principle: Write constraints as explicit blocks on failure modes. "Attribution to public sources" blocks hallucination. "Skepticism toward counterparty data" blocks credulous synthesis. "Distinguish X from Y throughout" blocks conflation of subtly different things. Name the failure mode; write the constraint to block it.
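The fee-benchmarking constraint can itself be turned into a small check once the research gives you state averages. A minimal sketch, assuming you plug in the quoted fees and the benchmark figures from the output (the numbers below are placeholders):

```python
def flag_fee(name: str, quoted: float, state_average: float, tolerance: float = 0.15) -> str:
    """Flag any fee more than `tolerance` (15% by default) above the state average."""
    if state_average <= 0:
        return f"{name}: no benchmark available; ask the dealer to justify ${quoted:,.0f}"
    premium = (quoted - state_average) / state_average
    if premium <= tolerance:
        return f"{name}: ${quoted:,.0f} vs. ${state_average:,.0f} average -> within range"
    return f"{name}: ${quoted:,.0f} vs. ${state_average:,.0f} average -> {premium:.0%} high, negotiate or walk"

# Placeholder figures; substitute the quoted fees and state averages from your research output.
print(flag_fee("Doc fee", quoted=899, state_average=550))
print(flag_fee("Nitrogen fill", quoted=199, state_average=0))
print(flag_fee("VIN etching", quoted=299, state_average=150))
```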
What to Expect from Deep Research
Output Length: Expect 12,000-20,000 words of output, depending on the depth of research available for your specific geographic area and the number of dealers in your search radius. The Executive Summary alone will be 500-800 words. Each of the eight research threads will produce 1,200-1,800 words with sub-findings, source attribution, and specific-situation implications. The Dealer Intelligence Dossier Matrix will be a multi-page table with one row per dealer (if you have 5 dealers in your radius, expect a 5-row table with 8 columns of comparative analysis). The Pre-Visit Action Plan is a detailed 16-day calendar with 2-3 specific actions per day.
Completion Time: Deep Research on ChatGPT (with GPT-5 Deep Research), Claude (with Research mode), or Gemini (with Deep Research) typically takes 7-15 minutes to execute for a prompt of this scope. The research phase runs invisibly — the AI searches across FTC press releases, state Attorney General filings, Google Reviews and DealerRater data, CARFAX databases, and individual dealer websites — and then synthesizes the findings. You don't see the searching; you see the final structured brief with citations.
Structure: Output is organized by deliverable, with clear h2/h3 headers, source citations inline or in footnotes, and explicit "So what does this mean?" implications at each thread's close. The Dossier Matrix is a scannable, sortable table. The Action Plan is a day-by-day checklist you can print and execute against. The Executive Summary reads like a competitive intelligence brief — top-line findings followed by ranked dealer recommendations.
Quality Signals: High-quality Deep Research output includes specific numbers tied to sources (not round figures or estimates), conflicting data points with explanations (e.g., "DealerRater reports XYZ Dealer with 3.2 stars (n=47 reviews); Google reports 4.1 stars (n=230 reviews); conflict likely due to review filtering differences"), assumptions stated explicitly at every step, and clear links between findings and action items. If the output is heavy on generic advice, light on source citations, or hedges every claim with "it depends," the rigor is insufficient — ask the AI to redo weak threads with named sources and location-specific data.
Key Research Questions the Prompt Answers
1. Which dealerships in my area carry legitimate FTC complaints or enforcement actions, and what are the specific violations? Research Threads 1 and 2 resolve the enforcement landscape, translating regulatory data into dealer-specific risk profiles. This is the baseline filter — eliminating dealers with known patterns of deception.
2. How do I spot and avoid ghost car listings, drip pricing, and other digital dark patterns before wasting a dealership visit? Research Thread 3 exposes the mechanics of dark patterns (what drip pricing is, how ghost cars work, the FTC CARS Rule prohibitions) and provides an audit methodology you can execute yourself on dealer websites.
3. What mandatory add-on fees should I expect, and are there regulatory caps in my state? Research Thread 4 maps fee architecture by state and dealer, quantifying the cost of doc fees, nitrogen, etching, and protection packages. This prevents surprises at the F&I desk.
4. Is the CPO certification legitimate, or am I buying a dealer-certified vehicle rebranded to look like OEM CPO? Research Thread 5 distinguishes OEM CPO (manufacturer-backed warranty with multipoint inspection) from dealer-certified (marketing label with minimal guarantees). For CPO buyers, this is the difference between a warranty and a sales pitch.
5. How do I audit a specific vehicle's history and dealer's inventory before scheduling a visit? Research Thread 6 builds a systematic pre-visit audit — CARFAX, recall check, cross-reference across listing platforms, ghost car detection — that screens vehicles and dealers before you invest time.
6. How do I design a test drive route that actually reveals mechanical issues instead of hiding them? Research Thread 7 turns the test drive from a joyride into a diagnostic protocol — cold start, highway, rough pavement, parking, hills, quiet-road listening — that forces the vehicle to reveal its true condition.
7. Should I visit my preferred dealer first or second, and how do I sequence multiple visits for maximum leverage? Research Thread 8 applies dealership psychology and visit-sequencing strategy to your specific situation, telling you whether to front-load your top choice or save it for when you have competitor pricing in hand.
8. Based on all research, which dealership-vehicle pairing offers the best value and least pressure risk? The Dealer Intelligence Dossier Matrix answers this by ranking every dealer in your search radius by composite safety score, reputation, transparency, and pricing risk. The Executive Summary recommends which dealer to visit first and which to keep as backup.
Platform-Specific Tips for Accessing Deep Research
ChatGPT (GPT-5 Deep Research or GPT-4 Turbo with web search): ChatGPT Plus and Pro include Deep Research mode, specifically designed for prompts like this one. Select "Deep Research" from the model picker before pasting the prompt. GPT-5 Deep Research will search FTC press releases, state Attorney General consumer protection filings, DealerRater, Google Reviews, CARFAX, and individual dealer websites, then synthesize findings with inline citations. Expect strong results across all eight threads, particularly on enforcement data (Thread 1) and digital dark pattern audit (Thread 3). Output time: 8-12 minutes. If you're on a standard tier without Deep Research, GPT-4 Turbo with web search enabled will produce a compressed version with fewer citations.
Claude (Claude Opus 4.6 with Research mode): Claude's Research feature (available on Pro and Max tiers) is Anthropic's equivalent to Deep Research. Enable it before pasting the prompt. Claude excels at structured synthesis — it will produce the clearest Dealer Intelligence Dossier Matrix and the most rigorous assumption documentation. Claude also explicitly flags data gaps rather than papering over them, which is an advantage for decision-making. Claude's research output is typically more conservative on enforcement data (it tends to cite "reported actions" rather than inferring patterns), which is safer than overstating dealer risk. Output time: 6-10 minutes. Without Research mode, Claude's training-data cutoff means recent enforcement actions (2026) may be incomplete; supplement with ChatGPT or Gemini for current-year FTC data.
Gemini (Gemini 2.5 Pro with Deep Research): Gemini's Deep Research is natively integrated with Google Search, which gives it the broadest real-time access to dealer reviews, Google Maps ratings, and local business data. Click "Deep Research" before pasting the prompt. Gemini will produce a plan preview (which threads it will investigate first, in what order) before executing — review the plan and confirm. Expect particularly strong performance on reputation forensics (Thread 2), geographic dealer mapping (any thread requiring local search), and digital dark pattern research (Thread 3, where Google search finds the latest consumer complaints and FTC warnings). Output time: 10-15 minutes because Gemini searches more exhaustively. Output tends to be more journalistic and accessible than Claude's, which is an advantage for comprehension.
Pro Tip — Multi-Platform Workflow: For maximum rigor on a high-stakes dealer selection decision, run the prompt on two platforms sequentially. Start with Gemini or ChatGPT Deep Research for the data-heavy threads (FTC enforcement, state AG actions, current DealerRater/Google complaint patterns). Then feed the research findings into Claude with the instruction: "Take these research findings, apply them to my specific parameters (my zip code, my vehicle category, my dealer search radius), and produce the Dealer Intelligence Dossier Matrix and Pre-Visit Action Plan with maximum analytical rigor. Flag every assumption and source." Claude will catch gaps the other platforms missed and build cleaner structured output. Total time: 30-40 minutes. Value delivered: institution-grade competitive intelligence before your first dealer visit, defusing the anxiety that 52% of buyers feel about walking onto a lot.
How This Connects to the Weekly Posts
This Deep Research prompt is the investigation layer of Week 4. The three platform-specific posts (ChatGPT, Claude, Gemini) teach you three different prompt variations for dealer research and test drive preparation at Beginner, Intermediate, and Advanced difficulty — those prompts produce tactical, week-of-visit outputs like dealer checklists, test drive scorecards, and visit-timing scripts. The Deep Research prompt on this page goes deeper: it's designed for buyers who need the source-cited, benchmark-level data that backs up tactical decisions. Where the weekly posts give you the playbook, this prompt gives you the competitive intelligence.
Week 4 builds on Week 1's confirmed budget (from the "Should I Buy a Car Right Now?" prompts), Week 2's new-vs.-CPO decision (from "New vs. CPO: Let AI Make the Case"), and Week 3's financing readiness (from "Getting Your Money Right Before You Shop"). If you completed those weeks, you have a vehicle budget, a new-or-used choice, and a pre-approval rate. This week's Deep Research takes those decisions and builds the dealer-selection architecture that turns them into a shortlist of trusted dealers and a test drive protocol that forces the vehicle to reveal its true condition. The cross-platform comparison post for Week 4 ranked Claude as the publication winner at 84.5/100 (with Gemini at 82.75 and ChatGPT at 82.50), suggesting that Claude's strength in structured analytical output and Gemini's strength in real-time Google Maps data combine to make them the top choices for running this Deep Research prompt. That said, all three platforms deliver strong results when the prompt is as specific and carefully structured as this one.
Adaptability Tips: Using This Prompt for Other Decisions
1. Home Purchase — Realtor & Property Intelligence: Replace dealer-research threads with realtor-research threads. Thread 1 becomes "realtor licensing and disciplinary history," Thread 2 becomes "realtor reputation across Zillow, Google, and state realty board complaints," Thread 3 becomes "dark patterns in real estate listings (shadow inventory, fake photo staging, undisclosed easements)," Thread 4 becomes "inspection and escrow fee structures by state," Thread 5 becomes "home inspection and insurance underwriting," Thread 6 becomes "pre-visit home audit (inspection reports, comparable sales, neighborhood noise checks)," Thread 7 becomes "structured home tour protocol (cold-weather water intrusion test, basement moisture, electrical load)," Thread 8 becomes "realtor-experience benchmarking and offer-sequencing strategy." The eight-thread architecture transfers directly; only the specific sub-questions and sources change. Deliverables stay the same: intelligence dossier, audit methodology, comparison matrix.
2. Commercial Vendor Selection — RFP-Driven Evaluation: Adapt the prompt for enterprise software vendors, manufacturing suppliers, or professional services. Thread 1 becomes "vendor industry positioning and market stability (private, PE-backed, public, startup — with solvency risk analysis)," Thread 2 becomes "customer satisfaction forensics (G2, Capterra, industry-specific review sites, case study sourcing)," Thread 3 becomes "contract dark patterns (hidden renewal clauses, data lock-in, surprise fees)," Thread 4 becomes "fee architecture and hidden costs," Thread 5 becomes "implementation and support quality verification," Thread 6 becomes "pre-sales due diligence (reference checks, architecture review, integration audit)," Thread 7 becomes "pilot project design and success metrics," Thread 8 becomes "vendor interaction quality and negotiation leverage." The RFP response becomes your Dossier Matrix. The procurement timeline becomes your Action Plan.
3. Medical Specialist — Second Opinion & Provider Selection: Adapt for high-stakes healthcare decisions (elective surgery, specialist referral). Thread 1 becomes "specialist credentials, board certification, and malpractice history," Thread 2 becomes "patient experience reviews (Healthgrades, Zocdoc, patient forums)," Thread 3 becomes "facility quality and insurance acceptance transparency," Thread 4 becomes "procedural cost structure and surprise billing risk," Thread 5 becomes "outcome data verification and complication rates," Thread 6 becomes "pre-consultation audit (medical records review, second opinion feasibility)," Thread 7 becomes "consultation protocol and informed consent quality," Thread 8 becomes "decision sequencing if you're seeing multiple specialists." Deliverables: provider dossier matrix, pre-consultation preparation checklist, decision-making timeline.
4. Commercial Contractor — Build Quality & Insurance Verification: For home renovation, commercial construction, or major maintenance projects, adapt to contractor research. Thread 1 becomes "contractor licensing, bonding, and insurance verification," Thread 2 becomes "contractor reputation on Yelp, Google, Angie's List with timeline analysis (old reviews vs. recent)," Thread 3 becomes "contract dark patterns (scope creep, change-order clauses, payment schedules that favor contractor)," Thread 4 becomes "typical cost overruns and fee structures in your market," Thread 5 becomes "workmanship quality verification (references, previous jobs, warranty terms)," Thread 6 becomes "pre-proposal audit (site inspection, material sourcing, timeline realism)," Thread 7 becomes "inspection protocol during work (progress documentation, compliance checks)," Thread 8 becomes "contractor-selection sequencing and negotiation." Same architecture, different domain. The Dossier Matrix ranks contractors. The Action Plan is your project timeline with contractor checkpoints.
Follow-Up Prompts
Follow-Up 1 — "Build the Walk-Away Script": Once you have the Deep Research output and the Dealer Intelligence Dossier Matrix, ask: "Using the dealer reputation and pressure-tactic patterns from the Deep Research, generate a word-for-word conversation script I can use to gracefully exit a dealership visit if the experience becomes high-pressure, deceptive, or the vehicle doesn't meet my standards. Include responses for three common pressure scenarios: (a) 'This price is only good today,' (b) 'Let me talk to my manager,' (c) 'You're really close on the trade-in — let me see what I can do.' For each, provide a polite but firm exit line that preserves negotiating leverage if I decide to return." This turns research into battle-ready dialogue that removes emotional vulnerability.
Follow-Up 2 — "Stress-Test the Dealer Recommendation": Ask: "The Deep Research recommends Dealer X. Now stress-test that recommendation by changing one variable at a time: (a) A new negative review posts on Google Reviews from last week that I missed, (b) A state Attorney General enforcement action against that dealer group is announced (hypothetically), (c) The specific vehicle I wanted sells before my visit, (d) I discover the dealer has a history of bait-and-switch pricing. For each scenario, does the recommendation change, and which of my backup dealers becomes the new first choice?" This produces decision resilience against real-world noise and teaches you decision stability.
Follow-Up 3 — "Build the Test Drive Score Comparison Workbook": Ask: "Convert the structured test drive route from the Deep Research into a printable one-page scorecard that I can fill in during or immediately after each test drive. Include columns for: (a) Vehicle basics (year, make, model, VIN), (b) Cold-start diagnostic (rough idle, smoke, transmission engagement), (c) Highway performance (acceleration, merge, road noise, suspension stability), (d) Surface street performance (braking, steering, visibility), (e) Overall fit and feel (comfort, tech usability, smell/condition), (f) Deal-breaker flags (any item scored below 3 out of 5). At the bottom, include a 'Overall Vehicle Score' calculation formula and a comparison section where I can paste results from all vehicles tested. Turn raw 1-5 scores into a side-by-side recommendation matrix." This converts research into an operational tool for the dealership visit itself.
Metadata
Topic: Researching Dealers and Test Driving Like a Pro — Dealer Intelligence & Vehicle Validation via Deep Research
Week: Week 4 of 7 ("AI at the Dealership: 7 Weeks of Prompts That Could Save You Thousands")
Series: AI at the Dealership
Content Type: Deep Research methodology + prompt breakdown + follow-up prompts
Platform Compatibility: ChatGPT Deep Research (GPT-5 or GPT-4 Turbo with web search), Claude Research mode (Opus 4.6 / Sonnet 4.6), Google Gemini Deep Research (Gemini 2.5 Pro)
Prerequisite: Week 1 ("Should I Buy a Car Right Now?") for confirmed budget; Week 2 ("New vs. CPO") for vehicle category decision; Week 3 ("Getting Your Money Right Before You Shop") for pre-approved financing. Recommended to complete all three before running this prompt.
Tags: Deep Research, dealer research, dealership selection, test drive engineering, pre-visit audit, dealer reputation, enforcement actions, digital dark patterns, ghost car listings, CPO verification, FTC CARS Rule, dealer fees, multi-dealer comparison
Categories: Car Buying, Consumer Intelligence, AI Research Methodology, Consumer Protection, Dealership Strategy
Difficulty Levels of Related Posts: Beginner (Week 4 ChatGPT variation), Intermediate (Week 4 Claude variation), Advanced (Week 4 Gemini variation); this Deep Research post sits above the Advanced tier for research-intensive buyers.
Reading Time: 20-24 minutes to read this post; 8-15 minutes to run the prompt; 60-90 minutes total to work through the output and execute the Pre-Visit Action Plan
SEO Title (under 60 characters): Deep Research: Dealer Intelligence & Test Drive Strategy
SEO Description (150-160 characters): Use Deep Research to investigate dealerships across eight threads: enforcement history, reputation, dark patterns, fees, CPO checks, test drive strategy.
Publication Date: May 3, 2026
Last Updated: May 3, 2026