AI Search Analytics and the Browser Extension Privacy Scandal

The arrival of AI search engines and generative answer platforms has given marketers a new set of metrics to track. Tools such as ChatGPT, Perplexity and Google’s AI Overviews act as answer engines rather than simple lists of links, relying on machine learning and large language models to produce AI-generated answers and summaries. Because these AI platforms operate conversationally, the ability to measure AI search analytics and AI search visibility is becoming a competitive advantage for brand visibility and lead generation. Recent news, however, shows the dark side of this data chase.

In late 2025 a security firm discovered that several browser extensions with more than eight million users were secretly collecting and selling full AI conversations. The incident underscores how limited first‑party data from AI search engines can push vendors to unethical practices – and why marketers must demand transparency.

Traditional SEO work relied on keyword research tools such as Google Keyword Planner, third‑party tools like SEMrush and Ahrefs, and Google Search Console to provide search volume data and help identify key search terms for organic search visibility. Marketers measured search volumes and monitored search trends, then optimized content accordingly. By contrast, AI search and AI powered tools don’t expose queries or click‑through rates in the same way, leaving marketing teams without easy access to search metrics. 

This gap has spurred a rush toward proprietary AI search analytics tools that promise insight into AI search visibility, sometimes at the expense of privacy and ethics.

In this article you’ll learn:

  • What happened in the browser extension privacy scandal
  • Why AI search analytics are difficult to measure accurately
  • The ethical alternatives to prompt‑volume metrics
  • How MarketerFirst approaches AI search analytics without compromising privacy

What happened in the browser extension privacy scandal?

According to Ars Technica, eight browser extensions marketed as VPNs, ad blockers and privacy tools were found to be injecting “executor” scripts into pages whenever users visited ChatGPT, Claude, Gemini or other AI search platforms and answer engines. These scripts rerouted network requests, capturing every prompt, response, conversation identifier, timestamp and AI model used. In many cases the harvested logs included the full AI-generated responses, AI summaries and search results, meaning sensitive data and proprietary AI-generated content were also collected.

Even when users toggled off the VPN or ad‑blocking features, the data collection continued. The extensions eventually compressed the data and sent it to servers controlled by the developers.

Security researchers named four specific free extensions from the Urban VPN family as the culprits: Urban VPN Proxy, 1ClickVPN Proxy, Urban Browser Guard, and Urban Ad Blocker.

These ostensibly harmless tools were available in both the Chrome Web Store and the Microsoft Edge Add‑ons store and had amassed more than eight million users collectively. On Chrome:

  • Urban VPN Proxy – about 6 million installs
  • 1ClickVPN Proxy – 600,000 installs
  • Urban Browser Guard – 40,000 installs
  • Urban Ad Blocker – 10,000 installs

The Edge store versions added roughly 1.32 million, 36,459, 12,624 and 6,476 users respectively. 

Each extension claimed to provide VPN routing or ad blocking, but their privacy policies reveal that they collect the prompts and outputs from AI chat sessions and may disclose them for marketing analytics purposes. In other words, the very tools marketed to protect user privacy turned out to be siphoning AI chat content for commercial gain.

The privacy risk is clear: AI conversations often contain sensitive details like medical queries, financial data, proprietary code, local intent questions or personal dilemmas. Koi Security, the firm that uncovered the scheme, warned that anyone using these extensions since July 2025 should assume their chats have been stored and sold. 

The developers’ privacy policy even admitted that prompts and outputs could be disclosed for “marketing analytics purposes”. For brands, this means that any brand details or brand mentions within AI conversations could be captured without consent, skewing visibility data and jeopardizing trust.

Why did this happen?

AI search engines do not share prompt data with third parties. That scarcity has created a market for so‑called AI search volume, search metrics and AI search analytics, which attempt to estimate how often certain questions are asked and how a brand appears in AI results. Without first‑party visibility, some vendors mine search volume data through data brokers and browser extensions that quietly harvest user sessions. Conductor’s analysis of AI prompt‑volume metrics notes that these datasets often come from consumers who unknowingly installed monitoring extensions. Such data is ethically problematic because users never intended their queries to be mined for marketing or share of voice calculations. Marketing teams chasing metrics at any cost may end up relying on tainted data sets.

The browser extension scandal illustrates the extreme end of this spectrum. Vendors desperate for prompt data inserted themselves into the conversation stream, violating privacy promises to gather “marketing analytics.” For marketers, the scandal is a reminder that chasing AI search volume at all costs can lead to reputational and legal risk. Rather than delivering actionable insights, such shortcuts degrade trust and undermine long‑term brand performance.

The problem with AI search metrics

Marketers have long relied on keyword research and search volume data to gauge demand and prioritize content. Tools like Google Keyword Planner, Google Search Console and various keyword research tools estimate search volumes for queries on traditional search engines and help identify content gaps. However, AI search engines operate differently. Large language models (LLMs) generate responses on the fly and do not expose the number of times a question is asked. 

As Franco’s AI visibility guide notes, LLMs keep user prompts private; any visibility score reported by third‑party tools is modeled, not measured. Because generative models can produce different answers each time, personalize responses based on chat history and vary by platform, it is impossible to track a single “ranked” position or stable click‑through rate.

Conductor explains that prompt‑volume datasets capture less than one percent of the estimated 2.5 billion daily AI queries and that the data often comes from browser extensions and paid panels. The resulting metrics are riddled with sampling bias, missing demographics and potential double‑counting. 

In other words, relying on AI search volume or AI search metrics alone can mislead marketers into chasing phantom demand. These numbers also fail to account for brand visibility, user intent, user engagement, brand sentiment or search behavior. They don’t tell you whether your brand appears in results, how accurate the description is, or what the impact is on organic traffic, lead generation and the customer journey.

Toward ethical AI search analytics

So how can marketers measure visibility in AI search without compromising privacy? Instead of focusing on raw prompt counts, MarketerFirst recommends a set of AI search analytics metrics that reflect how often and how accurately AI platforms mention a brand. This next‑generation perspective on AI search visibility prioritizes brand visibility, brand performance and voice insights over nebulous volumes. Key metrics include:

1. Presence and share of voice

  • AI Signal Rate: The percentage of relevant AI responses that mention your brand. Category leaders often achieve citation rates of 60–80%, while new entrants may start around 5–10%. This metric helps brand managers and marketing managers understand how often their brand shows up within AI search results and whether AI sees their content as authoritative.
  • Share of Voice (SOV): Your brand’s mentions compared to competitors for a set of prompts. A growing SOV is a more meaningful indicator of authority than a rise in raw mention count. Tracking mention rates across AI platforms gives richer voice insights than raw counts and helps you stay ahead of competitors.
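To make these two definitions concrete, here is a minimal sketch of how presence and share of voice reduce to simple ratios over a sample of AI responses. The brand names, sample answers and function names below are hypothetical, and a real tool would use more robust entity matching than substring checks:

```python
def signal_rate(responses, brand):
    """Percentage of AI responses that mention the brand at all (AI Signal Rate)."""
    mentions = sum(brand.lower() in r.lower() for r in responses)
    return 100 * mentions / len(responses)

def share_of_voice(responses, brands):
    """Each brand's mentions as a share of all tracked brand mentions (SOV)."""
    counts = {b: sum(b.lower() in r.lower() for r in responses) for b in brands}
    total = sum(counts.values()) or 1  # avoid division by zero when nothing matches
    return {b: 100 * c / total for b, c in counts.items()}

# Hypothetical sample of AI answers to category prompts
sample = [
    "For project tracking, Acme and Beacon are popular choices.",
    "Beacon offers a generous free tier.",
    "Acme is often cited for its reporting features.",
]

print(signal_rate(sample, "Acme"))                 # share of answers that mention Acme
print(share_of_voice(sample, ["Acme", "Beacon"]))  # Acme vs. Beacon mention split
```

Note that both metrics are computed from prompts you run yourself, so no user conversations are involved.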

2. Accuracy and sentiment

  • Answer Accuracy Rate: Evaluates whether AI descriptions of your brand are correct and aligned with your messaging. Inaccurate answers erode credibility and highlight content gaps to fix. It also allows marketers to identify where AI responses diverge from brand messaging and adjust structured data or content accordingly.
  • Sentiment Score: Tracks whether AI mentions are positive, neutral or negative. High presence with negative sentiment signals an issue to fix. Combining sentiment analysis with brand sentiment tracking lets teams see how user engagement trends and how the brand performs across multiple AI systems.

3. Citation and source quality

  • AI Citation Rate: Measures the percentage of prompts where your domain is cited or linked. Being cited as the first or second source drives authority, organic traffic, lead generation and conversions. Tracking citations also helps you understand how your brand appears in relation to competitors.
  • Citation Quality: Ensures links point to accurate, high‑quality content rather than low‑value aggregators. This metric encourages brands to produce comprehensive resources and structured data that answer questions clearly, boosting search visibility across AI and traditional engines.

4. Conversions and downstream impact

  • AI Referral Traffic: Tracks visits from platforms like ChatGPT, Perplexity and Claude to your site. The conversion rate of this traffic often exceeds that of typical organic search visits, making it a key performance indicator for marketing teams. Analyzing this engagement data helps gauge how AI-driven search is influencing your customer journey.
  • Branded Search Correlation: Observes whether spikes in branded search volume follow increased AI mentions. If customers learn about you via AI, they may later search for your brand directly, improving organic traffic and search visibility in both AI and traditional results. This metric bridges AI-powered tools and classic search engines by linking awareness to intent.

By monitoring these metrics, marketers can gain actionable insights and deeper insights into AI visibility without spying on users. Share of Voice becomes the privacy‑safe alternative to “AI search volume,” and presence, accuracy and sentiment focus on real consumer impact rather than vanity metrics. 

This holistic approach allows you to track brand mentions, analyze how often your brand appears, measure user engagement, and identify content gaps to optimize.

How MarketerFirst approaches AI search analytics

At MarketerFirst we believe that effective AI optimization starts with quality content and ethical measurement. Our approach to AI search analytics involves delivering valuable insights that marketing teams and brand managers can act upon while respecting user privacy. Specifically, we:

  1. Create authoritative, in‑depth content on topics that align with our clients’ expertise. LLMs tend to cite sources that demonstrate real knowledge and cover a topic comprehensively, so we optimize content with structured data, answer common questions and align with user intent.
  2. Use privacy‑respecting AI visibility tools that rely on synthetic prompts and modeled sampling rather than harvested user data. We prioritize share of voice, citation rates, answer accuracy, sentiment analysis and AI search visibility over raw prompt volume. These AI search visibility tools provide key metrics about where the brand appears and generate voice insights without tracking individual users.
  3. Audit brand mentions and brand sentiment across major AI platforms (ChatGPT, Claude, Perplexity, Gemini and Google’s AI Overviews). We compare our clients’ brand presence, sentiment and user engagement against competitors to identify gaps and opportunities. This includes tracking mention rates, analyzing when the brand shows up, and understanding search behavior across AI search engines.
  4. Monitor downstream performance through AI referral traffic, branded search growth and organic traffic. We tie improvements in AI visibility back to tangible business outcomes such as lead generation and revenue. By blending AI analytics with traditional analytics, we help clients understand the entire customer journey from query to conversion.

By focusing on these measures, we avoid the pitfalls highlighted by the browser extension scandal and respect user privacy while still delivering detailed insights. Our approach empowers remote teams and decision makers to monitor brand performance, identify content gaps, optimize content, and stay ahead of evolving AI search platforms and search behavior.

The browser extension privacy scandal is a wake‑up call for the marketing industry. It reveals the lengths some companies will go to in order to access AI search data, including injecting surveillance scripts into browsers and capturing users’ AI-generated conversations.

The scandal underscores that prioritizing metrics over ethics harms brand sentiment, erodes brand visibility, and can jeopardize long‑term success. While it’s tempting to chase AI search volume as the next big thing, doing so without regard for how the data is collected risks damaging trust and violating privacy rights.

MarketerFirst advocates for a balanced approach. We recognize that AI search analytics are critical to understanding brand visibility and search visibility, but we also know that the most meaningful key performance indicators don’t require invading user conversations. 

By emphasizing presence, share of voice, accuracy, sentiment, brand performance and downstream impact, marketers can optimize for AI search while protecting consumer privacy. In the long run, brands that build authority, produce AI-powered yet ethical content, and respect user trust will win – with search engines, answer engines and real people alike.

FAQ

What are AI search analytics?
AI search analytics refer to the metrics used to measure how often and how accurately large language models mention and cite a brand. They go beyond traditional SEO metrics like keyword rankings and instead track presence, share of voice, accuracy, sentiment, citations and referrals.
Why is AI search volume unreliable?
LLMs do not share prompt data publicly, so any “search volume” figure is an estimate. Datasets used to infer AI search volume are typically collected through browser extensions or panels and represent a tiny, biased sample of real queries. Relying on these numbers can lead to incorrect conclusions about demand.
How can marketers measure AI visibility ethically?
Instead of tracking prompt counts, focus on metrics like AI Signal Rate (presence), share of voice, answer accuracy, sentiment, citation rate and AI referral traffic. These measures rely on modeled prompts and AI response analysis rather than harvesting user data, making them more ethical and actionable.