API vs Web Quality Issues

Dear Perplexity Support Team,

I have tried reaching out by email and on Discord, unfortunately without success.

I’ve been testing the Sonar Pro API and related models (Reasoning Pro, for example) via OpenRouter.ai and labs.perplexity.ai across approximately 20 use cases, particularly for data enrichment and company intelligence tasks.

Unfortunately, the output is consistently much weaker than the web search experience on perplexity.ai with a Pro account: the depth, reasoning, and accuracy of the browser version are significantly superior.

As an example, I ran the following prompt against Sonar Pro (labs.perplexity.ai):

Identify the current global employee head-count for xx (xxx.ch).
Use multiple sources (website, reports, directories, etc.), apply reasoning, and return the result as a clean JSON object with “Employee Range” and optionally “Employee Count”.
Rules: No prose, no markdown, no extra output.

While perplexity.ai returned a clear, reliable output:

{"Employee Range": "Mid (101-250)", "Employee Count": 81}

…the same prompt on labs.perplexity.ai using Sonar Pro either failed to gather proper sources or produced incomplete or inferior results.
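
To help you reproduce this, here is a minimal sketch of the kind of call I am making. It assumes the OpenAI-compatible chat completions endpoint at api.perplexity.ai with the sonar-pro model name; the API key and the company name are placeholders, and my actual setup (e.g., via OpenRouter) may differ slightly:

import requests

# Placeholder key; the company identifiers in the prompt are redacted as in my example above.
API_KEY = "pplx-..."

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "sonar-pro",
        "messages": [{
            "role": "user",
            "content": (
                'Identify the current global employee head-count for xx (xxx.ch). '
                'Use multiple sources (website, reports, directories, etc.), apply reasoning, '
                'and return the result as a clean JSON object with "Employee Range" and '
                'optionally "Employee Count". Rules: No prose, no markdown, no extra output.'
            ),
        }],
    },
    timeout=60,
)
# Print the model's answer, which should be the bare JSON object requested above.
print(resp.json()["choices"][0]["message"]["content"])
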

I’ve attached screenshots for reference.

Could you please clarify:

  1. Why is the Perplexity Labs API experience not aligned with the core search performance?
  2. When, if at all, can we expect parity between the web version and the API/Labs?
  3. Is full web access and source reasoning being used when the models are called via Labs or the API?
  4. What can I do to obtain the full capability of the web version through the API?

Thank you for your time and support.

Thanks,
Dani SYED