API vs Web Quality Issues

Dear Perplexity Support Team,

I have tried reaching out by email and Discord, unfortunately with no success.

I’ve been testing the Sonar Pro API and related models (for example, Reasoning Pro) via OpenRouter.ai and labs.perplexity.ai across approximately 20 use cases, particularly for data enrichment and company intelligence tasks.

Unfortunately, the output is consistently much weaker than the web search experience on perplexity.ai with a Pro account. The depth, reasoning, and accuracy of the browser version are significantly superior.

As an example, I tried executing the following prompt via Sonar Pro (labs.perplexity.ai):

Identify the current global employee head-count for xx (xxx.ch).
Use multiple sources (website, reports, directories, etc.), apply reasoning, and return the result as a clean JSON object with “Employee Range” and optionally “Employee Count”.
Rules: No prose, no markdown, no extra output.

However, while perplexity.ai returned a clear, reliable output:

{"Employee Range": "Mid (101-250)", "Employee Count": 81}

…the same prompt on labs.perplexity.ai using Sonar Pro either failed to gather proper sources or produced incomplete or inferior results.
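
For reference, this is roughly how I call it through OpenRouter. The model slug and the client setup below are my assumptions (verify the slug on openrouter.ai), so adjust to whatever your account actually exposes:

```python
# Minimal sketch of the OpenRouter call, assuming the openai Python client
# and the "perplexity/sonar-pro" model slug (check the slug on openrouter.ai).
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",  # placeholder
)

prompt = (
    "Identify the current global employee head-count for xx (xxx.ch). "
    "Use multiple sources (website, reports, directories, etc.), apply reasoning, "
    "and return the result as a clean JSON object with \"Employee Range\" and "
    "optionally \"Employee Count\". Rules: No prose, no markdown, no extra output."
)

response = client.chat.completions.create(
    model="perplexity/sonar-pro",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```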

I’ve attached screenshots for reference.

Could you please clarify:

  1. Why the Perplexity Labs/API experience is not aligned with the core search performance?
  2. Whether, and if so when, parity between the web version and the API/Labs can be expected?
  3. Whether full web access and source reasoning are being used in Labs/API calls?
  4. What I can do to get the full capability of the web version?

Thank you for your time and support.

Thanks,
Dani SYED


I have the same issue. I haven’t been able to find any kind of proper response. No matter how I try it, the API output is always garbage.

If you know of any other LLM API that also has live internet access, please do share.


I’ve encountered the same issue and have been experimenting with fine-tuning the API parameters to replicate the citation quality seen in the web UI. These are the parameters that yielded decent, though not optimal, results (a sketch of the full request follows the list):

- Model: sonar-reasoning-pro
- presence_penalty: 2
- top_p: 0.1
- Search context size: low
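
In case it’s useful, this is roughly how I send those parameters to the chat-completions endpoint (Python + requests). Note that mapping “Context Size: Low” to `web_search_options.search_context_size` is my assumption, so double-check it against the current API docs:

```python
# Rough sketch of the request; the web_search_options mapping for
# "Context Size: Low" is an assumption, not confirmed from the docs.
import requests

API_KEY = "YOUR_PERPLEXITY_API_KEY"  # placeholder

payload = {
    "model": "sonar-reasoning-pro",
    "messages": [{"role": "user", "content": "Your research prompt here"}],
    "presence_penalty": 2,
    "top_p": 0.1,
    "web_search_options": {"search_context_size": "low"},  # assumed mapping
}

response = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```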

Has anyone discovered more effective parameter configurations, tested the search API, or perhaps utilized a web scraper for better results? I’d greatly appreciate your insights and suggestions!

I am here because I, too, am searching for answers, struggling to improve my Sonar Pro results.
I mostly use it for company research by providing the URL of a prospect’s domain.

What it then does, apparently, is start “searching” for a similar “company name” and mix up the results from the URL with search results it found online, sometimes from similar domains but different companies.

It makes up non-existent domain names when asked for competitors, despite my efforts to tell it NOT to invent any domains it has not visited, and to only provide live, verified domain names it actually visited.

Super frustrating, especially since Perplexity is supposed to provide real data, as opposed to OpenAI models.

Now I’m considering adding Firecrawl to my workflows and other deterministic architecture solutions I’d rather avoid, like scraping live SERPs… but I guess I have to.
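
In the meantime, the cheapest deterministic guard I’ve found is to post-filter whatever domains the model returns and drop anything that doesn’t actually resolve in DNS. A minimal sketch (the candidate list is just illustrative):

```python
# Crude post-filter: keep only domains that actually resolve in DNS.
# This won't catch parked or wrong-company domains, but it removes pure hallucinations.
import socket

def domain_resolves(domain: str) -> bool:
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        return False

# Illustrative input: competitor domains parsed from the model's answer
candidates = ["example.com", "some-hallucinated-competitor.ch"]
verified = [d for d in candidates if domain_resolves(d)]
print(verified)
```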

</rant over>