PerplexityPro response vs SonarPro response

:bug: Describe the Bug

The responses I am getting from Sonar Pro are extremely low quality and very different from PerplexityPro for the exact same question.

:white_check_mark: Expected Behavior

Answers similar in quality and content to those from Perplexity Pro

:cross_mark: Actual Behavior

I understand that, due to the non-deterministic nature of LLMs, answers may not be exactly the same. But the answers I am receiving are way off.

:counterclockwise_arrows_button: Steps to Reproduce

  1. Prompt: "List companies or startups that help users discover events or meet new people in India?"
  2. Compare the answer from Perplexity Pro on perplexity.ai with the answer from the sonar-pro API.

Hey @Aishwarya_Sharma β€” the quality difference you are seeing between Perplexity Pro (the web product) and the sonar-pro API is expected, not a bug.

Perplexity Pro in the browser runs a multi-step Pro Search pipeline β€” multiple searches, page fetching, cross-referencing, and synthesis. A standard sonar-pro API call does a single-step search, which produces faster but less thorough results.

To get quality closer to the web experience, use the Agent API with the pro-search preset:

from perplexity import Perplexity

client = Perplexity()
response = client.responses.create(
    preset="pro-search",        # run the multi-step Pro Search pipeline
    input="Your question here"
)
print(response.output_text)

This runs the same multi-step reasoning pipeline that powers the web product. You can also use sonar-pro with search_type="pro" and stream=True if you prefer the Sonar API format.
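If you prefer the Sonar API format, the sonar-pro variant can be sketched roughly like this. This is a minimal sketch, not verified against the current API reference: it assumes the OpenAI-compatible chat-completions endpoint at https://api.perplexity.ai/chat/completions, and it assumes search_type is passed as a top-level request-body field — double-check the docs for the exact field placement before relying on it.

```python
import json
import os
import urllib.request

# Assumed endpoint for the Sonar API (OpenAI-compatible chat completions).
API_URL = "https://api.perplexity.ai/chat/completions"


def build_payload(question: str) -> dict:
    """Build a sonar-pro request body with Pro Search enabled.

    The search_type/stream fields follow the reply above; their exact
    placement in the body is an assumption -- verify against the docs.
    """
    return {
        "model": "sonar-pro",
        "search_type": "pro",  # multi-step Pro Search (assumed top-level field)
        "stream": True,        # stream tokens as they are generated
        "messages": [{"role": "user", "content": question}],
    }


def build_request(question: str) -> urllib.request.Request:
    """Construct (but do not send) the HTTP request for a question."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('PERPLEXITY_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )
```

Sending the request (e.g. with urllib.request.urlopen or any HTTP client) then streams the sonar-pro answer back; only the model name and the general endpoint shape here come from the reply above, the rest is scaffolding.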

The tradeoff is cost β€” Pro Search uses more compute and web searches, so it is pricier per request. For simple factual queries, standard sonar-pro is often sufficient. For research-grade answers, the pro-search preset is the way to go.

Docs: Agent API Presets | Pro Search