Hi, newcomer to Perplexity here. I’m here because OpenAI couldn’t do web searching very well.
I’ve been playing around with the free chat and Perplexity is doing brilliantly well. I’m asking it to summarise current events using prompts like:
“Using live web searching and the latest news reports, summarise the latest events in the Iran / US war in a single paragraph and create a .txt file for download with the results”
The results have been impressive…truly.
Before we sign up for Pro, I wanted to check: can we achieve everything we’ve been able to do in the chat interface via the API?
Forgive the caution, but some AI companies’ APIs are a subset of what’s available in chat, so I need to be sure. If you’re interested, we’re basically creating a WordPress plugin for a newsdesk that takes inputs via a form, creates a JSON request, and sends a prompt (like the one above) to Perplexity.
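For what it’s worth, here’s a rough sketch (in Python rather than the plugin’s PHP, just for illustration) of the JSON body such a plugin might build from the form input. This assumes Perplexity’s API is OpenAI-compatible Chat Completions (POST to `https://api.perplexity.ai/chat/completions`); the system prompt and `build_payload` helper are made up for the example.

```python
import json

def build_payload(form_prompt: str, model: str = "sonar") -> str:
    """Turn a newsdesk form prompt into the JSON body for the API call.

    Hypothetical helper: field names follow the OpenAI-style Chat
    Completions schema that Perplexity's API accepts.
    """
    body = {
        "model": model,
        "messages": [
            {
                # Example system prompt, not Perplexity's actual one
                "role": "system",
                "content": "Summarise using live web search and cite sources.",
            },
            {"role": "user", "content": form_prompt},
        ],
    }
    return json.dumps(body)

payload = build_payload(
    "Using live web searching and the latest news reports, "
    "summarise the latest events in a single paragraph."
)
```

In the actual plugin you’d send this body with `wp_remote_post()` and an `Authorization: Bearer <api key>` header, but the payload shape is the part that matters here.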
While using sonar-reasoning-pro through the Chat Completions API, I noticed significantly worse responses for a specific use case compared to Perplexity Web Chat. So I asked Perplexity why, and this is what I got:
I generated the previous answer using Perplexity’s own stack, where I am described as “Perplexity, powered by GPT‑5.1,” integrated with Sonar-style search and reasoning, not via a raw sonar-reasoning-pro Chat Completions call like the one you ran yourself.
Concretely, that means:
I’m running in Perplexity’s hosted environment, which wraps a GPT‑5.1–class model with a search/retrieval and tool layer (Sonar) plus a fairly opinionated system prompt and citation behavior.
The public sonar-reasoning-pro API you called is the same family of reasoning model used for Perplexity’s search experience, but your raw API call does not include Perplexity’s extra orchestration: automatic query decomposition, mandatory web search, URL selection, and answer-format constraints tailored to Q&A.
That gap explains why you saw a “worse” response from sonar-reasoning-pro: with only your messages and generic search results about […] , the model didn’t get the targeted searches and heuristics that my environment added automatically.
While the answer above may have been a hallucination, it seems like a reasonable explanation for why I was getting different results. I was able to fix it by switching to sonar and tweaking my prompts.
So the quick answer is “No, you can’t be guaranteed to get the same behavior.” However, you may be able to get close with prompt engineering, context management, chain-of-thought prompting, and so on, and by building a base of test cases to benchmark your results.
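To make the “base of test cases” idea concrete, here’s one minimal, hypothetical way to score API responses offline: pair each prompt with keywords a good answer should mention and measure coverage. The `coverage_score` helper and the sample keywords are illustrative placeholders, not anything Perplexity provides.

```python
def coverage_score(response: str, expected_keywords: list[str]) -> float:
    """Fraction of expected keywords that appear in the response
    (case-insensitive substring match). A crude but repeatable metric."""
    text = response.lower()
    if not expected_keywords:
        return 0.0
    hits = sum(1 for kw in expected_keywords if kw.lower() in text)
    return hits / len(expected_keywords)

# Example test cases; fill these in with real newsdesk prompts and the
# facts you expect a correct summary to cover.
test_cases = [
    {
        "prompt": "Summarise the latest events in a single paragraph.",
        "expected_keywords": ["ceasefire", "sanctions"],  # placeholders
    },
]
```

In practice you’d call the API once per test case, store each response, and track the scores over time as you change models (e.g. sonar vs. sonar-reasoning-pro) or reword your prompts.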