Dear Perplexity Team,
I noticed some very odd behavior in responses from the Perplexity Agent API with the deep-research and advanced-deep-research presets:
While Opus 4.6 and GPT 5.4 produced the expected final output in the usual Perplexity quality, the final output generated by Gemini, regardless of which deep-research preset was used, contained system instructions rather than user-facing content. Can you confirm whether this strongly differing behavior is tied to the model?
For further sources and details, see the attached PDF documents, which contain the detailed responses and metadata from an A/B test I ran to compare responses across models and presets. I used the same script for the PDF generation, varying only the model. Link to PDFs
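In case it helps with reproduction, here is a minimal sketch of the kind of loop my A/B script runs. The endpoint URL, the "preset" request field, and the model identifiers below are placeholders mirroring the setup described above, not the exact values from my script:

```python
# Minimal repro sketch - endpoint, "preset" field, and model IDs are
# assumed placeholders, not confirmed Agent API values.
import itertools
import os

import requests

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint
API_KEY = os.environ["PERPLEXITY_API_KEY"]

MODELS = ["claude-opus-4.6", "gpt-5.4", "gemini"]  # hypothetical model IDs
PRESETS = ["deep-research", "advanced-deep-research"]

PROMPT = "..."  # the identical research prompt used for every run

for model, preset in itertools.product(MODELS, PRESETS):
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model,
            "preset": preset,  # assumed field name for the preset
            "messages": [{"role": "user", "content": PROMPT}],
        },
        timeout=600,
    )
    resp.raise_for_status()
    answer = resp.json()["choices"][0]["message"]["content"]
    # Print the start of each final answer so leaked system
    # instructions are easy to spot when comparing runs.
    print(f"{model} / {preset}:\n{answer[:200]}\n")
```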
Looking forward to your response - great things you are cooking rn!