Gemini Output in Agent API Response

Dear Perplexity Team,

I noticed some very odd behavior in responses from the Perplexity Agent API with the deep-research and advanced-deep-research presets:

While Opus 4.6 and GPT 5.4 produced the expected final output in the usual Perplexity quality, the final output generated by Gemini, regardless of which deep-research preset I used, contained system instructions instead of a user-targeted answer. Can you confirm whether this strongly differing behavior is connected to the model?

For further sources and details, see the attached PDF documents, which contain the detailed responses and metadata from an A/B test I ran to compare responses across models and presets. I used the same script for the PDF generation, varying only the model. Link to PDFs
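Conceptually, the test harness looked like the sketch below: one fixed prompt, every preset/model combination, and only the `model` field changing between runs. Note that the field names (`preset`, `model`, `messages`) and the model labels are illustrative assumptions on my side, not confirmed Perplexity Agent API parameters.

```python
from itertools import product

# Illustrative labels only -- not confirmed API identifiers.
PRESETS = ["deep-research", "advanced-deep-research"]
MODELS = ["opus-model", "gpt-model", "gemini-model"]

def build_request(preset: str, model: str, prompt: str) -> dict:
    """Build one A/B-test request payload; everything except `model`
    and `preset` stays fixed across runs."""
    return {
        "preset": preset,
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# The same research prompt for every run, so the model is the only variable.
runs = [
    build_request(p, m, "Identical research prompt for every run")
    for p, m in product(PRESETS, MODELS)
]
```

Each payload in `runs` can then be sent to the API and its response rendered to a PDF, which is how the attached documents were generated.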

Looking forward to your response - great things you are cooking rn!

Hey @ysimonhan — thanks for the detailed A/B testing and the attached PDFs. Gemini models should not be leaking system instructions into the final response output — that is definitely not expected behavior.

Please email api@perplexity.ai with a link to this thread and the Google Drive folder so the team can investigate the Gemini model integration with the deep-research presets.

In the meantime, the deep-research (GPT-5.2) and advanced-deep-research (Claude Opus 4.6) presets should both produce clean outputs, as you have already confirmed.