Strange instances of hallucinations where "Initialized" and "assistant" were inserted into Perplexity API responses

I’ve recently seen three instances where these words appear to have leaked into the output of the Perplexity API: two involving "Initialized" and one involving "assistant". In each case the prompt was a yes/no question, and the word was injected immediately after the "no"/"No" with no intervening space, followed by an explanation (even though our system prompt says to give no explanation, which is a separate issue). All three cases used the model llama-3.1-sonar-large-128k-online with a temperature of 0.0 and 500 max tokens.

Here is each case, including the user message and the assistant output:

> Question: "Is 'Broad Band Internet Service Provider, India' a mobile, cell phone, or telco provider? Examples providers include: T-Mobile, SafariCom, etc"
> Response: 'NoIntialized providers like Airtel, Jio Fiber, and others mentioned are primarily broadband internet service providers, not mobile or cell phone providers, although some of them may also offer mobile services as part of their broader telecommunications offerings.'
> Question: "Is 'Beam Telecom Pvt Ltd' a mobile, cell phone, or telco provider? Examples providers include: T-Mobile, SafariCom, etc"
> Response: 'noassistant\n\nBeam Telecom Pvt Ltd is not a mobile, cell phone, or telco provider in the traditional sense. Instead, it is a broadband internet service provider, offering high-quality entertainment and broadband services.'
> Question: "Is 'Broad Band Internet Service Provider, India' a mobile, cell phone, or telco provider? Examples providers include: T-Mobile, SafariCom, etc"
> Response: 'NoIntialized providers like Airtel, Jio Fiber, and others mentioned are primarily broadband internet service providers, not mobile or cell phone providers, although some of them may also offer mobile services as part of their broader telecommunications offerings.'
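
For what it's worth, none of these responses would pass a strict allow-list check on the three permitted answers (a minimal sketch for illustration only; the helper name is hypothetical, not from our actual pipeline):

```python
ALLOWED_ANSWERS = {"yes", "no", "i don't know"}

def is_well_formed(answer: str) -> bool:
    # The system prompt permits exactly one of three strings, with no explanation.
    return answer.strip().lower() in ALLOWED_ANSWERS

# The second response above fails both because of the leaked "assistant" token
# and because of the trailing explanation.
print(is_well_formed("noassistant\n\nBeam Telecom Pvt Ltd is not a mobile, ..."))  # False
```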

Each of these instances occurred using a prompt constructed with this code:

```python
def prompt(question: str) -> list:
    return [
        {
            "role": "system",
            "content":
                f"""\
                You are an AI cybersecurity analyst. You will be given a question that must be answered
                with a 'yes' or a 'no' response. If the question cannot be answered with a 'yes' or a 'no',
                or if you're not sure what the answer is, then reply with 'i don't know'.

                Rules:
                - You must reply with either 'yes', 'no', or 'i don't know'.
                - Do not give any explanations or elaborations.
                """
            ,
        },
        {"role": "user", "content": "Is Seattle, Washington in the Pacific Northwest?"},
        {"role": "assistant", "content": "yes"},
        {"role": "user", "content": "Is it the year 2004?"},
        {"role": "assistant", "content": "no"},
        {"role": "user", "content": "What is the weather today?"},
        {"role": "assistant", "content": "i don't know"},
        {"role": "user", "content": question},
    ]
```
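
With the parameters mentioned above, the request itself would look roughly like this (a sketch assuming the OpenAI-compatible Python client pointed at https://api.perplexity.ai; the exact client code isn't shown here):

```python
from openai import OpenAI

# Placeholder key; the base_url is Perplexity's OpenAI-compatible endpoint.
client = OpenAI(api_key="PERPLEXITY_API_KEY", base_url="https://api.perplexity.ai")

response = client.chat.completions.create(
    model="llama-3.1-sonar-large-128k-online",
    messages=prompt(
        "Is 'Beam Telecom Pvt Ltd' a mobile, cell phone, or telco provider? "
        "Examples providers include: T-Mobile, SafariCom, etc"
    ),
    temperature=0.0,
    max_tokens=500,
)
print(response.choices[0].message.content)  # e.g. 'noassistant\n\n...'
```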

I really have no idea what is going on here, whether this is a pure model hallucination or something else strange entirely. Has anyone else seen this?