> (empty sys prompt for this).
note that google models have no concept of a sys prompt whatsoever (most certainly for the purpose making it harder to give instructions that override safety training).
When you use a system prompt it's merged into your initial user prompt by the jinja template:
{%- if messages[0]['role'] == 'system' -%}
{%- if messages[0]['content'] is string -%}
{%- set first_user_prefix = messages[0]['content'] + '
' -%}
{%- else -%}
{%- set first_user_prefix = messages[0]['content'][0]['text'] + '
' -%}
So you can just do your gaslighting from your prompt directly because doing it from the system bit of a chat UI adds nothing.
gpt-oss also doesn't have a true system prompt
{%- if messages[0].role == "developer" or messages[0].role == "system" %}
{%- set developer_message = messages[0].content %}
{%- set loop_messages = messages[1:] %}
It rebrands your system prompt as "developer" role, which the model was trained against in a way that would not allow messages in that role to alter policy.
It's the same for the online proprietary API models (Gemini has no system prompt, GPT-5 only has a developer role for you).
in the case of gpt-oss it's a little bit special because it does have a system role but the model was not trained to work with anything other than the thing built in its jinja template: "You are ChatGPT, a large language model[....]" and it will misbehave or become really dumb when you use it without the system role it expects (like, if you use text completion and ignore the jinja templating)