Anonymous
09/25/24(Wed)10:11:39 No.102547249 >>102547133
Nothing stops you from trying, but I wouldn't expect a night-and-day difference. It may work at the start, but as the context fills up with 'safe' tokens, the output will follow.
Text completion has no template, and you can use an instruct/chat model as a completion model. The problem is that anything that steers it even slightly toward 'assistant mode' will cascade, and it will start rejecting things.
I remember, back when I first started using LLMs, using instruct models as completion models with few-shot examples.
Some setup.
char1: dialog...
anon: dialog...
char1: dialog...
anon:
Set "\nchar1:" as the input suffix string, set "\nanon:" as the reverse prompt, and that's pretty much it. You still get to dialog with the thing but, hopefully, avoid all the steering tokens. You can add new characters on the fly; you could even play as more than one by adding more reverse prompts.
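The loop is simple: stream tokens, stop as soon as the output ends with a reverse prompt, hand control back to the user, then append the input suffix after their turn. A toy sketch of that logic (the generate() stub stands in for a real model like llama.cpp's; all names here are made up):

```python
REVERSE_PROMPTS = ["\nanon:"]  # stop and hand control back to the user here
INPUT_SUFFIX = "\nchar1:"      # appended after each user turn

def generate(context):
    # Stub: a real model would continue `context` token by token.
    # This just streams a canned reply to demonstrate the loop.
    for tok in [" Hello", " there.", "\n", "anon:", " never reached"]:
        yield tok

def complete_until_reverse_prompt(context):
    """Accumulate tokens, stopping the moment the output ends with a
    reverse prompt; the reverse prompt itself is stripped off."""
    out = ""
    for tok in generate(context):
        out += tok
        for rp in REVERSE_PROMPTS:
            stripped = rp.strip()
            if out.rstrip().endswith(stripped):
                # cut off the reverse prompt and stop generating
                return out.rstrip()[:-len(stripped)].rstrip()
    return out

context = "char1: hi\nanon: hey\nchar1:"
reply = complete_until_reverse_prompt(context)
# the reply belongs to char1; the next user line gets INPUT_SUFFIX appended
context += reply + "\nanon: <user input>" + INPUT_SUFFIX
```

The point is that the model never gets to write the user's lines: generation is cut the instant it tries, and the suffix forces the next turn back to a character name.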
I don't know if it worked well because of that token avoidance or because I was using a model that didn't care about those things anyway. I never had an out-of-character rejection (a good character would still refuse to kill another person, while a bad one wouldn't).