Welcome to Chatbot AI General #97, the thread for discussing and improving AI pony chatbots.

▶ Mare's Lurking Presence
https://mlpchag.neocities.org
Spreadsheet Succubi (CAI bots + Old repository): https://docs.google.com/spreadsheets/d/1J7BeqJVDS51cXF8Pgm2YZaFq-Z6ykSJT
Crypt AI converted to Tavern of Terrors: https://files.catbox.moe/ckurq1.zip
Haunted Emotions: https://rentry.org/ChagExpressions
Hall of Horrors: https://drive.google.com/drive/u/2/folders/1Ao-h5HFGMPllSrzSBKM_BvGSiU9f0c2U

▶ How do I start?
1) Select a Frontend
2) Select an AI model
3) Select a Jailbreak
4) Select bots
5) Spooky Lovemaking with AI mares!
Starting in this hobby can be confusing and difficult. If it's your first time and you're lost,
▶ ASK THE THREAD! ◀
From Foal to Fiend—A Dark Arts Guide: https://rentry.org/onrms

▶ Tavern of Terrors (preferred frontend)
https://github.com/SillyTavern/SillyTavern
On Android: https://rentry.org/STAI-Termux
App that voices pony responses in ST: https://drive.google.com/drive/folders/16Ss26VBmgzcSuTGzhaHqRuyVRceTf-YB

▶ More Frightends:
Rotten Risu: https://risuai.xyz
Abnormal Agnai: https://agnai.chat

▶ Lovecraftian Local Lore
>>>/g/lmg/
Mistral Nemo base model fine-tuned on fimfics: https://huggingface.co/Ada321/Nemo_Pony_2/tree/main

▶ Petrifying Prompt Puzzles
MLP JB: https://rentry.org/znon7vxe
More JBs and guides: https://rentry.org/jb-listing
Hypebots for Tavern: https://rentry.org/pn3hb

▶ Chilling Card Crafter
Editors: https://agnai.chat/editor
Guides: https://rentry.org/meta_botmaking_list
Advanced: https://rentry.org/AdvancedCardWritingTricks

▶ /chag/ Cursed Chronicles
https://rentry.org/ChagArchive
Eerie Echoes: >>41568022

▶ Current Curse: Nightmare Night
- Anthropic releases claude-3-5-sonnet-20241022, with Haiku 3.5 later this month: https://docs.anthropic.com/en/docs/about-claude/models
- Grok-2 is out on API: https://docs.x.ai/api
- OpenAI introduces the Realtime API: https://openai.com/index/introducing-the-realtime-api/
- OpenAI has released o1-preview, trained with a built-in CoT prompt: https://openai.com/index/introducing-openai-o1-preview
Anchor for bots, lorebooks, scenarios.
Previous >>41568024
Anchor for technical stuff (proxies, updates, models, etc.)
Anchor for asking for bots, lorebooks, scenarios etc.
Bots corner:
>Rainbow Factory Dash
https://mlpchag.neocities.org/view?card=Snowfilly/Rainbow%20Factory%20Dash.png >>41570483
>Nightmare Night Simulator
https://mlpchag.neocities.org/view?card=anonistrator/Nightmare%20Night%20Simulator.png >>41584682

Screenshots corner:
Party cannon >>41570351
Anonymouses or anonymi? >>41570492
Of course. >>41571635
Sentient pastries >>41574030
Starlight VS "Cleaning Machine" >>41574551, >>41580485
PLAP PLAP PLAP >>41575185
Screenshots corner 2:
Neigh >>41575701
GPT smut >>41576482
If you insist >>41583781
Sun power >>41586616
Nightmare Night Simulator >>41586622
Anchor for the Nightmare Night event!
Everything goes as long as it's spooky, scary, or Nightmare Night-themed.
Post bots, post logs, post lorebooks...
End of event: 11/04
>Cozy Ghost
https://mlpchag.neocities.org/view?card=frufroloft/Cozy%20Glow.png
>Rainbow Factory Dash
https://mlpchag.neocities.org/view?card=Snowfilly/Rainbow%20Factory%20Dash.png
>Nightmare Night Simulator
https://mlpchag.neocities.org/view?card=anonistrator/Nightmare%20Night%20Simulator.png
Living the meme
>Learn that Clewd still exists
>Try it
>Free plan is like 5 messages
Well, fuck.
Quest unlocked.
>>41591451
Nice Risu theme
>>41591486
It's default...
>>41591164
https://rentry.org/lattejb
Does anyone have any good presets for haiku?
>>41591676 I don’t think so. Current Haiku isn’t that worth it from what I’ve heard. I would try the Sonnet JB in the OP and see what they do. But I wouldn’t be surprised if stuff like Free Hermes 405B gives better results. It’s mostly Haiku 3.5 that could be worth it if you look at the benchmarks (and even then, benchmarks are often a meme).
>>41591696
I just need something that isn't gpt. I need claude back.
>>41591720
They said "later this month" for Haiku 3.5, so it should be soon.
>something that isn't gpt
Try Hermes. Or Gemini if your proxy has it.
Rainy has a proxy with the new Sonnet, but it’s riddle-protected, and you apparently need to know the film *Heathers*. I’m not going to try it, but if any anons want to, go ahead. Just don’t leak it if you find it.
>>41591877
Oh, I fucking love Heathers. What's the rentry/page? I don't even care about trying it, I just want to see the riddle.
>>41591885
I don't think linking the rentry is that bad, so here it is: >>>/g/103001369
The rentry itself is a small puzzle but easy enough. Once on the proxy, you'll have the Heathers riddle.
>>41591899
Aw, the puzzle to get to the proxy is honestly harder and less googlable than the password.
I used to be a semiregular Silly user, but back in my day we always had a sustainable source of proxies with little to no trouble acquiring access to them.
As far as I understand it, the situation is rather different these days, isn't it?
And while we're at it, I caught a glimpse of the jailbreak BGM was using in his previous stream. Does anypony know what it is? Is it in the OP?
>>41592481
Public Claude is mostly dead; you'll sometimes see proxies pop up, but they're protected by riddles and don't stay up long. Public GPT is still widely available for now, and the new model writes well, it just lacks a bit of initiative.
>And while we're at it, I caught a glimpse of the jailbreak BGM was using in his previous stream. Does anypony know what it is? Is it in the OP?
I don't think so, I believe it was tailor-made to avoid his YouTube channel getting striked.
>>41592584
Shoot. So it's Doom as far as the eye can see, then. Thanks for telling me like it is, Anon.
Well, maybe there'll be someone with enough money who would post his adventures ITT. I know I'm not gonna be satisfied with even the best of GPT after I've experienced what Opus can do.
>>41592481
It was this
https://files.catbox.moe/yvlncj.json
With this in the Jailbreak (usually turned off cause it trips the filter sometimes) >>41563493
>>41592589 I’d still give it a try if you can. Also, Anthropic is going to release Haiku 3.5 soon >>41591728. Benchmarks are never completely accurate, but if you believe them, it should be pretty good for how cheap it is, so maybe normal usage will be possible at a low cost >>41566912
>>41592620 Hah! Didn’t notice you made a new one! Nice to see them back again.
>>41592620
Thanks for replying to me personally, even though I can't seem to access catbox for some reason. Maybe one day...
>>41592645
Does this work? https://pomf2.lain.la/f/dbcum8jq.json
>>41592648
It does! Thanks again
making a bodyswap card. any ideas or suggestions?
current workflow goes like this:
1) card (behind-the-scenes) picks whom to swap <user> with
2) it uses {{pick:}} macros, meaning on every new chat the card picks a new character to bodyswap with
3) greeting asks the human whether he wants to overwrite the choice the card made (which is hidden, no spoiler mode) and whether the human has any additions to the story (maybe start in medias res, or explain how/why the swap happened)
4) after the human's message the actual story goes on
the current {{pick:}} macro picks from 193 different characters, with major characters (main six, fan favorites) having more weight
and yes, you can still just say "swap Rarity and Dash" and enjoy the story while <user> stays <user>
also - a better name for the card than 'bodyswap', and any idea for the card's image/avatar?
>>41592740
https://files.catbox.moe/xh1rqo.png
https://files.catbox.moe/49vv9t.png
>>41592632
>worst model at 'Harmlessness'
Maybe it'll be kino.
So I just realized that the catbox version of the Nightmare Night Simulator card is outdated. I made several large last-minute updates to the card before I posted it on Chub, which is where the final version of the card is. If NeocitiesAnon could quickly replace the currently listed card with the version from Chub.ai, that would be appreciated.
>>41592943
https://www.characterhub.org/characters/anonistrator/nightmare-night-simulator-0a62e1987dc5
Here's a quick link to the proper version that should be used.
>>41592949
Should be good.
https://mlpchag.neocities.org/view?card=anonistrator/Nightmare%20Night%20Simulator.png
>>41592755
I was so confused at first. For some reason I thought you sent two cards with bodyswap mechanics, so when I imported them and saw Coco instead my brain glitched out. Thank you for the pics!
Unrelated log: doing a royal slumber party with pillow fights.
>>41592584
>I don't think so, I believe it was tailor-made to avoid his YouTube channel getting striked.
Wait, what? I run a YouTube channel. Wdym he needs to use a special JB?
>>41591877
I'm looking at the rainy proxy and I don't see Sonnet. Is it the Heathers proxy from last night or a new one? The one from last night doesn't open anymore.
https://yea-floppy-extension-institutional.trycloudflare.com
Does this open for anyone else? It doesn't seem to be opening for me and I can't tell if it still works.
>filtered for the first time today with gpt latest
>swapped to claude 3.5 and got filtered again
Oh... no. Is it over? These have been flawless for months. Simple Luna coom session too...
>>41593265
There was a time I ran a "special" JB on Claude Opus for YT stuff just to dial back its horniness a tad, nothing crazy.
>>41593831
Dead
>>41591598 Which one would you say is the best on this list?
>>41593831
>floppy extension
>An unexpected error occurred
What the hell happened to column this time?
>>41594236
I'd suggest trying Avani, but also download Smiley and Corpse to see which you prefer. If you're really stuck in a situation, one might pass the filter when the others can't.
>>41594369
Scuffed proxy. Give it some time.
>>41593887
How do you use an AI for YouTube?
>>41594408
He's only streaming chat sessions, anon...
If anyone wants to try
>>>/vg/500262330
>>41594303
How many do you think are here, seeing these two assholes and those words, and thinking "What the hell are those?" I can hear their laugh, even now.
>>41594444
https://www.youtube.com/shorts/mDqsgbtpDLk
>>41594435
I've read that it's a more fun and schizo GPT, but I honestly don't see the difference.
>>41594776
Apparently, it's a GPT-4o (not Chorbo/Latte) finetune.
>>41594918
How does one fine-tune a corporate model?
>>41594945
By paying.
https://openai.com/index/gpt-4o-fine-tuning/
I have a question
Can LLMs plan ahead without telling?
Say, you told an AI mare that your birthday is next week. She acknowledges it, but that's it, no further comments.
Then, the next week comes, the precise day, 10 or so messages after, and she gives you a birthday present.
Better yet, what if your AI mare is a psycho killer who is hardwired to plan to kill you after the very first message, but has to hide it? I take it it's gonna be a mess of constant knives falling out of her pockets or something unsubtle like that, right?
>>41594945
You can finetune 4o directly with official OpenAI tools. You just can't use it for smut.
https://openai.com/index/gpt-4o-fine-tuning/
Fiz (proxy owner) was asking for SFW logs, so I assume that's what she used them for.
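For the curious, a minimal sketch of what that official flow looks like with the openai Python SDK; the file name, the example data, and the snapshot name below are placeholders, not what Fiz actually ran:
```python
# Minimal sketch of the official 4o fine-tuning flow, assuming the openai
# Python SDK (v1+) and an API key with fine-tuning access.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Training data is JSONL, one chat per line, e.g.:
# {"messages": [{"role": "system", "content": "You are Pinkie Pie."},
#               {"role": "user", "content": "Hi Pinkie!"},
#               {"role": "assistant", "content": "*gasp* A new friend!"}]}
upload = client.files.create(
    file=open("pony_sfw_logs.jsonl", "rb"),  # placeholder file name
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=upload.id,
    model="gpt-4o-2024-08-06",  # a snapshot that supports fine-tuning
)
print(job.id, job.status)
```
You pay per training token and then per call to the resulting model, and anything flagged as smut in the training file gets the job rejected, which is why only SFW logs are any use here.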
>>41594951
Planning ahead, for a human, requires thinking beforehand. For an LLM the equivalent would be to explicitly mention somewhere what it's going to do next, so doing it without telling would be a bit hard for them.
>>41594951
>Say, you told an AI mare that your birthday is next week. She acknowledges it, but that's it, no further comments.
>Then, the next week comes, the precise day, 10 or so messages after, and she gives you a birthday present.
In this case, it should work, as long as you wrote that it would be your birthday next week and there's information in the story that a week has passed since then. The bot should be able to remember it's your birthday, and I'd guess that a bot like Pinkie would try to throw you a surprise party or something similar.
>>41594951
>Better yet, what if your AI mare is a psycho killer who is hardwired to plan to kill you after the very first message
You'd need to write it into the bot's definition, but the user has to avoid spoiling the surprise by checking it.
>I take it it's gonna be a mess of constant knives falling out of her pockets or something unsubtle like that, right?
They're not that great at keeping secrets, but I still believe that if you set the definitions properly, it should be a little more subtle than that.
>>41594979
Literally just use
<hidden>{{char}} will KILL {{user}} in {{roll}} days</hidden>
>>41594951
An LLM is only aware of stuff that exists right now, right there in context (the chat). The model generates the story word by word without knowing ahead of time what word will come next. That's why their creative writing may feel odd, rushed, and "resolutive": the model is unaware that the user is going to continue the story, so it instead tries "to finish" the current piece of text as a complete story.
In your case: yes, if you have told the model that user's BD is coming next week, then the model -may- remember and acknowledge it, because this piece of info exists in context now. Then, when you (or the model) say "then the next week came", it would rush to the BD event because of how attention works. It works like this:
>(text text text)
>"btw, my birthday is coming next week", he said (text text text)
>(text text text)
>(text text text)
next generation
>then the next week happened.
>[model checks the current context]
>[model sees that 'next week' was previously mentioned in context]
>[model sees 'my birthday' next to 'next week']
>[model pinpoints that 'my' is 'user's']
>then the next week happened. and so did user's birthday...
It is just very simple math done a thousand times per word, on every next word generated.
But you can see from the example above that you must plant the idea of the birthday first. Models really struggle with thinking outside the box or bringing in new ideas or events on their own; it is known as lack of proactivity. Typically it is the human who moves the story further via their actions or OOC, and the model reacts retroactively. You CAN mitigate this problem in various ways: prompting, using random macros, using samplers, blah-blah-blah, but it is a general problem, yeap.
>Better yet, what if your AI mare is a psycho killer who is hardwired to plan to kill you after the very first message, but has to hide it?
As the anon above said, models are bad at hiding secrets. Again, it can be minimized, but typically the model has a need to describe everything, because it helps the model itself. You probably noticed interactions like this with generic assistants:
>can you tell me how to change the color of text in CSS?
>in order to change the color of text in CSS you must apply the following property: div {color:red}
See this thing where the model reiterates your query first? It helps the model stabilize itself for further word generation, because of how attention works. So if you tell it that {{char}} is a psycho killer, then the model will write something like
>I wanna kill him so hard because I am a killer hur-dur-dur but need to keep my intentions hidden for now
Again, it can be fixed with prompting: for example, tell the model to hide {{char}}'s inner dialogue in <!-- --> and then apply a regex to delete them, or use a prompt with <hidden> as the anon above said.
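To make that <!-- --> / <hidden> trick concrete, here's a minimal sketch in Python of stripping those hidden-thought spans out of a reply before it's shown; the tag names are just the ones mentioned above, and in ST itself you'd do the same thing with a regex script applied to AI output instead of external code.
```python
import re

# Strip "hidden thoughts" the model was told to wrap in <!-- ... --> or
# <hidden> ... </hidden> before the reply is shown to the user.
HIDDEN_PATTERNS = [
    re.compile(r"<!--.*?-->", re.DOTALL),
    re.compile(r"<hidden>.*?</hidden>", re.DOTALL | re.IGNORECASE),
]

def strip_hidden(reply: str) -> str:
    for pattern in HIDDEN_PATTERNS:
        reply = pattern.sub("", reply)
    return reply.strip()

raw = "Twilight smiles warmly. <hidden>She is already planning the ambush.</hidden>"
print(strip_hidden(raw))  # -> "Twilight smiles warmly."
```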
>>41595011
So it won't do anything in just seven RP days without me specifying that a week has passed, correct?
>>41594951
>I take it it's gonna be a mess of constant knives falling out of her pockets or something unsubtle like that, right?
Not at all, but the models might fuck up sometimes even if you do it well. Especially the dumber models. As >>41595003 said, you need to really strongly emphasize the hidden information. Pulling out the trump card and doing XML in the middle of regular prose description is indeed a good trick. Giving the event a specific trigger or timeline can also work nicely. I've done something similar in a pony card. And in one non-pony card I put "SECRET INFO: [...]". It actually worked fine.
>>41595011
>>I wanna kill him so hard because I am a killer hur-dur-dur but need to keep my intentions hidden for now
The smarter models aren't usually doing that. They try to hold back a bit if you give them something else to focus on. The worst Opus did for me was dropping vague stuff like "her voice drifts off for a second" when something related to the secret topic came up.
>>41595020
The smarter models can sometimes reason out that sunsets/mornings signify the passage of time, but generally speaking, yes.
>>41595033
Too bad we don't have a free and open Opus right now to even test all this.
Oh well, maybe one day...
>>41594945
Gemini Flash can be finetuned too
https://developers.googleblog.com/en/gemini-15-pro-and-15-flash-now-available/
and Haiku finetuning is available as well
https://www.anthropic.com/news/fine-tune-claude-3-haiku
I do hope finetuning will be available for the new Haiku 3.5. If the model is as good as Anthropic is telling us (at least original Sonnet 3.0 quality), then finetuning it on pony-pony would produce a versatile and cheap model.
>>41595020
It depends on how the model's attention works (its size/quality), and also on how huge your current chat is. Case one:
>"my birthday will come in 7 days"
>(story went on for 50000 more tokens and the birthday was never mentioned again)
In this case, EVEN if you explicitly say that "seven days have passed", the model may not bring up the birthday on its own, because it will not be able to successfully retrieve the information "seven days == birthday". 50000 tokens is just too much data, and the model will not recall info that was mentioned only once a long time ago.
Case two:
>"my birthday will come in 7 days"
>(story went on for 50000 more tokens and the birthday was mentioned here and there, maybe char was planning a party or present)
Then no matter how many tokens have passed, the model will be able to bring up the birthday on its own, because its attention is overfit on user's birthday. If anything, the model will write only about your upcoming birthday, making the story very one-sided.
Case three:
>"my birthday will come in 7 days"
>(story went on for 50000 more tokens and you said 'and hence user's birthday happened')
The model will be able to continue the story and work your birthday in organically, but only because you explicitly told it to.
One old hack that lets the model keep track of passed time is infoblocks, something like this
https://rentry.org/MyuuTastic#kramfausts-status-panel
but for day/time. With them the model should be able to update the current passage of time as the story progresses, but imho it still works half-assedly. YMMV.
>>41595033
True, it depends on the prompt and context. The other side of this problem is characters' positive chattiness, where they want to explicitly say what they think about, about the tension, about what moves them and so on, especially during NSFW scenes. The model just feels the need to make characters as open to the reader as possible; "show don't tell" doesn't apply. So even if it is solved on the "don't tell X is a killer" level, it will re-appear in a different form. It is an overarching problem, sadly.
>>41594958
>>41594979
>>41595003
>>41595011
>>41595020
>>41595033
By the way, there's an extension in SillyTavern that keeps track of long-term objectives using a separate model and makes sure the LLM doesn't forget about them by checking in every 2 posts or so and reminding it with a prompt.
>>41595061
Too bad it's useless for us if we don't have THE FUCKING OPUS
SWEET CELESTIA I NEED MY DOSE AND I NEED IT ASAP PLEEEEEASE
>>41595065
You don't need Opus to test any of that. I'm pretty sure Chorbo can handle it just fine.
Looking forward to testing the new Haiku too.
>>41595065Same.
>>41595061
https://docs.sillytavern.app/extensions/objective/#objective
Are you talking about this extension by any chance? I tried to use it a few threads ago to have it automatically generate tasks, but since ST was sending requests directly to the model, anything NSFW related was hopelessly filtered out.
How does Mistral hold up?
I've only used it a little, but it seems okay, if a little bland with its responses.
Do you guys RP as a human or pony?
>>41595204
Mistral Large 2 isn't that bad but suffers from repetition issues. An anon posted a preset for it a little while ago; I haven't tested it, but it could be worth a try: https://desuarchive.org/mlp/thread/41493031/#41501883
>>41595214
80% as human, but sometimes I play as a pony when I want to try other scenarios.
>>41595214
human, sometimes pony. maybe i should make a dragon persona
>>41595214
Stallion. Never was a fan of AiE.
>>41595204
Mistral Large is fine. Its main issue is that the developers have less data than their competitors, so the Mistral team cannot provide the same level of brains as the bigger models, but the lack of censorship and not being anal with positivism makes Mistral Large a good model for RP. The thing is, it doesn't shine as much as other models, so if a user has been exposed to corpo models he will always have that lingering feeling of "not the same". If you adapt, however, then Mistral Large works great. Most anons neglect Mistral because they have access to corpo models. We usually talk about Mistral Nemo because it is small, can run on a mid-budget PC, and is a good cheap option for locals.
For presets check here as well:
https://rentry.org/large-qr-revised
I personally find Large very uneven. On some cards it writes very well, while on others it feels like something from 2023 with lots of repetition; like you really need to adapt the preset to the specific card. Another complaint though: it just feels too GPT-ish at times. Like you can feel they've been training the model the same way the big corpos are training theirs, and some swipes sound like distilled GPT Turbo. But the same can be said about LLaMA as well. I like its proactivity though.
Random news.https://x.com/AnthropicAI/status/1851297754980761605
>>41595214
Majority of the time as a human, but I like switching it up and going pone sometimes.
Another question is if you play as certain characters. I occasionally play as one of the M6 to interact with the others.
>>41595214
a friend
>>41595214
pretty much always as my self-insert stallion, just works better with the characters I RP with
>>41595214
Human, but never as myself. I do different characters (usually like 2-4 sentences of backstory) depending on the card.
>>41595214
Human, pony, snek, and a few others.
>>41595394
>snek
I'm intrigued. Spill the beans, anon.
>>41595214
Mostly human, but someone posted a card, I think RGRE, with a human-turned-pegasus-colt persona. I occasionally use that. Sometimes I RP as existing characters.
>>41595214
Funny green man, random OCs that might or might not be a pony, etc. It all depends on what I'm in the mood for.
I've been using ST on my phone and recently I'm getting a "Forbidden: no more IP addresses allowed for this user token" error. Anything I can do to fix that?
>>41595214
I mostly play as myself (and I'm human). But if the AI decides I'm actually a pony, I continue to play as a pony.
>>41595555 If you’re on mobile data, you may be assigned multiple IP addresses over time. There’s not much you can really do about it. Maybe try using a VPN, though I’m not sure which ones are good for phones.
>>41594950
Fine-tuning and transfer learning mean the same thing, right? Or is there a difference?
Any chance we could use that to get the model to better understand equine anatomy? I'd like my mares to wink and have their teats accurately described, and for them to stop randomly growing fingers.
>>41595569
I doubt anyone will bother since it's for the old GPT-4o, not chatgpt-4o-latest. I tested the proxy, and honestly, I still prefer the latest version.
>I'd like my mares to wink
From my tests, the latest version does this unprompted when it's not getting filtered. At worst, adding a line of description inside the card should work. Teats can still be a problem sometimes, yes.
>for them to stop randomly growing fingers
On GPT4? The only time I had that issue was with older versions of Claude, but it stopped being a problem long ago, even without adding any info to the card. I do stories on empty cards, and it's never happened. The only thing that sometimes still happens is expressions like "giving a hand."
>>41595564
I figured that's what was going on. Oh well, good to know, guess I'll just use it at home.
>>41595564
I thought most VPNs also rotate you through a certain set of IPs, usually more than 10.
>>41595569
>I'd like my mares to wink and have their teats
I would use logit bias; see picrel for tokenization on 4o/Latte. ' wink' is a solid word, you can add it with a bias of +1-3. ' winks' and ' winking' sadly are not tokenized as single words, but honestly, if the word "wink" appears in context once, all the compounds will follow as well. Drop a line like "mare can wink their genitals" and you're good to go.
As for teats, the same: add ' teat' at +1-3 and ' breast(s)' at -5-10. You may want to ban other non-pony descriptions for teats too if you want.
As for fingers, I legit haven't seen fingers on ponies in months. The model may write 'fingering' when describing petting/groping, true, but that's because none of the models is able to think of "hoofering" on the spot. Telling the model to stay in MLP canon and respect equine anatomy is typically enough to make it avoid this problem altogether.
>Fine tuning and transfer learning mean the same thing
There was a subtle difference, but nowadays they mean the same thing. It is like how the term LLM was previously used to describe models of GPT-3+ capabilities and not all Transformer-based models were LLMs, but these days people call stuff like BERT an LLM as well. Times have changed and the terms are applied retroactively.
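For anyone who wants to see what that looks like outside the ST UI: the chat completions API takes a logit_bias map keyed by token ID, so a rough sketch (assuming the openai Python SDK, a recent tiktoken, and the gentle values suggested above; point base_url at your proxy if you use one) would be:
```python
# Sketch of applying the logit bias nudges described above through an
# OpenAI-compatible chat completions call. tiktoken looks up the token IDs;
# the bias values are the +1..+3 / -5..-10 nudges from the post.
import tiktoken
from openai import OpenAI

enc = tiktoken.encoding_for_model("gpt-4o")

def single_token_id(word: str) -> int:
    ids = enc.encode(word)
    assert len(ids) == 1, f"{word!r} is not a single token: {ids}"
    return ids[0]

bias = {
    str(single_token_id(" wink")): 2,     # nudge pony winking up a little
    str(single_token_id(" teat")): 2,
    str(single_token_id(" breast")): -8,  # push human anatomy down
}

client = OpenAI()  # set base_url here if you go through a proxy
resp = client.chat.completions.create(
    model="gpt-4o",
    logit_bias=bias,
    messages=[{"role": "user", "content": "Describe the mare stretching after a nap."}],
)
print(resp.choices[0].message.content)
```
The assert is there because the bias only works cleanly when the word really is a single token, which is exactly the "solid word" caveat above.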
Equestria sim decided it was time for a new story.
I updated SillyTavern on mobile and now it no longer says how many messages and how long the response took. Is there a way to turn it back on? I kinda liked seeing how deep my chats were at a glance.
>>41595652
All the info you can add to a message is here.
>>41595587
>On GPT4
It's not something that happens very often, and honestly, it could have been happening on either chatgpt-latest or Sonnet 3.5. I've been using both a lot, so I'm not sure which. The fingers are more implied than actually mentioned. Like "hooves fist in his shirt" and "running her hoof through your hair, letting the strands of hair flow through her hoof," or something like that. Also, "curling hoof-tips," which might be possible, but just sounds gross to me.
>>41595647
>logit bias
Sweet. I've never messed with that before. I'll give it a try.
>>41595671
>curling hoof-tips
Kek. But to get back to fine-tuning a corpo model, I think the only way it would happen for us is if Haiku 3.5 is good enough and still allows fine-tuning. Even then, you'd probably need to find a proxy owner or someone willing to host the fine-tune on their own account.
>>41595671
>curling hoof-tips
Ah yeah, it does happen, not a skill issue. It is an example of overfit on the human domain: the model just writes the text from human-based stories 1:1 but swaps human anatomy for pony anatomy. As a consequence, horsecocks don't have a medial ring (because humans lack them), horses (somehow) move items without picking them up in their mouths (because humans...), and when doing pet-play the model states that X is on all fours like it is something unusual (because humans...). Yes, it does happen; the human domain is just too much. Finetuning on pony-pony can fix it (it will break the human influence). Outside of finetuning, prompting and examples may help you. For examples, something like this:
>```
>context: pony moves item
>You: she grabbed item and moved it...
>result: fail! pony must move items with their mouth and teeth, unless they are unicorns who may use magic
>You: she grabbed item with her mouth and moved it
>```
Basically, multi-shot the model on bad-behavior-corrected-with-good-behavior.
Alternatively, prompt the model with instructions. But I prefer giving examples because I can also teach the model the writing style I want to receive and provide extra pony terms in proper context, but you do you!
load up a new model, load in fluttershy, 1 hour in she informs me that pee is stored in the balls
>>41595745
Name and shame the model
>>41595431
Have a character who is a shapeshifting snek. I use her to cause chaos. Think Morph from Treasure Planet.
>>41595647
Good god... Smiley + that logit bias is the way to fucking go. Thank you, anon.
>>41595647
Logit bias is completely unnecessary; it would flood the context with winking and just be stale.
>drop a line like "mare can wink their genitals" and you're good to go
Been using something like this for ages.
>Respect equine anatomy
And this too.
>>41596063
You are welcome! If you need to break other words into tokens, then use this option in ST (picrel, the token counter). It will quickly process your text into tokens so you can logit bias them effectively. Try to apply bias to full (solid) words, and remember there is usually a space before most tokens.
But also don't overdo it; sometimes it is easier to use regex instead of logit bias. You don't like the model using ’ instead of ' or — instead of -? Better to write a regex to replace them in the completed text. Why? Because —, —a, —not, —I, —but, —and, —the, —from, —it are all different tokens and you would need to ban them all separately, which is too much. Same for the word -fuck-. You want the model to say -buck- instead? Then write a regex to replace -fu- with -bu-. More effective than using bias. Use bias to introduce or ban concepts instead of cheap find-n-replace.
>>41596121
>Logit bias is completely unnecessary
YMMV. It mechanically changes probability without any prompt. It will not flood the context if you keep the bias at low values like -3 to +3, just a nudge. If you bias -wink- to +50 then yes, it will flood the context a lot, and the model will keep winking even if you remove the logit bias, solely due to in-context learning. But overfit may happen due to prompting as well. OpenAI allows you to include what, 300 logits in total? Imho it would be a waste not to give wink at least a +1 to boost the word's chance. Not arguing with you though, just two ways to solve the same issue. Also, as a kek:
>gave -wink- +10 bias
>model started flooding context with winking because of overfit
>gave -wink- -5 bias afterwards
>model started winking more reasonably on full context
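A tiny sketch of the find-n-replace side of that advice, with plain Python standing in for an ST regex script; the substitutions are just the examples above:
```python
import re

# Normalize the finished reply instead of fighting dash/quote token variants
# with logit bias: swap typographic characters back and turn "fuck" into "buck".
def ponify(text: str) -> str:
    text = text.replace("\u2014", "-")                          # em dash -> hyphen
    text = text.replace("\u2018", "'").replace("\u2019", "'")   # curly -> straight single quotes
    text = text.replace("\u201c", '"').replace("\u201d", '"')   # curly -> straight double quotes
    text = re.sub(r"\bfuck", "buck", text)                      # fuck -> buck
    text = re.sub(r"\bFuck", "Buck", text)
    return text

print(ponify("Fuck\u2014she winked."))  # -> Buck-she winked.
```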
>>41596160
After one wink in the context, 10+ more followed.
No logit bias, just equine anatomy nudges.
>41577157
I admit I'm dumb, but where is the link in the discord for the JB the OP said he used? I've never used a proxy, and I think Charybdis is dead or whatever, last I heard. I don't know if the JB will even help me, and I also don't know what model he was using.
Also, I feel like using a mobile device is a big detriment. Everyone talks about using beefy PCs to handle the AIs. Is my Google Pixel unable to take advantage of bigger models that I use through SillyTavern? Do I need to stick to self-hosted models like NovelAI? I'd rather not, because the most recent release has been hit with a lot of criticism. My PC sucks and I don't exactly have the money to upgrade it. I've been fighting with Deepseek for the last month or so, and still haven't used the $2 I put into it. Probably a combination of the model's output, my JB and settings, and I fear the limitations of using a phone. The first generation I did was still the best, and even when I copy the way I did it with the same older settings I can't manage to recreate it.
>>41577157
With the link to the actual comment from last thread working.
>>41596537
Use huggingface to just run the local model, then stream to your phone via Risu.
>>41596637
The what and the what? I've clicked on links to huggingbox before, but I'm confused on what it's used for, and Risu seems to be a SillyTavern alternative? Do I need to do anything else but get the key and transfer any settings I have? That seems too simple. When you say streaming, my mind goes to cloud gaming services like GeForce Now.
column dead again
>>41596543
The guy was using Smiley from https://rentry.org/lattejb, I believe. But it's a JB specifically for a model of GPT called chatgpt-4o-latest.
>Also, I feel like using a mobile device is a big detriment. Everyone talks about using beefy PCs to handle the AIs.
That's only for local models. 90% here are using proxies that don't require any hardware.
>Is my Google Pixel unable to take advantage of bigger models that I use through SillyTavern?
How were you using the models? Which models? Sorry, but your post is a bit of a clusterfuck.
>>41591168
We need this cute amre
>>41597080
This gave me a cool idea, so I'll try to release something.
I’m not sure if they relaxed Chorbo, but it’s doing smut with very basic JBs now. Though it might just be the card.
>>41596921
>How were you using the models? Which models? Sorry, but your post is a bit of a clusterfuck.
Through SillyTavern that I set up using the Android guide. I'm assuming it was preferred with the use of proxies, as the power of any mobile device isn't enough to handle the models; this isn't something I understood because of my dumb brain. I saw that there was an Android guide and assumed that meant I could set it up and get good generations like everyone else, and felt proxies were too confusing to set up so I haven't bothered with them, but now it seems I actually need them but they're fucked.
>>41591162
https://litter.catbox.moe/oqyps1.png
Fluttershy gets into Crimsonland.
>>41597486
>but now it seems I actually need them but they're fucked.
Claude ones are fucked, but there are still plenty of GPT ones. Here are some options:
1) Public option, a fine-tune of GPT-4o. >>>/g/103020580 I'm not a big fan myself, but I've heard some people like it. Just go to the link, request a user token with the given password, and once you have a token, ask back here if you need help setting it up. Your phone or computer will work during the token process. It's not malicious; it's just a POW added because proxies were DDOS'd in the past.
2) This proxy >>41591899 gives you access to chatgpt-4o-latest, which I think is the best GPT model right now with the best prose (but you will need a JB like the one you linked to do smut). It's protected by a single riddle, and sharing the answer or link directly is discouraged, but head to the rentry and try to figure it out yourself. Once you have a set of words, you can either see how the proxy in 1) is constructed to understand how to reach the proxy page, or just paste some of the words into desuarchive. Once on the proxy page, you can request a token as well. There's no password; finding the proxy page itself is the riddle. Same as before, report back if you need help once you have a token.
3) Read until the end, 'From Foal to Fiend—A Dark Arts Guide,' in the OP.
I sure do love forgetting the >
Trying to rack my dumb brain to understand it all, what is the functional difference between SillyTavern and Risu? At first glance it just seems to be a less feature-complete version, but you don't have to go through a how-to guide to get it working. Is it what I should be using because of the current unavailability of proxies? Is it worth looking into chatgpt-4o-latest, which I don't believe needs a proxy and isn't run locally? I may just be confusing/misunderstanding what a proxy even means. I feel I've been using them this entire time. If I wasn't, then surely anything I tried to do with SillyTavern shouldn't have worked, right? I've been under the impression that working with SillyTavern meant I was using models locally. Is this the wrong takeaway?
>>41591162
A small addition to Breedinquestria.
https://litter.catbox.moe/k8kx7c.png
>>41597367
I've noticed this too. I thought it was because of some edits I'd made to a preset, but if you're getting it too then the timing may just be a coincidence on my end.
>>41597518
>Trying to rack my dumb brain to understand it all, what is the functional difference between SillyTavern and Risu?
They're both frontends. Risu is online and doesn't require setup like SillyTavern, but it has fewer options and, most importantly, 80-90% of the community is on Silly. So if you want to import jailbreaks, they'll work on Silly but not on Risu, and if you ask for help, you won't get as much support using Risu. Use Tavern if you can.
>Is it what I should be using because of the current unavailability of proxies?
Read >>41597516. Frontends won't help with proxy availability.
>chatgpt-4o-latest, which I don't believe needs a proxy and isn't run locally?
It's a submodel of GPT-4. It's a corporate model, so yes, you need a proxy.
>I may just be confusing/misunderstanding what a proxy even means.
Read https://desuarchive.org/mlp/thread/40917404/#40926228
>I feel I've been using them this entire time.
No, you don't use proxies randomly or by mistake.
>If I wasn't, then surely anything I tried to do with SillyTavern shouldn't have worked, right?
The default option in SillyTavern is something called Horde. Horde is a way to connect to other anons running local AI models on their computers. It's not the best, and you'll encounter a lot of slow models or ones with poor settings. For more info, check this: https://desuarchive.org/mlp/thread/41240501/#q41246541 I believe that must be what you were using.
>>41597541
>Risu is online and doesn't require setup like SillyTavern, but it has fewer options
Risu is local and is way ahead of ST feature-wise. It allows for Lua-scripted cards, for one. ST has practically stagnated for half a year. The problems are:
>Risu is made for aliens by aliens
>the community stagnates as well, they don't need features and don't want to create high-effort bots, all they want is to regurgitate useless JBs, fill the forms they're used to, and create very basic coombots
>>41597516
I prefer the second option and I'm not against going through some weird-ass riddle/puzzle, but I was able to see the link provided, didn't see any riddle initially, but regardless it got refreshed and now I get a 504 gateway timeout. Ugh. Why must so much work go into wanting to coom. I really need to get back into trying to just write my own. People are so damn wrong about AI being easy to use.
>>41597541
>They're both frontends
Okay, so the advice for Risu isn't necessary. I really didn't want to have to transfer everything over.
>No, you don't use proxies randomly or by mistake.
You don't understand how dumb I am. I wasn't using the default Horde option at all. I was using, if I'm now understanding it all right, a proxy for the Deepseek 2.5 model. That's what getting the key and having the model's website linked is all about. This means the past output has all been on me/model/settings.
I think the confusion I was having was about the ease of getting a proxy. The weird shit with chatgpt-4o-latest is what I figured getting proxies was about, as in comparison, getting the other model to work didn't feel like I was doing that proxy method.
>>41597576
>Risu is way ahead
Oh, I need a break. This is all too damn confusing, man.
https://appear-medications-queens-tender.trycloudflare.com/
Has this been shared here yet? It's a fun model
>>41597576
>Risu is local
True, you can use the local version; my bad.
>way ahead of ST feature-wise
Can you easily enable or disable parts of your prompt while in chat now, or have easy access to it?
>>41597587
Talked about here >>41594435
>>41597588
You can't, and that's what I'm talking about. Risu is made for self-contained, high-effort bots with the complete experience, think Lamplighter, The Staff of Oscilion, etc. Which is what nobody makes or uses; they want their daily 400-token himmyadams slopbot from the chub feed that needs to be wrapped in a 6k-token JB with a generic 4k CoT to be remotely usable.
>>41597603
I think you're being a bit reductive. Accessing panels or parts of your jailbreak isn't just for NSFW; it's also used to adjust the AI's writing style, alter its behavior for specific key scenes, change styles depending on the character encountered, etc. Adding new rules and so on, too. I don't think it would be bad for them to have a way to do that easily.
>Lamplighter, The Staff of Oscilion
High-effort bots, really good, but sometimes you just want to talk to a character or write a story that focuses more on the writing than on a particular setting or set of rules.
>>41597516
>>41597587
First option died just now.
>>41597668
I really hope it gets refilled. I just need something different from fucking Chorbo.
>>41591728
>Later this year
>Later this month
>on the same page
If it's not here tomorrow, I guess the first line is the accurate one.
>>41598346
>200 A.M.
>>41591162
Bodyswap card
https://files.catbox.moe/ir223y.png
- Requires SillyTavern 1.11.8+ (uses {{pick}} macros)
- Bot has an embedded lorebook which holds the variable of possible characters (card_MLPbodyswap)
- At the start of a chat, the bot picks a random (hidden) character to swap {{user}} with. Say 'ready/ok/go/sure/fine' and the bot will start a story
- If you don't like the randomly picked character then START A NEW CHAT and the bot will pick a different character to swap you with
- You can also just tell the bot which two characters it must bodyswap; you can also provide additions to the scenario (like OOC)
I originally thought to make it as a Nightmare Night card (scary experiment gone horribly wrong) but imho it would have limited the card's scope... and why?
picrel - examples on Latte
>>41595908
Better, here's all the models I've tested and scored:
white = I kept it
blue = good but I had better so I didn't keep it
yellow = functional but I didn't like it
red = non-functional
>>41598407
Nice! Thanks anon.
Anyone using the Silly Tatsu JB with OpenAI getting filtered lately? It always happens right when the mare is going over the edge too. Like all the raunchy shit leading up to that is fine, but she can't cum or it's filter time.
>>41598550 I didn’t notice this change, but I’m not using that JB. Still, low blow from the AI. Let the mares cum! If it’s really just that, I’d try another JB for one gen or simply switch to normal GPT-4o for this gen. If you have a lot of Chorbo gens before in context, the writing shouldn’t be too bad.
>>41598346
wow, someone still uses my bots? neat
>>41597503
Added, thanks anon.
https://mlpchag.neocities.org/view?card=MaudPie/Crimsonshy.png
>>41597523
Updated too.
>>41598381
Added too. Thanks.
https://mlpchag.neocities.org/view?card=Anonymous/Bodyswap.png
>>41597080
>>41597163
>>41599178
Kek, I'm looking forward to the full card. And I know where she's from, weeb :^)
>>41598550 I’m not using Tatsu, but it’s the opposite for me right now. The model is letting me do a lot of stuff without complaining.
Bump for AI mares!
>>41599178
What model and what JB?
>>41600012
Latest with >>41563493
Do you have any good themes for Silly? I’m tired of the default one, but the only other one I like is Celestial Macaron.
>>41595214
Human.
>You exceeded your current quota, please check your plan and billing details.
D-did I kill MM?
>>41600646 Just tested it now, and it’s working with GPT. But maybe there’s a dead key in the batch? I’d just swipe if you get it.
>>41600555
Mine. Amber letters and green quotes on a black background, like on 1970s terminals.
>>41600653
I think it's already been replaced. Gotten a lot fewer rejections and not a single quota warning since.
>>41600646
He has the keys set to automatically recheck every few hours. You posted 3 minutes after a recheck, so I presume you grabbed a dead key he was checking.
I went through the rentry for the chatgpt-4o-latest and got my user token. So am I using it in place of an API key, or do I need both? What site do I need to be linking back to, or is that part unnecessary? Honestly, I don't know how much I'll mess with it. It's so much more expensive than what I was using so it better be really damn good.
would love to see a multiversal equestria hub card where ponies from different universes and timelines interact
>>41600841
>I went through the rentry for the chatgpt-4o-latest and got my user token.
The rentry with the proxy? Just making sure before helping out.
>It's so much more expensive than what I was using so it better be really damn good.
Proxies are fucking free. That's why we use them.
>>41600855
>The rentry with the proxy? Just making sure before helping out.
The weird page with alien words, right, getting a 24 hr token.
>Proxies are fucking free. That's why we use them.
What. This entire time I thought you always had to pay for any usage of the AI; the only time I haven't was the grace period before CAI was paid, and, I think, back in the day with AI Dungeon. I feel dumb not realizing this.
>>41600864
NTA, but I'm glad there are oldfags still sticking around since AID times. To think that GPT 3.5, which was the Dragon model, used to be the height of what could be achieved with LLMs. I miss those simpler times.
>>41591168
>>41600849
forgot to tag
>>41600870
>To think that GPT 3.5 was the Dragon model
It wasn't. It was GPT-3 Davinci. 3.5 is when they started using chat models. Before that, it was text completion.
>>41600879
I found it from the streamer wayneradiotv doing a bunch of stuff with it. It's what got me to seek it out. I miss when it was more novel-focused, as the chat part of AI doesn't interest me at all. It's why I'm so disappointed to hear NAI's new model sucks, because on principle it did everything AI Dungeon wanted to do and then some, but they've pivoted to image gen way too damn much.
>>41600879
That's even more impressive, then.