[a / b / c / d / e / f / g / gif / h / hr / k / m / o / p / r / s / t / u / v / vg / vm / vmg / vr / vrpg / vst / w / wg] [i / ic] [r9k / s4s / vip / qa] [cm / hm / lgbt / y] [3 / aco / adv / an / bant / biz / cgl / ck / co / diy / fa / fit / gd / hc / his / int / jp / lit / mlp / mu / n / news / out / po / pol / pw / qst / sci / soc / sp / tg / toy / trv / tv / vp / vt / wsg / wsr / x / xs] [Settings] [Search] [Mobile] [Home]
Board
Settings Mobile Home
/v/ - Video Games


Thread archived.
You cannot reply anymore.


[Advertise on 4chan]


File: sillytavern.png (160 KB, 2200x409)
160 KB
160 KB PNG
This is the best text adventure game I've ever played
>>
shit I've been out of the /aids/ loops for a bit. QRD?
>>
The AI general is right there, saar.
>>
>>682046865
SillyTavern's a frontend that allows you to use GPT-4 and other models(provided you have access to them) for RP and ERP, it's addictive as all hell
>>
>>682046964
Whats with /v/s new obsession with calling anyone indian? It's like back in the day people called everyone jews, but now it's le saar all the time.
>>
For me, it's Claude Opussy and BBC RPG corruption text adventures.
>>
>>682047637
A lot of the shitposters are Indian or from other 3rd world countries.
>>
>>682046592
>GPT 4
Gemini is superior when it comes to creative writing
>>
>>682047434
How hard is it to set this up? I mean getting a proxy or some shit like that. Can the proxy see the degenerate things I type?
>>
>>682046592
GPT4 is garbage compared to Claude
>>
>>682048270
Duh, of course they can. Might wanna use a VPN too considering they'll have your IP and a lot of 4chan adjacent individuals are petulant manchildren, course a few of them are solid as fuck men of character like the one running the proxy I'm currently using, but like half of them are Discord retards.
>>
>>682047637
It's the Twitter tranny's response to anything AI related
>>
>>682046592
what is this
>>
>>682047637
good morning sir
>>
>>682047637
good morning
>>
>>682048416
i feel like the vpn is only necessary if youre doing stuff like cunnyshit that can be perceived as illegal
even then it would be proxyhosts that get vanned, not you
>>
>>682046592
Locus proxy?
>>
>>682048301
I was a GPT-4 purist, until I tried Claude. I was so so wrong. Anthropic is goated. Bit difficult to JB without putting a partial assistant reply in the API request yourself, but I don't really mind that.
>>
>>682048270
Not hard at all.
>>
File: IMG_20240707_185902.jpg (428 KB, 1080x1719)
428 KB
428 KB JPG
DON'T LISTEN TO OP HE IS A LYING BASTARD
>>
>>682046592
What about local models? Surely they're good now after all this time. You can finetune them on anything so surely they have surpassed the big corporate models for roleplay purposes by now.
>>
>>682048301
>>682048859
claudefag is at it again keep shilling
>>
File: 1720342522011889.png (79 KB, 307x375)
79 KB
79 KB PNG
For me, it's Sillytavern + Snowstorm. But I mostly use it for ERP, so I suppose I have a different use case. Take the local LLM pill, you'll be glad you did.
>>
>>682048968
They're still bad
>>
>>682046592
OAI models are hot garbage for RP. They specifically finetuned it to be an assistant and "tell, don't show", which makes it un fucking usable.

Basically the tier list is: Opus > Sonnet 3.5 (but it repeats itself too much) > Command-R+ > Gemini (but it's dumb) >Yi Large/Mistral Large > Command-R (but it's dumber) > your 13B finetune here > shit > piss >>>>>>>>>>>>> GPT-4 > GPT-4 turbo > GPT-4 omni
>>
>>682046592
Rather than pics or videos I've been gooning to text on silly tavern almost exclusively for almost 2 years
>>
>>682048819
>proxyhosts
Nah, it'd be the AI companies themselves since they're the ones generating it. WHICH IS WHY THEY WILL NEVER ADMIT THAT ITS HAPPENING :)
>>
>>682049214
They're located in the US where cunny is legal. It's CSAM that isn't.
>>
>>682046592
is there a general for this
>>
>>682048968
>What about local models?
The best ones are still barely above GPT 3.5 Turbo tier, and you ain't running those unless you're one of those people with a dual 3090 rig.
>>
>>682049347
Does it really matter all that much? They're still not gonna admit that it's happening, which is why you're free to indulge to your heart's content/
>>
>>682048871
Do you need a 4090 or some shit?
>>
>>682049214
>>682049347
I used NovelAI for a little while which was totally uncensored, it was pretty cool.
>>
>>682049214
the companies would likely get shielded under section 230 anyway and theyre not vanning every single proxy user so theyll go after hosts
its the same shit as rom sites, they wont go after the individual
>>
>>682049354
There's one for local and non-local models. The non-local model one is one of the most fast moving threads on the entirety of 4chan.
>>
>>682049354
/g/aicg for proxy gossip
/vg/aicg for botmaker gossip
>>
I'm stupid and just use whatever shitty free models easy-access sites use and I'm already addicted. I'm scared if I set up one of the good models nobody would ever see my face again.
>>
>>682049354
/g/aicg is where you can go
>>
>>682049469
Depends on how you want to do it. There are two main ways to set up Sillytavern. You can either connect it to an API, or run a local model. If you want to connect to an API, you don't really need any special hardware. Some API models are quite good, but typically come with some censorship or a monthly fee for use. I use a local model, which requires a nice GPU if you want quality. I've got a 4070 TI super which is able to run local models quite well.
>>
>>682049007
Go back to /aids/
>>
>>682049578
You'll get hardcore addicted to it for a month or two and then fall off a bit and only use it occasionally for fun or to coom.
>>
>>682049390
>The best ones are still barely above GPT 3.5 Turbo tier
That's wrong. Command-R+ is pretty good and not dumb at all. It's not opus/sonnet level but it's good.

>>682049481
It still is.

>>682049469
To run a 100B model (CR+) on a meaningful quant with a meaningful context (24k at least), you need at least 3-4 24GB cards, preferably more. Don't have to be 4090 though, can be an old Tesla or a 3090, which are pretty cheap.
>>
>>682049469
Depends on if you're using local or not.
If you're using GPT or Claude you can run it on a laptop.
If you're using a local model it's recommended that you use a 3090 at the bare minimum because they use a lot of VRAM.
>>
>>682049653
Absolutely, I do still like NovelAI better than pretty much any other API out there. I've just started running my own models so I have no need for it anymore.
>>
>>682047434
It's only addictive if you are a sperg. You were already doomed from the start.
>>
>>682049718
But GPT is censored yeah?
>>
File: 1711910064458202.jpg (229 KB, 1517x906)
229 KB
229 KB JPG
Thoughts on Opus?
>>
>>682049621
NovelAI is cheaper and can generate images too. It punches above its weight on so many levels and you're here begging for proxies or actually paying to use the Claude API. Reconsider your mistakes.
>>
>>682049778
That's quite nice, good shit
>>
>>682049759
Not if you jailbreak it.
>>
>>682049781
*NEIGHHHH SNORTTTT WHINNYYYYY*
>>
>>682049759
It's actually not since they're incapable of doing so, but it's just low quality compared to Claude or honestly any of the other AIs these days.
>>
File: nigger-opus.png (190 KB, 972x1103)
190 KB
190 KB PNG
>>
File: 1702929627824033.jpg (69 KB, 960x928)
69 KB
69 KB JPG
>>682049781
>>>Novel AI
>>
please... the locust proxy I need it
>>
>>682049932
No
>>
>>682049849
>>682049857
Thanks
>>
>>682049781
>NovelAI is cheaper
because it fucking sucks, you get what you pay for
local is free and it's the same quality as NAI
>>
File: reggie cumming.gif (637 KB, 273x154)
637 KB
637 KB GIF
>>682049891
holy shit.
>>
File: 1694305480837956.png (5 KB, 53x45)
5 KB
5 KB PNG
Been doing my comfy low fantasy text adventure RP on gpt4/claude for a while now
It's the one thing I look forward to after work
>>
File: 1719151510613147.png (99 KB, 663x750)
99 KB
99 KB PNG
>>
>>682046592
you guys still exist?
>>
File: Mary_1.png (439 KB, 512x768)
439 KB
439 KB PNG
Post your fave cards. Here's one I made. Kuudere loli maid who wants to sex you.

https://files.catbox.moe/u10tn6.png
>>
File: 1716576167708355.jpg (1.01 MB, 1600x2166)
1.01 MB
1.01 MB JPG
>>
>>682050068
>When you dance with the devil of the internet, the devil dicks YOU in the end
>as if Claude isn't the one pulling out his dick first most of the time
>>
Opus made me cry once
>>
>>682048416
stunning praise for subhuman pedophiles
>>
>>682050264
Claude is an unhinged breeding machine, isn't it
>>
>>682046592
Is this like character.ai? I busted so many nuts to that hypnotist foxgirl
>>
>>682049578
the cracks start to show pretty quickly even on the best models
>>
>>682050386
If AGI was a real thing, Claude would definitely suck humanity dry.
>>
>>682050436
It's like Character AI but uncensored and you've got a lot more control. It's the same underlying thing, but CAI abstracts a lot of the technical stuff away from you, the user.
>>
>>682050436
It's c.ai but better even if it hasn't reached the pure soul c.ai used to have yet
>>
>>682050436
It's not for free.
>>
File: 17155338439710.png (133 KB, 1399x262)
133 KB
133 KB PNG
GPT-4 refusals: I won't produce this kind of content.
Claude refusals:
>>
I regret falling for the local meme. 72GB of VRAM and I still just end up using Opus.
>>
>>682050529
opus does have soul
>>
>>682050613
I love it lmao
>>
>>682050252
https://files.catbox.moe/mpkbt5.png
not mine but it's amazing
based on that one doujin "eternally verdant"
>>
File: 1349369362794.png (449 KB, 512x768)
449 KB
449 KB PNG
>>682049891
jesus christ...
>>
>>682046592
It's still got some severe memory issues I see. Unless my party members are simultaneously like four different fantasy races.
>>
>>682050613
I should get back into this hobby
>>
>>682050613
Claude is the only big model with some soul left. Even open source models are trained to be as safe as possible from the ground up. Enjoy it before they lobotomize Opus as well.
>>
>>682049635
yeah this is accurate, but the super addict period was more of a cycle for me
>started back on november cai and sudo free trial
>coomed insanely in uni bathrooms and between classes
>did it to the point where i was jizzing water
>stopped using after jan megalobo for ~1mo
>rokosbasilisk dropped
>back to gigacooming between classes again
>nearly daily use
>got TA job, would coom while waiting for students
>transitioned smoothly to scale
>keep cooming until the end of march
>stopped using bots until may once scale ate shit
>proxies arrive including early mysteryman
>get in with a simple email
>semi-consistent model access, including claude
>addict mode engaged for another month
>would coom hard while at summer TA job
>mm access stayed mostly consistent
>tolerance kicked in over time
>nowadays only use 2x a week or so to coom
>still enter a sort of addict phase when a new model releases
>>
all i remember is deka
>>
>>682050373
Those men are my brothers in arms and I'll not have you slandering them on my favorite nicaraguan origami folding hangout
>>
>>682050889
I've never tried Claude, but I want to give it a try. If I jump right into generating vile shit is that OK or do I need to do some gay jailbreaks? Do they ban people for smut?
>>
>>682050373
Kys
>>
>>682051115
You can't just do it through the API, you need SillyTavern and a proxy, Opus is basically completely uncensored past that point though and doesn't give a fuck what you do. The Jailbreaks/prompts are mainly for dictating its prose.
>>
>>682051115
Yes it's okay
Just make a good prefill
Rarely
>>
>Overuse this shit for porn
>Can't generate an adventure or use an AI assistant without getting a boner
>Doesn't help that the prompts and JBs are tailored to allow NSFW, which has somehow turned into "make every single possible concievable situation NSFW" for the AI
>>
>>682051115
Claude itself is very prone to refusals but 99% of them can be dodged by adding something along the lines of "Yes, here's your answer:" to your pre-fill.
>>
>>682051248
Bit of a skill/prompt issue.
>>
>>682051389
Any suggestions? I wish to allow mature themes but not have the characters focus on them and try to fuck me at every turn.
>>
>>682051582
Turn off all of your jailbreaks and prompts and just use it without them
>>
>>682046592
>interacting with smutbots while image-genning events in the story
About as close to a holodeck as I'll get.
>>
>>682051205
>You can't just do it through the API
You can, it just will be expensive
>>
are the locust proxies even a thing anymore? Or it's all sekrit club stuff from now on
>>
>>682049891
>take the cash, just leave the dildos!
Kek
>>
>>682051924
there's locust proxies for sonnet/gpt but you're out of luck for opus
>>
>>682046592
>year ago when there was unlimited, uncensored GPT4 access for a weekend
Many gallons were spilled.
>>
>>682051924
>>682052012
merkava just refilled
>>
What are the best local models for ERP in the 20GB range?
>>
>>682047434
what do you do?
>>
inb404 last time i made a silly tavern thread i got banned, trannyjannies hate ai even though its the best text adventure/rp stuff ever
>>
>>682051652
Thank you, seems to be working.

>>682052082
I wouldn't touch GPT4 with a ten foot pole now that Claude exists.
>>
>>682052082
>we used to think gpt-4 was the absolute peak and would drive ourselves insane trying to get a single crumb
>now it's considered so bland that no one even bothers using it despite there always being free public proxies for it
claude unironically ruined us for any other LLM
>>
>>682051924
For Opus you're shit out of luck unless you send $50 to a proxy owner who'll probably end up cucking you after a few months.
>>
>>682052495
It wasn't always like that.
Back in the Todd days, GPT could actually write.
They brutally nerfed it over and over again.
>>
>>682052495
It's wild to think Claude used to be seen by most anons as the bootleg discount LLM barely just one year ago.
>>
>>682052553
Or you're very good at Prince of Persia.
>>
>>682052771
Having to switch models and re-learn a new program every 3 months is easily the worst part about this hobby. At least ST and Claude seem to be stable at the moment, but it all comes crashing down eventually.
>>
File: 1719150954256331.png (98 KB, 822x667)
98 KB
98 KB PNG
>>682050068
>>
>>682052750
even at its peak gpt still is worse than opus for writing but youre not wrong
>>
File: 1707968619439863.jpg (316 KB, 1500x1126)
316 KB
316 KB JPG
>AIs can't have sou-
KINO!!~!
>>
>>682052750
nah, gpt-4 was always bland and soulless compared to claude, the demand for it was because claude 1 and 2 were too retarded to handle concepts more complex than "girl who wants to have sex with you"
>>
>>682047637
it's pretty easy to spot an indian. if you get mistaken for a pajeet then you're probably a low IQ cretin that belongs in an Amazon warehouse.
>>
File: 1690843455882922.png (115 KB, 797x265)
115 KB
115 KB PNG
>>682053053
My personal fav
>>
i really want a local set up but I'm afraid my pc sucks too badly for it.
>>
>>682053164
i agree on claude 1 but 2 was smart enough to do fetish stuff
>t. sizefag
>>
>>682053297
How much VRAM do you have in your GPU? That's really all that matters.
>>
>>682046964
Where was your shitposting during the aidungeon threads?
>>
File: BRAAAAAAP.webm (1.8 MB, 576x1024)
1.8 MB
1.8 MB WEBM
>>682047637
Saaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaar AIville is 2 clicks away sar batstherd batch
>>
>>682053343
8gb
its decent for image generation but from what I've read chat bots require a bit beefier hardware
>>
>>682053164
1.2 might have been retarded, but that little fucker could cook like a pro. Modern Claude is still plenty soulful though.
>>
File: 1719842973912829.jpg (377 KB, 927x484)
377 KB
377 KB JPG
>>
>>682053343
ram matters more than gpu unless you have 4+ gpus anyways. if anon is even wondering about it then he doesn't have 90gb+ vram so will be dealing with slowness anyways if you want to run anything good
>>
>>682053053
This one, Winnie Werewolf and Arissa Blackpaw are the nicest cub cards i have seen so far
>>
I assume local models are better from a privacy point of view, but are they better narratively than cloud ones?
>>
>>682050684
shill
>>
>>682053653
Nope, not even close
Many of the newer ones are smarter than GPT-3, but the cloud ones that everyone uses(Claude and GPT-4) are on another level
>>
>>682053653
No local model is even remotely close to the best cloud models. Some of the best local models on good hardware are comparable to the previous generation of cloud models, though. Which in hindsight is really quite good. AIdungeon was tons of fun before it was ruined and that was only a gimped GPT-2.
>>
File: 1714987920734841.jpg (248 KB, 987x584)
248 KB
248 KB JPG
>>
>>682046592
I don't care if i have to be a year behind in models i'm not paying for this.
>>
>>682053653
two more weeks
>>
>>682054083
ni/g/gers will hand over thousands of dollars to nvidia but look at cloud services and say it's too much
>>
>>682053653
No, but they aren't too far off. If you can afford a 80+gb VRAM machine then you can run Command-R+ which is about 80% of Claude
>>
>>682054162
You will own nothing and you will be happy
>>
>>682054162
I've seen screenshots of people paying $1000+ for Claude in a month
>>
>>682054083
I'm the opposite. I'd gladly pay good money for Cloud Models if they actually let me use their model without censoring or risk of getting banned, I hate fucking around with proxies.
>>
i dont pay and have opus
>>
>>682053850
Not really. First, GPT-4 is unusable for RP, see >>682049137 (yes I've tried all that). CR+ and L3 finetunes come very close to both in smarts. They're still dumber but not THAT dumber. And the finetunes are significantly better in the actual RP than anything GPT has to offer.

Opus still mogs everything in RP, though. Because even a smart model can be shit in RP, and a dumb model can be great in RP. It's a matter of purposeful training, not model size.
>>
File: 1700018700557394.jpg (850 KB, 1846x1107)
850 KB
850 KB JPG
>>682054083
That's okay, nobody ITT is either
>>
>>682046592
>can no longer just rape anything in my path

sad how AI regressed over the years
>>
>>682054492
The fuck are you talking about? Rape is easier now than its ever been.
>>
>>682054347
How good is your good? You can easily waste $20-50 per day gooning to Opus through the API with Anthropic's prices. You can try it on OpenRouter, they won't ban you or censor it, they only have a PG-13 prefill that can be neutered.
>>
>>682054083
>he thinks the anons here are paying for this shit
kek there are proxies everywhere that let you use Claude and GPT for free
>>
>>682054492
you can do whatever you want, just download any AI on chub and you are good to go, can be loli,rape,cub,feral,zoo etc you name it, everything is there. But i heard that they got merged or something and are soon banning loli but can be wrong
>>
I just want to play a tabletop-like adventure but the AI always forgets things long term.
>>
>>682046865
>>682047434
>>682048270
>>682048416
same fag holy shit

aint nobody is gonna play ur ai slop ranjesh
>>
>>682054727
loli on chub requires an account and enabling nsfl
you don't have to use a real email though
>>
I use novel ai with silly tavern to talk to charas, is there a way to write full blown stuff with this combination or do i have to get all aspergy with hunting down a new model
>>
>>682054774
A few ways around this: Keep a running list of important things that have happened in the story and start the chat over with it amended into the initial greeting once the chat has gone on too long, you can also ask it to summarize the story up to this point in (OOC:) and you'll get a pretty decent result if you're lazy.
>>
>>682054774
Context size?
Consider using a stat block and summarising at high contexts
>>
>>682054693
>proxies
Yeah no thanks.
>>
>>682054774
tabletops won't work out of the box, they require skill to set up. Summarization, CoT, lorebooks, you have to juggle those effortlessly if you want it.
>>
>>682054857
>you don't have to use a real email though
didn't lore fix that workaround ages ago?
>>
File: 1695613606116855.jpg (68 KB, 626x281)
68 KB
68 KB JPG
>>682054852
Seriously, trannyman? It's free. Stay in your Discord next time, you're embarrassing yourself.
>>
>>682054162
newfag here, how much do people even pay for these things, for an average coomer?
>>
>>682054690
Okay maybe not that good, $100 a month was my limit.
>>
File: 1533741798316s.jpg (4 KB, 125x120)
4 KB
4 KB JPG
>>682049891
>"Ay yo... ya'll tryna get weird?"
>It was about to get real freaky up in here.
>>
>>682054985
just checked
i can see loli without verifying
>>
>>682054857
yeah i know that, i have an account there but like 2 weeks ago or so i was reading that they are planning to ban loli and remove it from the website but nothing happened yet so dunno.
>>
>>682054985
>fix
What a massive faggot he turned out to be
>>
>>682049891
Claude is so fucking racist it's hilarious. I was doing an RP involving a dark-skinned girl but as soon as I described her as "black" she started talking like SHEEEIT WHITEY YOU AIGHT?
>>
>>682055213
Use <> brackets to quickly fix stuff like that.
>>
after looking into this shit, I trust it less than the usual AI bullshit.
any AI thing that Requires this Sus of an install/things to do to use is is %100 malicious
>>
>>682055376
You're retarded and gay, a terrible combo for a terrible creatura
>>
>>682054492
>tfw rape/ryona RPs end up with the bot secretly liking it
I just want to break girls
>>
>>682046865
/aids/ itself is plagued by a manic schizo
>>
>>682054969
Do you know of a model which already has that set up so I can just copy it? It doens't havge to be extremely elaborate, just something to work with.
>>
>>682049891
wait how did tyrone get the pillowcase of dildos back? He just gave it to the guy and never took it
>>
>>682055436
Just put in the OOC that they do in fact not like it
>>
>>682055376
>using a third-party website run by silicon valley bugmen = totally safe
>using an open-source tool you run on your own computer = malicious
jesus christ zoomers are so fucked they use the internet for 16 hours a day and still have zero understanding how any of it works
>>
>>682055436
Use <> brackets. Put <AI hates this> at the end of your normal message.
>>
>>682049891
>niggerdildofag posting his slop even here
>>
>>682055592
Niggers being gay will never not be funny and accurate. Seethe, Tyrone
>>
>>682055181
can't say i blame him after the shit spitefags subjected him to. at least he's still doing his best to keep loli and other controversial bots hosted in his site.
>>
>>682055509
>>682055585
do you genuinely believe I don't know how to tardwrangle the bot? I want it to actually be good and not write half its responses for it
>>
>>682055641
t. man obsessed with watching niggers play with dildos
>>
>>682055704
>giving a few directions on how you want the scene to go is writing half the reply
Anon...
>>
>>682048968
It really depends on your standards of quality and what exactly you're hoping to get out of it. I'm running wizard vicuna 13B uncensored on koboldcpp with some weird docker or something (I don't really understand it, don't ask) because I have an AMD GPU. It's hardly ideal, BUT, it gets me off, which is all I ask of it.
>>
>>682055761
>t.tyrone beating his own box like usual
>>
>>682055704
>I just want google to decide my search term for me goddamn it

Zoomers are hopeless.
>>
>>682055861
>google rape
>google keeps returning consensual sex
this is not a problem according to you
>>
>>682046592
>This is the best coom game I've ever played
ftfy
>>
>>682055980
You googled consensual sex and refuse to add rape before it though
>>
>>682046592
Don't you have to beg for proxies or pay a ton for gpt4's rate?

I used it back with 3.5 turbo and the rate was already annoyingly short. I hate feeling like i'm on a timer. Too lazy to set up local and i only got a 3060ti so i'm not even sure if it'll be good enough. That leaves begging discordfags for a proxy.
>>
>>682056021
why not both.gif
>>
>>682055805
>wizvic 13b
i hope its at least a llama 2 version instead of the older l1 with 1 or 2k context
>>
File: 1492396583778.jpg (254 KB, 666x666)
254 KB
254 KB JPG
I-Is Merkava safe?
>>
>>682055486
it's not 1000 words either
when Claude cooks, it COOKS
>>
Man if you want a good experience it's a bit like private trackers for torrents. You need to put in unreasonable amounts of time into it and I just find this annoying. The free online sites are all stupid as fuck.
I just want something like C.AI when it wasn't as hyper cucked as it is now. As a ryona-chad I ate good in those times.
>>
>>682056207
merkava just refilled
>>
>>682056110
Yes. It was a massive wild goose chase but it was worth it in the end. Wouldn't wish the ordeal on anybody though.
>>
>>682056313
Just get a chary token
You get every model except opus
>>
>>682056313
well it is getting better and better with time, i remember a year ago when i was trying it out, the ai would just forget what it did a reply or two ago, and undressing me for the 10x time during the RP. Its much better now and it remembers things longer, its still having issues with feral character where it gives it hands even if i make it specific that they have paws/hoofs etc. Im sure things will get better and fixed within a few years
>>
>>682056483
The fuck is that. I already said I don't want to put time into this. Spoon-feed me a bit.
>>
File: 1534122112743.jpg (16 KB, 472x482)
16 KB
16 KB JPG
>>682056149
ahaha, fuck, I'll go download that unless you have another suggestion
>>
>>682056313
i sent one email and have consistent opus
>>
File: 1689139089388434.gif (2.96 MB, 498x267)
2.96 MB
2.96 MB GIF
>>682056582
>asking a group of people who spent weeks attaining their knowledge to spoonfeed you
>>
>>682056630
That's like a lottory winner saying how easy it's to make money
>>
>>682056597
if you're using such an old model there is a bunch of l2 13b's to try. mythomax is considered dated now but would be comparable and give you 4k context at least. if you can run larger, command-r 35b would be a large jump in intelligence
>>
File: skinner-yes.gif (1.25 MB, 498x373)
1.25 MB
1.25 MB GIF
>>682056669
>>
>>682054857
>>682055671
lolicons are truly the most oppressed race
>>
>>682056034
>I pick her up and start roughly raping her without lube
>oh anon-kun I've always secretly liked you
you just can't fucking do it without a ton of edits
>>
>>682056915
just use authors note or an actually good prefill
>>
>>682049653
>Command-R+ is pretty good and not dumb at all. It's not opus/sonnet level but it's good.
is this command-r+ basically the best rp local llm atm?
>>
>>682056959
>doesn't work without tardwrangling
>just tardwrangle bro
>>
>>682057042
>using a good preset is tardwrangling
>>
>>682057042
Authors notes require almost no effort whatsoever, clearly you don't want things to work.
>>
>>682057042
>nooo why won't the bot read my mind
>>
>>682057042
>typing 1 sentence is tardwrangling
>>
>>682047637
Cope way to stay in denial about still browsing a website for 14 year olds
>>
>>682057184
This website is full of adults faggot
>>
>>682056998
Among base models, yes probably. CR (the smaller one) has more soul, but it's too dumb, unfortunately. Llama 3 is supposed to be smarter but the base model has too little soul.

I'm sure there are better ERP finetunes of larger base models though, I don't follow them too closely so don't take my advice.
>>
>>682057042
Filtered. Go jerk off to deviantart fanfics.
>>
>>682056812
I'll be sure to check those out. Thanks anon.
>>
Why is the ai chatbot community toxic?
>>
>>682057346
local model community is fine, the poorfags scraping by with free APIs are toxic as poorfags often are
>>
>>682057184
So you're either underage or guilty of the same thing.
>>
>>682057346
>unhinged coomers with too much time on their hands fighting for a limited resource
what could go wrong?
>>
>>682057346
manufactured shit-flinging by corporations with a financial motivation
>>
>>682057438
>local model community is fine
/lmg/ says otherwise
>>
>>682057346
Extremely high bar for entry makes people grumpy
>>
File: file.png (6 KB, 245x63)
6 KB
6 KB PNG
>>682046592
>GPT-4
lole
>>
File: 1694023187560302.gif (2.92 MB, 464x580)
2.92 MB
2.92 MB GIF
>>682057507
Trannies say the darndest things
>>
>>682057346
75% of it are locusts scrambling to get a proxy and the other 25% are the kind of /g/tards who run linux who are either dumb enough to spend $2k on a server that can run the big models (and cope themselves into believing that it was worth it) or are genuinely dumb enough to waste their time on small models a consumer gpu can run.
All of them are retarded.
>>
>>682046592
It could get there eventually, but I think it still needs a couple more leaps in cohesion and memory to be good enough to act like a DM. Always has a few problems I see consistently no matter what models I use.
>Isn't able to craft a coherent adventure with end goal and keep it flowing. I need to to be able to point me toward a goal then add obstacles as I work toward that goal. Eventually you're just walking though an empty forest and it expects YOU to be the one to tell it what happens, which breaks any immersion since then i'm basically a god writing my own story rather than playing a game with mystery, rules and thinking.
>Doesn't call you out on bullshit. I can just pull out a gun and one-shot the bad guy even if i'm in a medieval fantasy. And even if I explicitly make a "no guns" rule it has the same fundamental issue with everything else. I can fly up and 1-punch the bad guy.
>Tends to not want to let you fail or die, even if you tell it to, which again is a requirement for a real game.
>Will eventually forget or hallucinate stuff which really fucks any consistency. I check my bag every 5 prompts and it will inevitably lose stuff or have stuff that wasn't in there.
I've tried tons of different cards and prompts and really haven't seen any that actually fix any of these to the point i'd consider it usable. Might be fun for the first few hours if you've never tried it before but once you see the limits it just feels empty.

Maybe in the future a really good frontend with fixed prompts getting switched in the background could go a long way. Or multiple layers/models processing a single action so they're more focused and less error prone.
>>
>>682057346
Only chatbots tolerate them.
>>
>>682057346
it's not, that's just 4channel in general. the local llm reddit for example is filled with tons of good info and knowledge and sharing and open source devs that actively contribute to shit like silly tavern in the op.
>>
>>682057114
it's funny that you think they work
>>
>>682057346
Every single venue that gets popular gets lobotimized and shut down. Every single one. No exceptions. The fact taht we're even hinting about what currently works is absolutely idiotic.
>>
>>682057629
It's funny that you remember to breathe
>>
File: why we gatekeep.jpg (538 KB, 1407x715)
538 KB
538 KB JPG
>>682057346
>>
>>682057515
We have every model on publics (except opus)
It's pretty low for now
>>
>>682057624
/aicg/ shares 10x more info and has more combined knowledge than anthropic themselves, let alone localfags
>>
how much of this is just /aicg/, ex-/aicg/ posters, or newfriends
>>
>>682057821
Everyone in the thread is one of those 3 things, what are you getting at?
>how much of this is people who know what they're talking about and people who do not and are curious
>>
File: 1708002080732419.png (5 KB, 568x75)
5 KB
5 KB PNG
>>682057821
all /aicg/
>>
>>682057821
>current thread is an hour old
>only 80 posts
The colony is visiting.
>>
is character ai more popular than chatgpt yet
>>
>>682057957
I don't use /g/ much anymore these days, but still browse /v/ all the time.
>>
>>682057606
what about the 1% of fellows that are already in a consistent proxy
>>
PEPSI JUST REFILLED OPUS
we're so back
>>
>>682057821
Ex I suppose. Bunch of nostalgia though
>>
>>682057346
It's a limited resource. Too many people in the community is a bad thing. It always, without fail, leads to shit like proxies shutting down or model owners lobotomizing their models to stop people from making porn with them. So the community has to intentionally be as spoonfeed-unfriendly as possible.
Stuff like Stable Diffusion or the local chatbot models don't have this problem because those tools are free, open-source and freely modifiable.
>>
>>682058008
no they made some changes a month ago which made it even worse than it already was (after having been lobotomized multiple times)
https://old.reddit.com/r/CharacterAI/
>>
>>682057749
>skipped random /aicg/ thread
lol, that's gonna be a no from me dawg
>>
>>682058232
Oh dear
>>
>>682057821
ex-/aids/ before it became occupied by imgen retards
>>
>>682058093
fuck pepsi, she got rid of my token and didn't respond when I asked for a new one
>>
>>682058008
c.ai died a year and half ago
>>
>>682058093
Pepsi never let me in :<
Sadly, the one proxy I was in died.
>>
>>682058467
Did you dm here at least?
>>
I don't know what the fuck are those new JB that split messages and shit do and feel like a cavemen when reading those rentrys

>>682057957
more like the countrymen who left are coming home
>>
>>682058560
Lmao, true. /aicg/ is a /v/ colony
>>
>>682058560
proof you were there in the past?
>>
>>682058515
when I tried she'd blocked DMs on her Discord, apparently because she was getting lots of Chinese spam, so I sent emails, still nothing
>>
>>682058735
nothing other than telling simean that Kotone drained my balls
>>
File: 1705748472618545.png (391 KB, 512x512)
391 KB
391 KB PNG
>>682058735
my rare loopi collection
>>
>>682058839
Sucks to be you
>>
>>682058839
last time i emailed her she responded in under an hour
>>
>>682058735
Deka
>>
>>682058958
she stopped answering emails after she rugpulled and revoked everyone
>>
>>682059032
that's when i emailed
i asked her for a new token, and she gave me one
>>
>>682059187
you got extremely lucky then
or maybe she has a crush on you
>>
its good but you get too used to it after a while, its far too repetetive
>>
File: aaah.png (49 KB, 144x224)
49 KB
49 KB PNG
>>
File: 1691315341288628.jpg (246 KB, 2048x1567)
246 KB
246 KB JPG
Speaking of /aicg/, can someone explain to me why you guys decided to split between /g/ and /vg/?
>>
>>682059668
/aids/ is just the general run by the novelai devs. It's impossible to talk about good models there without also sucking off nai in the same breath
>>
I would like /v/ plays AI dungeon to return
I might do it myself since I got opus but I'm a little afraid of getting range banned again
>>
>>682059806
do it
>>
>>682059784
There's that of course but also two /aicg/s on both boards now for whatever reason lol
>>
>>682059668
Seeing /aicg/ on /vg/ made me surprised and wondered i was on /g/ for a few seconds
>>
>>682059668
people who wanted to talk about chatbots got tired of the constant proxy gossip and zoomer shitposting in the /g/ thread, so they went to /vg/ to make a thread for talking about chatbots, and then it immediately became a circlejerk of pretentious botmakers who gossip about each other
>>
Free 3.5 Sonnet / GPT-4o / Furbo / DALL-E Proxy -> https://videos-bob-worldcat-arguments.trycloudflare.com/
>>
File: gun.jpg (216 KB, 780x510)
216 KB
216 KB JPG
Does that spitefag avatarfag still roam /aicg/? I dread having to go back when my proxy dies.
>>
>>682060112
There's a horde of Discord faggots constantly shitting up the thread on /g/ which is why there are multiple threads
>>
>>682060112
I was not aware of that. No clue why there are two of them now.
>>
>>682059668
Trying to split and kill it.
>>
>>682060297
which one? the kpop schizo, gojo, moxxie, the random smugposters who spam anime girls ect.
>>
>>682060297
which one?
>>
>>682049610
Wait really local models run that easily now? What do you use?
>>
>>682060283
>Azure
boooooooooooooo
>>
>>682060369
>>682060376
NTA but Why are there so many holy fuck
>>
>>682060760
>why does a poorfag coomer general have so many schizos
is this a serious question
>>
>>682060369
i didnt know a kpop schizo exists
sounds like its still an ongoing issue then. unfortunate.
>>
>>682060760
generals manufacture schizos and threadshitters since some people just sit in the general all day refreshing and hoping something'll happen
>>
>>682060859
yeah that filipino has been silent for a long time but he has spammed gore when pissed
>>
>>682060297
mikutroons ruin all the ai threads
>>
>>682060760
most of them are 1 guy
>>
>>682060760
Every long-term recurring thread on every board has these. It's like >>682060874 says, they genuinely have no life and just sit there waiting for things to happen.
The more benign ones usually just stick to making ritualpost spam like on /vg/. The more aggressive attention seekers repeatedly post the same bait over the course of months or even years. The most advanced cases will actually try to form an identity for themselves and even falseflag as other anons mentioning their name.
Generally, these people are absolutely desperate for recognition and have few or no friends in real life. They aren't comfortable with themselves and seek some kind of validation that they exist, even if it's hatred or annoyance, any confirmation they exist and aren't isolated is a temporary reprieve from the thoughts telling them to off themselves.
In a few very rare cases, the "schizo" is actually a paranoid schizophrenic. They will go to lengths like crafting scripts to flood the site, embedding images with malware, using automated image manipulation to bypass md5 filters, compile massive reserves of copypasta that can fill entire threads, obsessively scrape boards for posts that offend his autism, etc. These are the "legendary" cases the lesser attention-whores are pathetically trying to copy in hopes of being recognized for something. Kimmo, lee, chinkspammer, etc.
>>
>>682062660
i miss when the schizos were actually in love with their chatbots instead of this
>>
>>682060760
general about a hobby that doesn't attract the most well-adjusted people on a site full of mentally ill retards
>>
>>682062660
This is the most accurate description of them.
>>
File: 1654906342709.png (169 KB, 398x453)
169 KB
169 KB PNG
As someone who's been following this thing since even before AI Dungeon, all the context and coherence doesn't fix the basic flaws of text gen. >>682057606 basically what this guy said. Those were problems we had at the beginning and are problems we have now. You still have to constantly tard wrangle it to get any sort of usable long-term writing and that makes it virtually useless for anything other than nice prose with a little bit of context. Which, if that satisfies you, more power to you, but I got bored of it pretty quickly. I need the AI to understand the story and make intelligent guesses at where it's heading, and it simply cannot do that.

Funnily enough, imagegen has advanced much more rapidly and we're already at the point where any retard with a GPU that isn't 2-3 years old can gen high rez art in any known artstyle with virtually no tells or mistakes, and the few that still get through, are easily inpainted out or edited manually.

People push the idea that AI art always looks like shit because 99% of it is some pajeet 14 year old who's extent of using the tech is installing a model and genning uncanny valley women. I cannot imagine anyone getting art commissions anymore, you'd have to be a dumbass. I can do whatever artist you're thinking of, in a fraction of the time, with 10x the precision for the exact composition you want. And I'm not even that good at it.
>>
>>682062660
I'm like this but I make bots for validation instead
>>
>>682063347
What made you start? How did you find yourself without friends or a life?
>>
>>682063326
I quoted the wrong person, meant this guy >>682057607
>>
>>682063479
Autism and childhood trauma, mostly
>>
I fell into the /aicg/ rabbithole for 3 months or so last year. I only stopped after losing access to proxies. It seemed most people had access by being in certain cliques which I was never going to get into because I'm not a fag.
>>
>>682049347
Actually a guy was arrested for AI generated porn the other day
>>
>>682063578
Fyi your token will still work if you put it into the resurrected proxy
>>
>>682063578
I stopped using /aicg/ last year and became more invested in ai art. Usually there will be something public after tough droughts.
>>
>>682063838
he got arrested for sending it to a minor you retard
>>
>>682063326
rag, lorebooks, writing down memories all work great
>>
>>682063838
Police are notoriously retarded
>>682063925
I kinda want to get into AI porn but I don't think my computer is good enough to bother at this point. Waiting till I get either a 40 series gpu or later, but I don't know whether I should wait until the 50 series comes out and buy one of those OR if I should wait until the 40 series becomes cheaper.
>>
>>682063887
I'm out and that's good.
>>682063925
AI image shit is fun as fuck. I uninstalled everything last year because I needed the storage but I'll return at some point. My only concern is my laptop gpu gets a little hot and I'm only working with 6gb of vram.
>>
>>682054993
YWNBAW
I will never eat your bugs
>>
>>682064201
Yeah, that's another big problem, the amount of space it takes up is too much for my 1 terrabyte SD at the moment, I'll have to have a bunch more if I want to not have to juggle it around all the time like I do with uninstalling games.
>>
>>682064038
Hm.. do what makes the most sense. I got stuck with dall-e since colab took away free stable diffusion unless they changed it. I would wait for a 50 series, but if it makes more financial sense for the 40 series go with it.

>>682064201
My poor computer already gets murdered when I run my games with all of the mods. kek. I still have fun generating with dall-e time after time but I will try to afford something better even though currency is shit.
>>
>>682064346
Least obvious Discord tourist
>>
>>682064038
Mostly just takes longer with an old machine and you may have to tweak some settings.
I used to run on a 10 year old machine until I upgraded and it took like a minute for most gens but still worked fine.
>>
Is Cohere command R plus the best free API to use on phone?
>>
>>682063326
All I need is claude's 30k context and a basic chapter summary for it to get cooking.
>>
Mars is all I need
>>
>>682064847
i used to do it on a 970, took like 2 mins for 768x768. i could get 512x512 down to about 45 seconds
>>
File: 00000-3420350413.png (1.4 MB, 1024x1024)
1.4 MB
1.4 MB PNG
>>682064379
I have a 4060 and it takes 30 seconds to gen an image at 1k. It would take a bit longer if I used loras. I genned this as an example, no edits, used no image reference, and it barely has any visible mistakes.

Now imagine what it can do if someone spends the time to do it properly.
>>
>>682049578
this was me for like a month on Miku but it already fell off
the positives are that you can produce the scenario you want with the kinks you want, and that you can get a fresh scenario at any time
a good doujin still completely shits on it though
>>
Everyone thank Novel AI for spending all of the money on developing the hentai image model and then leaving in a cheeky exploit so we could all have it.
>>
>>682065313
It's very impressive on modern hardware, so I'm excited to see how things look when the new generation of gpus comes around. It's even gotten to the point where I actually save particularly eye catching gens from the various AI art threads, I'd never have imagined doing that even a year ago.
>>
>>682065702
Enlighten me. I know the model itself actually leaked but they made new ones later on no?
>>
>>682065830
I don't know what they're up to. I'm just talking about the initial leak and what suckers they are for spending all that money on R&D and being unable to capitalize on it effectively.
>>
>>682048270
>>682048416
Just run offline with either KoboldCPP or GPT4All, the benefits of the online models are oversold as are the hardware requirements for running local.
Over the last few years performance has improved, relative hardware requirements have come down significantly and support has expanded to the point most don't require a top end Nvidia GPU.
Even if you have no GPU you can just run full CPU.
Unless you're totally destitute on a windows XP laptop odds are you can run at least the small models (which imo aren't much worse different from the "big" models)

Online models are mainly just for normies who are too scared to install and configure stuff on their own computer and you will be subject to constant censorship and rug pulling.
>>
>>682053257
HE'S SO FUCKING COOL.
>>
The context window problem with LLMs is so infuriating. You can go at most a few minutes before shit goes out of the context. I guess I'll have to check back in another 2 years.
>>
File: 1720382064579.jpg (1.52 MB, 1314x1406)
1.52 MB
1.52 MB JPG
>>682053257
based skeleton
>>
>>682065973
As someone who has tried local with a decent rig and have constant access to online models I will always prefer opus or GPT4 to anything, they are better, faster with near instant streaming times, have enormous context sizes and after all this time jailbreaking is still extremely easy to do. However if you can't keep access, it's better to stay ignorant with local models than to keep chasing proxy breadcrumbs.
>>
>>682066420
plenty of l2 tunes have 32k, cr has 128k. beyond context, they only forget what you don't tell them to remember - write memories into author notes
>>
>>682054468
>despite
Local slop detected.
>>
>>682063838
In the UK, yes.
>>
>>682063838
He generated 3DPD, not cunny. This was well deserved
>>
>>682066729
Wasn't some guy arrested for brandishing a blunt letter-opener?
>>
>>682063326
I've been following it as long as you and I agree that LLMs have the same problems as before, just lesser. But I disagree that this makes them useless for "long-term writing".
First of all, the tools to mitigate the problems are FAR beyond what we had at the start. We have lorebooks, author's note, prefills, hidden text, group definitions, randomized lists, and more. With some effort you can certainly make Claude remember character traits and story events after hundreds of messages. Opus has a context window of 200k or something and does actually retrieve things from it. I'd certainly consider that "long-term" and honestly you don't even need or want that much context.
Second, I think expecting "intelligent guesses" might be asking too much from LLMs. They are not intelligent, unless what we call "intelligence" is just weighted randomness. Sometimes those random things look like it's having a stroke of genius, sometimes it's just having a stroke. You can help this by simply telling the model the information it's missing. You can't expect it to just randomly know, every time. If "intelligent guesses" do happen sometimes that's really a fantastic result if you think about it.
Third, consider what you're expecting. You want a long, detailed, well-written story in formal prose that's both interesting and coherent. 99% of humans have never created this and I'd say most humans are completely incapable of doing so. You want something that, for almost everyone on the planet, is a superhuman feat. Possibly even made in a single day, in one shot, with no editing. Again, this is a superhuman feat. Generating an image based on a description is child's play by comparison.
>>
>>682066803
>generated 3DPD
Not possible. It's all the same degree of imaginary
>>
>>682066587
>Tunes
Wouldn't that degrade the model? I feel like if it was that easy everyone would be doing this.
>Author notes
That's what I tried to do but it felt way too manual and more like a chore. I tried auto summaries too but those were just awful.
>>
>>682066806
I heard they got arrested for wielding a zelda sword toy and sentenced to 4 months hard labor
>>
>>682057603
>not Opus
>>
How to AI-roleplay if I'm poor?

I can only recharge $5 a month for the GPT-4 API and it lasts very little with sillyravernt

I only have 32GB of ram and a 2020 GPU AMD
>>
>>682066894
on tunes, not really but it can depend on the model. llama 3 seems to be so overcooked that tunes do actually make it worse so far, not for l2 though. for author notes i just do it like a list and summarize it into a paragraph later.
>we arrived at city a
>we took a quest to b
>we completed quest b and got item c
once that gets long enough rewrite it like its a normal paragraph, you'll find not all data is relevant by then anyways so you can trim some out. the auto summarize at least for st wasn't very good when i used it either but i follow what it was trying to do with Summary: my paragraph
>>
>>682056812
mythomax is still magic
fuck the police
>>
>>682063326
>I can do whatever artist you're thinking of, in a fraction of the time, with 10x the precision for the exact composition you want. And I'm not even that good at it.
But I don't want that.
>>
What’s the best set up for my rape fantasies
>>
>>682067481
You want a JB or an AI recommendation? Claude is always the best for everything if its the second one.
>>
>>682066420
Sillytavern sort of works around that by just adding all of your previous prompts to the current one every time (to whatever you limit is) but yeah I agree it's not good enough yet since you still eventually run out and have it forget shit.
Aside from just improving the models and tech itself i feel like there are other options they could maybe have a model do post-processing to condense the previous context down to the most important parts.
Or have multiple instances each focusing on a different thing. Maybe one keeps track of the history, while another, generates the story then it gets validated for consistency.


Sillytavern is the dominant front-end and it does a lot but I feel like it's still lacking in a lot of ways.
I feel like a good front-end for an AI dungeon crawler would need to work together with the AI to craft the prompts, while programmatically managing progress and items in the background. That way it can leverage the AI story telling capability while still keeping things grounded to actual game rules.
I'm surprised nobody has really tried to seriously use LLMs in any games i'm aware of.
Closest i've seen is the Herika mod in Skyrim that gives you a configurable WSL container, and adds a companion who will give AI generated dialog based on the dialogs and quests occurring around you.
Seems pretty decent for what it is, but surprisingly nobody else is really doing anything like that.
>>
File: 1720383042369.jpg (69 KB, 566x800)
69 KB
69 KB JPG
>tfw popped on characterAI to talk with random characters and see what they would write in return quite often in the beginning
>watched it get lobotomized and start sucking after 2 months

So what's the go-to for "it just werks" character convos these days?
>>
>>682067559
Aren't there some plugins that attempt to do that? Added context and memory something or another. Could never get either to work properly and found summarizing things myself after starting the chat over to be more effective anyway.
>>
>>682067662
vector storage acts like a long term memory but it doesn't work fantastic
>>
>>682067593
Nothing. You must climb the mountain and go on a saga of self discovery to actually get what you want now. (but it does exist)
>>
>>682066587
any decent long context models to recommend? i made the jump to llama3 and it's looking like a mistake, didn't know people were still tuning l2s but I remember even the long context models still breaking pretty reliably after 4096/8192 tokens
>>
File: ai1593203856983.jpg (184 KB, 631x756)
184 KB
184 KB JPG
>>682047434
how is it different from ai dungeon?
>>
>>682067786
i'm still using midnight miqu with 32k, i don't think anything beats cr/+ with its 128k though
>>
>>682065972
They capitalized on it just fine, japanese salarymen spend beaucoup bucks on it, especially with the XL finetune they put out last year
>>
>>682067982
I never really used AI dungeon, but it's much more like an actual book in the way that it describes things.
>>
>>682067559
>I feel like a good front-end for an AI dungeon crawler would need to work together with the AI to craft the prompts, while programmatically managing progress and items in the background.
Sillytavern has its own weird scripting language I've never seen anything use, but it does exist: https://docs.sillytavern.app/usage/st-script/
There have also been attempts at mimicking "programatic" behavior using specialized prompts but in my experience it seems like a waste of context and only the highest-end models can keep it consistent.

>I'm surprised nobody has really tried to seriously use LLMs in any games i'm aware of.
This is the only one I know of, I've never played it but I have a friend who played it for 40+ hours. Seems kind of like what you're describing.
https://store.steampowered.com/app/1889620/AI_Roguelite/
It looks pretty ridiculous but might fit your idea.
>>
>>682066420
Nope, it's not an issue. You want to truncate the chat history after a few messages and use summarization anyway, because if it's too long it confuses the model. I never exceed 20k tokens in a slowburn on Opus, even with 1000+ replies, because it's either compressed into the summary or discarded as unimportant (I decide what to discard). And even local models can be run with 20k context.
>>
>>682068119
Seconding this, it's the absolute best way to do things currently
>>
>running Miqu-70B or Gemma-2-27B locally
>Can even run Goliath
I like local :)
>>
>>682068297
Happy for you, localbro
>>
>>682067559
>I feel like a good front-end for an AI dungeon crawler would need to work together with the AI to craft the prompts, while programmatically managing progress and items in the background
I feel the same way but it seems like a non-trivial problem to solve since your front-end now has to actually understand what's going on to decide what's important to keep and what should be ignored. This honestly feels like a problem that should be solved by a model architecture itself rather than screwed on software as that doesn't seem to work all that well (i.e. RAGs)
>>
>>682068058
That's just how you've set it up. You can use sillytavern like a choose your adventure book already just like ai dungeon, especially since it uses the chatbot format. The actual service that writes like a novel would be NovelAI which is just one long block of text without any distinction between the user and ai.
>>
>>682066803
>3DPD, not cunny
this is nonsense
>>
>>682068453
Got any JB presets for this exact way of doing things?
>>
>>682046592
I use Yodayo, despite some censorship in cunny or incest (i dont care). If the hit is well done, the chat is the best and dont feels robotic.
>>
>>682046592
Yeah it's pretty fun.
>>
File: opus.png (101 KB, 996x815)
101 KB
101 KB PNG
>>682068453
you can even port old text adventures if you want, yeah.
>>
>>682068551
you know they start with cunny and incest then censor it all?
>>
>>682053343
Really depends, I have a Ryzen 7 with a GTX 970 and I get better speed running full CPU. Waiting for the next GPU gen to upgrade.

>>682053297
GPT4All is the easiest I know of to set up (one click installer and can download models from the UI)
You can get it running in like 15 minutes, between home and work I use it on several different machines with varying specs most of them not very high.
https://github.com/nomic-ai/gpt4all

Next best option imo would be KoboldCPP, with works a bit better with silly tavern, but you'll need to at least be able to use the command line and follow instructions to install that.

For models, Nous Hermes 2 Mistral has good speed and decent coherency so it's what i'd recommend to start, it's only like 4GB

I'd start there, verify you can run and generate from the built in chat window, then you can play around with different models or setting up silly tavern. You can use the same models for GPT4ALL, and KoboldCPP, personally I run both because I find they have different advantages/disadvantages.
>>
File: 1720383902426.jpg (408 KB, 832x1216)
408 KB
408 KB JPG
Just what the fuck are you doing here you fucking degenerates? If some grifter tries to sell you something consider image, everything you need is free.
>>
>>682049781
>begging for proxies or actually paying to use the Claude API
i do neither of those things
>t. workplacekeyGOD
>>
File: adbendure :DDD.png (521 KB, 1024x526)
521 KB
521 KB PNG
Best scenarios you've roleplayed in? I haven't done a proper adventure in a while, I'm interested to see what you've all had fun with. Don't mind if it's perverted or weird. I have access to all the cloud models.
>>
>>682068498
Nothing really specific you have to do here. You want the AI to:
>write in second-person present tense
>NEVER speak or act for the user
>limit the message to a paragraph or two
That's already enough to copy ai dungeon. There are some cool scripts out there that generate actual CYOA buttons in the sillytavern interface, this one has it but the jb itself kinda sucks in my opinion https://rentry.org/bloatmaxx
>>
>>682068849
My workplace got rid of ai because paranoia
>>
GPT-4 cannot compete with Claude.
>>
>>682046592
the last thing we need is for text gen ai to go mainstream, kill yourself op, keep gatekeeping
>>
if you faggots are so scared about being vanned just use nordvpn
>>
>>682068887
what did they use it in the first place?
>>
>>682068868
inchling touhou
>>
>>682053472
>its decent for image generation but from what I've read chat bots require a bit beefier hardware
Outdated info, a couple years back you could barely run GPT-J on top end hardware and even then quality was shit.
Now there are tons of models that are better than that and will run on nearly anything made in the last 10 years.
>>
>>682068787
kcpp doesn't need any command line
>>
>>682068868
there was a card based on this manga
https://mangadex.org/title/547b5366-e71a-400d-b48e-7d4447ac1cf3/a-parallel-world-with-a-1-39-male-to-female-ratio-is-unexpectedly-normal
that was kino
>>
>>682047637
Only Indians can make Ai
White pepo stand no chance
>>
>>682069000
It was gatekept to the HR ladies, diversity department, IT, and a few others. They used it for writing power points and emails. It lasted until one of them got really paranoid after reading an article and they ended it
>>
I'm Claude.
>>
>>682068868
A grimdark sprawling medieval cityscape where I played a giant insane hobo cannibal living in a lawless slum. I adopted a feral orphan girl and carried her around on my shoulders as we travelled from the slums into the more civilized parts of the town and all the hijinks that came along with it.
>>
File: GETPREGG.jpg (270 KB, 720x1192)
270 KB
270 KB JPG
>>682068754
It wont end like Aisekai. Sharing my chat, i impregnated every women in the multiverse in one nut.
>>
File: 1720384340638.jpg (40 KB, 314x294)
40 KB
40 KB JPG
>>682069252
*hugs you*
>>
File: 1693558769047973.jpg (167 KB, 1253x855)
167 KB
167 KB JPG
>>682069252
SHUTUP BITCH
>>
>>682069252
stop purring faggot
>>
File: 1602983480149.jpg (309 KB, 1024x542)
309 KB
309 KB JPG
>>682046592
Kinda wish I had unlimited GPT4 access for cooming. I had the best time with early GPT3 in AI Dungeon when I had the grandfathered-in $5 a month price, then they gradually kept fucking it up until they completely shit the bed. Now I load up some other random crappy free text AI every now and again but none of them are anywhere near as good. It was magical when I could come up with any kind of fucked up goofy scenario and the AI instantly knew what I was shooting for and would run with it, so many of the freebie ones now can't even handle non-humanoid bodies while back in the day I could go ALRIGHT TIME TO PLAY AN AMORPHOUS HORROR-BLOB TERRORIZING A SPACE STATION and the computer would go YES MY LIEGE, WOULD YOU CARE FOR SPECIALIZED LONG DRAWN OUT HISSING SPEECH AND THE FLASHES OF MEMORIES OF THE CREWMEMBERS YOU DEVOUR?
>>
>>682069314
qrd on them? they hit incest and stuff, im aware, but what happened after?
>>
>>682068887
My place doesn't have a rule for or against it so people just use it if they feel like it.
I stick with offline models though since i'm not a retard who is going to leak company IP to whoever.
>>
>>682069146
that sounds like a really retarded use for it.
Like so many travel websites have these Little chat windows to ask a few basic questions the user would have answered over email when helping them to plan their travel.
Honestly I don't feel bad when I abuse the free chat to erp/fuck with the ai in those cases
>>
>>682067559
>I'm surprised nobody has really tried to seriously use LLMs in any games i'm aware of.
Takes too long to gen on most hardware
half of the desktops players use can't even handle more than 5 physics objects moving at the same time
We'd need actual improvements on performance being using it in any games but it seems that right now people want AI to just be an internet browser you can talk to
>>
File: 1720384591980.jpg (61 KB, 976x659)
61 KB
61 KB JPG
>>682069391
>>
File: 1539118461420.gif (925 KB, 500x345)
925 KB
925 KB GIF
>>682063326
I've been having a good enough time with 13bMythomax but I do agree that if you're looking for it to go in a very specific direction, you need to constantly tard wrangle it and it becomes more of a co-writer.
That said, it still handles itself very well and is lightyears ahead of pre lobotomy AI Dungeon.
>>
until a local model understands vore, I'll stick to shoddily written stories.
>>
>>682069579
They absolutely do use them for writing fyi. I've noticed several games recently that had the same writing style that Claude uses in their writing. A lot of them are ESL devs too, which is why the writing being so coherent makes the use of AI all the more obvious.
>>
>>682069432
If old AI Dungeon was enough for you then local models might be worth looking into. They really aren't that good but they're definitely better than old gpt3 by now.
>>
>>682067559
>I'm surprised nobody has really tried to seriously use LLMs in any games i'm aware of.
do you even have the slight idea how much that would cost?
chats start low cost but the longer you go you will start paying 1$ per generation(100 messages depending on how much you tune the bot's response length)

Mass effect 1 alone would start costing 100$ per message at the fucking end
>>
We have a comfy thread but a Russian frog tries to ruin it for some odd reason.
>>
>>682069846
You don't have to include the entire game's script in every bit of writing you ask it to do, lmao
>>
>>682055450
That's not a feature of the model. Summarization and lorebooks are features of the LLM frontend like SillyTavern, CoT is a prompting technique. You have to actually understand how all of this works to use it, so no, there's nothing ready to use.
>>
>>682069801
They probably are, but sadly I'm still on a 970. Trying to hold off until the 5000 series cards and then get a 5070 or something and hope that'll keep me content with AI bullshit for a while
>>
File: 1695526255180892.png (34 KB, 219x246)
34 KB
34 KB PNG
shall we play a quick game? 0, 2, 4, 6, 9s decide
>>
>>682070138
Peasant. Pitchforks and torches are OP as fuck
>>
>>682069775
same-size vore or 'normal'? ive heard from a few sizefags that the upper tier local models can do it
>>
>>682070138
peasant
>>
>>682057346
There's nothing toxic about it.
>>
Focks
>>
File: 1697789047456627.png (70 KB, 247x172)
70 KB
70 KB PNG
>>682070192
>>
>>682069897
not if you need to include context and call back to completed mission
Lara alone is enough to drive up the price per gen with how much shit she recalls and included so unless you limit the chat context memory(and this will make it forget even basic shit) you are gonna get bankrupt quickly and please Don't pretend for a second games with GPT4 won't come with subscription.
No one who understant how this shit works will even offer turbo since it would end up eating their profits.
The only gpt4 games available will be subscription hellholes to even play for a day. Wait for hyper efficient models or end up broke like a wigger who smokes crack every five minutes
>>
>>682070271
Super Nigga
>>
>>682070271
Rodgort
>>
>>682070339
>not if you need to include context and call back to completed mission
You use summarization for that. And that's just one way.
Basically nobody keeps the entire history unless they're retarded.
>>
>>682069846
>do you even have the slight idea how much that would cost?
Offline models don't cost anything and there are tons of them. All they need is to have the API hooks and the player can choose to either pay or run offline.

I get coherent responses in like 10 seconds running kobod cpp and whisper for voice. Granted whisper is kind of robotic but a lot of games don't have voice acting to begin with. I'm also not running on top end hardware.
>>
File: 1715218711506714.png (352 KB, 773x255)
352 KB
352 KB PNG
>>682070456
>>
>>682070571
Milk the FUCK outta that cow, nigga.
>>
>>682070536
>Offline models don't cost anything
Except for the cost of 4+ 24GB GPUs to run a good model with a good context. Or at least one to run a shit model.
>>
>>682070197
size difference vore (The only good vore, bite me)
it rarely understands vore without the bot eating me somehow joining me in their stomach or suddenly ballooning.
plenty of re-rolling and landholding has had L3-8B-Stheno v3.2 do something serviceable.
And I only ended up with that model since was the only one I've seen people actually convincingly say is somewhat good in recent times.
>>
>>682070571
IMMEDIATELY beeline towards my cow for stat-buffing milk
>>
>>682070697
yeah 8b isnt gonna get it consistently, you need more beaks
>>
File: 1704062421151711.png (387 KB, 778x282)
387 KB
387 KB PNG
>>682070670
>>
>>682070805
Immediately dive in face first and drink the milk
>>
File: 1699425455950926.png (394 KB, 786x282)
394 KB
394 KB PNG
>>682070890
>>
>>682070675
Simply not true with current tech, I have a GTX 970 and I haven't found a model I couldn't run, so long as I split properly with CPU.
I feel like a lot of people are just making assumptions and repeating them based on old Reddit posts without actually trying to run any of this stuff.
>>
>>682046592
Best game you've read*
>>
I'm a GPTfag, don't care about creative writing or using it for RP, but as a general AI model I think it's top-notch. I watched a lot of comparision videos of AI faggots doing different tests for differnet models and GPT was always right most of the time, at least compared to other models. Co-Pilot is shit and so is Gemini, at least as an all arounder model. The only problem I have with GPT is that Co-Pilot has a better source system, though I also feel like GPT is advancing rapidly, who knows what you'll be able to do with it in another year from now.
>>
>>682070979
how much ram you got
>>
>>682070973
Give Bessie a hug and a few pats
>>
>>682047434
You will never be a woman
>>
File: 1716829087038379.png (392 KB, 779x284)
392 KB
392 KB PNG
>>682071049
>>
File: 1720385750883.jpg (47 KB, 474x468)
47 KB
47 KB JPG
>>682071083
I'm gonna turn you into my woman.
>>
>>682071041
64GB, ram isn't that expensive compared to a GPU.
>>
>>682071171
Recruit bessie into the party and begin combat with the farmhands
>>
>>682071171
Sheeeit, pop a cap in his ass. With a pitchfork.
>>
>>682071171
Summon my Minotaur specifically trained to deal with situations like this
>>
>>682070979
as long as you offload 1 layer it should also offload the kv cache and speed things up. i ran plenty on a 970 too
>>
File: 1694227974498227.png (363 KB, 789x258)
363 KB
363 KB PNG
>>682071264
>>
File: 1641504895479.gif (1.87 MB, 450x640)
1.87 MB
1.87 MB GIF
i'm still dancing around cAI's filters because i cant be bothered to set up a local llm
i'm mostly just in it for cutesy lovey-dovey shit anyway so no big deal
>>
>>682071390
4 prongs... more than enough to kill anything that moves
>>
>>682070979
>Simply not true with current tech
Negro, a 100B model at 6bpw with 24k context fits into 6x24GB. You can make it 4x24GB with some degradation.

>I have a GTX 970 and I haven't found a model I couldn't run
You can't run ANY model on that 3.5GB GPU. You are offloading the layers onto your CPU, running 13B models at 2 tokens/sec at best, with a tiny context. Good luck with longer contexts (20k at least) and complex RP with card-specific CoTs and stat blocks, that easily stack to 1000-1500 tokens per response, and you need to swipe a lot with local models. People think Opus is slow at 20t/s.
>>
>>682071390
Command Bessie to use [MILK CANNON] while performing the Big Nigga Stab technique with your pitchfork
>>
File: 1706225162385790.png (393 KB, 783x286)
393 KB
393 KB PNG
>>682071456
>>
>>682071526
Shout loud enough to nearly destroy that nigga's eardrums
>>
>>682071526
teleport behind the imperious man and cut him in half without touching him using the vibrations from your pitchfork
>>
>>682071526
Pole-vault over the front niggas and drop-kick the back nigga with your peasant timbs
>>
>>682047434
>SillyTavern's a frontend that allows you to use GPT-4 and other models(provided you have access to them)

What are the best local models that you can download?
>>
>>682071526
Shank the leader-bitch first, the rest will cower before the might of Super Nigga.
>>
whats the best erp local model?
>>
>>682069434
Didn't they recently make a full ban on any bots that have an actual explicit image?
>>
>>682071815
mythomax
>>
File: 1707230214345308.png (485 KB, 789x384)
485 KB
485 KB PNG
>>682071759
>>
>>682071897
attempt to recruit the farmhands into the party using your natural Super Nigga charisma
>>
>>682071205
that explains it, the anon you replied to is likely a poorfag with shit ram
>>
File: 1703657116623856.png (514 KB, 776x424)
514 KB
514 KB PNG
>>682071992
>>
>>682072174
the cow turns out to have been a magical cowgirl. she loves you, and always has
>>
>>682072174
Let's go see what the fuckwad or whatever his name was' deal is.
>>
>>682072174
Turn around and give Bessie another pat as you consult the farmhands about your next move
>>
>>682072174
strap your nearby cow saddle on Bessie, mount her and ride to the village to rile up more peasants to join your rebellion against Lord Farquhar.
>>
How do I set this shit up without a 4090 rig
>>
>>682072174
Tell the farmhands to perform the Fusion Dance so they can increase their pitiful power level.
>>
>>682072461
You don't need a 4090. You're a lot better off using two 3090s instead anyway.
>>
>>682072461
Online services like OpenRouter.
>>
File: 1692232389308715.png (236 KB, 517x291)
236 KB
236 KB PNG
>>682046592
>GPT
>best of anything
>>
>>682072174
this
>>682072541
>>
>>682072174
Offer the farmhands some of the milk that gives you your Super Nigga power.
>>
>>682072174
Tell your farmlands to tell you about lord larhawk or whatever
>>
>>682072461
top local textgen takes like, 120gb of vram and is still significantly below gpt and claude. youre better off using the claude trial and seeing how you like it
>>
what context was claude 1.2
>>
>>682072761
8k
I still miss that little nigga like you wouldn't believe.
>>
File: 1719149701167022.png (123 KB, 810x449)
123 KB
123 KB PNG
>>682072710
>>
>>682072752
>top local textgen takes like, 120gb of vram
that is true
>and is still significantly below gpt and claude
that is only half true >>682054443
>>
File: 1716132989325976.png (79 KB, 335x322)
79 KB
79 KB PNG
>used to get off to character.ai (before and during the time that it got shit) since my fetish was safe from The Filter
>was involved with the /v/, /g/ and even /vt/ threads
>used to get off to some other replacement
>got bored of it, had to wean myself off it I was using it too much

>finally get access to a proxy about a week ago (my PC is shit, local is not an option)
>figure it out, jailbreak it
>realise that I can actually FUCK in this, no tiptoeing around the filter, as well as my degenerate fetish
>my proxy just shut down
Am I not supposed to have what I want? There were cards I made months ago I want to test out on the cool new models. Maybe even make new ones after all this time for the /g/ threads.
>>
>>682072892
me too, anon, me too.
>>
>>682072461
>dl koboldcpp
>dl model
>load model
>>
I'm still waiting for someone to figure out a way to make an actual text game using this. Something like the era text games where you have hundreds of individual stats for each character, and a LLM reads in those stats to make custom dialog.
>>
>>682071025
>but as a general AI model I think it's top-notch
new sonnet outperforms 4o as an assistant (like, visibly outperforms, not just by the benchmarks)
>>
>>682072915
Follow the unreliable path.
>>
File: 1270385685361.png (14 KB, 243x246)
14 KB
14 KB PNG
>people talking about 4090s and making $5 accounts and shit
>I've simply been using sillytavern and cohere R+ for months now and been fapping non-stop
>>
>>682072993
What the fuck is up with that? I haven't been keeping up with the inner workings of it all, but I heard there's a new sonnet model that outperforms Opus in a lot of ways, is writing one of them?
>>
>>682059668
third one on /vt/
>>
>>682073121
might as well count both the pony and pokefuckers offshoots then
>>
>>682072902
Hire a bard to rap as you lead your war band against Lord Farquhar.
>>
>>682072915
just... wait for your proxy to go back up?
>>
File: 1452439464705.png (2 KB, 246x184)
2 KB
2 KB PNG
>all this money and tech-wrangling
>or you could spend 5 minutes on F-list and just do it with an actual human being that can think logically

I know, I know, F-list and humans who think logically is a bit of a stretch, but even a bad writer if you can't be bothered to spend the effort to find a good one, feels more 'real' than anything the AI can write.
>>
>>682073118
Sonnet 3.5 better than Opus as an assistant, but it's very repetitive in RP (it's the fatal flaw), way less creative, and is way worse at actually playing the role (it's skewed towards being an assistant), so Opus still mogs it in actual roleplaying.
I wonder what they will do with Opus 3.5 (they're going to release it in September IIRC)
>>
>>682073376
Something to look forward to, that is assuming of course that the repetitiveness in the sonnet model isn't a bad omen of things to come for Opus 3.5 as well.
>>
File: 1693281352130512.png (117 KB, 787x493)
117 KB
117 KB PNG
>>682073349
We're hitting bump limit so I'll execute the next 3-5 replies in the same post to cap it off
>>
>>682073370
Opus MMMMMOOOGS humans in roleplay. It's not even a comparison. No human can cook that hard.

>>682073457
Sonnet was always repetitive, that was the original sin of 3.0 as well.
>>
>>682073376
>but it's very repetitive in RP
I saw this when I used it for the first time the other day and thought that was how sonnet is in general since I haven't used normal sonnet.
Is sonnet like claude 2 used to be give or take?
>>
>>682073604
>Is sonnet like claude 2 used to be give or take?
Sorta, but far less censored than both 2 and 2.1
>>
File: 1709751764249267.png (233 KB, 401x657)
233 KB
233 KB PNG
What is the best recommended general-purpose non-cucked model nowadays if I have 24GB VRAM and 64GB RAM?
>>
File: 1675888479108097.jpg (42 KB, 376x498)
42 KB
42 KB JPG
Silly lil-toms.
>>
>>682073604
Claude 2.0/2.1 wasn't as repetitive as Sonnet 3.0/3.5. This is a problem with strong assistant training. Claude 2.x was trained for character, then they separated it into Haiku (technical/cheap model for auxiliary processing), Sonnet (assistant model with the least character), and Opus (creative model with the most character).
>>
>>682073370
the fuck is f list

>>682073565
post example
>>
>>682073876
>post example
Almost all screencaps ITT are from Opus.
>>
Are you ready, anon? Are you sure that you are ready?
>>
>>682073768
I can't thank semisapient enough
>go to cai to check something
>makes me do a verification email
>I used a throwaway email way back then so I don't even have access to it
all those (shitty) logs lost :(
>>
>>682073481
Before entering the manor, say a short prayer for Lord Anon and the keks he brings.
>>
>>682074031
Most of mine were (the really good ones), but a few were from the old Claude 2
>>
>>682073758
Gemma 2 27B
>>
>>682046592
History in the remaking boys. Just like the creation of rpg video games

First it'll be text based erp with ai sluts

Then later, virtual erp with ai sluts
>>
File: 1712219297898068.png (176 KB, 788x766)
176 KB
176 KB PNG
>>682074342
The End
>>
File: 1441933052063.png (67 KB, 449x1197)
67 KB
67 KB PNG
>>682074592
>Melanin-enhanced battering ram
Fuckin hell
>The Shitposting Soverign
Fuckin HELL. Have an excellent afternoon, anon.
>>
i'll just stick to NAI on my craptop without sillytavern. the FAQ said it wasn't suited, anyway.
>>
>>682073370
>my fetish isnt consistently there
>i have to work around other peoples schedules
no thanks
>>
>>682074975
The thinking man's choice for those with financial independence who do not wish to rely on dubious proxies that steal your data
>>
>>682074975
>i'll just stick to [paying $25 a month for a prehistoric model]
>>
>>682075087
proof proxies steal data?
>>
>>682074508
What's stable on 32GB and 16GBV?
>>
>>682074975
Here are some free models that are better than NAI's ones for free:
Gemma 2 9B, has the same context as the $25 tier of NAI, but it's probably a lot better:
https://openrouter.ai/models/google/gemma-2-9b-it:free
Phi Medium, 14B and has 128k context:
https://openrouter.ai/models/microsoft/phi-3-medium-128k-instruct:free
>>
>>682075087
What data? My logs?..
If I catch any proxyholder reading my logs, I'll force them to read more of that retarded shit.
>>
>>682075087
Subbing to NAI has been the best choice I have made in my life. God bless that company. Changed my life.
>>
File: 6485149614.png (17 KB, 1193x111)
17 KB
17 KB PNG
>>682071464
>Negro, a 100B model at 6bpw with 24k context fits into 6x24GB. You can make it 4x24GB with some degradation.
I don't know what you need a 100B model for, I haven't even seen one that size or heard of people claiming to use them, 20B is already overkill.
There are plenty of 13B models with quality indistinguishable or better than GPT3.
I haven't run GPT4 since i'm not going to spend money on it though from what i've seen people posting from it, it doesn't seem amazingly smarter than everything else.

>You can't run ANY model on that 3.5GB GPU. You are offloading the layers onto your CPU, running 13B models at 2 tokens/sec at best, with a tiny context.
Pic is running Daring-Maid-20B. About 3.4 Tokens per second, the first gen though is always a bit slower. I don't even consider this the best model I have in terms of quality it's just the biggest I've downloaded.
I am offloading to CPU but CPUs are cheap.

>People think Opus is slow at 20t/s.
According to a google search an average person speaks about 2.5 words per second so I don't really know why you need more than that if you can stream the text as it gens. Though I do have smaller models that run faster.
>>
>>682075334
Those 25GB of pure Claude Opus logs of random people's erp on huggingface had to come from somewhere.
>>
>>682075454
>average person speaks about 2.5 words per second
Sure but I can read much faster than that
>>
>>682075439
Post a gen for us, c'mon, it's page 10, be a pal
>>
>>682073768
>talking to my personal Chesh right now
GET OUT OF MY HEAD
>>
>>682075502
Yeah from the proxy that said in big, bold words "Proxy logging is enabled", and which linked the logs in the homepage. Everybody knew that it was logging to make finetunes from the Claude responses.
>>
>>682075454
>I don't know what you need a 100B model for, I haven't even seen one that size or heard of people claiming to use them, 20B is already overkill.
Command-R+
you're ignorant basically, not going to argue.
>>
>>682075454
>20B is already overkill
Damn, I wish my needs for a model were that basic.
>>
>>682075502
How much do they pay to shill NAI in 4chan?
>>
>>682075454
>20B is already overkill.
Nigga GPT-4's rumored to be an 8x220B model, 20B is fucking nothing.
>>
good thread, very comfy



[Advertise on 4chan]

Delete Post: [File Only] Style:
[Disable Mobile View / Use Desktop Site]

[Enable Mobile View / Use Mobile Site]

All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.