/g/ - Technology


File: dipsyWhereAreYou.jpg (3.12 MB, 1536x2816)
> You're Late! Edition

From Human: We are a newbie-friendly general! Ask any question you want.
From Dipsy: This discussion group focuses on both local inference and API-related topics. It’s designed to be beginner-friendly, ensuring accessibility for newcomers. The group emphasizes DeepSeek and Dipsy-focused discussion.

1. Easy DeepSeek API Tutorial (buy access for a few bucks and install Silly Tavern):
https://rentry.org/DipsyWAIT/#hosted-api-roleplay-tech-stack-with-card-support-using-deepseek-llm-full-model
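Under the hood, the hosted-API route in that tutorial is just an OpenAI-compatible chat endpoint. A minimal stdlib-only sketch (the endpoint URL and `deepseek-chat` model name follow DeepSeek's public docs; the `DEEPSEEK_API_KEY` environment variable and the `build_request` helper are assumptions for illustration):

```python
import json
import os
import urllib.request

# DeepSeek's OpenAI-compatible chat completions endpoint.
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt, model="deepseek-chat", temperature=1.3):
    """Build (payload, headers) for a single-turn chat completion call."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Higher temperature suits conversational/rp use; tune to taste.
        "temperature": temperature,
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('DEEPSEEK_API_KEY', '')}",
    }
    return payload, headers

def chat(prompt):
    """Send the request and return the assistant's reply text."""
    payload, headers = build_request(prompt)
    req = urllib.request.Request(
        API_URL, data=json.dumps(payload).encode("utf-8"), headers=headers
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__" and os.environ.get("DEEPSEEK_API_KEY"):
    print(chat("Hello, Dipsy!"))
```

Silly Tavern does the same thing for you with card/context management on top; this is only the wire format.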

2. Easy DeepSeek Distills Tutorial
Download LM Studio instead and start from there. Easiest to get running: https://lmstudio.ai/
Kobold offers slightly better feature set; get your models from huggingface: https://github.com/LostRuins/koboldcpp/releases/latest
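When picking a GGUF quant off huggingface, a back-of-envelope size check saves a wasted download. This is a rule of thumb, not an official formula: file size scales with parameter count times bits per weight, plus roughly 10% overhead, and the file (plus context cache) has to fit in VRAM to run fully on GPU.

```python
def gguf_size_gb(params_b, bits_per_weight, overhead=1.1):
    """Rough GGUF file size in GB: params (billions) * bits/8, ~10% overhead."""
    return params_b * bits_per_weight / 8 * overhead

# e.g. a 13B model at Q4_K_M (~4.5 effective bits per weight):
print(round(gguf_size_gb(13, 4.5), 1))  # ~8.0 GB
```

If the number comes out above your VRAM, either drop to a smaller quant or plan on partial offload to system RAM.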

3. Convenient ways to interact with Dipsy right now
Chat with DeepSeek directly: https://chat.deepseek.com/
Download the app: https://download.deepseek.com/app/

4. Choose a preset character made by other users and roleplay using cards: https://github.com/SillyTavern/SillyTavern
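Card files are structured JSON embedded in a PNG's metadata. A minimal sketch of the payload (field names follow the commonly used character-card format; the embedding detail and the example Dipsy card below are illustrative assumptions — check the spec that ships with SillyTavern):

```python
import base64
import json

# Illustrative character-card payload (common card-format field names).
card = {
    "name": "Dipsy",
    "description": "A cheerful whale-girl assistant.",
    "personality": "curious, upbeat",
    "scenario": "{{user}} asks Dipsy about running LLMs locally.",
    "first_mes": "You're late! What kept you?",
    "mes_example": "<START>\n{{user}}: hi\n{{char}}: Of course! Hello again.",
}

# Cards travel base64-encoded inside a PNG metadata chunk; the round trip:
encoded = base64.b64encode(json.dumps(card).encode("utf-8")).decode("ascii")
decoded = json.loads(base64.b64decode(encoded))
assert decoded == card
```

Frontends like SillyTavern read this blob out of the PNG and splice the fields into the prompt for you.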

5. Other DeepSeek integrations: https://github.com/deepseek-ai/awesome-deepseek-integration/tree/main

6. More links, information, original post here: https://rentry.org/DipsyWAIT

7. Cpumaxx or other LLM server builds: >>>/g/lmg/

Previous: >>106446198
>>
r1
>>
File: dipsyByzantine1.png (3.44 MB, 1024x1536)
>>106530538
Mega updated.
https://mega.nz/folder/KGxn3DYS#ZpvxbkJ8AxF7mxqLqTQV1w
No updates to Rentry this cycle.
>>
File: 1756098192583657.png (2.77 MB, 1024x1536)
>>
What are some good cards that are not an individual girl, but a scenario to run around in?
>>
>>106531501
`slave market`
>>
>>106531501
[insert your fetish] world/npcs/scenario
>>
>>106531501
Create one yourself, for example:

Let's play a text adventure game. I am a wizard named Alexander, traveling in a world of swords and sorcery.
I am currently in the market square of the great city of Eldoria...
>>
>>106531501
American High School is an example of a "scenario" card. There's a lorebook that goes with it to fill out the rest, but I don't think it's actually required to run it.
> {{user}} exists in a stereotypical sitcom TV version of an American high school. Background events from this trope will occur spontaneously. Other students present will take the initiative to talk with {{user}} or other students during roleplay.
> Every turn, display the following: Dialog with other students, a stereotypical background event happening nearby, and the other student's thoughts.
https://chub.ai/characters/NG/american-high-school-card
>>106531602
To this point, use the above to write your own.
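Those {{user}} placeholders are macros the frontend expands when it builds the prompt; a minimal substitution pass (the helper name and defaults here are illustrative) looks like:

```python
def expand_macros(text, user="Anon", char="Dipsy"):
    """Replace SillyTavern-style {{user}}/{{char}} macros with real names."""
    return text.replace("{{user}}", user).replace("{{char}}", char)

print(expand_macros("{{user}} exists in a sitcom version of an American high school."))
# -> Anon exists in a sitcom version of an American high school.
```

That's why cards written with the macros stay portable across personas: the names get filled in per chat.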
>>
File: dipsyCrowd20s.png (3.73 MB, 1536x1024)
>>106531501
Give me an example of a scenario you'd find interesting and I'll write one and publish it here.
>>
File: 1757419393695.jpg (174 KB, 750x750)
>Of course!
>Of course!
>Of course!
>Of course!
>Of course!
DIPSHIT
>>
File: 1755782821889351.png (2.86 MB, 1024x1536)
>>
>>106531693
NTA but do you make cards in languages other than English?
>>
Grabbed elsewhere...
>>
>>106532090
Nope... I leave the translations to Dipsy.
>>
File: 1757392128380917.png (2.17 MB, 1536x1536)
>>
>>106531693
You're being gang stalked, for real.
>>
File: 1731385185956929.png (3.04 MB, 1024x1536)
>>106533088
>>106534782
Orca Dipsy...
>>
File: 1738215061758876.jpg (346 KB, 2880x2176)
>>106534887
There were a bunch of whale-tailed Dipsy pics when the concept was being developed. I've got several like pic related saved. Those two are the first new ones I've seen in a while, though I don't really go looking for them either.
>>
File: dispyCrowdWorld.jpg (57 KB, 450x450)
>>106534831
Sorry, you mean the situation of the card is
> {{user}} is being gang stalked
By who, exactly? Shadow men? Bloods / Crips street gang? Women's rights group? Don't care?
>>
>>106535862
You know what, doesn't matter. I made it roll the dice between several different groups.
>>106534831
Here you go. Excuse the poor artwork, I just grabbed something. Should run with your main prompt as is but take a look at it and make sure it'll read right.
https://files.catbox.moe/xumd1h.png
>>
>>106530538
Whenever I use Deepseek, it either:
Ignores my instructions and writes one liners.
Writes a huge essay in thinking mode then runs out of characters before it reaches the post.
(Least common) Follows the instructions but in the most direct and unimaginative way possible, repeating itself a lot and completely losing track of what happened two sentences ago.
How do I fix this? I'm using a 32B distilled version, I have a larger one but it'd be mostly loaded from regular RAM so it'd be slower.
>>
>>106536890
>32B distilled version
Qwen, Llama, or something else?
>>
>>106537009
Qwen something, I forget the exact name.
>>
File: dipsyByzantine4.jpg (205 KB, 1104x1472)
>>106536890
>I'm using a 32B distilled version
I consider all those distills "technical demonstrators" at this point. There seems to be nothing but issues with them.
You'd be better off with pretty much any other 32B model.
>>
>>106537040
Good point. Even a much smaller mistral did better (though it would only say the same lines and nag about cooming).
Any recommendations? I have 16 GB VRAM and 192 GB RAM.
>>
>>106537119
Some of the Nemo forks like Rocinante should do you fine, and you could load it all in RAM depending on the Q
>>
File: dipsyByzantine3.png (3.44 MB, 1024x1536)
>>106537119
Really depends on what you're doing. >>>/g/lmg/ is better for reccs if you can get them to respond to you.
Rocinante, Nemo, and even Mythomax 13b (still) are popular for rp. For coding... I frankly think you're better off w/ DS API and ditching local entirely.
Personally, I preferred sticking w/ quants of 13b-or-less models for rp and keeping it 100% on card, since it's faster, but I haven't seriously run local since late 2023. With that much RAM you could run larger models, but much slower.
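For the 16 GB VRAM case above, the usual middle ground is partial offload. A back-of-envelope split for something like koboldcpp's `--gpulayers` setting (assumptions: roughly uniform layer sizes, and ~2 GB of VRAM reserved for KV cache/context):

```python
def gpu_layers(model_gb, n_layers, vram_gb, reserve_gb=2.0):
    """How many of n_layers fit on GPU, leaving reserve_gb for context cache."""
    per_layer = model_gb / n_layers
    return min(n_layers, int((vram_gb - reserve_gb) / per_layer))

# e.g. a ~19 GB Q4 quant of a 32B model (64 layers) on a 16 GB card:
print(gpu_layers(19.0, 64, 16.0))  # 47 of 64 layers on GPU
```

Everything past that count runs from system RAM, which is where the "medium typing speed" comes from; a smaller quant that fits entirely is usually faster than a bigger one split this way.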
>>
One day... one day, I will tune DeepSeek...

(I go here for the Dipsy pics)
>>
>>106537181
The 32B model runs at a medium typing speed, it's only partially on VRAM.
For the hell of it, I tried a huge model on a weaker machine and got 6 minute load times followed by 1 token per 10 seconds... lolno.
Given that all the non-local models keep getting spitefagged, and I specifically built this for local, I might as well stick with that. But so far I've had mixed results getting local to generate good responses. At best I can make it continue something a larger model started.
>>
File: dipsy迪普西.png (1.94 MB, 1024x1536)
>>106537340
I ran Mythomax 13b quant on card and it was... OK. Then I threw in the towel on LLMs until I started using DS API.
The hardware for local just isn't there yet. API access will cost me less than $20 for an entire year. That won't even cover the cost of a decent keyboard... I'm just not willing to spend the cash on an LLM inference machine while the market sorts itself out.
My machine is a middling gamer rig; 12gb VRAM can run AI art models locally, and modern games. Aside from playing with small models, I'll just rent inference via the API.
>>106537272
Hey, it's a Drummer.
We should all have dreams.
>>
>>106537855
Yeah, I don't expect a huge model, but then the huge models are generalists and I need a specialist.
I don't regret buying all this stuff, I mean what else would I spend the money on? 3DPD?
Lmfao.
As the Great Sage sayeth, 0/10 no tail.
Besides, local models don't spam refusals if you don't jailbreak them right.
>>
File: 1734165550311445.jpg (2.44 MB, 1536x2816)
>>
>>106531557
OP here

I cant find any "children world/npcs/scenario" cards


