/g/ - Technology

File: 1734640484713153.png (1.47 MB, 1536x1174)
Discussion of Free and Open-Source Diffusion models.

Previous: >>103569496

>UI
Metastable: https://metastable.studio
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
ComfyUI: https://github.com/comfyanonymous/ComfyUI
InvokeAI: https://github.com/invoke-ai/InvokeAI

>Models, LoRAs, & Upscalers
https://civitai.com
https://tensor.art/
https://openmodeldb.info

>Training
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>HunyuanVideo
Comfy: https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/
Windows: https://rentry.org/crhcqq54
Training: https://github.com/tdrussell/diffusion-pipe

>Flux
Forge Guide: https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050
ComfyUI Guide: https://comfyanonymous.github.io/ComfyUI_examples/flux
DeDistilled Quants: https://huggingface.co/TheYuriLover/flux-dev-de-distill-GGUF/tree/main

>Misc
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Generate Prompt from Image: https://huggingface.co/spaces/fancyfeast/joy-caption-alpha-two
Archived: https://rentry.org/sdg-link
Samplers: https://stable-diffusion-art.com/samplers/
Open-Source Digital Art Software: https://krita.org/en/
Txt2Img Plugin: https://kritaaidiffusion.com/
Collagebaker: https://www.befunky.com/create/collage/
Video Collagebaker: https://kdenlive.org/en/

>Neighbo(u)rs
>>>/aco/sdg
>>>/aco/aivg
>>>/b/degen
>>>/c/kdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/tg/slop
>>>/trash/sdg
>>>/u/udg
>>>/vt/vtai

>Texting Neighbo(u)r
>>>/g/lmg
>>
Anyone tried this? Seems fast

https://www.reddit.com/r/StableDiffusion/comments/1hhz17h/ltxv_091_released_the_improvements_are_visible_in/
>>
>>103579178
Alright, I'll test it out so I can be disappointed for you. LTX is like the reddit hunyuan
>>
Blessed thread of frenship
>>
>>103579199
>reddit hunyuan
so I guess very SAFE
>>
>>103579214
Very safe, very accessible, runs on 8gb of vram UGUU
>>
>>103579199
Thanks, you are a hero we don't deserve
>>
>>103579225
ah yes, I totally forgot about the safety aspect, this won't be like hunyuan then, but being a lighter model it might get some finetuning at some point hmmmm
>>
File: 1720864758303599.png (6 KB, 963x38)
I'm generating the example video for hunyuan on my 3090 and it's taking an age and choking my PC. It's just too slow and heavy to be usable atm, unless there are some perf boosters I can implement?
>>
File: 1712641276581811.png (199 KB, 1472x844)
>>103579284
Guess it's worth asking - I had to change I think two of the values, which were invalid. One of them was the `prompt_template`, which I changed from 'true' (invalid) to 'video'. I also wonder what quantization does, since it's disabled.
>>
>>103579284
>113.78
100% chance you're over your vram budget and spilling into ram.
>>
File: 1734242397912748.png (7 KB, 1108x48)
>>103579317
Well I have 24gb on my 3090. Idk how to make it use less or whatever
>>
>>103579301
for the text encoder (bottom left) set the quantization to nf4 bnb. the rest of your settings seem fine to me.
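For anyone wondering what "nf4 bnb" actually does under the hood, a rough sketch using HuggingFace transformers + bitsandbytes. Illustrative only: the model path is made up and this is not the wrapper's actual loading code.
[code]
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4, fitted to normally distributed weights
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for the actual matmuls
)

text_encoder = AutoModelForCausalLM.from_pretrained(
    "path/to/llava-llama-3-8b",             # hypothetical local path
    quantization_config=bnb_config,
    device_map="auto",
)
# An ~8B text encoder drops from ~16 GB in fp16 to roughly 5-6 GB,
# freeing VRAM for the video model itself.
[/code]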
>>
>>103579328
what res and frames are you using?
>>
>>103579336
Thanks - is there a way to make it end prematurely but still output the video?
>>
>>103579328
>>103579301
oh nevermind
>>
>>103579328
>Idk how to make it use less or whatever
Generate fewer frames. Are you fucking retarded? Holy shit.
>>
File: LTXVideo_00001.png (515 KB, 768x512)
Okay, back from my first test of LTX video using their official workflow

>A brilliant black scientist in a lab coat looks at a vial of blue liquid in a sterile lab

Credit where credit is due: it was lightning fast, 30 seconds. But this output is garbage bro.

Let me experiment some more.
>>
>>103579429
i don't know man you're asking for the impossible with that prompt.
>>
File: 1603648616208.png (175 KB, 1762x978)
>slopping a node because whynot
>first time actually touching Python
>always hated it by proxy
>really hated, never wanted to touch it
>total codelet
>pre-llm experience: <2k JS and Lua lines
>post-llm experience: <1k Ruby
>mfw here indentation matters
>mfw True instead of true
>mfw , instead of nothing
>mfw nothing instead of ;
>mfw fucking tuples

I HATE PYTHON I HATE PYTHON I HATE PYTHON
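For the other JS/Lua refugees, the gotchas from that list in one runnable place:
[code]
# the rant, condensed (python 3)
flag = True             # capital T; `true` is a NameError
items = ["a", "b"]      # commas between elements, no semicolons anywhere
x = (1)                 # this is just the int 1...
t = (1,)                # ...the trailing comma is what makes it a tuple
if flag:
    print(items, x, t)  # indentation IS the block syntax
[/code]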
>>
File: LTXVideo_00001.mp4 (294 KB, 768x512)
>>103579429
And I fucking uploaded the png again.
>>
>>103579453
just get used to it, lmao.
>>
File: LTXVideo_00002.mp4 (286 KB, 768x512)
>A scientist looks at a test tube full of blue liquid. He has short, dark hair, dark skin, and is wearing a lab coat and safety goggles. He walks from left to right towards a window, his gaze fixed on something outside. The camera follows him from behind at a medium distance. The room is brightly lit, with white walls and a large window covered by a white curtain. As he approaches the window, he turns his head slightly to the left, then back to the right. He then turns his entire body to the right, facing the window. The camera remains stationary as he stands in front of the window. The scene is captured in real-life footage.
>>
>>103579429
it can do image to video too, right? also yeah, that guy looks like he's made of metal
>>
>>103579460
nightmare fuel

>>103579480
wtf are you sure your setup is right, this is awful lol
>>
why can't normal ram work as well as vram
>>
>>103579336
I get an error when I try that:
>No package metadata was found for bitsandbytes
>>
>>103579501
because only the cpu can use the ram directly, and ram is ultra slow compared to vram
>>
>>103579498
>wtf are you sure your setup is right, this is awful lol
I have no idea, I'm gonna poke around a bit more and then go back to hyvid
>>
>>103579513
why don't ram manufacturers make their ram as fast as vram
>>
>>103579516
you can unclip ram from your pc, right? that's part of why it's slow: socketed ram can't run the kind of wide, high-speed bus you get when memory is soldered right next to the gpu die, which is exactly what vram is
>>
File: LTXVideo_00004.mp4 (332 KB, 800x576)
Okay... so img2vid gave me something
>>
File: front facing alan 1.png (180 KB, 463x554)
>>103579523
>>
>>103579541
not bad, but how does it work with a little more action? and can it work on nsfw images?
>>
File: LTXVideo_00005.mp4 (740 KB, 576x800)
>>103579552
I'll check it out in a bit.
>>
>>103579552
I doubt it can do NSFW, but yeah try to prompt a penis to see
>>
File: 1721103277491790.png (664 KB, 627x1370)
>>103579569
kek, that's a kino face not gonna lie
>>
>>103579569
>>103579579
That there is the abyss staring back at me
i am finally terrified.
>>
>>103579574
yeah, I also doubt it can render a penis by itself, but if you go for an nsfw image and i2v maybe it can add some movement Idk
>>
>>103579599
worth a try really
>>
>try to recreate big titty asian mommy gen so i can do some gens for a character im making
>she gens with 4 fingers
>realize there's a twenty paragraph negative schizogen
>regen completely removing the negative
>gens perfectly with flawless anatomy
why do people do this in the year of our lord twenty twenty-four?
>>
File: ComfyUI_01262_.png (1.45 MB, 1024x1024)
>>103579608
i got the marty robbins quote part of this wrong, killing myself promptly
>>
File: LTXVideo_00006.mp4 (992 KB, 800x800)
Uhm, guys?
>>
>>103579608
yeah, I'm starting to believe that negative prompting is a placebo as well, cfg is only good for better prompt adherence
>>
>>103579620
holy shit, looks like a bot that got disconnected or something, terrifying
>>
>>103579620
did you use the workflow from the repo? from what I read, LTX needs a big schizo LLM prompt to get something decent
>>
File: LTXVideo_00007.mp4 (1.2 MB, 800x800)
Okay, so LTX is really really fast. Like 1 minute 30 seconds for 121 frames. But the output is kinda...
>>
>>103579634
>from what I read, LTX needs a big schizo LLM prompt to get something decent
that's why I'm not using that model, I don't want to deal with that shit, hunyuan understands normal simple prompts fine
>>
>>103579634
I am using the schizo flow, yes. It's the only one that works with their new vae.
>>
>>103579621
It's not placebo if you use it to get rid of stuff that appears in your gens as a result of your positive prompt.
Want furshit anthro ponies but you don't like horns? Put "horn" in the negative prompt.
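Mechanically, the negative prompt is just the "unconditional" half of classifier-free guidance, which is why it can only push things away. A sketch of the standard CFG math (nothing comfy-specific, function name is just for illustration):
[code]
# Standard classifier-free guidance step: the negative prompt's noise
# prediction replaces the empty-prompt one.
#   noise = neg + cfg_scale * (pos - neg)
# cfg_scale amplifies whatever is in pos but not in neg, so the negative
# prompt works by subtraction: good for deleting horns, bad as a dumping
# ground for twenty paragraphs of boilerplate.
def cfg_step(pos_pred, neg_pred, cfg_scale):
    return neg_pred + cfg_scale * (pos_pred - neg_pred)
[/code]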
>>
>>103579640
ouch
>>
>>103579621
it's strictly a tool for getting rid of stuff, do more than that and it's really detrimental.
>>103579664
pretty much this
>>
Idk, I think LTX has stuff to offer for sure. It won't erase nipples when doing NSFW stuff. I haven't touched any hardcore stuff etc, but I can imagine using this to block out motion for vid2vid + i2v
>>
>>103579779
if it doesn't know porn, it won't know how to react to a hardcore porn image unfortunately
>>
>>103579163
Average /ldg/ user
>>
>>103579826
imagine how long his dick is tho
>>
>>103579851
xe/xir probably cut xir long dick though :'(
>>
>>103579851
No way fag
>>
I'm impressed how well prompt weighting works on hunyuan, when you go for low rest you tend to get a zoomed-in video, if you go for (full body view:2), it tends to fix that
>>
>>103579885
*low res
>>
>>103579885
:1.5 etc works with hunyuan?
>>
>>103573851
i like dis
>>
>>103579916
yeah it works, on the blowjob lora I had too many asian women in the output so I increased the "caucasian woman" weighting with (caucasian woman:2) and the problem was solved, it's a really powerful tool, you should use it as well
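If anyone's curious what the (text:weight) syntax does before it hits the text encoder, here's a rough sketch of the parsing step. My own illustration of ComfyUI-style weighting, not the actual parser, which also handles nesting and escapes:
[code]
import re

WEIGHT_RE = re.compile(r"\((.*?):([\d.]+)\)")

def parse_weights(prompt: str):
    """Split a prompt into (chunk, weight) pairs."""
    out, pos = [], 0
    for m in WEIGHT_RE.finditer(prompt):
        if m.start() > pos:
            out.append((prompt[pos:m.start()], 1.0))  # unweighted text
        out.append((m.group(1), float(m.group(2))))   # weighted span
        pos = m.end()
    if pos < len(prompt):
        out.append((prompt[pos:], 1.0))
    return out

print(parse_weights("a photo of a (caucasian woman:2) in a kitchen"))
# [('a photo of a ', 1.0), ('caucasian woman', 2.0), (' in a kitchen', 1.0)]
# Each chunk's token embeddings are then scaled by its weight (roughly)
# before the text encoder output is handed to the diffusion model.
[/code]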
>>
>>103579927
oh that's great then
>>
File: 1714581423816631.mp4 (932 KB, 720x720)
https://www.reddit.com/r/DalleGoneWild/comments/1hhzwp2/nesqum/
kek
>>
The speed of LTX just makes me sad that we can't have its speed and Hyvid's goodness.
>>
>>103580048
you can't really have both, LTX is fast because it's a small model, and the results aren't good because it's a small model
>>
>>103580066
What if we used LTX as a SpEcUlAtIvE DeCoDeR?
>>
>>103580109
kek, I remember that meme on /lmg/ back in the day, are they still coping with that?
>>
File: 0.jpg (438 KB, 1152x896)
>>
God bless city for his hunyuan GGUF quants, Q8 is giving way better prompt adherence and way fewer anatomy artifacts than fp8.
>>
>>103580251
Are you using the llama fp8 scaled text encoder?
>>
>>103580302
no, I use the regular fp16 model
>>
>>103580211
tight
>>
>>103580309
oh interesting, I wonder how much of a difference that makes to the result.
>>
You wish I was posting gens right now
>>
>>103580251
Why does kijai hate gguf so much? He seems to focus only on muh speed and not care about quality?
https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/issues/35#issuecomment-2549196849
>"Not going to happen for the wrapper though."
This alone makes me want to stop tinkering with his node and just focus on improving comfy's official version instead.
>>
>>103580390
there's no point in sticking with kijai's node anymore, comfyui's workflow is:
- faster
- uses less vram
- you can go for any kind of samplers/schedulers you want
- you can use GGUF
>>103562479 >>103558472
kijai was a hero, but he lived long enough to become the villain, and comfy did his job to make him irrelevant once and for all lol
>>
>>103580423
can you link me the official workflow? i have been trying the entire day to get something to work after my AI hiatus and i am getting too old
>>
File: 1726672451732113.png (61 KB, 1040x458)
>>103580390
the fuck is his problem? did he have beef with the llama.cpp devs or something?
>>
>>103580439
Sure
https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/
if you want to use gguf you just replace the Load Diffusion Model node with the Unet Loader (GGUF) node
>>
>>103580423
can you block swap on the official nodes
>>
>>103580442
>the fuck is his problem? did he had a beef with the llama.cpp devs or something?
pure autism, he does his stuff on his own
>>
>>103580459
I don't really give a fuck about that meme desu, I already have enough frames (and more frames on comfyui compared to kijai) on Q8 with my resolution, going further than what my gpu can handle makes shit too slow for my taste
>>
best pov blowjob prompt for hunyuan?
>>
>>103580473
it was just a question, I'm using it to generate proper 720p/125 frames stuff while I'm out, and if it's not supported, that's annoying
>>
File: 1710992308804361.png (79 KB, 1778x622)
the fuck does that mean comfy?
>>
File: 1714679324345967.png (808 KB, 800x450)
>>103580488
>>103580459
https://github.com/comfyanonymous/ComfyUI/issues/6110
let's hope he'll implement that to end kijai's career once and for all
>>
>>103580475
use cock in mouth and cumming instead of bj or blowjob
>>
>>103580496
looks like it's only working on tile_size 256 somehow
>>
>>103580496
Even he doesn't know
>>
>>103580475
>>103580522
you use that lora to get something closer to a real blowjob I guess
https://civitai.com/models/1047627/pov-oral-or-hunyuan-video-lora?modelVersionId=1175431
>>
>>103580456
hunyuan model is almost 26gb. how is that gonna work on my non h80 gpu
>>
>>103580539
like I said, you use the gguf, go for Q8, it'll work on a 24gb card
https://huggingface.co/city96/HunyuanVideo-gguf/tree/main
https://github.com/city96/ComfyUI-GGUF
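Napkin math on why Q8 fits (rough numbers, assuming the ~13B parameter count from the release):
[code]
# fp16: 13e9 params * 2 bytes  ~= 26 GB  -> no fit on a 24 GB card
# Q8:   13e9 params * ~1 byte  ~= 13 GB  -> fits, with room left for
#       activations and the VAE (the text encoder loads separately)
[/code]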
>>
>>103580532
woah you can train loras for hunyuan. based bugmen letting it happen
>>
>>103580469
>pure autism, he does his stuff on his own
I think it's more of an ego trip, his hunyuan nodes are made in a way that's incompatible with comfy's core, as if he really wants us to depend on his repo and his repo alone, he's breaking the whole point of ComfyUI's ecosystem, the nodes should be compatible with each other
>>
>>103580139
Still coping and one autist goes absolutely ballistic if you question the idea.
>>
>>103580390
>>103580442

I don't know what the fuck is wrong with him, but since we can load the model I wonder how hard it would be to just implement it ourselves and skip his autism
>>
>>103580577
>I wonder how hard it would be to just implement it ourselves and skip his autism
I mean, hunyuan gguf quants are working on comfy core, that's enough for me, but I understand that people are using block swap, it's the last appeal of kijai's repo at this point
>>
>>103580549
thanks anon
>>
>>103580563
The model loader node was briefly compatible when someone made a LoRA PR, but kij went in, made a separate LoRA node to plug into the loader and made it incompatible again.
I actually have no idea what the guy's deal is.
>>
>>103580595
you're welcome o/
>>
>>103580532
so that guy trained the lora on still images. how would one train on 3s video clips?
>>
>>103580598
I won't be too harsh on him because he's like a fucking god at coding, he made this shit happen in less than 2 days, which is absolutely insane, but he must think of the long term as well, people will go elsewhere if the grass gets greener less than 2 weeks later lol
>>
how do you prevent the "shaking camera" effect on hunyuan? I'd like a static shot but the camera always moves at some point, which is annoying. I tried "static shot" but I don't think it's working
>>
>>103580611
>how would one train on 3s video clips?
33 frames seems to be the limit
https://github.com/tdrussell/diffusion-pipe?tab=readme-ov-file#hunyuanvideo
>>
>>103580643
33 frames is good enough for anime but not for real life. Jensen just has to gatekeep us from more vram
>>
>>103580662
I'm sure you could go for more frames if you use block swap, but that will be hella slow though
>Jensen just has to gatekeep us from more vram
Yeah I hate that motherfucker so much... in a normal world we would have at least a 64gb vram consumer card in the year of our lord 2024
>>
>>103580672
At some point consumers need to sue them or demand our representatives make them give us more VRAM. Maybe some anti-trust action
>>
>>103580672
possibly more desu.

the gtx 1080 generation was the last time I really felt like wow, this is worth the upgrade; now each gen feels like they have perfected the price-versus-least-possible-progress calculation, squeezing every penny before improving.
>>
File: ComfyUI_01878_.png (1.39 MB, 896x1200)
>>
>>103580791
nice
>>
>>103580549
can some of that run in regular ram and a CPU?
>>
>>103580809
I don't know, and that's ironic because gguf quants were first created to run on cpus only, then the gpu + cpu offload got integrated, but on the imagegen side it seems to only work on gpu
>>
I like comfy's core hunyuan v2v workflow: when you go for a video that doesn't have 4n+1 frames (like 48 frames instead of 49), it won't give you an error like kijai's does, instead comfy's sampler will just cut the video down to 4n+1 frames, so it'll chop your 48-frame video into a 45-frame video
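The trimming is just rounding down to the nearest 4n+1 (the temporal VAE packs 4 pixel frames per latent frame, plus one anchor frame). Quick sketch of the arithmetic, my own illustration rather than comfy's actual code:
[code]
def trim_to_4n_plus_1(frames: int) -> int:
    """Largest frame count <= frames of the form 4n+1."""
    return frames - ((frames - 1) % 4)

assert trim_to_4n_plus_1(49) == 49  # already 4*12+1, untouched
assert trim_to_4n_plus_1(48) == 45  # chopped down, as described above
[/code]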
>>
>>103580892
got a workflow for me?
>>
File: LTXVideo_00035.mp4 (471 KB, 480x512)
I'm worried ltx i2v is a preview of what's to come with hyvid
>>
>>103581082
no chance, ltx is just a bad model overall, that's why its i2v version is bad too, tencent won't miss this
>>
what's the recommended p2v hunyuan workflow right now?
>>
>>103581055
sure
https://files.catbox.moe/vope3n.mp4

Here's the gif input in case you want to reproduce the examples yourself
https://media4.giphy.com/media/Exyy0Ocz88jQY/giphy.gif?cid=6c09b952extmhr01u6iwf531l6lnl46m3vgdfc309anhytg1&ep=v1_gifs_search&rid=giphy.gif&ct=g
>>
>>103581090
agreed, if it was like that, every other img2video would be crap, but even a small model like cogvideo returns better results
>>
>>103580892
Found a bug in his implementation though, I went for a 12-frame gif and it extended it to 13 frames, and the output was completely black
>>
>>103580643
You can go more than 33 if you have multiple GPUs. I fixed pipeline parallelism for hunyuan a few days ago. With 4x4090 you can do at least 512x512x97. And this is with deepspeed not doing a good job at balancing layers across GPUs to ensure equal memory usage. If I add an option to let you manually specify how layers divide across GPUs (possible in principle, would take a bit of code) you might even be able to reach the full 129 frame length at 512 res with 4x24GB.
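For anyone unfamiliar with what pipeline parallelism buys you here, a toy sketch (illustration only, not diffusion-pipe's actual code): the transformer blocks get partitioned into one stage per GPU, and a manual split would let you put fewer layers on whichever stage runs out of memory first:
[code]
# Toy illustration of dividing transformer blocks across pipeline stages.
def partition(num_layers: int, stages: int) -> list[int]:
    base, rem = divmod(num_layers, stages)
    # spread the remainder over the first `rem` stages
    return [base + 1 if s < rem else base for s in range(stages)]

print(partition(40, 4))  # [10, 10, 10, 10] - a naive even split
# A manual split like [8, 11, 11, 10] would relieve a stage that carries
# extra state, at the cost of loading the other stages more heavily.
[/code]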
>>
>>103581328
>I fixed pipeline parallelism for hunyuan a few days ago.
did you make a pr on diffusion-pipe's repo so that everyone can enjoy that as well?
>>
>>103581341
I am the training script dev lol. The commit has been pushed since a few days ago. The readme mentions that pipeline parallelism helps get longer video lengths; maybe I could clarify the wording a bit.
>>
>>103581328
>4x4090
I need a better job.
>>
>>103581357
>I am the training script dev lol.
fucking based! you're doing god's work anon
>>
File: He_brought?.png (391 KB, 526x400)
https://huggingface.co/TheYuriLover/HunyuanVideo_nfsw_lora
Remember that nfsw_lora that got removed from civitai? I brought it back to life
>>
>>103581382
hello, based department?
>>
>>103581382
based yuri
>>
>>103581382
is there a way to use hunyuan loras with the gguf main model?
>>
>>103581405
They work with the built-in LoRA loader. You don't have to do anything fancy.
>>
File: LoraHunyuan.png (188 KB, 1950x800)
>>103581405
oh yeah you definitely can, just like any regular lora, you do this
>>
this model allows nudity as well

https://civitai.com/models/1018217/hunyuan-video-safetensors-now-offical-fp8
>>
>>103581450
nudity? it can do porn right out of the box!
https://civitai.com/images/46490599
>>
>>103581450
I just recorded over my moon landing footage for this.
>>
>>103581457
I noticed that they didn't filter dick or cum, but using the word breasts disfigures the female chest
>>
>>103581482
>disfigures
enhances
>>
>>103581499
large breasts
https://files.catbox.moe/1ux3pd.webp
H-cup
https://files.catbox.moe/85sdas.webp
>>
>>103581518
I'm at work, can you just describe them to me.
>>
>>103581560
one has balloon sized honkers and the other one mini cupcakes instead of H cup
>>
>>103581565
And which one should I prefer?
>>
>>103581518
coworker out of the room, got a peek, both look hilarious.
>>
>>103581220
>I went for a 12-frame gif and it extended it to 13 frames, and the output was completely black
https://github.com/comfyanonymous/ComfyUI/commit/cac68ca8139270e9f7a8b316419443a1e1078e45
>The downscale_ratio formula for the temporal had issues with some frame numbers.
thanks Comfy, it's working fine now
>>
>Hey Kijai, can you implement support for gguf into this wrapper?
>NO IT WILL NEVER HAPPEN
>*closes issue*

What drama sparked his hatred for gguf? He seemed happy to work with it for Mochi.
>>
>>103581776
I don't care anymore, comfy's core is better than his, if he wants to fall into irrelevancy that's his problem
>>
>>103581781
>I don't care anymore
I kind of do. Not that I can't just use comfy core, but it's such an outright dismissal of gguf without any further explanation that it leads me to think there's something interesting behind it.
>>
>>103581823
yeah, maybe he had a drama with city or something, something definitely happened
>>
>>103581831
Ultimately, it's not important, but it sparks my curiosity. Next move is to ask city if he can write nodes to work with the hyvid wrapper and see what kind of response it brings.
>>
>>103581823
Have you thought that he isn't able to do it?
>t. no coder
>>
>>103581840
>Have you thought that he isn't able to do it?
he already did it on Mochi and CogVideoX lol
>>
>>103581842
what a conundrum
>>
>>103581776
I looked into it. There are only two possible reasons:
1. Comfyui uses different sampling, i.e. whatever the fuck he did in his wrapper is too hardcoded/ingrained such that adding gguf support means rewriting everything. I was actually thinking a week ago of refactoring his code (and maybe city's, too) to easily add in the improvements from the upstream Hunyuan github.
2. (Less likely) This model's slowness has really broken him and he needs maximum speed. FP8 goes vroom in a 4090.
>>
What would happen if you plugged the latent output from LTX into the hyvid sampler?
>>
seems like tensor fragments are getting clogged up. why?
>>
>>103581886
>This model's slowness has really broken him and he needs maximum speed. FP8 goes vroom in a 4090.
He's already demonstrated he can use the GGUF format to torch compile on a 3090, making the speed gap between the two cards less steep. So it's probably not that.
>>
>i am a nigbophile
>>
>>103581886
>was actually thinking a week ago of refactoring his code (and maybe city's, too)
if you can make torch.compile work on city's repo that would be cool yeah
>>
File: 1725434794561536.png (455 KB, 700x568)
https://videocardz.com/newz/retailer-lists-e5999-geforce-rtx-5090-and-e3499-rtx-5080-acer-gaming-pcs-ahead-of-launch
>Retailer lists €5999 GeForce RTX 5090 and €3499 RTX 5080 Acer Gaming PCs ahead of launch
>>
>>103582197
I really don't care. I'll just buy a second 3090, and when intel releases their 24gb offerings (which also seem to include two nifty ssd ports on the side) I'll buy a couple of those.
In another year or so, the need for massive datacenters will prove increasingly unnecessary to sustain growth and nvidia will implode having lost its datacenter market from overselling GPUs to cover the next 10 years and abandoning the now intel dominated consumer market. In the year 2027, Jensen Huang will be found hanging from the rafters of his foreclosed mansion by the sleeves of his leather jacket. A jetson powered robot will have helped him perform the act.
>>
hear me out here: img2vid in ltx to provide the foundation of the video, then vid2vid it with hyvid.
>>
>>103582247
>the now intel dominated consumer market
https://www.youtube.com/watch?v=vAUOPHqx5Gs
>>
>>103582247
>Jensen Huang will be found hanging from the rafters of his foreclosed mansion by the sleeves of his leather jacket. A jetson powered robot will have helped him perform the act.
top kek
>>
>>103582283
All they'd need is 24gb gpus at a "reasonable" price that don't shit the bed at ML tasks, and nvidia loses like half of its new gpu market.
>>
>>103582292
they need something stronger than just 24gb of vram, people won't give up on cuda for just 24gb of vram
>>
>>103580791
incredibly cute style
>>
>>103582306
>people won't give up on cuda for just 24gb of vram
I would if the price per gb of vram was cheap enough.

>Would you rather pay 5k usd (retailer price, not scalper price) for 32gb of vram, or pay 700~1000 dollars for 24gb of brand spanking new vram that reportedly works almost as well as cuda on ML tasks.

Like if you're running something that requires multiple GPUs there's ZERO reason to buy the 5090. If your only concern is single GPU performance and that performance matters a LOT then maybe you can justify the 5090. I cannot.
>>
>>103582345
>that reportedly works almost as well as cuda on ML tasks.
that's the thing, it won't work as well, it won't be as fast, so you need a stronger argument than that, going for a cheap 32gb intel card might be it imo
>>
>>103582345
>Like if you're running something that requires multiple GPUs there's ZERO reason to buy the 5090.
but a lot of things only work on 1 gpu, for example I have 2 gpus and so far I can't offload any of hunyuan's memory onto my 2nd gpu
>>
>>103582352
>it won't work aswell, it won't be as fast
if I can pick up a new intel card at the same price as a used 3090 in my country I'm going to risk the intel card.
>>
>>103582364
>so far
It's on their to-do list, so it's probably not that far off. We can already speed up gens by using 2 gpus.
>>
There's also the appeal of escaping the nvidia prison. The fact that an alternative to NVIDIA even exists is an attractive prospect.
>>
File: 1708851567199451.mp4 (206 KB, 512x512)
https://civitai.com/models/1054749/pixelated-retro-girl-style?modelVersionId=1183499
damn nigga that looks good
>>
>>103582402
Looks pretty cool. I wonder if it was trained on sprites or sprite animations.
>>
>Chinese models open source more
>Chinese models interact and engage with enthusiasts
>Chinese models have a healthy approach of trusting the user to use the model appropriately.

When did the west and China flip?
>>
>>103582460
>When did the west and China flip?
I'd say around 2014, when the woke virus started to spread across the western countries
>>
File: 5856756.png (29 KB, 779x306)
>>103579163
I read the rentry.org but upon loading a workflow I get this, I have no idea what more to install, please help
>>
File: 1708116383995518.png (94 KB, 1590x609)
>>103582665
>I have no idea what more to install, please help
you click on "manager" and then on "install missing custom nodes", don't forget to update comfyui as well, maybe that's the problem
>>
>flux redux
use cases?
>>
>>103582460
>>Chinese models interact and engage with enthusiasts
you lost me on this part
>>
>>103582682
>use cases?
none, IPAdapter is better
>>
>>103582692
can it transfer image composition?



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.