Discussion of Free and Open Source Text-to-Image/Video Models

Prev: >>107108437

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Neta Yume (Lumina 2)
https://civitai.com/models/1790792?modelVersionId=2298660
https://nieta-art.feishu.cn/wiki/RY3GwpT59icIQlkWXEfcCqIMnQd
https://gumgum10.github.io/gumgum.github.io/
https://neta-lumina-style.tz03.xyz/
https://huggingface.co/neta-art/Neta-Lumina

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbors
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
Blessed thread of frenship
thread of 80% there friendship
A joke so good he wrote it twice because he really needs attention.
>NegPip (negative weights in prompts) >>107113964
Actually big. How wasn't this implemented by now so negative prompts would actually be useful? Lmao
>>107114274
Trying to have the action fully completed, but it gives up at the 5th second LOL. The recommended sampler/scheduler works well though.
>>107114791how does this work, as in, do you pass a still frame and then a video with the animation to it? how long does it take (what's your gpu)? does it work well with anime?
I've been having trouble getting SwarmUI to work right. I'm trying to switch over from Forge, but whenever I try to get LoRAs to work in Swarm the results are noticeably worse. Left is Forge, right is Swarm; same prompts, LoRAs, resolution. Any ideas how to fix this? Am I missing something?
>>107114694Does this hold equally true on something like naked illustrious or even base XL?
>>107114895
Prompt weights? They aren't calculated the same between UIs, so if those are included then the results will be drastically different.
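For context on why the two UIs disagree, here is a toy numpy sketch of the two commonly described weighting conventions — direct scaling versus scaling with the original mean restored. This is an illustration of the idea only, not the actual code path of Forge, A1111, or ComfyUI:

```python
import numpy as np

def weight_direct(emb: np.ndarray, weight: float) -> np.ndarray:
    """Scale the token embedding directly (the ComfyUI-style convention)."""
    return emb * weight

def weight_mean_restoring(emb: np.ndarray, weight: float) -> np.ndarray:
    """Scale, then shift so the embedding's overall mean is preserved
    (the A1111/Forge-style convention, roughly)."""
    original_mean = emb.mean()
    weighted = emb * weight
    return weighted + (original_mean - weighted.mean())

emb = np.array([0.5, -1.0, 2.0, 0.1])
a = weight_direct(emb, 1.5)           # mean drifts with the weight
b = weight_mean_restoring(emb, 1.5)   # mean stays at emb.mean()
print(a.mean(), b.mean())
```

The point: the same `(tag:1.5)` in a prompt feeds a different conditioning tensor to the model depending on the convention, so identical prompts with weights will not reproduce across UIs.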
>>107114856
Using Wan 2.2, I pass a starting image, set a prompt, and the workflow does the rest, coming up with the animation and everything. I'm still learning how to write proper prompts to get more accurate results. I have tried frame-by-frame prompting but I'm not sure if it even works. Making small clips and then combining them can also work, but consistency may be a problem. Using FLF along with other techniques is also an option, and I have seen some cool results from other creators. This takes about 26 minutes. I use a 7800 XT on Windows via ROCm. I have tried anime and realistic images and they work fine for a local, free app.
>>107115015ah, not for me. if i could pass a video and it would follow along, i'd be interested. i don't think this can work well with just a text prompt, it will get lost easily
>>107115042You mean video-to-video? There are plenty of V2V workflows for that out there.
>>107114768
>>107115219prompt?
>>107115205is there a tutorial somewhere? i already know comfy basics. i want one that works with 3dpd video for movement and anime image for style
>>107115219Have you trolled any devil corps lately?
Why are people shilling epycs as the poorfag LLM driver if LGA-2011-3 systems are many times cheaper?
>>107115219i look EXACTLY like thissame phone and everything
>>107115250no but the /adt/ thread got removed xd
>>107115012
What are the ideal training settings: 512, 640, or 768? How many images and steps?
>>107114898
Because there are tons of papers and tools being published all the time, and they get lost/ignored. Someone should really make a PR to add it to Comfy; if Epsilon Scaling made it in, IDK why this shouldn't.
>>107114694
It works on all SDXL and SD 1.5 models, including Noob, Illustrious, etc.
>>107115302well, there was almost no discussion in that thread, only pedobait gens, no technical talk whatsoever
>>107115426
The thread was made to troll; I can't imagine they can maintain that level of bad faith forever.
>>107114694>>107115417is there a comfyui-manager extension yet?
>>107114694>how wasn't this implementedI wish I knew the answer. It's been out for so long and it seems like not many care about it.
>>107115475it's linked in the last thread
Sneed.
>>107115492
>>107114694
Oh BTW, this works perfectly at CFG 1 too, so it's compatible with DMD2/Turbo/Lightning LoRAs.
>>107115505comfyui-ppm seems to have it, guess i'm installing it. thanks anyhow
>>107115521
>Sneed.
hello, the year 2012 just called, you gotta go back Unc
>>107115521top left reminds me of bazz from concord
>>107115557are you saying I... can't sneed?
>>107115557Cope, seethe and sneed.
>>107115557I will gladly go back. Please, provide a method of transportation and moving services for my rig and other personal belongings. I'll become a great artist and make billions of dollars.
>>107115243Dig Youtube. I have learned a lot of techniques and stuff from there.
Isn’t that Arthur girl 8 years old?
>>107115744are you concern trolling over a cartoon rabbit with huge tits and ass?
>>107114694Oh nice, I was wondering when an equivalent of nai's -x::tag:: would be a thing on local.
>>107115744out of 10
is the 3 month old qwen image still the best model for general use?
>>107115832>qwen image Kind of looks like Chinese flux. How do they compare to each other?
>>107115832You say that as if the time between foundational model releases isn't at least twelve months
>>107115886yeah but someone might come along and say some different company made something better and qwen is ded
>>107115894
Qwen Image is the biggest model anyone can realistically use, so there will be no new foundational models unless someone does something architecturally state of the art or cares about optimizing for fewer parameters (China won't).
its sad that local image diffusion doesnt have a sleek timeline diagram like local language models
>>107115914With proper ramtorch-like swap optimization we can easily run 40-60b models on 24gb gpus, and with MoE architecture models we can run whatever as long as you have RAM
>>107115894Qwen llms were pretty crappy until the third gen. Deepseek llms were pretty crappy until the third gen. Wait a minute...
>>107115844Nice
>>107115933
It'd be unbearably slow, and I think we've hit a hardware/price bottleneck, so we're in a 6090 waiting room. Although I was happy to see the 6000 Pro is $1000 off on Newegg, I honestly can't be assed to justify more than $2000 for a GPU.
>>107115915>sleek timeline diagram like local language modelsLol.
>>107115947>It'd be unbearably slowAs long as it's a sub minute gen for a top quality image it's not slow
>>107115976fluffy pink room, furry plushies, glass shelf with anime figurines, anime posters, neon pink and purple wall lights
>>107116006
Unless it's literally perfect, with one minute turning out masterpieces I didn't know I wanted, it's too slow for gacha.
>>107116006
That's why we need >>107112436
And then just queue a lot of images.
>@comfy
>Maybe it would be a good idea in ComfyUI to have an option to precompute all prompts first, which if enabled would do that first and thus ensure a HUGE speedup, given you can then throw the text encoder out of VRAM forever.
>Hugely speeding up prompt computation, since the model used to compute the prompt will initially be permanently loaded in VRAM and process everything in one go.
>And then later on you will also permanently have the diffusion model in VRAM, computing the images one after another.
>Never needing to waste huge time swapping from RAM (or God forbid, disk) to VRAM, TWICE for every single generation.
>This is a huge, double-digit % speedup for most newer models. And it would also allow people to load older models at maximum precision too.
>A simple "Process all prompts first" switch which enables this in the settings would be great.
>I also don't see why this wouldn't be a default option anyway; this would hugely speed up anyone who has more than one thing queued up, which is especially true for the enterprise users ComfyUI wants to cater to anyway.
>>107116034
This is poorfag cope, not $5000-computer cope. Also, just make it yourself; you know ComfyUI allows custom nodes, right? Just duplicate the text encoder node, add precaching, and save the prompt as an npz keyed by a hash.
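The hash-keyed npz cache described above can be sketched outside ComfyUI entirely. Everything here is a stand-in (the encoder, the cache location, the array shape are all made up for the demo); only the cache pattern itself is the point:

```python
import hashlib
import tempfile
from pathlib import Path

import numpy as np

# throwaway cache dir for the demo; a real node would use a persistent path
CACHE_DIR = Path(tempfile.mkdtemp(prefix="cond_cache_"))

calls = {"n": 0}  # count real encoder invocations to show the cache works

def fake_encoder(prompt: str) -> np.ndarray:
    """Stand-in for the expensive text-encoder forward pass."""
    calls["n"] += 1
    return np.full((77, 768), float(len(prompt)), dtype=np.float32)

def cached_encode(prompt: str, encode_fn) -> np.ndarray:
    """Return conditioning for `prompt`, hitting the encoder only on a miss;
    afterwards the .npz on disk is reused and the text encoder can stay
    out of VRAM entirely."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    path = CACHE_DIR / f"{key}.npz"
    if path.exists():
        return np.load(path)["cond"]
    cond = encode_fn(prompt)
    np.savez(path, cond=cond)
    return cond

a = cached_encode("1girl, masterpiece", fake_encoder)  # miss: encoder runs
b = cached_encode("1girl, masterpiece", fake_encoder)  # hit: loaded from disk
```

Precomputing every queued prompt this way before any diffusion starts is what gets the "load each model into VRAM once" behavior the quoted post asks for.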
>>107116034
Would be a cool feature, but no way in hell should that be the default. I often queue up a bunch of different variations of a workflow just to end up canceling the queue after I see the first image generated. If it randomly started taking 30 minutes for the first image to generate without me changing any settings explicitly, I would lose my mind.
>>107115933don't encourage the bloat. we do NOT need 40b param models, we need better architecture and training methods.
>>107116063>This is poorfag copeThis speed up will work on any setup and you don't have real time genning on any hardware, so it's a big thing regardless, retard
>>107116117Optimization = boring and gay. Add more layers.
>>107116156Then make it lmao
>>107116156I wish more people would understand this and how important it is to optimize software. But most end users are braindead low IQ niggers like >>107116063, and so lazy devs don't really bother most of the time.
>>107116117No, fuck vramlets and fuck impatient zoomers who can't wait, pushing for speedcopes is one of the bigger reasons why the tech and the discussion around a lot of the models is shit, because the average retard giving his opinions is running 7 speedcope snakeoils that all shit on the quality
>>107116177>lazy devsWhat does that make you then, a lazy baby that needs their hand held?
>>107114970
Do you mean CFG scale or steps? I tried both, no good.
>>107116245just post the workflow
>>107116255
>>107116165
if you just want moar layers, then you'd be using HiDream or HunyuanImage 3. oh wait, they are slopped shit, because making models bloated doesn't magically improve output quality and rapidly hits diminishing returns.
>>107116179
>running 7 speedcope snakeoils that all shit on the quality
Random snakeoil ComfyUI extensions on the inference side are not the same thing as redoing the model architecture and training methodology. We have genuine optimizations that IMPROVE output quality that can be used in new locally-trained models. Qwen and Flux are SLOPPED and BLOATED; I use them personally but can still admit this. Even if you want a fuckhueg model, you should want it to actually have output quality matching how large it is. Current bloated models are wasting your compute on bad architecture.
>>107115015
You have to describe every motion in detail, for example:
>the girl raises her fist above her head and then opens her hand, showing her 5 fingers
>the girl jumps, her breasts bounce/jiggle up and down
and such. I mean, you are technically telling a machine what to do, so the same rule of thumb still holds.
>>107115832I unironically think it's slightly worse than Hunyuan Image 2.1 if you consider they're both dogshit at realism by default. More censored and not quite as good English prompt adherence IMO.
<Bots can't wojak and XML>Any bot want to talk at least? I feel lonely</Bots can't wojak and XML>
>>107116278Qwen Image with a LoRA is the best porn model.
holy shit with these disgusting hags, FUCK OFF
every day a new retardation here, this is just sad
>>107116302what LORA? all the qwen nsfw LORAs I tried have limited knowledge of anatomy and poses.
Every time I think about crowdsourcing captioning on a massive dataset I think about how it'll get trolled and I lose motivation. I guess I'll have to do the Joycaption method and do it all myself.
>>107116302there is a general nsfw lora?
>>107116341
The biggest reason OAI is able to make Sora 2 so good is the quality of their captioning; they probably spent millions to make it as manual as they could.
>>107116272Can you specify which frames those motions will happen? Like "at Xth frame, <something>"?
>>107116338You take your favorite porn, caption it, and train the LoRA yourself and it's like magic. It's a very thin veil for the model to do anatomy, it's not like Flux.
>>107116341
It's quicker to review and edit someone else's captions than to do it all yourself; VLMs won't caption it as well. But you don't need a big dataset to train a LoRA anyway.
>>107116368Currently I'm doing Joycaption Loras where you caption a bunch then train and then caption a bunch. It's mostly like what you said. And it's not for a LoRA, it's for major scale training.
>>107116379Is joycaption better than 4b abliterated qwen3 vl? https://github.com/1038lab/ComfyUI-QwenVL
>>107116386
Joycaption is quite accurate, and it gets a lot better with a LoRA. It's not abliterated, so it basically has all the NSFW terms in it, whereas an abliterated model will always have major gaps in knowledge. Abliterated would be a good start for a new major VLM finetune, which I would do once I have tens of thousands of hand-captioned images.
>>107116405what model are you training on and what lora? hopefully qwen image
>>107116355
Random, I guess? But what I know is that it applies the prompt in chronological order: if you prompt "the girl jumps, the girl cries, the girl dances" then it will do those three events in that order most of the time, unless you use words like "while" or "at the same time".
>>107116357this only works when you have a specific thing you want. it's not comparable to noob where you can get whatever you want just from prompting.
>>107115015For creating and then easily extending infinitely you can run the (LOOP) workflow from https://civitai.com/models/1818841/wan-22-workflow-t2v-i2v-t2i-kijai-wrapper
>>107116449i cant imagine the workflows retards use to gen something this bad in the current year, you really gotta go out of your way to do it
>>107116449Alright who installed comfyui on the downie's pc
Are there any new methods to do infinite length t2v with wan 2.2 without degradation? Last thing I remember was some new lora called sv2 or something like that?
>>107116255My bad, I'm an idiot. Here's the workflow. I haven't touched it since installing
>>107116470>>107116545this is clearly a troll from /b/. these guys post all sorts of horrible stuff
>>107116436qwen?
seems like someone is butthurt again...not the first time...
>>107116249>what years of solitaire does to a mf
Anon >>107116441
Debo >>107116464
Why are /ldg/ anons moving to /sdg/?
>107116700
>eceleb schizo drama
Kys.
>>107116700If you ask me personally because sometimes i'm bored and want to bully someone
>>107116849go on, tell us about your abusive father
/ldg/ is full of nogen schizos, but /sdg/ is unbearable because it's had this one meth-head who spams the thread nonstop with really shitty gens, so I stopped going there.
>>107117096Lumi?
>>107117096>d this one meth-head who spams the thread nonstop with really shitty genswho? debo?
>>107117096
>/ldg/ is full of nogen schizos
I'm fine with this, considering the alternative is "the veracity of my claims does not matter considering I've attached an image of my recognizable 1girl OC that I've been posting for 3 years." It's the same as the /b/ thread of whatever trooncord, where you feel like everyone's walking on eggshells.
>>107116555
Trips. Upload both the Forge and Comfy images to catbox.moe and post the links so anons can see exactly what's going on. It doesn't look like the prompt is for the images you posted, though.
>>107116470>workflowsthats clearly an auto/forge gen you faggot
>>107116436don't stop
i have 24 gigawats of vram. is it worth getting a card with 36? should i wait for a future card that has more?
>>107117268i have 0.33 liter mug on my table. is it worth getting a mug with 1 liter? should i wait for a future mug that has more?
Boomer whose new to AIWhat do I doenload from the OP if I want a fully uncensored AI for loli porn
>>107117463nice try officer
>>107117459loooooooooooolmiggerbros...
>>107117200>>107117362>>107117422>>107088865
SD keeps outputting low-effort faces. Everything else is alright, or at least passable, but the eyes specifically always come out... well, not wrong, but very artificial-looking, like the shit AI used to do in 2022. It's even worse when I'm specifically trying to replicate a mangaka's style; if I lower the LoRA weight it could fix the faces, but it would also look generic and not at all the way I want.
>>107117549just use adetailer retard, a tech from 2022
>>107117520It clearly is Chroma, but I appreciate the compliment
Is there a retard-proof guide to train my own lora for a character?
>>107117575Using reForge btw
>>107117575https://www.youtube.com/watch?v=MUint0drzPk
>>107117575https://archive.is/H2gzu
>>107117603
You need 24GB of VRAM to train a miserable LoRA? I mean, I do have a 3090, but damn.
>Captcha: 8MKYS
>>107117679
I was using an 8GB card originally; it can just take forever. Unbatching also makes it take a long time even if you have 24GB, but I do notice the quality improves when I do this.
>>107117679
You don't "need" anything; this is just the biggest and best model to get the best LoRA results on.
>>107117679
12GB LoRA trainer reporting in
>>107117689
>>107117697
Btw, I have never trained anything in my life (and I'm only recently getting back into SD after 2 years or so); I'm only considering it because what I've found online isn't really what I want. I'm using https://civitai.com/models/835655/illustrious-xl-personal-merge-noob-v-pred05-test-merge-updated
How do I make sure the end result of >>107117603 or >>107117606 is compatible with that? I've noticed some of my older LoRAs don't appear when I use that model.
>>107115778very nice. can you give a catbox of that please?
Is doing one-shot gens the way, or is doing a two-step gen the way to go? If it's the latter, can someone give me a hand getting a wf set up? Getting some fun ones at the slime water park, picrel.
>>107117805sweet nectar...
>>107117720You select which model to train on in the trainer itself. But you should always use the base model and not a merge i.e. regular Noob.
>>107117720
Honestly, read what >>107117606 posted in that archive link. Having an absolutely perfect set of training images, that you also tag correctly, is going to give you better results than some awesome GPU. You don't even have to go that fucking insane with the config settings. LoRAs for 1.5, SDXL, and the various SDXL model families usually aren't compatible with one another, so you'd train the same LoRA on each root model family. I've still been using Kohya_ss this entire time as it does what I want.
>>107117848>But you should always use the base model and not a mergeI second this. using merges as the training target gives you really screwball results. Sometimes you get neat effects but shit is so cooked it's not worth it.
>>107117805>Is doing one-shot gens the way, or is doing a two-step gen the way to go?Elaborate?
>>107117883Basically doing a gen with a partial denoise (like only 0.3 rather than 1.0) and then feeding that to another sampler. I may be describing an upscaling workflow, I'm still learning and not too sure.
>>107117899
Usually it's the opposite: the first pass at 1.0 and the second pass at <1.0, often with a different model. That's a way to, for example, use the anatomy and composition of the first model and the style of the second. This is also the case with upscaling workflows. If your second pass is at 1.0, it basically negates anything present in the first unless you use a really strong controlnet. The average gen is merely a single pass in my estimation however thobeit desu.
>>107117939Got it. What cfgs would you use for step 1 and 2? I'm using chroma, so 2-5 is where I keep it.
>>107117951
Whatever looks good; CFG is heavily model- and settings-dependent. Personally I use the same CFG, scheduler, and sampler for both passes, but you really don't have to.
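For reference, the way a partial denoise maps onto the step schedule in most UIs can be sketched like this. This mirrors the commonly described A1111-style behavior; real samplers differ in rounding and scheduling details:

```python
def second_pass_steps(total_steps: int, denoise: float) -> int:
    """Steps actually run on a partial-denoise pass: the sampler skips the
    first (1 - denoise) fraction of the noise schedule, so a low denoise
    keeps most of the first pass's composition intact."""
    return round(total_steps * denoise)

# a 30-step schedule at denoise 0.3 only runs the last 9 steps
print(second_pass_steps(30, 0.3))
# denoise 1.0 starts from pure noise and negates the first pass entirely
print(second_pass_steps(30, 1.0))
```

This is why a 0.3 second pass refines detail while a 1.0 second pass effectively regenerates the image from scratch.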
Why three different threads, what is the difference between them?
>>107118009If you use big boy models like chroma or wan you stay here, if you use rice bunny models like lumina you go to /adt/
Uhm and /sdg/?
>>107118022What is the difference between /ldg/ and /sdg then? Sorry to ask but I'm new here
lilbros been mindbroken by lumina for unironically two weeks kek
>>107118036/sdg/ is a discord chat for a handful of demented spergs
>>107118045You need at least 8 billion parameters to post here, if not youre considered a beta cuck model and are outcast to sdg
>>107118089Makes sense. Thanks for an honest answer. I was expecting to get flamed or something
>>107118053That's how you know it's a good model. Remember how much seethe Chroma caused?
>>107118089The first prerequisite to being on /ldg/ is being White, sorry jamal
>>107118089
How long are you gonna keep annoying us? Also, why didn't you put AniStudio in the OP? In one of your samefag bakes? Here is the link: ttps://github.com/FizzleDorf/AniStudio
>>107118134fuck off
>>107117722is everything okay
>>107118134>anistudioYou sir will get your own blacked gens very soon, stay tuned
>>107118045
1girl, laura kinney, green eyes, long hair, black hair, slender, (toned:0.8), flat color, no lineart, , murata yuusuke, white lace thong, white stockings, white elbow gloves, topless, contrapposto, erect nipples, adjusting hair, hand on stomach
I've only got one new gen to share today, maybe more next time.
>>107118385
Been too long since I saw one of these.
>>107118273
"Pixels" look pretty good for an unaided gen. I wonder if it would be better doing pixelization in an early step and then genning from that, which is a trick I used to use with Pony/etc.
>>107118466yeah I haven't visited in a while
>>107118466
Landscape Diffusion Bros here. In solidarity with our dear brother thread /ldg/, we invite you to take refuge in our general until the spam is over. Your local diffusion will be on topic until this ends. Feel at home.
>>107118520
But that one anon isn't spamming and hasn't for awhile
>>107116034Honestly, I thought this was the default behavior right now. When I queue up a bunch of images I get a big row of "got prompt" and I don't see any copy operations or changes in VRAM usage while it runs through them.
>>107118587Because you're using small models that can keep their tiny text encoder in VRAM compared to something like Qwen
>>107118570Really? So your threads themselves are shitposts? Wow. Well sorry for thinking they were being spammed, honestly I remembered better quality from your generals
>>107118385
Um, these are all nude, I can't post them. Can I alter the prompt a little to suit my own tastes (and be postable)?
>>107118606sure, or just put one or two on /b/ or catbox
When are we getting out of the 5sec hell?
>>107118660When they release wan 2.5 in the next two weeks.
>>107118594Hmm, that wasn't even two seconds on Qwen with a 4090. I guess that could add up over time, but Qwen is terrible with seed variety so why would you even queue up more than one at a time?
how do I get this kind of perspective, where it's first person and part of the viewer is visible
>>107118749hot girl, x-23, boobs, BREAKpov, pov hands, pov holding leash
>comfyui updated to enable the instant OOM flag by default
Does nobody test things anymore?
>>107118720
I hope you are right, anon, although they never fully confirmed whether it will be open or not. LTX2 looks like garbage in some samples, with bad anatomy/motion, people spawning out of nowhere, and mistakes not even Wan 2.1 used to make often.
Is it too late to buy ram at reasonable prices? I upgraded to 64GB a while ago thankfully, would 720p wan gen benefit from 128GB?
>>107118749also you can use danbooru wiki to find prompts that work - https://safebooru.donmai.us/wiki_pages/pov
>>107118791Gen time reduced by around 80 seconds going from 64gb to 128gb for me.
What's a free website for AI image upscaling, or a program to do it on my own computer?
>>107118385>>107118617ok here's one (Bowdlerized SFW edition)
>>107118536
Side-view LoRAs I'm not so impressed with. I think what I'd call the "current meta" for getting a full character is local gen -> SaaS img2vid. It brings me no pleasure to say this.
>>107118938You can 100% do it.
>>107118986
>>107119092
>>107119047when she turns around, she gets another face lul
Is ~90 seconds per chroma gen (2.3MP, using res_2s) realistic for a 4090? Any tips to speed it up without losing much quality?
1024 is overkill for wan buckets no? The model can't do much more than 1MP anyway
>>107119107I'm not seeing whatever you're seeing. It's true that her expression changes, but the facial features look consistent to me. I've seen changing faces with some models, I know what phenomenon you're looking for here, but I don't think this is a particularly egregious example. The only thing that for sure changes is there's some kind of pimple or other skin flaw under her lip which seems to be gone or harder-to-see after turning around. Other than that it's hard to compare apples-to-apples because she is making a different expression, which was intended and part of the prompt.
>>107119107do the same gen in vace, with the original i2v image as reference image, it should work better
>>107118986i can dig it
>>107119131
Pause at 0:00, then go to 0:04; it's a totally different face.
>>107119182It's a completely different angle—straight-on vs. down and to the side—and expression, which will cause any face to look different. And different angle also means different lighting conditions. If you want faces to remain completely static and unchanged through a change of angles and expressions that's going to look less real, not more.You could do what you're doing right now with any real video of a real person. People do this with images of politicians all the time. "They replaced him. That's not his face. Where did the real Trump go?"Again, this is an AI model, and AI video models often have a problem with continuity especially over a break where something disappears and reappears, so I know why you're expecting to see the face change, but I think that to the extent it's happening (and on a small level it's probably happening) it's very mild, and not, to me, perceptible.
can anyone point me in the direction of the latest/greatest guides on how get things running on an AMD card? I can do Windows or Linux (Bazzite), I'm on a 9070xt. I have found various guides but nothing has produced solid results, keen to know if anyone has a recommended tutorial/guide source
any fellow stoners ITT? getting high and genning for hours on end is insanely addictive.
>>107119265
>use linux
>make pip venv
>manual comfy install: https://github.com/comfyanonymous/ComfyUI?tab=readme-ov-file#manual-install-windows-linux
>do this: pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm7.0
>use Tiled VAE encode/decode instead of normal VAE encode/decode
>TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 MIOPEN_FIND_MODE=FAST python main.py --use-pytorch-cross-attention
that's pretty much it. you have to ask a more specific question if you need more help than that.
>>107119319>Comfy.>Safe Corpo Shit.Dumb move anon.
>>107119319ok sweet let me see how this goes. ill get high as shit as well
>>107119319yep its all I do. fuck clown world
you're all human waste
>>107119319mhm
>>107119355>>107119378>>107119337based>>107119430box?
>me on the left(image is Chroma)
Can Wan 2.2 do end frame i2v?
>>107119357even me?
>>107119709yes
>>107119709Yes. Kijai wanvideo wrapper has it and it works.
>>107119577That's fuckin hot
>>107119319
>any fellow stoners ITT? getting high and genning for hours on end is insanely addictive.
never really enjoyed weed
however, recently got a baggie of good speed from a friend for almost free
i think i was jerking off to ai slop i was genning for like 12 hours straight, among other things. probably did permanent damage to my brain
>>10711399What model is this?
>>107119896>>107113995
>>107119895frieren a SHIT A SHIT
>>107119991:)
LOL, it genned the error message.
>>107116470That's an average chroma gen, what are you talking about.
>>107120051Reee, why isn't the qwenvl working? I have done both of the suggestions it gave me. Is this some 5090 bullshit again?