Discussion of Free and Open Source Text-to-Image/Video Models and UI

Prev: >>106700474

https://rentry.org/ldg-lazy-getting-started-guide

>UI
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic/Neo: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassicneo
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP

>Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com/
https://openmodeldb.info
https://openart.ai/workflows

>Tuning
https://github.com/spacepxl/demystifying-sd-finetuning
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts
https://github.com/tdrussell/diffusion-pipe

>WanX
https://comfyanonymous.github.io/ComfyUI_examples/wan22/
https://github.com/Wan-Video

>Chroma
https://huggingface.co/lodestones/Chroma1-Base
Training: https://rentry.org/mvu52t46

>Neta Lumina
https://huggingface.co/neta-art/Neta-Lumina
https://civitai.com/models/1790792?modelVersionId=2203741
https://neta-lumina-style.tz03.xyz/

>Illustrious
1girl and Beyond: https://rentry.org/comfyui_guide_1girl
Tag Explorer: https://tagexplorer.github.io/

>Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
GPU Benchmarks: https://chimolog.co/bto-gpu-stable-diffusion-specs/
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Archive: https://rentry.org/sdg-link
Bakery: https://rentry.org/ldgcollage

>Neighbours
>>>/aco/csdg
>>>/b/degen
>>>/b/realistic+parody
>>>/gif/vdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg

>Local Text
>>>/g/lmg

>Maintain Thread Quality
https://rentry.org/debo
This is the "bad" workflow with the PR fix, but with chroma clip since I can't run SD clip + padding removal with it. Will rollback and run it too.
Local MoE realtime video gen waiting room of low IQ retards
Mix of Ebins lel
This is the "good" workflow or whatever.
ComfyUI is SaaS adware and should be removed from the OP. He repeatedly botches local model implementations, such as chroma and hunyuan image, causing the larger AI community to have a negative perception of their capabilities. He is doing this deliberately to get people to switch to API nodes as that is what the $17m investment was for.
This general is embarrassing at times
Blessed thread of frenship
>>106703070
>>106703076
uh oh... comfy sissies, what's our next cope?
>>106703086we consneed that comfyui is shillware and should be removed from the OP...
>>106703070>>106703076looks identical to me. what's your point?
>>106703070>>106703076what model did you use? Chroma v50?
>>106703100v40
>>106703083>ranfaggot
>>106703083imagine the braps
unpatched (orig) sd.py with SD clip and padding removal
>>106703100
Base Q8
>>106703111Second workflow with 0-1 padding
>>106703070
>>106703076
>>106703111
>>106703121
use this to make comparisons, it'll be more convenient for everyone
https://github.com/BigStationW/Compare-pictures-and-videos
>>106703121
>>106703124
even better is https://imgsli.com
https://huggingface.co/nunchaku-tech/nunchaku-qwen-image-edit-2509
let's go dood
>>106703135It disgusts me that people go out of their way to make this tech accessible to vramlets.
Now chroma clip with tokenizer padding turned off.
>>106703124
I did in the previous thread, but I cannot be assed anymore because this seems to be a nothingburger that only affects pics if you use SD clip + padding removal with the flux mod, and nobody runs that anymore.
>>106703140>It disgusts menice
>>106703140I don't think i've ever seen a "vram chad" make sovl
>>106703152Because everyone's including their specs when they post a gen, right?
>>106703140
even worse because they're the first to shriek about a model being 'bad'
24gb is the bare minimum to have an opinion
96gb is the bare minimum to be taken seriously
>>106703161based
>>106703124Did you make this?
>>106703152
this
>>106703157
>t. seething slopper
>>106703105Sorry disabled boy, I would have a background. Also the arguing over chroma is pointless, the model is bunk at its core unless you're doing realism
>>106703157They always throw a tantrum right after posting. What is your gpu?
>>106703173voodoo2 12mb
>>106703158now give the suit a cameltoe
>>106703080The embarrassment is the Californian NEET that has to change his filename to post his slop in the thread because everyone has his images filtered even anons that post in the thread that only exist because he can't cope losing
>>106703172
>chroma is pointless the model is bunk at its core unless you're doing realism
But have you seen this guy's LoRAs?
https://civitai.com/models/1948914/chroma-lora-tsukasa-jun-style
https://civitai.com/models/1927225/chroma-lora-art-style-of-disco-elysium
>>106703172Retarded narcissist.
anyone have a erika kirk lora?
>>106703233Sure, it's down my pants, you just have to reach down there and take it out
https://files.catbox.moe/em0d11.py
Here's the sd.py with the included PR fix btw if anyone wants to do their own tests.
>>106703135Anyone have the workflow json that's in that preview?
still waiting for a nsfw audio to video model
Bros, I'm too scared of Comfy and still use Forge webui. How do I fix this?
>>106703342I'm told drinking something called Tylenol will help you make the transition.
haven't touched image gen since flux release, can i get a qrd on:
1. what the actual fuck happened to comfyui, seems to be full of jeetware bloat now (including but not limited to """comfy accounts""", api models, shilling monetisation services, etc.) and the ui looks like absolute dogshit
2. what's the current SOTA - from a quick look around i'm guessing flux kontext or chroma?
forge classic doesn't have clip setting, is it possible to enable somehow? or does it even matter, gen looked the same as with original forge and clip 2.
>>106703342>scared of thing on computeryou sound like you derive pleasure from anal stimulation.
>>106703405qwen is better than flux at comprehension
there’s no kijai flfv2 e5m2 yet right? i was going to try it out from the workflow example, only to get an error that happens when you try to use the e4m3fn one when your torch is too high of a version.
any alternative?
>>106703405
chroma for realism
ill or noob for anime (still...)
qwen image for image editing
wan 2.2 and its variants for video
vibevoice for voice cloning
nothing for music gen because even the best local model is ass compared to saas sota
Comfy's UI changes are baffling. I can't speak to the api shit or accounts because I don't touch that stuff and it only nags you about it once when you first install. The program itself is still fully functional and still the most versatile option for all round AI use, and I don't see that changing any time soon.
With video gen I'd like to queue up maybe 10-15 video ideas each night and then have it randomly cycle through them (with new seeds) after that, rather than having it just repeat whichever one I put in last. Is there any simple way to do this that doesn't involve making 10-15 copies of all the nodes in the workflow?
>>106703466Just turn off the quantization in the loader and it should run fine.
>>106703492
Use ImpactWildcardProcessor like this:
{This is a video of a shark fucking a dolphin|This is a video of a dolphin fucking an octopus|This is a video of 106703485 getting fucked by a shark, dolphin and octopus}
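Since the {a|b|c} syntax trips people up: it just picks one option per group every time the prompt is processed. A minimal sketch of the expansion in plain Python (the function name and the nesting handling are my own assumptions; ImpactWildcardProcessor's actual implementation may differ):

```python
import random
import re

def expand_wildcards(text, seed=None):
    """Replace each {option a|option b|...} group with one randomly
    chosen option, resolving innermost groups first so nesting works."""
    rng = random.Random(seed)  # seed it so a gen can be reproduced
    pattern = re.compile(r"\{([^{}]*)\}")
    while pattern.search(text):
        # replace one group per pass; [^{}] guarantees it's an innermost group
        text = pattern.sub(lambda m: rng.choice(m.group(1).split("|")), text, count=1)
    return text

prompt = "This is a video of a {shark|dolphin|octopus} doing a backflip"
print(expand_wildcards(prompt, seed=42))
```

Feed a different seed per queued job and you get the "cycle through ideas with new seeds" behavior without duplicating any nodes.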
>Well, if you're really nice and pay us lots of API credits we *might* open source Wan 2.5
Does anyone actually believe this shit? Once they taste the saas sauce, it's gone.
>>106703240>nigbo
>>106703492There are more inputs than just the prompt. E.g. the input image for i2v. I'm not some newcomer who has never heard of wildcards.
>>106703502Maybe you should have included that information the first time around, dolphin fucker
>>106703502Isn't this what loop nodes are for?
>>106703342The sad fact is that every other option lags behind Comfy and is usually only compatible with or focused on specific models, while Comfy is the swiss army knife of frontends. At some point you need to harden the fuck up and learn how to use it.
>>106703495instead of going from one extreme to the other, they could try selling the model as a downloadable
>>106703523And how is that going to work? Once the weights are out, they're out.
>>106703528how does selling games on steam work?
>>106703532Completely differently to how model weights work.
>>106703532ive never boughted a game once
>>106703532Games have no value compared to models lol
>>106703536kek explain to me the difference please
>>106703515
>is called comfyui
>is not comfy at all
>>106703544if the option is recouping costs via saas vs getting 0 (zero) money from "open sourcing", i think its a reasonable middle ground
>>106703544You can put measures in place to protect your product with games. However flimsy. Weights are just a file.
>>106703200
Now try a complex background, or better yet do multiple seeds and see the style swing. You really don't understand chroma and it shows.
>>106703228
Disabled retard
>>106703584
Guess it depends on the size of the model/vram requirements. Deepseek is open weights, but who the fuck can run a 671B model, so the API is still hugely popular.
Same would go for Wan 2.5. If the requirements are beyond consumer scope, they can still make API money while VRAM chads get to eat for free via open weights.
>>106703569Steam drm is extremely easy to crack and there's gog that only sells drm-less games. they both still make money. as long as it's a good price and convenient to run people will pay for it
>>106703593
>but who the fuck can run a 671B model
Anyone with a mid range $1.5k gaming rig with 4 slots filled with cheap ram
https://unsloth.ai/blog/deepseekr1-dynamic
>inb4 muh speed
3-7t/s
>>106703569
theres plenty of people buying games that have ZERO drm, both on steam and on gog
silksong hit nearly 600k concurrent and i dont think that has any drm other than steam integration
>>106703342
just use neoforge. comfy is full of useless snake oil
>>106703515
comfy has really been losing its edge. anistudio has the fastest inference and model loading and neoforge supports all the models that are worth a damn. the only thing comfy has is saas and that doesn't interest most people in the space. it's going to be over for comfy as something worth learning since pytorch has been proven to be bloated and slow. it's showing its age and python babies will be obsolete
>>106703594>1.58bit
>>106703594Use your brain for half a second
why is there a doom glyph at the end
Any decent LORA for turnarounds? I kinda just need front/side aligned
All the ones I tried either don't align the character views, or they completely change the style
>>106703602
>>106703604
>retards that never used it
Indeed. A huge 671b model quanted down to A DYNAMIC Q1 quant doesn't get lobotomized like other toy models do.
>>106703593Until I see it myself, I refuse to believe anyone will pay for a model whose weights can be obtained without paying.
>toyOh, it's *him.* I actually interacted with him, yuck.
>>106703606demons in puter
>>106703618Silence, ramlet
>>106703576You can't say anything positive about others.
>>106703584they could still double dip with both buyable weights AND api, but it would at least be available locally for people who want to "support" them and people living on the high seas
>>106703599>it's going to be over for comfy
>106703621At least I didn't pay $3,000 just to spam furry 1girls
>>106703485
Select all the nodes that will need to change and make note of them, as well as the particular input and what kind of object that accepts. In your case, that might just be the input image and the prompt. If so, that's simple enough.
First you have to figure out what kind of input the Load Image node takes. It looks like a string, but maybe it's a filepath, I don't know. Prompts are easier, because they're just strings. Worst case scenario you could actually do the image itself, but then you're making duplicate images on your hard drive, which isn't ideal.
Now make a ComfyUI node that accepts both of these as inputs, as well as a string input for a label. Its only purpose is to save these inputs as a pair under that label, so you can come back some other night and queue up a different batch of gens.
Next, make a ComfyUI node that loads a randomly-chosen input pair with that label. Maybe that's a json filename, whatever; it doesn't really matter how you do it as long as you pick something that works for those input types. Strongly recommend giving the loader a seed input as well so you can replicate results.
So now when you're queuing up your gens you're also routing those inputs to the saving node, storing the info under whatever label you've chosen for tonight. Finally, when they're all queued up, you connect your random loading node, feed it into the relevant inputs, and leave it on Run (Instant).
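The save/load pair described above is maybe thirty lines. A rough sketch, assuming the image travels as a filepath string; the class skeleton follows ComfyUI's custom-node convention (INPUT_TYPES / RETURN_TYPES / FUNCTION), but the node names and the json storage format are invented for illustration:

```python
import json
import os
import random

STORE = "queued_inputs.json"  # hypothetical storage file next to the workflow

class SaveInputPair:
    """Append an (image path, prompt) pair under a label for later nights."""
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"image_path": ("STRING", {}),
                             "prompt": ("STRING", {"multiline": True}),
                             "label": ("STRING", {})}}
    RETURN_TYPES = ()
    OUTPUT_NODE = True
    FUNCTION = "save"

    def save(self, image_path, prompt, label):
        data = {}
        if os.path.exists(STORE):
            with open(STORE) as f:
                data = json.load(f)
        data.setdefault(label, []).append({"image": image_path, "prompt": prompt})
        with open(STORE, "w") as f:
            json.dump(data, f)
        return ()

class LoadRandomPair:
    """Pick one stored pair for a label; seeded so results can be replicated."""
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"label": ("STRING", {}),
                             "seed": ("INT", {"default": 0})}}
    RETURN_TYPES = ("STRING", "STRING")
    FUNCTION = "load"

    def load(self, label, seed):
        with open(STORE) as f:
            data = json.load(f)
        pick = random.Random(seed).choice(data[label])
        return (pick["image"], pick["prompt"])
```

Wire LoadRandomPair's outputs into the image-loading path and the prompt encoder, then leave the queue on Run (Instant).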
>>106703083I got an idea for a lora...
>>106703342reforge2 works and gets the job done, and there's wan2gp for videos. Not really a must to use comfy unless you want to goof off playing with dead models with no community support.
>>106703640me too
>chroma still doesn't have dedicated controlnetsit's over...
>>106703666He should've trained them himself before he moved on to le radiance
>>106703713That in and of itself would probably have given Chroma more relevance than anything he's done so far. The fact Qwen edit lets me use controlnets makes it useful even if it's slopped.
>>106703632you are coping alone on this one, I use comfy and I fucking hate how every update is more saas niggardry and lipstick on the pig for the front end
>>106703713But this really is just more proof that lodestones has no intention of making anything useful. He's just fucking around with people's donation money.
>>106703342don't listen to this queerfag >>106703650
reforge2 is trash, go for neoforge instead.
>>106703729This post is a masterclass in comedy. Like you think it's going in the better direction, then it goes in an even more retarded direction.
>>106703640
NUNCHAKU LIGHTNING BROS!!!! WE WONNERED!!!!
>>106703728lel, he made exactly what the guy who put up the money asked for, an uncensored de-distilled base model
>add NAG to my workflow
>cant use stepped strengths for the cfg
REEEEEEEE
>>106703783Go be a nagger somewhere else.
stagnant thread of stagnant tech
>>106703768So how much does this lobotomize the model?
>>106703768I know nothing but can guess it's a face thing, right? Otherwise it fucks with your VRAM lol
It's not possible to get gguf models into this setup, is it?
>>106703821
NAGGERS
>>106703838just ran the 4steps one, 19 secs to gen
>>106703860
>other hand clearly doesn't have fingernail outlines
>generated hand does
Useless
>>106703859The model loader should be able to read ggufs. Just turn off the quanting
>>106703860You need to test this on more complex shit. Backgrounds, artstyle changes, mixing with realism etc.
some redness on their armpit. no idea why
>>106703889Doesn't appear in the list no matter the settings.
>>106703904
from the looks of it, it has the same issues as any other quants (Q8/fp8 and below), meaning if you ask to make an image 'real', it doesnt work, it requires more deliberate prompting. Tested also with the shift sampling at 3, 1 and with the flow sampling 0.5/0.9
kept the same euler/simple sampler/sched.
>>106703967Try turning cfg norm to like 1.05. I found that helps with restyling. And cfg on if it isn't already.
Every Qwen Image Edit workflow on civit has some form of inane faggotry in it, either unnecessary nodebloat or the worst faggotry of all, Anything Everywhere combined with having every fucking node aligned in a big square box.
Is there a normal workflow somewhere, just the barebones necessities?
>>106703967
this image is the same exact prompt/cfg/steps/seed but using the old qwen edit at the same exact quants.
The prompt is this btw:
Make the image from an anime illustration to a real life photograph of a real life person in a real life train station
Note that here I had to reinforce 'real' multiple times to make it behave. In the old qwen edit I could've just prompted for 'make the image real' or even just 'real' and it would've just worked
>>106704002to add, in the new one if I just ask with the super deliberate prompting, I would get some satisfying result, but as soon as I also add a train station it completely reverts (adding it before or after I ask to make her from an anime ill. to a real photo of a real person)
note that all these tests have been done with the 4steps nunchaku lightning, but I observed the same behavior with regular nunchaku and with Q8/fp8.
I don't have fp16 (or the hardware to run it, I'm a 16gb vramlet), but another anon in the previous thread confirmed my same issues, only that fp16 seems to 'follow' the prompt better.
>>106703966Are you talking about model ggufs or text encoder ggufs btw?
this especially sucks because the model seems better, but as soon as you try to make anything realistic or real, it shits the bed.
>>106704001Example workflow in comfyui directly.It works, it's simple, and it's a great base to add your own things.
>>106704001Never download workflows from civit lol. This community is full of fucking lunatics. Start with the comfy templates or the ones from the model providers on huggingface, then switch or add what you need when you feel like it.
>>106704109
>>106704117
Comfy's workflow for image edit only takes a single image input, but it's fine, I found the TextEncodeQwenImageEditPlus node which takes three.
is there like a black market for celebrity loras? civitai is so dead now
>>106704122>Comfy's workflow for image edit only takes a single image inputNo, there is a new version for 2509.
>>106704133Probably on discord or telegram, but you don't really need them. A retarded monkey can take like 50-80 images of a celeb, JoyCaption them, then run them through a trainer.
>>106704075Text encoder.
>>106704099Seems to do realistic restyling in my machine. It's just they all look like washed up roasties.
>>106704140
Use this to hook up the normal prompt field and a clip loader node that can read ggufs
>>106704136Oh, true. How did I miss that?
Thanks.
WTF ALIEN
>>106704141are you using fp16? what's your prompt (or wf you can share)?
I use remove clothes lora as default, but I noticed that while it works, the nipples are always undefined and kind of blurrier than I'd like (didn't test genitals but I guess it would be the same).Is there a real "nude"/"nsfw" lora for qie?
>>106704150I'm using Q8. LoRA off when doing restyles. Everything relevant is here. What really seems to make it pop with the new model is specifying exactly what image number you want restyled.
>>106704160>hyper realisticCan you try just match style 1 to style 2?
>>106704166Uhh, lemme give it a spin. I doubt it will work though.
>>106704143Oh shit, it just slots into both positive and negative embeds in the NAG node, nice, thanks.
>>106704160
>negatives + non lightning
this was literally a non issue with/without lightning in old QIE, NAG for QIE WHEN???
>>106704166
I tried and it didnt work, but I think I might be a promptlet
>>106704166This was it with a sonichu cover as its input. I think it was a hard ask though.
>>106704228
>NAG for QIE WHEN???
Can't you just hook the KsamplerwithNag into the existing workflow and use the nag standard prompt input field without the image inputs?
>>106704228
>Negatives.
They aren't really necessary. It will still change the style without them.
>>106704242Take 2 specifying the medium. Yeah nah, still looks nice I guess.
>sage still black-screens qwen
Gay.
Yeah it's not working.
>>106704330I just realized how many similar design choices this random character and chris chan's tranny form have in common.
>>106704330do 50 step 4 cfg no light lora
>>106704474If I can see it's not working by step 5 there's no reason to go beyond that.
is there a single flux fine tune thats worth a shit for nsfw?
in wan 2.2 with light loras, which sampler / scheduler do you guys use?
Not awful.Not great either.
>>106704632I used wan to animate some of my favorite manga pages, it was kinda interesting. No I won't show you them.
>>106704632For the best results you'd probably need to take some really well done color manga/hentai, turn it black and white, then use them as your target and control datasets to train a colorization lora
>>106704607I use lcm sampler with simple scheduler.
>>106704632
>it can't translate the japanese kanji into english
So close..
>>106704578Yeah, chroma.
Does anyone mind sharing their qwen edit multiple input workflow or linking me one?
>>106704693Open Comfy, File, Browse Templates, Qwen 2509
>>106704706Thanks...
>>106704632Man colorising it really shows how ass the anatomy is.
>>106704632Can you use a mask and draw blobs of the colors you want as guidance for the model, or does it just colorize it however it wants?
>>106704749I told it what color to do the hair and eyes for each character, but the rest I just let it do whatever. Multiple characters would be tricky, but yeah, you could do something like a simple color layer in photoshop for hair, eyes, outfits, etc, to give it some guidance. It'd take a bit of work, but less than actually colorizing by hand.
is the native workflow for wan animate considered decent enough?
>>106704680
>chroma
which one?
How do you reference input images in the latest qwen edit? There's three inputs. If I have two images, do I say "Change image 1 to match image 2's style"?
does pony even have a purpose anymore?
they merged the wrong speed lora into nunchaku and fucked the model LMAO
RRARGH
>>106704837see >>106703821
>>106704837that nag implementation is dead, the dude hasn't updated it in months. would need someone else to step in and make it work with all the new models.
>>106704799I doubt it, anyone who used Pony is likely using Illustrious now
>>106704799You can use it as a guardian for smaller livestock
Nunchaku r128, 40 steps. Ignored the pose and cap.
>latest qwen STILL shifts the image's dimensions or zooms in slightly
Specified to keep the pose and clothes. Seems like the lightning nunchaku is just ass? Normal works fine. Empty neg prompt.
>train station platform
Lol fucking chink models.
>>106704947
are you using the just now released nunchaku edit?
they fucked the model by merging the wrong lora. they used the qwen image loras instead of the qwen edit ones.
>>106704967no this is base nunchaku
Seems like if I tell it to maintain the pose it won't touch the character artstyle itself
the woman holds the green plushie above her head
cool
>>106705016"the green plushies head pokes out between the womans soft large breasts, pushing them aside."
>>106704861And the hair color and the background.
>>106703105Check your eyes
Does anyone know how to disable updates for sd-forge?
I gave up on that life-support shit long ago, but there is one thing that is easier to run on it (X/Y/Z charts), so I had been using it very occasionally.
Unfortunately one of the python libraries it relies on deprecated a function, so it is no longer working.
I can pip install the correct version but it is instantly reverted.
I tried to launch without internet, but it refuses to launch. (RuntimeError: Couldn't install requirements.)
>>106703558Guess your mom never had Tylenol when she had you>many such cases
>>106705114
Like requirements.txt already has the == pin for the correct version, but it just doesn't give a fuck and upgrooooooods to the hecking newest package and breaks functionality anyway?
Fucking piece of shit, I just want it to do the one fucking thing and it can't.
>>106705166remove it from the requirements and manually reinstall the good version?
Anyone got anything but pure garbage out of local Hunyuan 3D models?
>>106705183Local 3D is so far behind saas 3D right now it's not even funny.
https://github.com/Tencent-Hunyuan/Hunyuan3D-Omni
https://github.com/Tencent-Hunyuan/Hunyuan3D-Part
Speaking of which, here's a new 3D toy from Hunyuan.
>>106705174
No, it's already ignoring it and reverting my manual install, did you not read >>106705114?
I managed to discover that it is being forced by scikit-image. But the catch is that that is also above the requirements.txt version.
I will see if I can manually downgrade this chain without breaking everything.
This is what you get for using something whose last commit was 3 months ago, I guess.
>still zooms out even with divisible-by-112 dimensions
Does this happen on Q8/6?
>>106705241The 112 thing was always a cope. I just use nearest megapixels and it just does it whenever the fuck it wants. I can't predict or control it.
trvke nvke: The bouncy boob LoRAs here are too bouncy.
>>106705237
It went about as well as you would expect it to.
Just wanted to do an epoch comparison for my lora.
Later I guess.
>>106705281trvke nvke: non-character loras are for people who don't know how to prompt
>>106705281
FAG
HE'S GAY
>>106704884>>106704947>>106705014im just sticking with the old qwen edit for now, sucks out all the fun trying to tardwrangle the model just for a quick laugh
>>106705340I think the new Qwen edit deserves credit where it's due. It does a really really good job of preserving the character between edits, and works shockingly well with controlnets. Plug a depth map or open pose image in as one of the images and watch.
>>106704632
Wauw
>>106704258
Really nice
It's pretty gud at clothes transfers. I never used the first Qwen model so I don't know if it's comparable. It does lower the input image's quality, but you could just upscale it after. Feels like it'd be good wedged between txt2img and upscaling for clothing you can't quite prompt correctly/specific clothing/clothing you can't be arsed making a lora of.
>>106705354What controlnet does qwen use?
>>106705363
It is actually really good at clothing transfers. You can even generate a layout of the clothes of the character you want against a white background before putting them on the subject. That way the target clothing's body doesn't influence the subject.
>>106705383
>>106705374None. It just takes the depth map, canny or openpose or whatever and uses it natively.
>>106705394can it take an image and make a depth map from it?
>>106705383>You can even, generate a layout of the character you want's clothes against a white background before putting them on the subject.Oh shit, I didn't even think of that. Use it to extract the clothing onto a blank slate, then use that as the control image. Noice
>>106705393Who wore it better?
>>106705417can you tell it to match the blurrines and lowres feel of the background image?
>>106705363
It's missing Sweeny's necklace, the BDSM outfit is mangled, and the feet are weird and inconsistent. But it's impressive, yes.
Also, did you put this image together with a workflow?
>>106705423I haven't tried. I'm not sure what I'd need to prompt for it. It's honestly pretty finicky about what prompts it gets.
Long gen chads, I want to be able to gen 500 frames without OOMing. I already have the context nodes and apparently comfy does an automatic block swap, is there a way around this so I don't OOM at 220 frames? Possibly edit the blockswap of comfy if that's possible, allowing for more frames to pass?
>inb4 get a bigger card (working on it)
>inb4 use last frame bro
>>106700667What anime?
>>106705424
Standard comfy workflow from the browser.
You need to prompt it to include certain items, like jewelry or hats. It didn't include the tophat until I prompted it. It's why the nazi latex outfit is missing it too, forgot to prompt it in.
Feet are weird, yeah. You could probably prompt it better. The fishnets are inconsistent, one image was perfect but she had a black bowtie and no hat, so I didn't use it.
>>106705439
No, I meant the images together.
Did you use a workflow for putting all of them into a single image, or photoshop them later?
>>106705451Oh, nah, I just threw them together in photoshop. Could probably easily vibe code a quick python script to do that though, I didn't bother since I rarely have to collate shit.
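For what it's worth, the collation script is short enough to skip the vibe coding; a minimal sketch with Pillow (grid layout and padding values here are arbitrary choices):

```python
from PIL import Image

def collate(paths, columns=2, pad=8, bg=(255, 255, 255)):
    """Paste images left to right, top to bottom, onto one padded canvas.
    Cells are sized to the largest image so nothing overlaps."""
    imgs = [Image.open(p).convert("RGB") for p in paths]
    w = max(i.width for i in imgs)
    h = max(i.height for i in imgs)
    rows = -(-len(imgs) // columns)  # ceiling division
    sheet = Image.new("RGB", (columns * (w + pad) + pad, rows * (h + pad) + pad), bg)
    for n, im in enumerate(imgs):
        x = pad + (n % columns) * (w + pad)
        y = pad + (n // columns) * (h + pad)
        sheet.paste(im, (x, y))
    return sheet

# collate(["a.png", "b.png", "c.png"], columns=3).save("grid.png")
```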
>>106705433
use kijai nodes. native context windows are really bad. somehow kijai handles the vram better.
i can gen 800 frames with kijai but only 177 on native be4 it ooms
>>106705497Oh damn, I would but I'm a 16gb vramlet, unfortunately. Or can it somehow still work?
Don't forget to spend lots of API credits on wan 2.5 and say nice things about qwen and maybe they'll release 2.5 for us. Maybe. *wink*
>>106705534
>say nice things
Kek, it's far too late for that. Did you see the absolute meltdown when they announced 2.5-preview? The devs for sure browse leddit and possibly here.
>>106704632
>bj research club
thanks for helping me discover this masterpiece
Does qwen edit not do style transfers based on input images? I've tried shit like "Change the art style of the first image to match the exact style from the second image" but almost nothing changes
>>106704783Base or 2k test, the hd one sucks.
>>106705601Nah, it's not really a thing. If it makes you feel better, Nano banana can't do it either.
>>106705437
it's a story about AI perceived as BAD BUT AI WAS LE HOOMANS ALL ALONG:
https://anidb.net/anime/19415
original work, but picrel is the kind of writing you can expect
>>106705674Actually wait, it kind of can. Never mind.
>>106705680ipadapter bros... our response?
>>106705685How did they do it?
>>106705680What do you prompt for that?
>>106705612I did not have a good experience with base, what am I missing?
>>106705738Are you doing realistic or anime? Chroma sucks for anime.
>>106705742
I tried anime/cartoon/artsy stuff I think.
I will give it a shot again for realistic, but I am not too hopeful against 3 legs if I am being honest.
>>106705766Chroma is only really for realistic, and some artistic stuff. If you want to train a LoRA on a specific artist's style, it's amazing. For anything else, ie anime, animated, stick to Illustrious.
I wonder how wan 2.5 is actually doing since release. I haven't seen any outputs or talk about it.
>>106705777sorry meant stick to neta yume
>>106705677Thaaanks
>>106705779Nobody is going from something like Veo to censored and pay per gen Wan 2.5. It's just not going to happen.
>>106705779don't worry, they'll for sure open source the full model. r-right guys?
>>106705814yeah guailo just buy the api tokens and we'll open source it
>>106705766
>>106705777
My bad, I never tested Base.
I went through many different epochs and versions, but Base wasn't among them apparently.
What do you mean by 2k test btw?
>>106705814The only way, and I mean only way, it gets open sourced is if people prove they jumped the gun in API-onlying it by outright refusing to use it, to the point where hosting it on their servers is a net loss. Anything else is cope.
>>106705777It was trained on very few artists captioned by name, and mostly 'classic' artists, so it's hard to get any specific style out of Chroma unless you train a lora. That works great though, but I really miss the good old days of SD1.5, which knew a shit ton of artists, celebrities, etc.
what is wan animate and what do anons use it for?
>>106705850
Replacing people in videos with other people.
Animating static images using the motion of a different video.
>>106705861can I record a lewd dance of myself and replace with 1girl?
>>106705879Yes, absolutely.
>>106705756
>>106705889is it censored?
>>106705913Bruh you're getting annoying. It will take whatever base image you put in and animate. I don't know how you censor that.
>>106705913
>is it censored?
that's the question, I have yet to see a NSFW catbox with Wanimate
>>106705913Yes.
>>106705913No.
>>106705913define censored
>>106705919>>106705922I see conflicting information
>>106705497does kijai comfy come with sage attention already installed?
>>106705936can it animate pissing / cumming?
Fucking naggers, bro
>>106705964Download the model yourself and work it out. Stop shitting up this thread.
>>106705988
ok so it can't
got it
>>106705988>shittingcan it animate that?
https://www.youtube.com/watch?v=DJiMZM5kXFcthat chink was able to test HunyuanImage 3.0 before everyone
I'm messing around with the WAN context windows, the videos you can generate with it can generate seem good but is there some way to designate a prompt for each segment? Seems kinda pointless if you're just genning a 500 frame video with the same prompt for the whole thing, it tends to just sort of loop.
>>106705022
>>106706025
I'm trying to figure that out too. I can only run the gguf models, but then again kijai has the fp8 15GB models, so I'm hoping it will work on 16GB. Either way, context models are great for *repetition* ;)
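In the meantime, the bookkeeping for per-segment prompts over overlapping context windows is easy to sketch outside the UI. A minimal Python sketch, assuming an 81-frame window with 16 frames of overlap (both numbers are guesses, and `schedule_prompts` is a hypothetical helper, not a real ComfyUI or kijai node):

```python
def context_windows(total_frames, window=81, overlap=16):
    """Return (start, end) frame ranges covering the video, each
    window overlapping the previous one by `overlap` frames."""
    stride = window - overlap
    windows = []
    start = 0
    while start < total_frames:
        end = min(start + window, total_frames)
        windows.append((start, end))
        if end == total_frames:
            break
        start += stride
    return windows

def schedule_prompts(total_frames, prompts, window=81, overlap=16):
    """Assign one prompt per window, cycling through the prompt list
    if there are more windows than prompts."""
    wins = context_windows(total_frames, window, overlap)
    return [(s, e, prompts[i % len(prompts)]) for i, (s, e) in enumerate(wins)]

# A 500-frame gen split into segments with their own prompts,
# instead of one prompt looping over the whole thing.
for start, end, prompt in schedule_prompts(
    500,
    ["woman walks into frame", "she sits down", "she waves at the camera"],
):
    print(f"frames {start}-{end}: {prompt}")
```

With a 500-frame video this produces eight overlapping windows, so the prompt list cycles; in a real workflow each (range, prompt) pair would feed one conditioning segment.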
wan2.5 is not getting open source'd. i can nearly promise that. not even a day after launching, only premium members can use the model on wavespeed. it seems to be pretty profitable right now. it is 200% absolutely over. any word of them saying "open weights soon :)" is complete fabrication to keep the retards placated. but hey, just 2 more weeks amirite
>>106705964
no, it can only really match limb and face movement. even detailed hand and finger movement can be shit.
>piss and cum
probably a lot of trial and error to find a seed that works
>>106705779
>I haven't seen any outputs or talk about it.
why would you talk about an API model in a local thread? the only time it happens is when the API is so good and unslopped that no one can ignore it (for example Seedream)
>>106706060
thank god I didn't download it.
yet another wasted model
>>106704148
this is definitely someone's fetish
>Only chinese models except 1
top kek
>>106706076
wanimate isn't that bad, it's just extremely geared towards "1girl, dance, slop, tiktok, hot girl, make her hot" kinds of retards.
>>106706090
the details are ok. the ponyface is not.
>>106706029
nice
Which model is best for creating porn? There has to be something better than Illustrious
>>106706118
>ponyface
nta, but that looks more like an Akira Toriyama face to me.
>>106706118
it's Bouquet from Blue Dragon. It's far from pony face.
>>106706142
based
>>106706137
Illustrious/Noob for anime. There is nothing better. Chroma for realistic. There is nothing better.
>>106706163
>Chroma for realistic. There is nothing better.
this is grim, Chroma fucking sucks
>>106706163
>Chroma for realistic
chroma and qwen for i2i/low denoise are absolutely goat.
chroma still around? I thought it was a dead project because qwen exists
if there's one thing the new Qwen Image Edit does, it's plastify humans. it's so good at it that it does it even when we don't ask for it! What a kind model!
FUCKING TOURISTS
old man yells at 5070ti
Qwen Image Edit is so hit and miss with some prompts. "Change the style into a real life photo" works on some images, on others it won't do shit. "Change the style of the image into 3D animated Pixar style", same diff. The only thing that works every time is lame shit like "herp derp le studio ghibli"
>>106706194
have you even tried prompting for it to keep the style, etc.? give me the original image and I'll post how to do it properly, to make you look like the retard promptlet that you are.
>>106702370
>>106702448
I could kind of sympathize if it was a different character, but it's a fact that The Major is a slut and sexualized from the start. that's just how Shirow writes.
page 55
https://www.scribd.com/document/606252408/Masamune-Shirow-Ghost-In-The-Shell-complete-uncensored
>>106706216
get back to work chang, your model is slopped as fuck
>>106706011
this video is interesting; the edit version of HunyuanImage will be released in a month
Have you guys done any tests for diminishing returns with more steps while using the speed up loras?
>>106706011
>>106706253
https://youtu.be/DJiMZM5kXFc?t=185
he's also saying that the SPRO guys are currently finetuning Qwen Image, so we'll get that soon too
>>106706263
it's generally what they're set to. if you mean start and stop steps and lora strength, i haven't, since i only use them for previews.
>>106706237
skill issue, figured.
>>106706253
the fact they haven't shown off any of its capabilities publicly is a worrying sign.
>>106706331
>skill issue
says the nogen who didn't showcase his "skill", ironic
>>106706337
>didn't post source img
>expects results
damn, you really were dropped on the head, huh?
>>106706343
>noo you don't understand, I need this unique input image to show that QIE isn't a slopped model, it won't work if we go for another image!!
let me guess, your mother ate a lot of tylenol when you were in her belly?
>>106706118
>details are ok
bro she has 6 fingers
>>106706361
tylenol? what? oh, you're the singular shizo guy from earlier. lmaoo, filtered.
>>106706194
>facial expression changed
>"polaroid"
massive, terminal skill issue. this was the first gen. definitely not perfect, but it shits all over your slop. if i gave an actual shit i could easily refine it, but it's enough to show your sorry ass up. now fuck off and learn to prompt before you EVER post here or talk to me or my slop again. cunt.
>>106705677
The series is way behind; we're already basically at the point it portrays, but the anime takes place over 10 years in the future. Also, she literally fapped to her step bro in the first 10 min.
>>106706211
style change is harder in the new qie+
>>106706404
>massive, terminal skill issue. this was the first gen.
Wait, don't tell me you're proud of this shit? It really looks like someone just copy-pasted the image onto a background without bothering to adjust the lighting or anything else. It's getting embarrassing...
>>106706404
>Everyone I don't like is the singular schizo (not "shizo" you ESL)
Still trying to find poopdickschizo?
>>106706407
I just want to see the DeepSuck R1 powered androids battle it out with GPT-5 BENCHMAXXED and GEMMA SAFETYMAXXED androids.
anon, do you know these models, and can you share comfy workflows?
https://huggingface.co/ShinoharaHare/Waifu-Inpaint-XL
https://huggingface.co/ShinoharaHare/Waifu-Colorize-XL
>>106706426
>definitely not perfect
you really are a stupid cunt if you think anyone would be proud of any ai image they produce.
could you please use a name+trip so i can avoid your pitiful existence? you already post like an avatarfag, so go ahead.
>>106706441
don't have to. i'm him.
>>106706179
>>106706190
This pathetic samefagging
>>106706424
>style change is harder in new qie+
that's what happens when you try to save a model with finetunes: push too hard and the model starts to lose some of its concepts. that's why pretraining is always the most important part; if the base model is too weak, it's already over
>>106706450
>>definitely not perfect
smells like skill issue, am I right?
>>106706484
>>106706450
>you really are a stupid cunt if you think anyone would be proud of any ai image they produce.
why wouldn't you be proud of a good AI image? what are you trying to achieve if you can't be proud of the results?
>>106703083
>showing ass on the underboob squirrel
how dare you