Discussion of free and open source text-to-image models

Previous /ldg/ bread : >>101811014

>Beginner UI
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Advanced UI
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Model Ranking
https://imgsys.org/rankings

>Models, LoRAs & training
https://civitai.com
https://huggingface.co
https://aitracker.art
https://github.com/Nerogar/OneTrainer
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Pixart Sigma & Hunyuan DiT
https://huggingface.co/spaces/PixArt-alpha/PixArt-Sigma
https://huggingface.co/spaces/Tencent-Hunyuan/HunyuanDiT
https://huggingface.co/comfyanonymous/hunyuan_dit_comfyui
Nodes: https://github.com/city96/ComfyUI_ExtraModels

>Kolors
https://gokaygokay-kolors.hf.space
Nodes: https://github.com/kijai/ComfyUI-KwaiKolorsWrapper

>AuraFlow
https://fal.ai/models/fal-ai/aura-flow
https://huggingface.co/fal/AuraFlow

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>GPU performance
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/trash/sdg
>beat me to the bake
Have my collage.

>not included in either collage
I'm going to kill myself

>>101813750
DO A FLIP
https://youtu.be/QibeKQ9W1UU?t=9
Blessed thread of frenship
>>101813658
schnell isn't good.
>>101813688
I don't fully understand it, but if you're doing the setup that lets you use negative prompting and you drop the CFG down from 6 to like 2.5, it becomes invisible. More steps can help too.
>>101813742
>2 of my gens in the last thread OP, 2.001 of my gens in this one
Feels good man

>thread blessing
we bac

>>101813742
>tiny miku
kek
since when does discord hook into comfy? the "Prompting [...] with Comfy" status thing
>>101813810
You can make a discord bot do anything you want
Your lack of respect for DALL-E will be your downfall when DALL-E 4 is revealed.
>>101813836
>>101793904

>>101813836
DALL-E Snore, more like

>>101813836
>Your lack of respect for DALL-E will be your downfall when DALL-E 4 is revealed.
We already have Flux. By the time DALL-E 4 drops we will have Flux v2 either on the way or already here. There's nothing to worry about, since open > closed, and Flux's release has rendered DALL-E useless for anyone who's smart.

>>101813876
>Flux's release has rendered DALL-E useless for anyone who's smart
That's what someone who is not smart would say.
fluxpro.art banned me for genning all those vaginas, lmao
>>101813750
Neither am I, and I made one of them

>>101813886
kek
Okay Flux, we're cooking
>>101813957
based

>>101813730
>>101813742
nice collage
You've converted me, mikuanon
Does this shit do anything? I tried playing with it but it didn't change the picture
>>101814092
I saw people playing around with the "realism" of the output by putting these parameters extremely low, like 0.3-0.8

>>101814092
Yeah, I don't know man, I tried a lot of values, even 0/0, and I always got the same picture. Is there something wrong with my workflow?

>>101814147
your workflow is fucked

>>101814168
post yours right now
>>101814183
Are any of DynamicThresholding, Guidance Neg/Pos, AdaptiveGuider, SkimmedCFG etc. at all applicable to schnell? I guess not, given the docs

>>101814225
Holy based

>>101814147
The purple node before ModelSamplingFlux being "bypassed" means it's literally being bypassed, so anything connected TO it still passes straight through to the next node, am I getting this right?

>>101814225
How do you get the straight lines?

>>101814238
yup

>>101814147
why are there two lines coming out of the UNET loader? that means your nodes are not all using the same config: you have one path that goes through ModelSamplingFlux and another that does not.
The node works for me btw
Do you really need a 24GB VRAM graphics card to run inference on the biggest and best Large Language Models locally? When will models become efficient enough, their size reduced without losing quality, that they can reasonably be run locally on a 4GB or even 2GB card?

>>101814238
>>101814243
Ok, I removed the purple node and nothing changed
>>101814250
what do you mean by that? Here's the workflow: https://files.catbox.moe/8l9d2u.png

>>101814270
Your BasicScheduler is connected straight to the model itself, not the ModelSamplingFlux node

>>101814250
Ok, I think that's it, thanks anon, I didn't know it wouldn't work like that

How well does flux make 2B art???
I'm going to start genning this weekend.
Ty!

>>101814256
>Do you really need a 24GB VRAM graphics card to run inference on the biggest and best Large Language Models locally?
You need >100GB of VRAM to run Llama 3.1 405B at a low quantization. Even Llama 3.1 70B won't fit on 24GB, and anything smaller than that is too dumb to bother running.
>>101814270
I mean you have one line going to ModelSamplingFlux and another going to BasicScheduler; they need to be in series, not parallel.
>>101814285
Nice.
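For anyone wondering where numbers like these come from, it's just parameter count times bytes per weight. A weights-only back-of-the-envelope sketch (it ignores KV cache, activations, and framework overhead, so treat it as a lower bound):

```python
def min_vram_gb(params_billions: float, bits_per_weight: float) -> float:
    """Lower-bound VRAM needed just to hold the weights, in GB.

    Real usage is higher (KV cache, activations, overhead), but this
    is enough to sanity-check whether a model can fit on a card.
    """
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight / 1e9

# Llama 3.1 405B at 4-bit: ~202 GB of weights alone, hence the >100GB claim
print(min_vram_gb(405, 4))  # 202.5
# Llama 3.1 70B even at 4-bit (35 GB) overflows a 24GB card
print(min_vram_gb(70, 4))   # 35.0
```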
>>101814240
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
Link Render Mode

>>101814285
For some reason Dynamic Thresholding breaks the image for me. It does increase the generation time like it's supposed to, but all I get are blank white images.

>>101814290
Flux does not know 2B

>>101814311
>It does increase the generation time like it's supposed to
that's not the fault of DynamicThresholding, that's the fault of CFG > 1; that shit halves the speed
>all I get are blank white images.
Did you use this workflow, anon?
https://reddit.com/r/StableDiffusion/comments/1enxcek/improve_the_inference_speed_by_25_at_cfg_1_for/
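The reason CFG > 1 halves the speed is classifier-free guidance itself: each step runs the model twice, once on the positive prompt and once on the negative/empty one, and at CFG = 1 the combination formula collapses to the conditional prediction alone, so the second pass can be skipped. A toy numpy sketch of just that combination step (not a real sampler):

```python
import numpy as np

def cfg_combine(cond: np.ndarray, uncond: np.ndarray, cfg: float) -> np.ndarray:
    """Classifier-free guidance: extrapolate from uncond toward cond."""
    return uncond + cfg * (cond - uncond)

cond = np.array([1.0, 2.0])    # noise prediction for the positive prompt
uncond = np.array([0.5, 0.5])  # noise prediction for the negative/empty prompt

# cfg == 1 reduces to cond exactly, so the uncond pass is wasted work;
# that's why samplers skip it and run roughly twice as fast at CFG 1
assert np.allclose(cfg_combine(cond, uncond, 1.0), cond)

# cfg > 1 genuinely needs both forward passes
print(cfg_combine(cond, uncond, 2.5))  # [1.75 4.25]
```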
Holy fuck you can tell he's enjoying his burger, kek
>>101814290
>How well does flux make 2B art???
It can do pixel art, Miku and Trump, and that's it

>>101814290
>2B art
Try Hunyuan.

>>101814349
and Meghan Markle, for some odd reason

>>101814365
In particular https://desuarchive.org/g/thread/100907329/#100910377
Was meant to be 2B (I purposely didn't prompt for the blindfold and also prompted that particular style); the freeway finetune should be even better at this.

>>101814408
>Was meant to be 2B (I purposely didn't prompt for the blindfold
if the model knew who 2B was it would've automatically put the blindfold on

>>101814092
why does it have a resolution? why can't it adapt to the resolution you normally put on the Empty Latent Image node?

>>101814454
it doesn't have that info at that point in the graph

>>101814461
that's bullshit... there's probably a way to get that information before starting the graph; it's retarded to have to change the resolution on each node

>>101814433
Not how Hunyuan works, anon. I got many gens where it had the blindfold and more of her features matched; also see https://desu-usergeneratedcontent.xyz/g/image/1718/07/1718073928017.png
I asked for Gojo's blindfold and just said "purple orb"; the model inferred properly what they looked like and how the technique is used. 2B is a video game character, so if we imply that in the prompt the model will get closer. Hunyuan works best when you prompt it this way; if it seems to forget, to had amazing recall ability.

>>101814454
>>101814461
Just set up a couple of primitives called Height and Width and pipe them into both the latent image and ModelSamplingFlux; then you only have to set it once.
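For reference, in API-format workflow JSON the same deduplication is just one pair of values feeding both nodes. The node IDs, titles, and exact input names below are illustrative, not taken from a real export:

```python
# Hypothetical API-format fragment: one width/height pair feeding both
# EmptyLatentImage and ModelSamplingFlux, so resolution is set in one place.
width, height = 1024, 1024

workflow = {
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": width, "height": height, "batch_size": 1}},
    "30": {"class_type": "ModelSamplingFlux",
           "inputs": {"max_shift": 1.15, "base_shift": 0.5,
                      "width": width, "height": height,
                      "model": ["12", 0]}},  # "12" is a made-up upstream node id
}

# Both nodes now agree by construction, the point of the primitive trick
assert workflow["5"]["inputs"]["width"] == workflow["30"]["inputs"]["width"]
```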
>>101814472
this? how does that work?

>>101814496
right click -> convert widget to input
on the node you want to connect to

>>101814503
>>101814496
Once you've done what the other anon said, drag from the primitive to the new input and the primitive will automatically convert itself to the right mode.

I don't even know if this is NSFW or not because I can't really identify the body parts
https://litter.catbox.moe/8vm4mu.png
just in case
jesus

>>101814496
>>101814503
Ok, I think I got it, thanks anon

>>101814469
>to had
it has*

>>101814329
>A picture of a bitch with a nice ass who is dressed like a maid who also knows how to kill you.

>>101814290
This is who it thinks 2B is

>>101814679
No
*cough*
>>101814871
magnificent.
>>101814837
For those using DynamicThresholding: I think you should change the threshold_percentile from 1 to 0.9. 1 is simply overkill and destroys the colors
https://files.catbox.moe/y2h7zd.jpg
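For context on what that percentile does (my understanding, modeled on Imagen-style dynamic thresholding, which the extension generalizes): it picks the absolute-value quantile used as the clamp-and-rescale point, and at 1.0 that point is the single largest outlier, which is why everything else gets crushed. A simplified numpy sketch:

```python
import numpy as np

def dynamic_threshold(x: np.ndarray, percentile: float) -> np.ndarray:
    """Clamp to the given abs-value percentile, then rescale into range.

    Simplified sketch of Imagen-style dynamic thresholding; the real
    extension has many more modes and knobs.
    """
    s = np.quantile(np.abs(x), percentile)
    s = max(s, 1.0)               # never shrink below the unit range
    return np.clip(x, -s, s) / s  # clamp outliers, rescale to [-1, 1]

# 99 ordinary values plus one huge outlier
x = np.concatenate([np.linspace(-0.9, 0.9, 99), [20.0]])

# percentile=1.0 rescales by the outlier itself, crushing everything else
crushed = dynamic_threshold(x, 1.0)
# percentile=0.9 clamps the outlier and leaves normal values intact
kept = dynamic_threshold(x, 0.9)
print(np.abs(crushed[:-1]).max(), np.abs(kept[:-1]).max())
```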
Is tiled VAE decoding built into comfy or do I need a node?
>Here's your deep-fried nipples anon, enjoy

>>101814978
thats one thing that comfused me, I always used 0.97 or so and never 1

>>101814986
>double-click the interface
>type "tile"

>>101814989
https://files.catbox.moe/h8avmw.png
still gooning to nuns
>>101814468
Of course there is.

>so many newfags on comfy
cute

>>101813836
DALLE is already impressive in its current version, but every time it spouts nsfw you get dogged. It literally doesn't compete with local.
It doesn't like schizores :(
>>101813886
Did they ban you afterwards? Is there some manual review?

>>101815084
let me guess, I need to install a new package to get that one?

>>101815133
I dunno, but it works again, so I must have been flagged for manual review and passed whatever test I was flagged for.

>>101815149
let me guess, you're even unable to understand the information given to you in the image

>>101814256
Anon, the bottleneck is real. Why do you think most LLMs/quantizations magically need 24GB of VRAM? If we had a "4090 Titan" with 64GB of VRAM, local LLMs would target that instead.

>>101815149
See the text in the green box above the node? It's the name of the custom node pack it came from. Comfyanon rarely ever does QoL improvements, so this shit is basically unusable without custom nodes.

>>101815169
>Comfyanon rarely ever does QoL improvements, so this shit is basically unusable without custom nodes.
can't wait to go back to forge, his software is fucking ass

>>101815175
Probably not gonna happen any time soon: illya is reworking the whole thing, and reforge is a half-working mess made by an amateur.

>>101814349
The way they didn't take out Miku is so weird. It also knows Goku. I guess they're way too well known or something.

>>101815175
illya's gonna abandon forge again. at least comfy is consistent.
bless flux for filtering a1111, comfy is so good, thank you for making me learn.
>>101815205
>comfy is consistent.
his spaghetti shit is consistently bad, yeah

I think I solved the stylization of flux for painterly styles:
- One prompt with everything (style and subject)
- Gen with Flux
- Gen with SDXL
- Use the SDXL image as an IPAdapter style-only input
- Gen with the SDXL IPAdapter model using the Flux image and a low denoise (0.5-0.6)

>>101815224
works on my macheen
and has done so since ~December

>>101815237
>I think I solved the stylization of flux for painterly styles.
or you can just do that
https://reddit.com/r/StableDiffusion/comments/1enm9og/discovered_by_accident_a_trick_to_make_flux/

>>101815224
That's the least of the problems, especially with shit like Get/Set nodes and UseEverywhere that let you save anything globally and access it from elsewhere without any spaghetti

>>101815127
Skill.... issue?
I think with high enough steps you could get something decent out of it, but it's not really worth it given it gens like 5x slower than at 1MP

>>101815261
>https://reddit.com/r/StableDiffusion/comments/1enm9og/discovered_by_accident_a_trick_to_make_flux/
I tried it, it's very far from what I am looking for.

>>101815224
note how I did not include a quality tag describing the experience of working with comfyUI.
>>101815263
gotta admit tho, steep learning curve. my first weeks with comfy weren't fun.
>>101815224
It's just the way Blender works.

>>101815263
>get/set node
The what now?

>>101815303
1) you don't spend your whole time doing spaghetti shit on blender
2) and? fuck spaghetti shit, is that clear enough?

>>101815320
I don't do spaghetti shit. My workflows are neat, square, and organized. I only build things once, and then I can import them as modules.

>>101815333
How do you manage your modules? Having them all in that dropdown seems like it'd be a nightmare.

>>101815320
I don't spend my time doing spaghetti on comfy, I set it once then generate.

>>101815347
finally, a vagina

>>101815320
okay nigger, do you use flux with diffusers in python? whats your point

>>101815361
the dropdown isn't that hard to manage, I think? That's just for select modules like IPA or different CNs. I always maintain the same base workflow.
But I can either just copypaste them from a different interface, or save them as screenshots of the workflows, with the workflow embedded in the file. Just throw it into the window, ctrl+c, ctrl-z.
but I rarely need to import things like that - i usually just have them bypassed somewhere.

>>101815311
Just another set of custom nodes, werks like this.
Notice how the model and positive/negative inputs glow in the KSampler, same with the clip input in both text encoders: they automatically take the value stored in Anything Anywhere nodes. Like setting a global variable or something. Get/Set nodes are connected to each other; you can name them however you want to save as many things as you want.

>>101815389
NTA but I do something like that. finally managed to get it to be faster and more stable than the UIs

>>101815411
I feel like it's not a huge stretch from comfy

>>101815441
yeah. I definitely stretched it though
The community is going to be retards and focus on schnell, because they are RAMlets, aren't they?
>CRITICAL HIT
>>101815477
What's that last model, and doesn't schnell require the same amount of VRAM?

>>101815477
Also the licensing bullshit: you can't do training on Dev and make money off it, so all the guys who have patreons will do Schnell instead.

>>101815523
https://comfyanonymous.github.io/ComfyUI_examples/flux/#simple-to-use-fp8-checkpoint-version
And Schnell does, but it gens in 4 steps as opposed to 20, so vramlets can have a chance to finish in under an hour per image.

>>101815571
i enjoyed this funny image

>>101815571
That's not what the license says. It says you can't use Dev to train a competing model. Donations become a grey area, because technically they can be seen as profiting.
I wonder what it takes to get the AI sloppa aesthetics out of a model.
>>101815588
Can I go back inside the womb?

>>101815604
>I wonder what it takes to get the AI sloppa aesthetics out of a model.
this? -> >>101815261
https://imgsli.com/Mjg1Nzk5

>>101815571
>>101815127
Looks like the model doesn't care about total resolution, just that neither dimension is above 2048. Beyond that you run into the usual Cronenberg issues.
>>101815600
Sure, but why take that risk when the plebs will happily slurp up Schnell quality anyway.

>>101815600
>Donations become a grey area, because technically they can be seen as profiting.
So in simple terms it's DOA, because no one is gonna risk it

>>101815600
>can't use Dev to train a competing model
Any finetune is a competing model

>>101815639
that's not important
the important thing is that you've found something that can help you feel superior to others

This so-called realism LoRA doesn't do shit. The outputs are identical.

>>101815671
consider this
you're doing it wrong
or everyone else is faking their outputs

>>101815642
>>101815655
>>101815629
>Non-Commercial Use Only. You may only access, use, Distribute, or creative Derivatives of or the FLUX.1 [dev] Model or Derivatives for Non-Commercial Purposes.
>Non-Commercial Purposes
>In the case of Distribution of Derivatives made by you, you must not misrepresent or imply, through any means, that the Derivatives made by or for you and/or any modified version of the FLUX.1 [dev] Model you Distribute under your name and responsibility is an official product of the Company or has been endorsed, approved or validated by the Company, unless you are authorized by Company to do so in writing.
https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md

>>101815696
I will translate this into retard: you are free to train finetunes of Flux.1 Dev. Everyone is a dumb piece of shit who has a reading level lower than a 5 year old's.

>>101815678
I very probably am, because there are no instructions on this. Cranked the strength up to 2 for a 40x slower gen, but let's see if anything happens

>>101815657
Anon, if I was capable of feeling superior to others I wouldn't be here on an imageboard.
>>101815612
There's no romb inside the woom
>>101815696
>You may only access, use, Distribute, or creative Derivatives of or the FLUX.1 [dev] Model or Derivatives for Non-Commercial Purposes.
I've read this like eight times and it makes no sense
>You may only... or creative Derivatives of or the FLUX.1 [dev] Model...
???
Was this written by an ESL? What does 'You may only... ...creative Derivatives' mean? Even if that's a typo and they mean 'create Derivatives', the next bit makes no sense either.
>>101815775
elitism sure tries
>>101815629
every model has some rare condition where it doesn't look like AI slop once in a while

>>101815762
>Cranked the strength up to 2 for a 40x slower gen
lora strength doesn't affect generation speed

>it's called flux dev
>you can't dev with it

>>101815775
You may only access, use, distribute or create derivatives of the FLUX.1 [dev] model for non-commercial purposes.

>>101815856
>it's called pro
>you actually pay money instead of making profit

>>101815856
>>101815714
retard-kun is right, you're free to train finetunes. You just can't charge for them.

>>101815775
>Was this written by an ESL?
Yeah, Americans. I have to assume they mean 'create'. I cut some of the thing off, but here's the rest of the Non-Commercial clause:
>If You want to use a FLUX.1 [dev] Model a Derivative for any purpose that is not expressly authorized under this License, such as for a commercial activity, you must request a license from Company, which Company may grant to you in Company's sole discretion and which additional use may be subject to a fee, royalty or other revenue share. [Email here]
If you want to use Flux.1 Dev for commercial purposes, i.e. make money off it, for example host it on a site and sell token access, you need permission from BFL and you need to give them a cut.
More context for distribution and creating derivatives:
>in the case of Distribution of Derivatives made by you, you must also include in the Attribution Notice a statement that you have modified the applicable FLUX.1 [dev] Model; and
>in the case of Distribution of Derivatives made by you, any terms and conditions you impose on any third-party recipients relating to Derivatives made by or for you shall neither limit such third-party recipients' use of the FLUX.1 [dev] Model or any Derivatives made by or for Company in accordance with this License nor conflict with any of its terms and conditions.
>In the case of Distribution of Derivatives made by you, you must not misrepresent or imply, through any means, that the Derivatives made by or for you and/or any modified version of the FLUX.1 [dev] Model you Distribute under your name and responsibility is an official product of the Company or has been endorsed, approved or validated by the Company, unless you are authorized by Company to do so in writing.
If you train a finetune from Flux you must give them credit for the base model, but you are solely responsible for it and any legal action that might come from it; for example, if you create a cunnytune, the feds will come for you, not BFL.

>>101815902
It's written by Germans, Flux is a German model. It's amazing how fucking stupid Euros are
>>101815879
it's pro as in professional

>>101815886
this one is cool, reminds me of lice

>>101815860
Minus the cat ears, this art style reminds me of a game I used to play back in the 90s on the Mega Drive, but the name escapes me. There was a barbarian, a wizard, and a sexy scantily clad woman, who is the only one I ever played as.

>>101815604
Never understood the idea of AI slop. Isn't it just that you see many variations of the same theme, so you get tired of it and attribute that to the model being "slop" instead of to you just getting used to it and bored of it?

>>101815991
When you see enough AI gens you realize how incestuous the outputs are, with things like "sameface" and homogeneous aesthetics. Every model has an underlying repeating style.

>>101815984
definitely the vibes I'm going for, 90s pc games have a great style

>>101815942
>Blattman
I just thought of a cool superhero

>>101816010
perfect timing, right as I'm going to bed

>>101816027
Yeah, I can understand that: the lack of variation when you prompt without asking for an artist, style, etc.
>>101816044
https://www.youtube.com/watch?v=qOXYnT3aMzU

>>101816044

>>101816027
it is a skill issue

>>101816047
>lack of variations when you prompt without asking for an artist, style etc.
sometimes flux switches between realism and cartoon, so it might just be a model size issue. 700 million parameters in SD 1.5 isn't a lot

>>101815942
stupid enough to make Flux and Mistral, huh

>>101816115
My guess is that it's because "styles" and "artists" are mixed and badly labelled or not labelled at all.

>>101816115
It's incredible that the only competitive text/image models outside of the US ones are French (Mistral) and now German (Flux). Where are the Japanese, Korean, English ones?
>>101816092
>>101816146
one look at the datasets of automatically captioned images, even the ones that used the latest and greatest VLMs, shows that's exactly the issue. the VLMs don't know much, so the captions are pretty generic.
OpenAI trained their own caption model for DALL-E 3 (which was trained on 95% synthetic captions), and clearly it is leaps ahead of what is available

>>101816194
OAI also didn't scrub mentions of "Character xxx", "Celeb yyy" and "Artist/Style zzz", or even a lot of nude/nsfw content, instead choosing to do the moderation a posteriori by rewriting prompts and analysing outputs, which is the wiser choice imo.

>>101816168
Tencent is making one. Hunyuan is from them, and they are doing an LLM too.

https://github.com/mcmonkeyprojects/sd-dynamic-thresholding/commit/5d63447afbc44b377f706a5eb0430f85791dcf30
The script that allows AdaptiveGuider and DynamicThresholding to work at the same time has been integrated into the main repo; no need to download the script from this tutorial anymore:
https://reddit.com/r/StableDiffusion/comments/1enxcek/improve_the_inference_speed_by_25_at_cfg_1_for/

>>101816247
Now draw her pregnant

>>101816258
>Hunyuan
Isn't that specialized in Chinese stuff only?

>>101816294
Not only, but it obviously has a Chinese focus, being Chinese and all. Sigma is also a Chinese team I believe, but NVidia bought them, so who knows.
where's a millionaire furry to make FluxDev the best it can be? It's been over a week already.
I would do it but I am not a furry nor am I a millionaire.
>>101816319
Nice then, I want so much more competition than what we have now.
>>101816293
Just checked the latest threads to try my hand at the wuxia gens, flux may not do sword riding but goddamn does it make for some sick grinds
The red haired girl was supposed to be a tomboy, can't believe flux doesn't know that concept :(
>>101816554
Just describe what she is supposed to look like

>>101816274
Nice, I finally got to test it in my workflow, and I've gone from 70s to 50s generating an image (using the cuda0/1 trick too).

>>101816573
Not only does it make it faster, but having CFG 1 at the very end kinda cleans up the artifacts the high CFG made previously; it's really a win/win situation
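For anyone unsure what AdaptiveGuidance is doing there: as I understand it, it measures how similar the conditional and unconditional predictions have become (cosine similarity) and drops CFG to 1 for the remaining steps once they converge, which is where both the speedup and the de-overcooking come from. A toy sketch of that decision (the threshold value here is a guess, not the node's default):

```python
import numpy as np

def effective_cfg(cond: np.ndarray, uncond: np.ndarray,
                  cfg: float, threshold: float = 0.99) -> float:
    """Return 1.0 once cond/uncond predictions have converged, else cfg.

    Toy version of the AdaptiveGuidance idea: when the two predictions
    are nearly parallel, guidance no longer adds information, so the
    sampler can drop to CFG 1 (and skip the second pass) for the rest.
    """
    cos = np.dot(cond, uncond) / (np.linalg.norm(cond) * np.linalg.norm(uncond))
    return 1.0 if cos >= threshold else cfg

early = (np.array([1.0, 0.0]), np.array([0.0, 1.0]))     # orthogonal: keep CFG
late = (np.array([1.0, 1.0]), np.array([1.001, 0.999]))  # converged: drop to 1
print(effective_cfg(*early, cfg=6.0))  # 6.0
print(effective_cfg(*late, cfg=6.0))   # 1.0
```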
love the group map from the EasyUse node set
Any artists or three-letter codes that will give me flat shading like this in pony derived models?
>>101816630
also stole this guy's 2x flux upscale group and made a few changes to it. can recommend
https://civitai.com/models/620294

>>101816630
How does that work? How do you configure the nodes?
Morning
>>101816664
>How does that work?
it auto-detects the node groups you created and allows you to turn them on/off. see >>101816662

>>101816637
Junji Ito

>>101816667
Good morning

>>101816637
Seems to be really hard to do. I really did my best there: CFG 6, GuidanceNeg 10, boomer prompt made by Claude 3.5 Sonnet...

>>101816664
>>101816681
the rgthree node pack also has group muting/bypassing nodes, in case you don't trust the Chinese language on the EasyUse node set's github

>>101816681
>>101816713
I installed the easy module thing. It's a really useful idea

>>101816724
>yasuke-sama....

>>101815958
and it's professional as in doing this for profit

Why do random old nodes I used before (unconnected to current nodes) reappear when I refresh comfy? Does that happen to anyone else?

>>101815158
there's literally soft cheese pizza in the gallery every now and then, so I can't fathom what would be a bannable offense when you're genning vaginas
>>101815992
nice

>>101814739
use a description of her from the wiki, it'll be closer

>YoRHa No.2 Type B from Nier Automata standing over a luxary car, on the bottom of the image, there is a white text subtitle that says "Why does Flux only knows Miku..."
I had some hope that writing the full name of 2B would trigger something accurate; oh, how wrong I was

>struggled a lot with trying to make a realistic Miku because it always forces anime, or at the very least 3d shit
>mfw all it took was adding "a woman cosplaying as..." to the prompt

>>101816826
it doesn't work if you put "anime, 2d" on the negative prompt?

First time trying flux. Not bad. I have 16GB of VRAM. How over is it for me if I want to run this local?

>>101816748
Well, it's broken for now; will wait until the commit fixing it is merged.
https://github.com/yolain/ComfyUI-Easy-Use/issues/300

>>101816852
>I have 16GB of VRAM. How over is it for me if I want to run this local?
it's completely over, buy more ram my nigga :(

>>101816852
how much RAM? Even with 24GB VRAM people have to offload heavily.
>>101816847
Miku is too strong for that; even the negative mumbo jumbo with 10 flux guidance doesn't work by itself

Has anyone tried experimenting with max_shift and base_shift?
>>101816852
I have 12GB

>>101816870
16 as well. Would 32 be enough RAM?
>>101816890
Nice. How much RAM?

>>101816900
32GB, but usually I need around 26-27GB

>>101816810
>>101816667
Very nice. Making see-through injection-molded parts was an old fetish of mine that I have since given up on. Which model are you using?

>>101816890
this is the only thing I've found on it

>>101816870
Yes, from what I see you need 24GB for the model + 10 for clip. So either one card with >34GB VRAM, or two cards with 24+12.
>>101816935
>max/base shift: 0.1/0.5
wait, how can the max be lower than the base shift?

>>101816935
So basically that shift thing is an undercooker? nice, I'll try it out and see if it helps at high CFG

>>101816932
I'm partial to CrystalClear and PixelWave

>>101816932
i'm using flux.1 pro on glif.app, not sure about the other anon

>>101816935
thanks. It seems to have barely

>>101816982
any impact* is what I meant to write
>>101816797
Well, Flux tried.
Now that flux is here, is SD kill?
>>101816935
Ehh, that's not bad at all. Do we know what the default values are when we're not using this node?
https://imgsli.com/Mjg2MjYy

>>101816970
>>101816955
>>101816982
>>101817072
this guy also talks about the shift values in regards to his upscale workflow (timestamped)
https://youtu.be/L6_yi539V-s?t=362

>>101817123
Oh cool, I was tired of trying those values randomly without knowing what they really mean, thanks anon
How do I use wildcards in ComfyUI?
>>101816797
>>101817026
I guess flux, like most virgin models, is still pretty bad at anime?

>>101817141
>trying those values randomly
90% of the stuff I see in random nodes in comfy. Zero explanation in guides, zero explanation in their own git. Great.

>>101817184
I would say about 75% of the time you kinda get what a parameter means by just trying the max and min, but for that shift shit it's so subtle I don't really know what it does kek

>>101817123
interesting. Probably only works when using flux for upscaling. I'll give his workflow a try

>>101817167
Tbh it's pretty good at anime

>>101817221
As much as I love flux from the bottom of my heart, I don't think I'll survive multiple months genning only Miku and Trump while waiting for a big finetune, because Flux seems to only know 2 characters kek

>>101817221
where sonic?

>>101817123
So basically it's the difference between base and max that matters? So is it the same thing when you do base 2 + max 3 versus base 3 + max 4?
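Not quite, if my reading of ComfyUI's ModelSamplingFlux is right: the node linearly interpolates a shift value mu between base_shift (at a 256-token latent) and max_shift (at 4096 tokens) according to resolution, and the sigma schedule depends on exp(mu), not on the max-base difference alone. So base 3 + max 4 gives a mu exactly 1 higher than base 2 + max 3 at the same resolution, and a different schedule. The constants and formulas below are my reading of the code, so treat them as assumptions:

```python
import math

def flux_mu(width: int, height: int, base_shift: float, max_shift: float) -> float:
    """Resolution-dependent shift, as I read ComfyUI's ModelSamplingFlux.

    Latent "sequence length" is (w/16)*(h/16) for an 8x VAE plus 2x2
    patches; mu interpolates between base_shift at 256 and max_shift
    at 4096 tokens.
    """
    seq_len = (width // 16) * (height // 16)
    m = (max_shift - base_shift) / (4096 - 256)
    return base_shift + m * (seq_len - 256)

def shifted_sigma(sigma: float, mu: float) -> float:
    """Time/sigma shift used by flux-style rectified-flow models."""
    return math.exp(mu) / (math.exp(mu) + (1 / sigma - 1))

# base 2/max 3 vs base 3/max 4 at the same 1024x1024 resolution:
# mu differs by exactly 1, so the sigma schedule is NOT the same
mu_a = flux_mu(1024, 1024, 2.0, 3.0)
mu_b = flux_mu(1024, 1024, 3.0, 4.0)
print(mu_a, mu_b)
print(shifted_sigma(0.5, mu_a), shifted_sigma(0.5, mu_b))
```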
>>101817167
I just copy-pasted her entire design description from the Nier fan wiki and added "A high-quality, high-polygonal 3d anime image featuring a woman standing in a desert". Also, that's one big fucking head.
avatarsissies....... people think we are REPULSIVE FREAKS
>>101817276
>>101817308
purrfect

>>101817307
WTF SATAN, WOJCICKI IS REALLY DEAD, RIP BOZO LMAOOO

>>101817308
kek'ed

>>101817261
What about Meghan Markle and Sonic?

>>101817361
yay...

>>101817434
Come on, we can have some fun with that
>>101817468
Flux can't into gloves.

>>101817468
your picture is a bit overcooked; you can fix it with AdaptiveGuidance, dunno if you have it in your workflow yet?

>>101816890
32GB RAM and 16GB VRAM; I'm not having any major issues other than it being slow. But if it works, it works. Better a superior model that runs slowly than a fast model that only gives you what you are looking for by chance. It makes experimentation more tedious, but you end up better off. What I am saying is: run Flux no matter the cost.

>>101817307
dalle would NEVER!

>>101817562
This is CFG 1.0; Adaptive Guidance won't help. Also, I can't be fucked.

>>101817588
I agree. It takes around 2:30 per image + a couple of seconds to upscale. With 16GB it'll probably be around 2 min

>>101817641
>Adaptive Guidance won't help.
oh it can, you can really uncook the shit out of the output with it https://imgsli.com/Mjg2MTI0
pixel ass
>>101817665
Left looks like low CFG; I am using 3.5

>>101817698
it was CFG 6 + GuidanceNeg 10

>>101817703
Right, so I am not using Dynamic Thresholding; it's a straight flux gen at CFG 1.0 and guidance 3.5. I misspoke, but yeah, adaptive guidance won't do anything because it's already at 1

>>101817719
oh ok, I get it now

>>101816681
>>101816662
What's the name of the node? Searching "groups" gives me nothing after installing https://github.com/yolain/ComfyUI-Easy-Use

>>101817764
Right click on an empty space and turn it on

>>101817781
ooh, it's not a node. OK, thanks anon

>>101817764
>>101817781
also there should be these icons in your comfyui to make it appear. for me it's in the bottom left corner
>>101817307
https://civitai.com/models/636355/flux-detailer
https://imgsli.com/Mjg2MjU4
https://imgsli.com/Mjg2MjU5

>>101817850
you made that lora, anon? that looks nice
OC Donut Steel
>>101817850
>soulless vs soulful
impressive

>>101817803
lol

>>101817798
oh indeed

>>101817307
I remember when ldg threads were 2 days each and everyone was like "it's better this way"

>>101817946
which was like 2 weeks ago

>>101818058
Pre-flux, so it's before the current era

>>101817850
How did you do that?

>>101817946
>>101818058
Sigma bump
is there any prompt to get a character with a receding chin? I've tried everything to no avail.
>>101815942
>Average german RCD test

>>101817946
It kept the thread schizo away

>>101817946
slow threads are always the best, since the only people that hang out then are the ones that actually care

>>101817946
yeah, I remember when diffusion was stale
On the Schnell vs Dev discourse: Schnell isn't great, especially at text, and Dev can have issues with prompt adherence when CFG is 1. Schnell converges fast but has quality degradation; Dev takes more steps and can lose coherence on some details (the text on screen, for example; the prompt was: computer screen with text that reads "an image of what a gecko would see crawling up the side of a building." with a smug woman in a desk chair, view over shoulder, sweater, ).
I think the best speed/quality tradeoff for people who aren't high-VRAM chads is the block merge tip from https://blog.comfy.org/august-2024-flux-support-new-frontend-for-loops-and-more/
Or use a pre-merged diffusion model from https://huggingface.co/maximsobolev275/flux-fp8-schnell/blob/main/flux1-schnell%2Bdev_fp8_unet.safetensors that uses the workflow from the Comfy post.
Way better than Schnell as-is when targeting 4 steps, but remember the license implications: the merge takes on the non-commercial terms of Dev.
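The block-merge tip amounts to building one state dict that takes some blocks from Dev and the rest from Schnell. A toy sketch of the mechanics (key names, block counts, and the "first N blocks from Dev" split are all illustrative; real flux checkpoints are safetensors files with `double_blocks`/`single_blocks` keys and torch tensors):

```python
# Toy per-block merge of two "state dicts": dev weights for early blocks,
# schnell weights for the rest. Keys and block counts are made up here.
def block_merge(dev: dict, schnell: dict, dev_blocks: int) -> dict:
    merged = {}
    for key, w in dev.items():
        block = int(key.split(".")[1])  # "blocks.3.weight" -> 3
        merged[key] = w if block < dev_blocks else schnell[key]
    return merged

dev = {f"blocks.{i}.weight": f"dev{i}" for i in range(4)}
schnell = {f"blocks.{i}.weight": f"schnell{i}" for i in range(4)}

out = block_merge(dev, schnell, dev_blocks=2)
print(out["blocks.0.weight"], out["blocks.3.weight"])  # dev0 schnell3
```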
>>101815477
>comfy mix
what's that?

it's so easy to add text to an image after it's generated. why are they trying so hard to get text right? it's a waste of time.
the exception is if someone is holding up a sign or something with text on it, but even then the actual text isn't the big problem

>>101819094
???
because text is basically in every image

>>101818936
>base schnell doesn't give women buttchins
Maybe it's not that bad after all...

>>101818936
>but remember the license implications
lol
sure thing
lmao

>>101819094
Yeah, there could be a different model for text that works like ADetailer
>>101819569
gm
Those are some weird Miku and Donnie
>>101819795
what does this have to do with local diffusion?

ran starting his schizo-rants again
this place was much better when you were gone, hardly any bitching at all

>>101819846
schizo anon from the other thread is trying to actively sabotage this one now that sdg is becoming less relevant. it's unfortunate for him, because it had the opposite effect, at least on me. I don't even check the other thread anymore because of this.

>>101819893
you go to sleep and wake up thinking about debo, it's incredibly pathetic

>>101815127
This is very cool. prompt?

Let's hope this fresh bread can stay a bit cleaner...
>>101819993
>>101819993
>>101819993

>>101815860
>>101816034
prompt me up nigga

>>101815902
I'll use my gens for commercial purposes and there's nothing they can do about it.