Previous /sdg/ thread : >>101713099

>Beginner UI local install
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Local install
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SD.Next: https://github.com/vladmandic/automatic
AMD GPU: https://rentry.org/sdg-link#amd-gpu
Intel GPU: https://rentry.org/sdg-link#intel-gpu

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Run cloud hosted instance
https://rentry.org/sdg-link#run-cloud-hosted-instance

>Try online without registration
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest

>Models, LoRAs & upscaling
https://civitai.com
https://huggingface.co
https://aitracker.art
https://openmodeldb.info

>Black Forest Labs: Flux
https://huggingface.co/black-forest-labs/FLUX.1-schnell
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>View and submit GPU performance data
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Share image prompt info
4chan removes prompt info from images, share them with the following guide/site...
https://rentry.org/hdgcb
https://catbox.moe

>Discord
6wUwtcJsr2

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/trash/sdg
>mfw Resource news

08/03/2024
>ComfyUI/Forge Implementation of Smoothed Energy Guidance
https://github.com/pamparamm/sd-perturbed-attention
>TryOnDiffusion: A Tale of Two UNets
https://github.com/fashn-AI/tryondiffusion
>Nvidia reportedly delays its next AI chip due to a design flaw
https://www.theverge.com/2024/8/3/24212518
>ComfyUI Frontend Modernization: Transitioning to a New Era on August 15, 2024
https://github.com/comfyanonymous/ComfyUI/issues/4169
>CEO of Invoke says Flux fine tunes are not going to happen
https://www.reddit.com/r/StableDiffusion/comments/1eiuxps
>ComfyUI-FLUX-fal-API
https://github.com/gokayfem/ComfyUI-FLUX-fal-API

08/02/2024
>Optimizing Diffusion Models for Joint Trajectory Prediction and Controllable Generation
https://yixiaowang7.github.io/OptTrajDiff_Page
>UniTalker: Scaling up Audio-Driven 3D Facial Animation through A Unified Model
https://github.com/X-niper/UniTalker
>Smoothed Energy Guidance for SDXL
https://github.com/SusungHong/SEG-SDXL
>Mitigating Multilingual Hallucination in Large Vision-Language Models
https://github.com/ssmisya/MHR
>GalleryGPT: Analyzing Paintings with Large Multimodal Models
https://github.com/steven640pixel/GalleryGPT
>The Manga Whisperer: Automatically Generating Transcriptions for Comics
https://github.com/ragavsachdeva/magi

08/01/2024
>Stable Fast 3D: Rapid 3D Asset Generation From Single Images
https://stability.ai/news/introducing-stable-fast-3d
>Announcing Black Forest Labs
https://blackforestlabs.ai/announcing-black-forest-labs
>Flux: The Next Leap in T2I Models
https://blog.fal.ai/flux-the-largest-open-sourced-text2img-model-now-available-on-fal
>ComfyUI: Basic Flux Schnell and Dev implementation
https://github.com/comfyanonymous/ComfyUI/commit/1589b5
>Kolors ipadapter FaceID Plus
https://github.com/Kwai-Kolors/Kolors/tree/master/ipadapter_FaceID
>The EU’s AI Act is now in force
https://techcrunch.com/2024/08/01/the-eus-ai-act-is-now-in-force
>mfw Research news

08/03/2024
>Image Super-Resolution with Taylor Expansion Approximation and Large Field Reception
https://arxiv.org/abs/2408.00470
>Localized Gaussian Splatting Editing with Contextual Awareness
https://arxiv.org/abs/2408.00083
>Hierarchical Conditioning of Diffusion Models Using Tree-of-Life for Studying Species Evolution
https://arxiv.org/abs/2408.00160
>SynthVLM: High-Efficiency and High-Quality Synthetic Data for Vision Language Models
https://arxiv.org/abs/2407.20756
>Real Face Video Animation Platform
https://arxiv.org/abs/2407.18955
>ObjectCarver: Semi-automatic segmentation, reconstruction and separation of 3D objects
https://arxiv.org/abs/2407.19108
>Multi-Expert Adaptive Selection: Task-Balancing for All-in-One Image Restoration
https://arxiv.org/abs/2407.19139
>Exploring the Adversarial Robustness of CLIP for AI-generated Image Detection
https://arxiv.org/abs/2407.19553
>Advancing Prompt Learning through an External Layer
https://arxiv.org/abs/2407.19674
>VolDoGer: LLM-assisted Datasets for Domain Generalization in Vision-Language Tasks
https://arxiv.org/abs/2407.19795
>Mixture of Nested Experts: Adaptive Processing of Visual Tokens
https://arxiv.org/abs/2407.19985
>Perm: A Parametric Representation for Multi-Style 3D Hair Modeling
https://cs.yale.edu/homes/che/projects/perm/
>ML-Mamba: Efficient Multi-Modal Large Language Model Utilizing Mamba-2
https://arxiv.org/abs/2407.19832
>Adversarial Robustness in RGB-Skeleton Action Recognition: Leveraging Attention Modality Reweighter
https://arxiv.org/abs/2407.19981
>Exploring Robust Face-Voice Matching in Multilingual Environments
https://arxiv.org/abs/2407.19875
>MaskInversion: Localized Embeddings via Optimization of Explainability Maps
https://walidbousselham.com/MaskInversion/
>Task-Adapter: Task-specific Adaptation of Image Models for Few-shot Action Recognition
https://arxiv.org/abs/2408.00249
>>101715462
Literally just now it stopped sucking for unknown reasons.
i am, actually, and in fact, gay. these catgirls are a form of psychological cope. in reality i want to be buggered by a futa fennec. the bigger the phallus the better. giggidy
pw and debo, come hither and rape me, willingly. my orifices are open for business!
when you turn the lights off in the night
i'm the shivering feeling
when you cast about for meaning
here i am.
>>101715644
RIP miku. you never deserved to be born in the first place. I hope you enjoy hell.
>>101715602
congrats but now you're legally required to vote democrat
>>101715622
buy me dinner first
>>101715692
never.
wow mage Miku:
>>101715739
kys, this is a me and debo thread. you are not welcome!
>>101714931
The prompt changed the image too much to get the same exact view.
>>101715764
jump
how far can you stretch the prompt interpretation?
would it get stuff like
>the last one in the group is wearing the same clothes as the first
or
>the number of windows present in the fibonacci sequence are cracked, the rest are intact
>flux has no negative prompt
it is over
>>101715750
no bully wow migu poster or else wow obama will ban you from the wow server
>>101715764
too bad. not very cyberpunk either. worst of both worlds (though cool gen otherwise)
>>101715779
you want to avoid any phrasing that relies on reasoning. the models are literal and are just mapping tokens to concepts
its so over, even my random nonsense is...
>>101715808
>whinefags be like
>"my shit to piss ratio wasn't within normal range today"
>"it is over"
>>101715774
>>101715838
makes me want to try out backflips on flux cuz sd could never get it, however I'm currently yasuke prompting
Is Flux on Stable Horde yet?
won't give up, it wants me dead
won't give up, it wants me dead
goddamn this voice inside my head
won't give up, it wants me dead
goddamn this voice inside my head
won't give up, it wants me dead
goddamn this voice inside my head
won't give up, it wants me dead
goddamn this voice inside my head
goddamn this voice inside my head
>Salami nipples
>Puts diapers on naked characters
So is the model just censored or do you need to jailbreak it first?
>>101715895he willl never pverweja;;, i smedjhdco srd i hioserdiohsdriohnoi sdrji rhjio riohpo ishepoi hrthi
>>101715913
true
>>101715895
"jailbreak" is not possible, bobs and genitalia are just not in the data set, you can try as hard as you want, it's not in there
I think I'm hitting the synth training wall on these. they're all very samey
>>101715972
Lies! I know it's in there I just need to figure out the right keywords.
>>101716014
no cfg is the culprit, the pro version has a lot more "life" to it
>>101716032
I have guidance in my workflow but, tbdesu, I haven't found it to do that much
>>101716032
dev is a distilled model of pro, so you might be onto something there
>>101716032
so you end up with a model that is really good at following your prompts, but isn't that creative.
>>101716032
in my experience guidance helps most if you want a certain style, but its counter intuitive, guidance 1.5 is for high style acceptance
>>101716061
I tried 1.5-5 with this prompt but haven't felt much of a difference. maybe its just something with the prompt
>>101715808
it does, use cfg guider node, keep the cfg low
goo morning all
>>101716049
still pretty goobular shit you can make with it, sure I miss this RNG lucky gen feeling of SD too, but the model makes what you prompt, even if it's very complex compositions.
>>101716083
in my trials the style prompts have to be at the very beginning and have to be quite precise, but then the guidance works pretty well, but the overall quality is a bit better at 3-4
>>101716094
black forest anon over on ldg verified that the inference code in comfy does not support negative prompts, and cfg is just a fluke of how comfy does it, it should only run at cfg = 1.0, what cfg does by accident is change saturation
>>101715872
Nice robot
>>101716134
can/should you use negative prompts with flux? been wondering about that
>>101716134
>cfg is just a fluke of how comfy does it
guidance is an additional input tensor. nothing to do with how comfy does it
node update: can replace regular prompt node with CLIPTextEncodeFlux, so this is how you adjust cfg (many say 1 works well)
1.5 to 3.5 cfg is "recommended"
>>101716213
i dont see whats new here
>>101716177
cfg is not guidance
>>101716166
I asked the black forest employee that dropped into /ldg/ yesterday, his answer was:
>>101702060
>>101702166
so basically they think negatives are pointless and the code they gave comfy to implement flux does not support it. doesn't mean you couldn't get an inference method done that does, and my direct trials with the said cfg == 1 setting prove that: you can write whatever you want, with whatever guidance setting, into a negative prompt, the output won't change
>>101716227
my bad, it has a cfg setting, regular prompt node had none
>>101716235
>negative are pointless
I don't understand. are we supposed to prompt counter-factuals instead?
>>101716236
ignore this, new workflow I have uses ksampler which has cfg and is better
https://comfyanonymous.github.io/ComfyUI_examples/flux/
basic tard question, I have 16gb vram, this means I can't use the 22gb model, right?
>>101716235
>classifier free guidance is not guidance
there's no negative prompt and the guidance part of classifier free guidance is an additional input tensor. that's how the model works.
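for anons confused by the cfg == 1 claims upthread: this falls straight out of the standard classifier-free guidance mix that samplers use. at scale 1.0 the unconditional (negative-prompt) prediction cancels out algebraically, so whatever you type in the negative box has zero effect. toy numpy sketch, illustrative only, not comfy's actual sampler code:

```python
import numpy as np

def cfg_combine(cond_pred, uncond_pred, cfg_scale):
    # standard CFG: extrapolate from the unconditional prediction
    # toward (and past) the conditional one
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

cond = np.array([0.2, -0.5, 1.0])    # toy "positive prompt" noise prediction
uncond = np.array([0.9, 0.1, -0.3])  # toy "negative prompt" noise prediction

# at cfg_scale == 1.0 the uncond terms cancel exactly:
assert np.allclose(cfg_combine(cond, uncond, 1.0), cond)
# at any other scale the negative prompt actually matters:
assert not np.allclose(cfg_combine(cond, uncond, 3.5), cond)
```

which is also why running flux at cfg > 1 does *something* (you're mixing in an empty-conditioning prediction the model was never trained to be guided against), it's just not the trained behaviour.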
I am once again asking if there's any way for my SEGS of people to be individual SEGS, instead of just one that has everyone.
>>101716254
They seem to think that you don't prompt what you don't want and then it won't appear, they think like engineers.. I strongly disagree with them, I hope some anon changes the reference code they supplied and makes real negative prompting a thing, since in my opinion it's an important tool. When you ignore that the model wants to have cfg == 1, you can negative prompt but it will probably not be interpreted by the text encoder correctly and who knows what effect it has .. a bit of a hassle of em.
>>101716275
you can but you will have to swap to ram and gens will be slow (very slow!)
>>101716281
yaya
>>101716296
>you can but you will have to swap to ram and gens will be slow (very slow!)
very slow like how many minutes swapping vs having a card with enough vram?
>>101716343
up to 15 minutes per image. I recommend you run the FP8 mode. picrel, 16gb 4080.
>>101716354
2d backrooms
>>101716374
>FP8 mode
I have no idea what that means.
>>101716343
depends on your system (gpu, ram speed etc.) but you are looking at something like 20-50 seconds per iteration at the minimum, so with flux.schnell (6-10 its) it might be worthwhile, but flux.dev (20+ its) might be a pain
ya see >>101716374 for 4080 performance
as comparison, on 4090 it's 1.16s/it on this gen
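napkin math behind the numbers in that post (illustrative, total gen time is just steps x seconds-per-iteration, ignoring model load and vae decode):

```python
def gen_time_s(steps: int, sec_per_it: float) -> float:
    # sampling time only; loading/decoding overhead not included
    return steps * sec_per_it

# a swapping 16gb card at ~30 s/it vs a 4090 at 1.16 s/it, 20-step flux.dev gen:
assert gen_time_s(20, 30) == 600                 # ten minutes
assert round(gen_time_s(20, 1.16), 1) == 23.2    # under half a minute
```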
>>101716374
>>101716389
Oh I found it and am downloooding
>/g/
>>101716402
the model comes with weights as 16bit floating point numbers .. then it's ~20gb in size in vram with the text encoder, leaving you 4gb for your picture, but you can tell comfy to load it in 8bit floating point (cutting precision), then it will be smaller.. not quite fitting into 16gb but you will have less swapping and faster loading time
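the sizes here are just parameter-count x bytes-per-weight. quick sketch (assumes the commonly quoted ~12B parameter count for flux; text encoders and activations come on top):

```python
def weights_gb(n_params: float, bytes_per_weight: int) -> float:
    # raw weight storage only, ignoring text encoders and activations
    return n_params * bytes_per_weight / 1024**3

FLUX_PARAMS = 12e9  # ~12B, the commonly quoted figure

assert round(weights_gb(FLUX_PARAMS, 2)) == 22   # fp16/bf16: won't fit a 16gb card
assert round(weights_gb(FLUX_PARAMS, 1)) == 11   # fp8: fits, with room for latents
```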
>>101716421
dude they are literally right now discussing tech.. whats with you?
>look at random civitai post
>image looks pretty good, let's see the resources used tab
>3 different outdated negative embeddings
>disney princess lora without calling for any disney princess character in prompt
>incase style at 0.1 strength
why?
"how do I get email to run" is not a tech discussion
This is so confusing, same prompt. I queued up several gens and cancelled the first one, the remaining gens were all relatively quick but the first one was slow as shit.
I feel like this is something for ComfyAnon, if you are here. Maybe there's some optimizations you can do?
>>101716421
he pulled the same shit over in the other thread >>101716006 and also got shut down
>>101716430
ppl are idiots
>best quality, masterpiece, 4k, by Greg Rutkowski
>>101716430
>disney princess lora without calling for any disney princess character
People are convinced that Disney princess lora makes the image/face better just by having it in the prompt
>>101716456
>>101716460
the sad part is that I've seen quite a few good gens that are bloatmaxxed to the prompt limit, like legit "best quality, gorgeous supermodel girl, amazing masterpiece 4k" etc spam. As for the disney lora it's so fucking retarded, you can look at its page and in the gallery there's anal horse penetration because retards keep spamming it at full strength for some reason
Hmm. I'm able to gen using Flux dev FP8 but schnell just seems to crash when using the bottle workflow from https://comfyanonymous.github.io/ComfyUI_examples/flux/#simple-to-use-fp8-checkpoint-version
Changing the weight_dtype from default to something else seems to have fixed it.
ok there, workspace allocation mode
this is with free after use mode
>eager: workspace is allocated immediately and not released until module is unloaded
>lazy: workspace is allocated at first run and not released until module is unloaded
>free after use: workspace is allocated each run then freed after use
tested lazy mode as well, testing eager mode now
for split modules the modules are maybe unloaded after use so lazy mode would be the same as free after use, but there's also the option to keep the modules loaded to keep the constants loaded, then free after use should be used. probably some use case for lazy mode, like idk something that gets repeated in a pipeline but it's ok to load the constants at the start, maybe vae
>>101716545
whoops wrong image.
>>101716529
Yeah, if you look at the gallery, people aren't making disney princesses with it. It does change the output. I didn't think for the better though. People don't realize best quality, masterpiece, etc. were NAI specific tags that had been trained into that model. I know what you mean with the lora bloat, I've seen posters using 1.5 negative embeddings with Pony on there.
>>101716545
yea you went from default (fp16) to fp8, therefore reducing the ram load.. if you want to "try" using fp16 to check quality difference, turn on vram swapping (if you have a GPU that can do that, NVidia can)
reminder to use fp8 if you dont have a shitload of memory or a 4090 cause otherwise it will be slow
>>101716645
>>101716645
That is crazy, actually the best model atm, except for porn. Even better than dalle. Only thing dalle has that flux doesn't have is different artstyles
WoW Miku:
>>101716683
artstyles are completely possible in flux, they are just harder to prompt than in dalle, I am pretty sure dalle has em overbaked to make em easy to access for normies
>>101716711
neat
>>101716432
It is. Do you think it's a basketball discussion?
Coding isn't technology. It's just words.
>>101716711
can you do Kamala Harris Miku?
>>101716769
does this count?
>>101716788
I guess lmao
>>101716788
and of course, the don
First finetune dropped:
>https://civitai.com/models/621563
Why isn't comfyui an appimage? python can be a huge jerkface, and appimages literally solve that.
>>101716810
bullshit
>>101716813
use the portable version, it has a local python environment so your own system python doesn't need to be in order, it manages it all by itself. if you use the git cloned one, it will use system python tho
>>101716844
there's a portable linux one with rocm?
anime won
ok workspace mode is pushed
recompiling the split modules then ill do some testing
>>101716912
I guess there is not .. didn't know you have special needs, sorry.
we did it, guys.
>>101716978
idk, maybe I have to install Windows, didn't want to have to.
YES, no jibberish this time.
>>101716821
>https://github.com/bghira/SimpleTuner
It's hard to believe after all the doom and gloom but this came out with Flux support like a couple hours ago.
>>101717007
>The scripts in this repository have the potential to damage your training data
in the trash it goes
>>101717007
The days before SDXL released they said training loras for it was impossible too.
>>101717007
>bghira
that is the guy who doomed and gloomed about flux training
>>101716978
btw appimage is literally how to make Linux portable apps
They say to run pip but totally don't explain you have to have an env, apparently
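for that anon: the dance most of these repos' READMEs silently assume is a virtual environment before any pip command. generic sketch (not from any particular repo's docs):

```shell
python3 -m venv .venv                 # create an isolated environment in the repo dir
. .venv/bin/activate                  # activate it (Windows: .venv\Scripts\activate)
python -m pip install --upgrade pip   # pip now installs into .venv, not system python
# then whatever the repo asks for, e.g.:
# python -m pip install -r requirements.txt
```

this is also why comfy's portable zip "just works": it ships its own embedded python instead of touching yours.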
>>101716986
trump + trans = truns
>>101717090
ya, but isn't it the job of distro maintainers to make these, cause there are like half a dozen linux appimage formats (flatpak etc.)? I don't think comfy has the time to maintain them all
>>101717007
>>101717056
Remember guys, it's impossible to train
>>101717132
>dozen linux appimage formats (flatpak etc.)?
no, those aren't appimages, those are different package formats/methods of distributing software for linux, appimage is just one of those methods.
>but isn't it the job of distro maintainers to make these
the guy who made the software makes it. appimages are similar to those .dmg files mac uses.
>>101716810
>https://civitai.com/models/621563
Kinda pointless since flux is already good at spoopy shit
>>101717157
see >>101717136
>>101717132
lm studio uses appimage, and it werks
anyhoots, the reason people are getting screwed is ubuntu I guess doesn't make a default environment? I'm not into Python, but whatever.
>>101717193
check the prompts of the example images lol
>>101717200
What does Schnell mean?
>>101715397
Meh, Pony is better than Flux, at least with Pony when you make a simple change like raising a hand or changing eye color it tries to preserve the rest of the image. With flux you get a wildly different image every time so it makes tweaking prompts annoying.
>>101716810
This is the ugliest looking shit I've ever seen. Can flux really not do any nice styles or are we all just prompting it incredibly shittily? Everything it produces looks like some ugly nu-disney pseudo 3d shit, even when it's flat
>>101717242
It means Quick in German
It's their version of SDXL Turbo. It will generate in 4 steps at the cost of quality.
>>101717200
Not sure what you're trying to tell me, Anon.
Was my sarcasm not evident?
>>101717233
>warrior walking in the forest, goth girl, cute sexy
I tried this one with base model
>>101717253
Have an ugly photo
ahhhhh can I skip church and make images mom
>>101717292
also, this is the default one for comfyui
v1-5-pruned-emaonly.ckpt
I got it off hf, and what exactly is it? Why does it look nice?
>>101717299
I just tried dargs (my capcha). Here is what it thinks dargs is.
>>101717276
We went different directions with the forest walking
reminder: this is the law
>>101717299
Anon.. that's sd 1.5...
>>101717352
It's cute. So I put the flux in the right folder, but I think it's wired up wrong.
sd 1.5, maximum donkey
oh, I dragged the image into it.
How did that work lol
I'm feeling modestly gatekept.
got prompt
model_type FLUX
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
Using split attention in VAE
Using split attention in VAE
....
This is because I have too little system ram at 16gb?
>>101717431
>he fell for the 16GB RAM is enough meme
bro upgrade, you wont get far using newer models with 16GB RAM
>>101717453
clearly I'm tarded, why is so much needed vs vram?
>>101717461
it spills over to ram once your vram maxes out
there that's better
i can keep all the FluxTransformerBlock loaded and nearly all the FluxSingleTransformerBlock, any extras are loaded from file
>>101717461
this >>101717481
the UI will load big chunks of the model into system ram then drop it into VRAM; on a gen like pic related, python grabs ~28GB of system RAM at one point and the VRAM is about 23 out of 24 GB used
Last one from me, good night anons
>>101717512
installed more ram, have 32gb
\o/ WE HAVE LIFTOFF (my gpu is noisy)
>>101717661
40%
>>101717661
hooray! enjoy your flux
>>101717347
based migu
Me after I go for a walk before family dinner
neat, simple comics
A 4 panel comic of Miku Hatsune
then describe what happens in the first/second/etc.
>>101716683
And porn.
Without the dog dalle has actual nsfw as training so it can be better for nude/explicit/suggestive image gen.
>>101717855
make her do loss
>>101717675
Thanks!
>>101717863
>>101717967
very nice
>What's up guys it's ya boi Dony. Oh thanks for the sub, queef_mucher_499
>>101717399
>>101718073
Prompt?
>>101718090
Diablo playing simcity..
>>101718090
Twitch Live stream of World of Warcraft, ingame UI and HUD visible, in the bottom right of the screen is a small rectangular box containing a face cam of Donald Trump with headphones and puffy red shirt
Anything optimized for Windows on ARM?
>>101718159
kek, go edit your excel sheets business man
>trump
>miku
>trump
>miku
Wow such a great new model.
>>101718176
plz anon I just want some good use for my snapdragon elite cpu
>>101718296
put it in a phone where it belongs
>>101718279
Fucking Lanie Lugven
>>101718075
Close, amd lol
IT'S WORKING!
fp8, but I can't seem to shake the lowvram thing. it may be that lowvram is the mode that lets amd cards work.
my 6950xt has 16gb, as they all do.
1024x1024 takes 320 seconds. How's that compare?
>>101718355
~20s on 4090. 20 steps
>>101718355
WOW!!!!
prompt: an anime version of barack obama wearing a two piece bathing suit, with massive breasts
>>101718365
Neat. Any idea why it's that much different, though?
>>101718355
25s using 2x3090.
>>101718379
Swapping between vram/ram takes time.
>>101718314
prompt? Looks lovely!
>>101718355
>1024x1024 takes 320 seconds. How's that compare?
16gb 4080 takes about 25-30 seconds per image on fp8
>>101718395
Was a promptless gen, just exploring the latent space. I can find the seed for you
>476284198889048
>>101718279
meds needed
>>101718455
>>101718486
kek
>>101718490
No blue pill, the rabbit hole is greasy.
>>101718422
Does control_after_generate need changing for this to work? Just leave the prompt blank?
Is it possible to make LoRAs for Flux currently?
>>101718784
It is possible. Just not for you.
>>101718784
Yes, but you need to rent one or two A100.
>>101718385
Can/Do you nvlink them, utilizing the 48GB vram? //jelly
>>101718817
This.
Training requires 2 A100's.
>>101718804
wouldn't it be possible for sub $200?
<self censoring
wew
>>101718841
They have nvlink (got it for free with the second card), but it's functionally useless.
>>101718766
No, it's the FP8 all in one model comfy posted. Here is the catbox
https://files.catbox.moe/mt5jkr.png
>>101718926
>>101718314
>>101718422
I think I successfully used 476284198889048, for this image? Or should it always be identical, with the same seed?
>>101718986
yes, the seed will produce an identical image. if you queue up a bunch of random-seed no-prompt gens you can explore the latent space
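what "same seed, same image" means mechanically: the seed fixes the initial latent noise, and everything downstream is deterministic given identical model, sampler and settings (gpu float quirks aside, see below). numpy sketch of the principle (comfy actually seeds torch, but the idea is the same):

```python
import numpy as np

def make_noise(seed, shape=(4, 8, 8)):
    # the seed fully determines the starting latent; same seed -> same tensor
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = make_noise(476284198889048)
b = make_noise(476284198889048)
assert np.array_equal(a, b)                              # bit-identical start
assert not np.array_equal(make_noise(1), make_noise(2))  # different seed, different noise
```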
>>101718899
nta, if not through nvlink how are you utilizing 2 gpus for stable diffusion? (i am assuming that's what you are doing)
>>101719122
this looks beautiful, is this flux? could you share the prompt
>>101719122
I have other uses for the cards.
As for now, I'm using flux, and just sending vae/clip to the second card.
>>101719092
lol why is mine different from yours? I tried it again, and yes mine is identical to my own.
Is it an amd/intel quirk?
>>101719164
Oh it is slightly different. I dunno, probably gpu slight variations
>>101718899
The nvlink is functionally useless? They're stupid $$$.
>>101719122
you offload the text encoder to a second gpu with a custom node, so technically not really dual gpu. & nice muppet gen.
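the "not really dual gpu" point, sketched: only the model placement is split, nothing is computed in parallel. toy torch example with stand-in modules (illustrative, not ComfyUI's API; falls back to cpu if you don't actually have two cards):

```python
import torch
import torch.nn as nn

# pick devices, degrading gracefully on single-gpu / cpu-only boxes
dev_main = "cuda:0" if torch.cuda.is_available() else "cpu"
dev_aux = "cuda:1" if torch.cuda.device_count() > 1 else dev_main

text_encoder = nn.Linear(16, 16).to(dev_aux)       # clip/t5 parked on card 2
diffusion_model = nn.Linear(16, 16).to(dev_main)   # the big model keeps card 1 to itself

tokens = torch.randn(1, 16, device=dev_aux)
cond = text_encoder(tokens)                  # encode where the encoder lives
out = diffusion_model(cond.to(dev_main))     # only the small cond tensor crosses over
assert out.shape == (1, 16)
```

the win is just vram headroom on the main card; the conditioning tensor is tiny, so nvlink buys you nothing here.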
>>101719139here is the prompt>(masterpiece, best quality, CGI, illustration:1.2), ethereal green-haired young (loli:1.2), hands behind back, delicate fairy wings, translucent golden armor, floating on stardust, tending to starflower garden, nighttime sky, lush constellation garden, bioluminescent flora, radiant crystal fruits, intricate vines, glowing branches, vast space, near planet, Earth's silhouette, emotional depth, feeling of awe and wonder, (warmer hues in stars:1.1), stunning background, photo-realistic CGI rendering, breathtaking space adventure, space odysseyInitial batch of 4 640x360 images done with abyssorangemix2nsfw & lolidiffusion0.14_AOM3A3 50/50 model and clip mergeSeed - 120 steps with dpm++2m karrascfg 82nd image from that batch
>>101719216
Just thought about it. Maybe the model is a slight variation, though the filename is identical?
>>101719222
They're useless without the right implementation taking advantage of them.
>>101719287
unless comfy uploaded a different model I don't think so. Or are you using the 11gb fp8? because that is different. Comfys has the clip and vae included
>>101719297
Understood, just read inference gains are minimal while it does help with training.
>>101719332
oh oops feel retarded. I downloaded the 17.2gb version. Well that explains it going to lowvram mode on my 16gb card lmaooo
>>101719375
prompt is kill. 4chan needs to fix that.
>>101719400
The 17gb version is the same one. https://huggingface.co/Comfy-Org/flux1-dev/blob/main/flux1-dev-fp8.safetensors
So I dunno why it's different, has to be slight variations in the gpu die or something
>>101719277
some quick gens on SDXL, enjoy
https://files.catbox.moe/pbvi7v.png
https://files.catbox.moe/9q6a1w.png
https://files.catbox.moe/ovdley.png
(all SFW)
hey guys im not entirely new to SD, been using it for a few months pretty successfully
running autism right now, usually making anime themed pictures but I wanted to do some cosplay genning
any tips on how to prompt cute japanese girls? is that possible with base autism or do I need loras? i tried to fool around with some obvious prompts but all I got was western women ...
>>101719479
What does 567060412920133 give you? It gives me this dopey fox:
https://files.catbox.moe/51k7fc.png
>>101716374
>4080
>43 s/it
Is this a troll post?
>>101719552
Agreed, he should be whipping me; with my 6950, I'm getting around 14.88 s/it.
What's the rentry for retards of August 2024 for NSFW generations?
>>101719534
absolutely doable with autismmix, I'd need to see your prompt and the result to assess. (esp. pony) lora market a bit crowded but that is always an option, of course.
>>101719552
What's the resolution you gen at?
I was at 768x1024 and getting 9mins, I then changed it to 512x768 and now get 3mins.
>fp16 btw
>>101719479
>>101719546
aha! Guidance is different, you have 2.5, I have 3.5, let me try 2.5.
>>101719633
tried something in the vein of
>Hatsune Miku, cosplay photo, photorealistic, japanese cosplay, cosplay
which gens me western half of the time, slapped asian into it and now I get asian women but they aren't very pretty
Just general guidance words I could sprinkle in would be nice, or a full on prompt I could steal and work on further.
>>101719546
I get this little guy
>>101719643
Yeah 2.5 is my default
>>101719496
>some quick gens on SDXL
nice
takes my GTX1060 6gb 15~20mins for one image
>>101719695
Wild, the same number for me isn't in front of a cabin, instead he's much goofier, and inside.
I tried changing Guidance to 2.5, I get the same image that I already got with 3.5, so it's probably not that setting.
>>101719673
careful there, the term "photorealistic" might throw autismmix off, that's something you'd use for those photorealistic pony merges like godiva or cyberrealisticpony, etc. just do a civitai search for "cosplay", pony, lora and parse the images for prompt ideas of whatever suits your boat. I don't gen cosplay so I can't help you with a prompt. shouldn't be too hard to find something decent on civitai tho. tons of crappy loras there too tho so beware.
>>101719713
glad to be of service, you stay on sd15 (not that you have any other choice), merging models to get something unique is a good way.
>>101719730
>>101719695
I GOT CONFUSED
It's the guidance number. That's all. Yes, I got the same as you, with the 476284198889048, guidance set to 2.5.
sheesh lol I need to pay attention!
>>101719789
I already messed around with photorealistic, with or without, the girls still kinda ugly.
I'm trying to look into a model called WAI REALMIX but civitai limits the fucking 6.5gb download to 500kb/s lmfao
>>101719635
>>101716374
Just to be clear - do you mean fp16 version of the model (24gb) vs fp8 version (12gb) or do you mean fp16 clip vs fp8?
Anyways, I'm prooompting at 832x1216 with the full 24gb dev model with my old ass RTX 2080 with 8gb of VRAM. Picrel top is fp8 clip, bottom - fp16 clip. Naturally it takes all 32gb of my RAM and occasionally it offloads shit to my SSD for like a second or so.
>>101719825
I was running the fp16 24Gb model and the fp16 clip until now. trying out the lesser.
>>101719768
>>101719831
>>>/pol/
>>101719825
another anon, I have a 6950xt, and I get 14.88 s/it, using the 17.2gb fp8 checkpoint.
>>101719871
>>101719797
>>101719695
2.5 gets your lovely version rather than my goofy 3.5 version.
I think what is happening is the seed sets up the noise, and then the model sees that. But it's at a lower resolution, that's how I think it is, because the thumbnails look similar. But Guidance seems to cause it to diverge. This is me guessing.
>>101719825
I use the fp8 model because it's simpler but it does the same thing as using the full model with an fp8 clip
>>101719893
nobody likes you, even among this pit of degenerates
>>101719930
I like him. Nobody likes faggots who tell people to go to pol any time they see something a little edgy.
>>101719930
>Karen alert
This is art, and now we have text, just wait.
>>101719947
I didn't tell him to go away, I'm just hoping to chip away at his self esteem until the problem takes care of itself
>>101719924
>>101719695
2.4 gives this, I don't get it.
>>101719820
man wtf civitai can go fuck itself
>its your connection
I get 20mb/s on steam right now. Civitai is shitting itself.
>>101719978
oops, this was actually seed 470751355484866, with guidance 2.4
>>101719978
Guidance is how closely it follows your prompt, the higher the more precise. With no prompt it's following nothing more closely, so it's just making shit up from the latent noise it has. In the noise it "sees" some kind of figure. It loosely forms it into a child in this case, a dog in the others where it has more freedom to be creative.
>>101719978
I had one where I added 5 more steps to the gen and it turned from an old man and a child at a stream to a man working on a laptop
>>101720007
Yeah, I got it wrong and feel dumb, but checked the stupid png in gedit.
My mistake was I hit queue before hitting ok on the seed, I think.
>>101720003
use IDM. Gives me consistently high speeds.
Say his name!
>>101720012
excellent lmao
tried testing on A100, cutlass error even with float16, idk, maybe a linux issue, ill have to test locally on linux
i think theres a cutlass compilation option that should give me more information than just "internal error", its difficult to diagnose, i mean it works on windows after i found that some weights needed to stay in float16
>>101720003
just started downloading the exact same model on civitai, I get full speed; it's on your end. also, not so sure about that model, looks pretty sloppy.
>>101720015IDM?
>>101720051https://www.internetdownloadmanager.com
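For what it's worth, accelerators like IDM get most of their speed by downloading several byte ranges of the same file in parallel via HTTP Range requests. A toy sketch of just the range-splitting step (`split_ranges` is a hypothetical helper, not IDM's actual code):

```python
def split_ranges(total_size, parts):
    """Split a file of total_size bytes into `parts` contiguous
    (start, end) byte ranges (end inclusive), one per connection.
    Each pair becomes a header like: Range: bytes=start-end"""
    chunk = total_size // parts
    ranges = []
    for i in range(parts):
        start = i * chunk
        # last part absorbs the remainder so every byte is covered
        end = total_size - 1 if i == parts - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges

print(split_ranges(100, 4))  # -> [(0, 24), (25, 49), (50, 74), (75, 99)]
```

Whether that helps with civitai specifically depends on their CDN honoring Range requests and where the bottleneck actually is (see the ISP/cloudflare point below).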
>>101719947
It's cringe, not edgy
>>101720051
Internet Download Manager, it's /pol/ boomer software
>>101720062
>it's /pol/ boomer software
That actually works beautifully
>>101720013
>>101719695
2.4 changes his chain into a lock.
>>101720061
>>101720062
Hm. I guess it's just my ISP being a bitch then. They have some issue with cloudflare over pricing, so they don't use it. Civitai's servers must be in the US then.
# du -hs models/
18T models/
>>> LORA.count_documents({"modelVersions.0.downloaded": True})
161375
>>> LORA.count_documents({})
168932
nearly got all the loras
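Those `count_documents` calls are pymongo's; the dotted-path filter reaches into the first element of the `modelVersions` array. The matching semantics can be sketched in plain Python (an in-memory list of dicts standing in for the collection; `resolve` and this `count_documents` are toy stand-ins, not pymongo internals):

```python
def resolve(doc, dotted_path):
    """Walk a Mongo-style dotted path such as 'modelVersions.0.downloaded',
    treating numeric segments as list indices."""
    cur = doc
    for part in dotted_path.split("."):
        try:
            cur = cur[int(part)] if isinstance(cur, list) else cur[part]
        except (KeyError, IndexError, ValueError, TypeError):
            return None
    return cur

def count_documents(coll, filt=None):
    """Count docs where every filter key resolves to the given value."""
    if not filt:
        return len(coll)
    return sum(all(resolve(d, k) == v for k, v in filt.items()) for d in coll)

loras = [
    {"modelVersions": [{"downloaded": True}]},
    {"modelVersions": [{"downloaded": False}]},
    {"modelVersions": [{"downloaded": True}]},
]
print(count_documents(loras, {"modelVersions.0.downloaded": True}))  # -> 2
print(count_documents(loras))                                        # -> 3
```

Same shape as the numbers above: filtered count = downloaded, unfiltered count = total tracked.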
>>101720028
>>101720129
Just a question, did you used to post in the dalle threads when they first came around here on /g/? That character reminds me of an OC donut steel from those days.
>>101720137
big image, how long did it take? I have only been doing 1024x1024, and it takes 5-6 minutes, rarely less.
# du -hs training_data/
548G training_data/
>>> TRAINING_DATA.count_documents({"downloaded": True})
6038
nice collection of training data from civit too
>>101720148
nope
>>101720129Stolen data gang!!!11!!!kys nigger
>>101720157
My initial gen is 512x768, I then upscale it in a separate workflow, adding detail etc.
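One detail worth knowing for a two-stage workflow like that: the upscale target should stay on the VAE's latent grid, i.e. both sides divisible by 8 for SD-family models. A tiny helper sketch (`upscale_dims` is a hypothetical function illustrating the snapping rule, not a ComfyUI node):

```python
def upscale_dims(width, height, factor, multiple=8):
    """Scale a resolution and round each side to the nearest
    `multiple` so the VAE latent grid stays valid."""
    def snap(x):
        return max(multiple, int(round(x * factor / multiple)) * multiple)
    return snap(width), snap(height)

print(upscale_dims(512, 768, 2.0))  # -> (1024, 1536)
print(upscale_dims(512, 768, 1.5))  # -> (768, 1152)
```

Tiled upscalers like Ultimate SD Upscale handle this for you, but it matters if you're wiring the resize nodes yourself.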
I can't wait for the first nsfw flux finetune!
>>101720192
all data is free use, like your mom
>>101720157
took 1 min 4 seconds on a 3060 12GB, 64GB DDR4
>>101720204
Did you manage to run it locally? I tried the github installation guide to run it in python, but I'm too retarded to understand it once python throws syntax errors at me. I need a zip folder with a suspicious install.exe...
Flux is decent. 6/5
>>101719789
I have recently implemented SDXL into my workflow, still trying to figure it out though.
can anyone help with Kohya DyLoRA? I read that it's basically the same as fine tuning then extracting a lora, which is 1000% superior to pure lora training, and that it's 7-17x faster. Is this true? Why are there conv dimensions in kohya for it? Do I need to change any other parameters? I just set rank 256, alpha 256, conv dim 256 and 256?
also, does anyone have a colab notebook for fine tuning sdxl, sd1.5, and/or onetrainer or stable tuner? the original kohya ones have all broken because of some repo update to do with xformers
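On the "fine tuning then extracting a lora" idea: the extraction step approximates the weight difference W_finetuned − W_base by a low-rank product, usually via truncated SVD. A minimal numpy sketch of that core step (illustrative only; kohya's actual extraction script also handles per-layer scaling, conv weights, and rank/alpha bookkeeping):

```python
import numpy as np

def extract_lora(w_base, w_ft, rank):
    """Approximate (w_ft - w_base) as B @ A with inner dimension `rank`
    via truncated SVD -- the core of finetune-then-extract."""
    delta = w_ft - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    b = u[:, :rank] * s[:rank]   # (out_features, rank), columns scaled by s
    a = vt[:rank, :]             # (rank, in_features)
    return b, a

rng = np.random.default_rng(0)
w_base = rng.normal(size=(64, 32))
# Simulate a finetune whose weight change happens to be exactly rank 2
delta = rng.normal(size=(64, 2)) @ rng.normal(size=(2, 32))
b, a = extract_lora(w_base, w_base + delta, rank=2)
print(np.allclose(b @ a, delta))  # -> True
```

Real finetune deltas aren't exactly low-rank, which is why a generous rank (like the 256 above) loses less of the change.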
Long time lurker here, just got comfyui set up. I have checkpoints ponyv6 and juggernaut xl. My gens look like garbage and I have some questions. Are the sd1xl loras I downloaded not going to work with juggernaut? I have downloaded a bunch of workflow jsons. They either have errors even after I get all the missing nodes, or just look like trash once I switch to the loras and upscalers I have. My question is: are workflows so particular to a model combination that any change to the models wrecks the output? Tips on keeping track of the clip and cfg settings for all your models? Thank you
>>101720350
works fine on Forge for me
>Cockpit view of two dark-skinned muslims looking at the camera with excitement while piloting a plane. The two buildings of the World Trade Center can be seen from the window.
Love how there's no mention of terrorism in the prompt but the buildings have smoke, kek.
>>101720350
yo. you got a bit of a journey ahead of you, but put in the time and it'll pay off eventually. base pony is not really recommended; grab one of the autism mixes: "autismmix confetti/dpo". build your own basic workflows and keep going until you can get results similar to the example images. a decent workflow will be something universal, and you can switch models freely. grab the "rgthree" nodes, they have the "fast groups bypasser" stuff. put all "sections" of your workflow in groups and bypass them easily with those rgthree nodes. another useful node set: https://github.com/jags111/efficiency-nodes-comfyui
this is the loader and sampler I use. long winded topic. etc etc
is it just me or does pony do better with no embeddings? or prompts like "absurdres" or easynegative
>>101720523
there are pony-specific embeddings you can use that certainly do something. sd15 embeddings won't do anything, and all these "absurdres" expressions are moot because "score_9" etc takes care of that
>>101720496
OMG YES you win the prize. the golden lemon
>>101720348
bump, c'mon guys, I have a 4tb porn dataset to finetune. Is there really not a single working colab notebook anymore?
>>101720222
any chance of a catbox?
>>101720350
efficiency nodes like that other anon mentioned are what you primarily need. a good workflow should be able to swap easily between models of the same type (SDXL/Pony, SD1.5, Flux, etc). Base pony is garbage, you won't get anything good out of that without loras or excessive prompting. Try Lustify or BigLust and/or Zonkey 5.0:
DPM++ 3M SDE
Exponential
40 steps
4 cfg
any 1024 res (I use 832x1216 mostly)
remember booru tags, "realistic" and "score_9" when prompting either of these
>>101720578
>2024
>colab
>4tb dataset
lol
>>101720607
big lust & lustify lookin good, ty for the info. ahh, big lust is from waterdrinker, awesome. fuck, those realistic pony merges are really starting to pile up.
>>101720234
I just use an online service
>>101720605
Here's my messy workflow. https://files.catbox.moe/xc395o.json
>>101720614
I have about 10 google accounts and obviously will split the jobs, do merges etc, and move the models
>>101720776
thank you anon, hopefully I can get the same quality you managed!
i can't match the fluxdev results on replicate and fal. tried all the samplers and schedulers with matched settings. i guess it's the fp16 difference
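Plausible: every weight and activation gets rounded to the nearest value representable at its precision, and those tiny per-step errors compound over dozens of sampling steps, so different precisions won't produce bit-identical images even with matched seeds and settings. A quick numpy illustration of the rounding (numpy has no native fp8, but fp8's error is bigger still):

```python
import numpy as np

x = np.float32(0.1)    # 0.1 is already inexact in fp32...
x16 = np.float16(x)    # ...and loses more bits at half precision
err = abs(float(x16) - float(x))
print(float(x16))      # -> 0.0999755859375
print(err > 0)         # -> True
```

An error of ~2e-5 per value sounds harmless, but it perturbs every attention score and noise prediction, and the sampler then walks a slightly different trajectory each step.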
Fuck dependencies. Fuck parameters.
>>101720800
yw, my separate workflow is just SD3 with an Ultimate SD Upscale node.
>>101720797
Go on then lol, try it
>>101720432
>>101720607
Thank you
>>101720814
I'm ready, but the colab finetuner notebook from kohya is broken and the guy has killed himself or stopped developing
>>101720812
https://files.catbox.moe/e37ky3.json
>>101720834
There's a reason you won't find a working colab. It's not 2022
>>101720847
thanks again anon!
are there any pony realism models that do excessive pubic hair well?
having trouble getting a myspace aesthetic. I already have the "boring picture circa 2008" trick
What is the proper way to prompt flux? I try to get a certain art style working, but it simply doesn't do it; bing and chatgpt do it with no problems. Does flux suck at making painting-like art styles?
>>101720852
I have some lora options working, and one sdxl model, but not pony or full finetunes. I can finetune on my 8gb card locally... how can't I on a fucking colab with a 15gb t4?
>>101720557>OMG YES you win the price. the golden lemonstill feeling out the classic boxart feel
>>101720892
>What is the proper way to prompt flux?
natural language with elaborate and descriptive sentences
>I try to get a certain art style working
the current consensus seems to be that it's weak with art styles & creativity but strong with literal prompts. you can try running the style prompt explicitly through clip to see if that helps with style influence
>>101720496
>>101720933
damn, i now want to play that.
>>101720933
>>101720496
Isn't it usually spelled "meido"?
>>101720958
kek
>>101720967
>implying I know how to spell
>>101720933
super cool, can it do mega drive as well? (probably not). working on a realism catgirl set btw, soon..
>>101720984
>that yin yang
>>101721004
it can do good yin yangs too, but i liked that one because it looked kind of evil
Man, I could've sworn I was previously able to do darker skin than this
>>101720986
ain't happening. not enough images of them, I guess
>>101720887
maybe it needs some.... *F5*
>>101721072
>*F5*
huh
>>101721107
good rng
>>101720887
jpeg artifacts, dirty mirror, low res
>>101721107
how many other people round here tell you to use/press F5?
>>101721072
>>101721127
oh
>>101721011
>>101721011
>>101721011
flux has pretty good facial variation
Ryan gosling at home
when your goth gf is 35
>>101721033
Once in a blue moon.
>>101721223
anon, that's a 26 year old
>>101721291
idk, she looks pretty old
>>101721334
she looks like a 26 yo i used to know
>>101721291
if she was 26 in 2006, that makes her 44 today