/sdg/ - Stable Diffusion General

Previous /sdg/ thread : >>101436631

>Beginner UI local install
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Local install
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SD.Next: https://github.com/vladmandic/automatic
AMD GPU: https://rentry.org/sdg-link#amd-gpu
Intel GPU: https://rentry.org/sdg-link#intel-gpu

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Run cloud hosted instance
https://rentry.org/sdg-link#run-cloud-hosted-instance

>SD3 info & download
https://rentry.org/sdg-link#sd3

>Try online without registration
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest

>Models, LoRAs & upscaling
https://civitai.com
https://huggingface.co
https://aitracker.art
https://openmodeldb.info

>Animation
https://rentry.org/AnimAnon
https://rentry.org/AnimAnon-AnimDiff
https://rentry.org/AnimAnon-Deforum

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>View and submit GPU performance data
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Share image prompt info
4chan removes prompt info from images, share them with the following guide/site...
https://rentry.org/hdgcb
https://catbox.moe

>Discord
6wUwtcJsr2

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/trash/sdg
inb4 the news lel
>mfw Resource news
07/17/2024
>Kolors-IP-Adapter-Plus weights and inference code
https://huggingface.co/Kwai-Kolors/Kolors-IP-Adapter-Plus
>DepGAN: Leveraging Depth Maps for Handling Occlusions and Transparency in Image Composition
https://amrtsg.github.io/DepGAN
>Intel Capital invests in 43 Chinese AI companies
https://www.tomshardware.com/tech-industry/intel-capital-investments-in-chinese-ai-startups-draw-us-govt-attention
>Novel Artistic Scene-Centric Datasets for Effective Transfer Learning in Fragrant Spaces
https://zenodo.org/records/12700182
>FasterLivePortrait: Bring portraits to life in Real Time
https://github.com/warmshao/FasterLivePortrait
>TGIF: Text-Guided Inpainting Forgery Dataset
https://github.com/IDLabMedia/tgif-dataset
>Measuring Style Similarity in Diffusion Models
https://github.com/learn2phoenix/CSD
>SDPT: Synchronous Dual Prompt Tuning for Fusion-based Visual-Language Pre-trained Models
https://github.com/wuyongjianCODE/SDPT
>CIC-BART-SSA: Controlled Image Captioning with Structured Semantic Augmentation
https://github.com/SamsungLabs/CIC-BART-SSA
>Towards High-Quality 3D Motion Transfer with Realistic Apparel Animation
https://github.com/rongakowang/MMDMC
07/16/2024
>ComfyUI_frontend: Modernized TS Front-end
https://github.com/Comfy-Org/ComfyUI_frontend
>Apple, Nvidia, Anthropic Used Thousands of Swiped YouTube Videos to Train AI
https://www.wired.com/story/youtube-training-data-apple-nvidia-anthropic/
>DataDream: Few-shot Guided Dataset Generation
https://github.com/ExplainableML/DataDream
>UltraPixel: Advancing Ultra-High-Resolution Image Synthesis to New Peaks
https://jingjingrenabc.github.io/ultrapixel
>AccDiffusion: An Accurate Method for Higher-Resolution Image Generation
https://github.com/lzhxmu/AccDiffusion
>US Senate introduces bill to setup legal framework for ethical AI development
https://www.techspot.com/news/103820-us-senate-introduces-ground-breaking-bill-setup-legal.html
>mfw Research news
>07/17/2024
>Efficient Training with Denoised Neural Weights
https://yifanfanfanfan.github.io/denoised-weights
>Animate3D: Animating Any 3D Model with Multi-view Video Diffusion
https://animate3d.github.io
>Encapsulating Knowledge in One Prompt
https://arxiv.org/abs/2407.11902
>SpaceJAM: A Lightweight and Regularization-free Method for Fast Joint Alignment
https://bgu-cs-vil.github.io/SpaceJAM
>Contrastive Sequential-Diffusion Learning: Approach to Multi-Scene Instructional Video Synthesis
https://arxiv.org/abs/2407.11814
>Video-Language Alignment Pre-training via Spatio-Temporal Graph Transformer
https://arxiv.org/abs/2407.11677
>Scaling Diffusion Transformers to 16 Billion Parameters
https://arxiv.org/abs/2407.11633
>QVD: Post-training Quantization for Video Diffusion Models
https://arxiv.org/abs/2407.11585
>Self-Guided Generation of Minority Samples Using Diffusion Models
https://arxiv.org/abs/2407.11555
>AEMIM: Adversarial Examples Meet Masked Image Modeling
https://arxiv.org/abs/2407.11537
>Length-Aware Motion Synthesis via Latent Diffusion
https://arxiv.org/abs/2407.11532
>How Control Information Influences Multilingual Text Image Generation and Editing?
https://arxiv.org/abs/2407.11502
>Isometric Representation Learning for Disentangled Latent Space of Diffusion Models
https://arxiv.org/abs/2407.11451
>Controllable Contextualized Image Captioning: Directing the Visual Narrative through User-Defined Highlights
https://arxiv.org/abs/2407.11449
>Model Inversion Attacks Through Target-Specific Conditional Diffusion Models
https://arxiv.org/abs/2407.11424
>Zero-Shot Adaptation for Approximate Posterior Sampling in Inverse Problems
https://arxiv.org/abs/2407.11288
>Unconstrained Open Vocabulary Image Classification: Zero-Shot Transfer from T2I via CLIP Inversion
https://arxiv.org/abs/2407.11211
>Amortized Inference with Diffusion Models for Learning Clean Distribution from Corrupted Images
https://arxiv.org/abs/2407.11162
Morning anons
>>101443950
Good morning.
>>101443950
gyoo evening!
rodents
>>101444041
common misconception, quokka are actually marsupials, not rodents
Go with the (Aura)Flow.
>>101444061
have you tried pixartSigma900m yet?
>>101444061
AuraFlow, the chinese cheap copy of ideogram kek
>>101444121
AuraFlow gives the bigger ears.
>>101444060
well i learn something new every day
apologies to quokka anon
>>101444177
as penance, you must generate a quokka
>>101444177
What tags do you use for this?
for me, it's only the lowest of quality jpgs
>sponsored by NightCafe
>PR2PG
No. I will not get out of your head. Stay sane, idiot dads.
>MMA8W
Yes. I am literally the internet king of woofboxing. Get destroyed.
>82AYG
82% aye /g/ when do I get my job?
52%
>X4K8W
X's 4 Kuwait?
YO WUT UP E-DAD
>AT0RR
@0:ReadReceipt(/s) I will NOT USE TOR.
>YJMKG
Ah. Yes. No.
>Schizophrenia, please respond. Doctor, why are we doing this?:
And then Wildbow typed inside my brain:
>8j0m0
A little captcha bear.
I would recommend not reading this post.
>>101444372
>I would recommend not reading this post.
keep on with it! I like your style
>>101444372
thanks I like everyone in /sdg/ especially the bitter chubs
>HYXXK
I miss hlky.
>>101444358
I'm glad I can't generate convincing Morenagraphy. :|
some wildcard collections have the most silly stuff .. __emoji-combo__ is weird AF ...do emojis actually have any impact on gens? maybe some?
>>101444415
#mentioned
>>101444474
depends on model
a quokka peace offering
>>101444539
rare tiger quokka! its a cute!
>sdxl at high res without medvram a1111 OOMs (rtx 4090)
>with medvram I get 99% RAM usage (32gb)
This has to be a joke
>>101444663
sd 1.5 keeps winning
>>101444663
did you turn off swapping to system RAM in the driver? if not, the driver will swap into system RAM when you even barely scratch 16-20GB of VRAM usage. its in NVidia Config (right click desktop) -> 3D Configuration -> CUDA System Memory Fallback -> turn that off (set it to prefer no fallback)
Had me wondering what the heck, why is my 4090 so fucking slow after a driver update some time ago .. it was introduced with the 546.1 driver update
>>101444701
>did you turn off swapping to system RAM in the driver?
Yes, thats why it OOMs at high res (instead of grinding to a halt)
me on the left
impromptu marsupial hour
>>101444728
then something is wrong with your a1111 install, only time I had high RAM usage (20GB+!) with it was with 1.10RC using SD3 .. with SDXL pytorch usually idles around 1-3GB of RAM usage even on very high res .. also, are you using the gimped pytorch+cu12.1 release? that makes everything horrible on a1111
>>101444784
>also are you using the gimped pytorch+cu12.1 release? that makes everything horrible on a1111
Yes, just checked it, which version should I update to?
>>101444837
you should downgrade to torch: 2.0.1+cu118 and xformers: 0.0.20
you can do that by downgrading via
>git checkout 7dfe959f (v1.7.0 I think)
then delete venv and get the old version, then add
>set XFORMERS_PACKAGE=xformers==0.0.20
to webui-user.bat
then go back with
>git checkout master
to the 1.9.4 current release
you will then have the newest a1111 but with the old, still-working pytorch/xformers combo. you will get a warning, but you wont crawl to death in gens .. no clue whats up with the cuda 12.1 pytorch release and a1111 but its broken for me .. maybe that will fix things for you
>>101444903
>>git checkout 7dfe959f (v1.7.0 I think)
waaiiit, wrong hash, 7dfe959f is 1.9.3
its
>git checkout bf08a5b7
>>101445014
well anything from early this year will work
Always a nice surprise when the OP uses one of my pictures.Always makes me think I should put more effort into them, though.
>>101444903
I just did a
>pip install torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118
Thank you, now it works (can do high res fix/img2img at high resolutions)
>>101445099
dont worry /sdg/ is the slop containment general
>>101445099
seems like they're just the right level of cute if they're getting picked, anon. However, if you can somehow dial it up even more then let's see it!
>>101445112
great! glad it worked for you .. ya, using pip is the more sensible approach .. strange tho that the new torch version works fine on SD.next .. must be some weird bug with a1111, its even worse when you use several loras in a gen, a simple 1024x1024 with 3-4 loras crawled to like 15 seconds per iteration with the cu12.1 release for me on a 4090
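if you ever want to sanity-check which torch/xformers build actually landed in a venv before blaming a1111, a quick stdlib check works. sketch only, and the package names are just the ones discussed above:

```python
import importlib.metadata as md  # stdlib since Python 3.8


def installed_versions(pkgs=("torch", "torchvision", "xformers")):
    """Return {package: version string or None} for the given distributions."""
    versions = {}
    for name in pkgs:
        try:
            versions[name] = md.version(name)
        except md.PackageNotFoundError:
            versions[name] = None  # not installed in this venv
    return versions


# run inside the webui venv, e.g.:
# print(installed_versions())  # look for '2.0.1+cu118' / '0.0.20'
```

run it with the venv's own python, not your system interpreter, or you'll be reading the wrong site-packages.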
This one may have been a lil too nsfw, so just in case..
https://files.catbox.moe/dt1ztc.jpg
>>101445191
Perfect.
>>101445129
Kinda. LDG is full of pretentious fake art so I'm not sure if it's better.
>>101445252
Yes, as opposed to the real, genuine art 1girl #84858599272664959505264978241
soul diffusion general vs lame diffusion general
>>101445153
Looks like a darker k1
>>101445576
>>101445597
fuck the idiot woke up .. go do that in /ldg/ for a while okay?
>>101445587
>darker k1
was a result of wildcards ..
>{ Briarberry Cohort|Mangekyou Sharingancrazy eyes|Magmaw}
>{ musician|sulking|John Zeleznik||sunglasses}
>>101445661
I didnt ask
what's the SOTA of skeleton animation extraction from monocular video data?
>>101445573
At least AI is good at that. So the results are more interesting than "beautiful landscape, drawn by Francois de la chateau du merde, late 17th century, incredible, stunning, abstract, award-winning, oil on canvas, holy shit, wow, unconceivable, inbelievable, nice looking, awawesome, omgomgomg, winner of deviantart contest, I am a genius" or whatever. The 1girl posters aren't pretending to produce art. They can admit it's pointless slop.
whats wrong with non-pony anime XL models? I tried anythingXL, animagineXL, bluepencilXL, all of them generate stupid bingo cards or casino stuff?? any good XL anime non-pony models out there still?
>>101445679
I don't care
>>101445689
whatever vtubers have been using for years
>>101445730
>whats wrong with non-pony anime XL models? I tried anythingXL, animagineXL, bluepencilXL all of them generate stupid bingo cards or casino stuff?? any good XL anime non pony models out there still?
okay forget this .. its fine .. I forgot to remove the score_987 schizo tags from pony on the gens .. they are fine
>>101445689
why do you specify monocular?
What's the state of grifting with AI '''art''' these days? Can I still sell my soul to offset some of the cost of my GPU with coom or would I get a better ROI by destroying my shiny new toy with mining?
2 questions/requests
>is there a forge fork that's being kept up to date?
>if there is, and I'm using windows, how do I switch to that fork "in place" from my old forge folder? if it is at all possible
>>101446357
forget it, peddling porn is a difficult business .. you maybe have luck if you find rich furries to sell some furry porn to, but mostly the market is oversaturated, only way to get some more bucks is make a really good model and put up a patreon, but thats real work you know?
>>101446357
I think there are "bounties" on civitai where you get points if you give people the loras they want, I don't know if these points equate to money though
>>101446373
>is there a forge fork that's being kept up to date?
forge is already a fork of a1111, so try sd.next (https://github.com/vladmandic/automatic) if you are unhappy with forge
>if there is, and I'm using windows, how do I switch to that fork "in place" from my old forge folder? if it is at all possible
thats very ill advised .. mixing different forks of a software will just lead to catastrophic results, make a new install .. check if you like it, then migrate the models and extensions folders, thats not a lot of work and most a1111 forks use very similar folder structures
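the migration step is basically just copying a few directories. a rough POSIX sketch of the idea (the anon asking is on windows, so translate to robocopy/xcopy there; the folder list is an assumption based on typical a1111-family layouts, verify both installs actually use it):

```shell
#!/bin/sh
# Copy shared asset folders from an old webui-fork install into a new one.
# Folder names assumed from common a1111-family layouts -- check yours first.
migrate_assets() {
    old="$1"
    new="$2"
    for d in models embeddings extensions; do
        if [ -d "$old/$d" ]; then
            cp -r "$old/$d" "$new/"
        fi
    done
}

# usage: migrate_assets /path/to/old-forge /path/to/new-install
```

extensions sometimes carry their own venv-specific bits, so re-installing those from the new UI's extension manager is the safer route if anything misbehaves.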
>>101446387
there were some changes that were supposed to be happening to get creators paid or something
>>101446378
>put up a patreon
That's the plan. No effort, just like the AI slop I make because I failed to learn to draw in time.
>>101446387
You get buzz if the user who requested the bounty picks yours as the winner. If more than one person trains a model for the bounty, which will almost always happen, the user chooses which one they like best and only that person gets the buzz. They actually aren't obligated to give anyone the reward if they don't want to.
>>101446632
>I failed to learn to draw in time
>t. not even 30 yet
What does 'in time' even mean
>>101446632
if you wanna peddle porn on patreon you will need to have their age check thing up .. wont get many customers
>>101446661
>t. not even 30 yet
I wish you were right, buddy.
>>101446716
Well, unless you're 87 or something I don't see how it is too late. All you need is the dedication and some time
i think trani is shit
>>101446632
>No effort
then nothing will happen. online engagement is 10% the content and 90% the marketing. most of the legwork is in funneling people to your content, otherwise it all sits in a dark corner of the internet never to be seen by anyone
>>101446749
that's why nobody gives a shit about what you think
>>101446754
100% this .. I seen some total slop sell like hotcakes just cause marketing was good, while gems sit in the dark
Do I have to register to use SD, why do they need to know my name and why do I need an organisation? How do I download and install it on windows 11 pro?
>>101446754
>>101446770
Why is shit always like this, bros? Why do marketingfags always get their cut?
>>101446858
No, just download models from
https://civitai.com
Use one of the
>Beginner UI local install
links at the top of the OP
>>101446867
cause normies fall for marketing like flies for shit, the normie brain is just not resistant to it and they will ignore the quality content even if they stumble upon it cause they are trained to eat shit all day
>>101446867
I think there's two components to it. the first is that people are generally passive and the ideal slop consumer (your target demo) is maximally passive. if you're not going out and aggressively leading people by the nose to your content, nothing would motivate them towards it otherwise
the other thing is just the monumental amount of content available makes it hard for people to find stuff even if they're looking. maybe the stuff you make is perfect for some people but they just can't find it. actively marketing your stuff widens your footprint and makes it more likely for people to find what you're putting out
>>101443841
I ordered an rx 6950xt. Can it do any sd?
>>101446924
looks great!
>>101447011
yes, but slower than on an equivalent NVidia GPU .. it will work tho. the slowdown is mostly cause of horrible software, the pure silicon is not bad, but CUDA is just supported way better and more mature than ROCm or DirectML
>>101447023
thank you
>>101447036
I tried the vulkan side under LLM on NV and it worked very well, same speeds as cuda for me, but I don't know about or if there's any vulkan diffusion stuff.
>>101447046
can you give her a shave
>>101447054
I think all (most?) local image diffusion implementations use pytorch as the framework, and that uses CUDA/ROCm or CPU .. so no Vulkan implementation .. also no idea how feasible Vulkan is for that, cause while it allows low level GPU access, I have no clue how much of a framework exists for something like ML diffusion
i miss schizo anon
>>101447090
I agree, this hairstyle is not in-regulation
>>101447011
send it back
>>101447139
https://pytorch.org/tutorials/prototype/vulkan_workflow.html
Might be a fun experiment. Looks like it exists but just isn't included ~~to sell nvidia cards~~ for reasons.
>>101447011
cancel the order, get an RTX 3060 instead, or RTX 4090 if you have the cash
>>101447188
Can't, it has better rendering than the 4090
>>101447225
I know it's offtopic, but has anyone compared 768p gaming performance of highend graphics cards?
>>101447243
>768p
the heck, why would you want that? .. but as a real answer: it will be heavily limited by the CPU, almost any game will be topping out above the refresh rate of your monitor anyhow (unless you have one of those 500hz monsters) .. also 768p? aint that 4:3, aka 1024x768? you mean 720p?
>>101447277
Any gpu made in like, the past decade will handle anything at 768p beastmode.
It'll come down to CPU and memory subsystem bandwidth then.
frog croak prompt? is there such a thing?
>>101447296
maybe?
>>101447269
>the heck why yould you want that
What if the game is so demanding that no card runs it at playable framerates, at real resolutions? I would also be interested in 480p.
hard to explain, but some levels look amazing at 768p. You'll insta-die, but it looks neat. I am thinking of the sky one in Portal. I was using integrated graphics. Before an antihacker patch, some levels were playable, oddly. I'll find the level.
>>101447277
Yeah, but which gpu gets the highest fps? Like does the 4090 just go crazy to 400fps or something?
>>101447343
why would anyone make a game that no one could play?
>>101447306
trying to get my frog girl to croak adorably but she just stares with her tongue out pondering my status as food
>>101447323
>What if the game is so demanding that no card runs it at playable framerates, at real resolutions?
what game would that be? I have yet to find anything that would run slow on my 4090 .. even in 4k mostly
>>101447343
>why would anyone make a game that no one could play?
well sometimes that happens, when Crysis was originally released there was no real hardware that could run it reasonably well .. but you are right, these days games that wont work on top hardware rarely get released .. sometimes the optimization maybe sucks (Cities: Skylines 2 comes to mind)
>>101447355
try "shout", "moan", "ahegao"
you gotta respect your shit before others will anon
>>101447343
All major dev teams literally do this, assuming new hardware will handle their games. (engines)
:) I hope some leak
I have no fucking idea
mfw hit with a.i. burnout.
>>101447576
first one?
>>101447617
um no
>>101447628
just make something different or watch anime then
>>101447657
im probably going to play tf2 or more likely lay on the couch browsing toktik for an hour or so
>>101447667
cozy
>>101447576
change prompt subject .. making same creepy girls all day everyday ofc gives you burnout .. try some funny gens or some landscapes or abstract art for a change
>>101447667
>couch browsing toktik
doom scrolling wont make you feel better, do something productive .. clean your room/flat/house .. write a song, dance .. touch grass
>>101447727
they aren't creepy they are beautiful and aesthetic and make me feel good to look at and more
Don't know if this is the right place to ask, but is Waifu2x still the best for upscaling a low-rez anime image?
>>101447751
too much of anything (even good things) is poison .. just rest your mind and eyes for a while
>>101447740
>touch grass
I got poison ivy really bad from doing this
>>101447764
touch grass! not touch ivy! yikes
>>101447777
My ivy problem is so bad it's growing in the grass
I had a dream last night that I had a gf and I was happy, now that feel is gone forever and i can't stop thinking about it
>>101447797
prompt her
>>101447764
Poison ivy looks nothing like grass
>>101447740
shut the fuck up jordan. go rail some benzos
Sorry to ask but is there an AI, hopefully web app, that can extract clean line-art from an anime image? Discarding all coloring and effects
>>101447815
the fuck is up with you? your benzos ran out?
i did pet a cat this morning, for first time in 7 months. only highlight of the day. after a 2-day streak of nofrappe, i failed this morning and have j.o.'d 4 times so far today. i am obsessed with... well enough about me. i sort of wish my pc would die so i could be free of this sd mind prison. but then id just use midjourney so whatever
>>101447797
I’ve been married for twelve years and still have those times
>>101447823
>that can extract clean line-art from an anime image?
yes, the controlnet extension
>hopefully web app
probably not? you need a1111 or comfyUI with the controlnet extension .. not sure if any free web version has these extensions installed, I guess not
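for intuition only: the lineart preprocessors controlnet ships are learned models, but the underlying idea is edge extraction. a toy gradient-threshold version over a 2D grayscale array (pure python; this is NOT what the extension actually runs, just the concept):

```python
def edge_mask(gray, threshold=40):
    """Naive gradient-magnitude edge detector on a 2D list of 0-255
    grayscale values. Marks a pixel 255 where local contrast exceeds
    the threshold, 0 elsewhere -- a crude stand-in for a lineart map."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]  # horizontal gradient
            gy = gray[y + 1][x] - gray[y - 1][x]  # vertical gradient
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                out[y][x] = 255
    return out
```

the real preprocessors (lineart, lineart_anime, etc.) are trained to ignore shading and color gradients, which is exactly where a raw edge detector like this falls apart.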
>>101447859
post wife
>>101447840
the prison is not SD or Midjourney .. its your habitual mind, just habituate it to healthy things
>>101447871
no
anime girls are better anyway
>>101447882
she's fat, isn't she
i want to make a navi without blue skin, there are a couple loras i found but they dominate the image with blue skin. is there a way around this? can i use control net to force it to change the color?
>>101447889
Oh yeah definitely but she’s significantly better to be with than some of the more attractive women in my past. I’m too old to care about that stuff anymore.
>>101447919
has she ever seen your outputs folder?
>>101447947
Yeah for sure
>>101447797
I hate the gf dream. it hooks straight into your emotional core and you really feel it. but only for a moment. then it turns to vapor. all you're left with is a deep loss of something you never even had
Someone told me to post here. I forked Forge to upstream A1111 changes, and while researching the code I found the backend is like 90% Comfy, so I'm also upstreaming comfy as well.
https://github.com/Panchovix/stable-diffusion-webui-reForge
Main branch is A1111 upstream, and dev_upstream is A1111 + Comfy backend upstream changes.
It adds the new things from A1111 and also bug fixes, for main branch (A1111 upstream):
- Scheduler selection, so you can mix any sampler with any scheduler like on A1111
- DoRA support
- More small optimizations
- Refiner bugfixes
- Soft Inpainting
- Multiple checkpoints loaded at the same time
- NMS, Skip early CFG, NGMS all steps
- More upscalers (COMPACT, GRL, OmniSR, SPAN, and SRFormer)
Besides this, dev_upstream has comfy backend upstream changes:
- Lets you load-unload models from VRAM into RAM and vice versa even if using pin-shared-memory (DISABLE_SMART_MEMORY on comfy)
- Updated sd, model management, model detection, controlnet etc to comfy upstream
- New args (ported from comfy, args_parser.py has them)
- Preliminary code to support: SD3, SD Cascade, Koala 700M/1B, SV3D, StableAudio, AuraFlow. Need help to add them in forge_loader.py to make it work for the A1111 UI, but the code to make these models work is there (thanks to Comfy)
- More!
Main branch (with A1111 upstream) has about 250 commits, and dev_upstream (with comfy backend upstream) is about 350+ commits.
Has someone tested or heard of this? Btw really thanks to comfy for all his code and such being open source, it's like, most of the Forge backend. I just updated to get the nice new things from there.
>>101448342
nice .. there actually was someone asking just for that earlier >>101446373
>>101443841
why do stable diffusion enjoyers trust picrel so much
knowledge is so centralized now they could just wipe it in an instant
>>101448342
>Someone said me to post here.
duh, this has the most traffic
>Btw really thanks to comfy for all his code and such being open source, it's like, most of Forge backend
too bad illya had a shitfit when he was called out on it. thanks for continuing the effort.
>>101448418
Decentraloozers are funny
>>101448418
no idea .. what three letter agency runs that site?
>>101448418
all it is is auto and 1.5 shit, and that ui is dying because voldy sucks and everyone worth a damn left. civit, youtube and a bunch of odds and sods sites have tuts too
>but what does debo think about this
>>101448476
you are really obsessed with him, did you propose yet?
>>101448416
>>101448447
Nice, yeah I plan to continue to update reForge with new things and such. Applying the thing for multiple checkpoints was painful, but when it worked it was so nice lol. At least I think I understand a lot more of the comfy code now.
But A1111 code, not so much, it's kinda hard to do some changes sometimes.
>>101448458
>that ui is dying
so whats in vogue now?
>>101448536
NEAT!
>>101448625
reforge or comfy... that's it, unless you are content that auto isn't getting anything new for a long time, or ever
>>101448649
thank you
if i'm using forge couple to create two distinct characters, adetailer will screw up the faces by just merging the two prompts. is there any way to avoid this while gening?
>>101448798
no, adetailer really messes up with two or more characters that are not anime same-face, you're better off just skipping it for such gens
>>101447905
https://openart.ai/workflows/lion_deserted_68/color-change/45Fj7oWqC1JI8k5QuNM5
>>101448316
Nice
>>101448342
hey, you're the reforge guy. can i suggest that you do versioned/tagged releases? makes it a lot easier to keep track of what's been done
>>101449139
Hi there, yeap. Sorry, I'm kinda a noob at github, I've mostly just applied code. How would I do versioned/tagged releases? At most I changed the versions in modules_forge/forge_version.py
>>101449149
tags are basically the same as branches, then you can just make a release in the github ui and select the tag
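the tagging half is a couple of git commands; the release itself is then drafted from the tag in the GitHub UI. a minimal sketch (tag name and message here are made up, adjust to taste):

```shell
#!/bin/sh
# Create an annotated tag marking a release point, so GitHub can
# attach a release (with notes and assets) to that exact commit.
tag_release() {
    version="$1"
    git tag -a "$version" -m "reForge release $version"
    # git push origin "$version"   # uncomment to publish the tag
}

# usage, from inside the repo: tag_release v1.0.0
```

annotated tags (`-a`) carry a tagger, date and message, which is what you want for releases; bare `git tag NAME` makes a lightweight tag that's just a pointer.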
Would SD or another AI be able to remove the logo and text from this image? (and would an anon spend their precious GPU cycles doing it for me?)
>>101449451
Should be doable, I would think. It'd help to know the character names, franchise and the like, maybe.
Also I'm pretty sure you could do that in Photoshop easily.
>>101449451
yes
no
Can someone explain why toon crafter doesn't work here?
>>101449451
yes, with img2img inpainting, but its more actual work than GPU cycles to get it right
>>101449491
Probably be easier, given the art style, to CS6 it.
>>101449485
this fucking guy
>>101449223
Oh I see, gonna research more on how to do it, thanks anon.
BTW added Euler/a CFG++, DPM++ 2s a CFG++, DPM++ SDE CFG++ and DPM++ 2M CFG. They work pretty well at CFG 0.5-1.
Seems the DDIM CFG++ implementation from A1111 doesn't work on reforge, so gonna try to do it from the comfy backend
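for context on what those samplers are varying: vanilla classifier-free guidance mixes the unconditional and conditional noise predictions each step. the CFG++ variants change how that mix feeds back into the sampler update, but the baseline formula they all start from is just:

```python
def cfg_mix(uncond, cond, scale):
    """Vanilla classifier-free guidance on per-element noise predictions:
    eps = uncond + scale * (cond - uncond).
    Shown only as the reference point -- CFG++ samplers modify how the
    mixed prediction is used downstream, not this formula itself."""
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]
```

scale=1 reduces to the conditional prediction and scale=0 to the unconditional one, which is why the CFG++ variants operating in a 0-1 range feels so different from the usual 5-7.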
>>101449476
It would know to make the proper thickness of the lines and all that? Probably a dumb question, and yeah it could be done in photoshop with some artistic skill I bet, though I lack it
The character is Tesse from Wakuwaku 7 on neogeo, but its a really obscure character, so doubt anything would understand the prompt well
>>101449540
Wait NVM, I don't know where DDIM is defined on comfy
>>101449555
Looks like the gun from Aliens
What's the best animation model I can try for free?
Animatediff and tooncrafter.
I need a free space to test animatediff with controlnets.
>Model understands "Hermione's Crotch"
>But not "Emma Wattson"
Hmm...
>>101449665
it is .. M41A pulse rifle lora
https://civitai.com/models/345302/armat-m41a-pulse-rifle-aliens-sdxl
(there is a pony version too)
Not my problem.
>>101449704
Cool
>>101449717
kek
>>101449717
loool!
>>101449913
cute!
>>101449498
Whats "CS6 it" mean?
>>101449980
not that anon but he probably means adobe photoshop CS6 (creative suite 6)
>>101446632
>I failed to learn to draw in time
>he doesnt know
I found out the hard way talent is kinda innate, not nurtured
In case anyone was wondering, you CAN teach SD3 new concepts, but it takes more "juice" than XL. I was able to teach it doggystyle sex and get accurate results from a Lora I just trained over on TensorArt, but with 108 images I had to do 7200 steps at 0.00006 Text Encoder LR and 0.0006 Unet LR (64 dim, 32 alpha, AdamW, cosine with restarts). Anything less was giving results very similar to grass lady, but at a certain point it all just kind of "came together".
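for anyone reproducing that: "cosine with restarts" just means the LR repeatedly decays from the base value toward zero along a cosine curve, then jumps back up. a self-contained sketch of the shape (base_lr matches the Unet LR from the post; the cycle count is an assumption, since the post doesn't state it, and trainers differ in details like warmup and min LR):

```python
import math


def cosine_with_restarts_lr(step, total_steps, base_lr=6e-4, num_cycles=3):
    """LR at `step` for a cosine schedule with hard restarts.
    Decays base_lr -> ~0 over each cycle, then snaps back to base_lr.
    num_cycles is assumed, not taken from the post."""
    progress = min(step / max(total_steps, 1), 1.0)
    cycle_pos = (progress * num_cycles) % 1.0  # position within current cycle, [0, 1)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * cycle_pos))
```

plotting this over 7200 steps makes the "came together at a certain point" behavior less mysterious: each restart gives the model another pass at the data with a fresh high LR before annealing again.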
brehs, is the Titan X (12GB) still the shit or should I swap
I don't game
>>101450282
is that maxwell or pascal
>>101450282
GTX series is basically worthless for diffusion .. an RTX 3050 will be faster, heck, even CPU might be faster
>>101450353
>1995, Cable Television, Cinemax, Emmanuelle
>>101450374
pretty much what I was going for
What's a good prompt for "hair pulled back into ponytail" without using the words "back" or "ponytail" since SD doesn't do those prompts correctly?
>>101450508
high ponytail
>>101450515
>>101450508
wait, that's with ponytail .. but ponytail actually works quite well for me
pic related "high ponytail"
>>101450508
maybe try "strict ponytail" if you wanna have the forehead without hair, pic related
>>101450533
I'm doing photoreal and high ponytail doesn't work all that well.
>>101450575
uuh, sd15 photoreal? what version? but honestly ponytail should be such a basic tag that even the SD1.5 base model should understand it
>>101450320
first gen
but it's got vram tho
it's what sd craves
>>101450609
1.5 yeah, and this is the sort of thing I'm getting.
>>101450617
doesn't matter in that case .. that Titan X is from 2015! that's before Nvidia's Volta series, which was the very first with tensor cores .. if it even works it will crawl like you had no GPU at all; it's not worth buying for SD even if you get it for like $150
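If you're unsure what generation a card is, `torch.cuda.get_device_capability()` returns its compute capability, and tensor cores start at 7.0 (Volta). A minimal sketch of the check (function name is mine, not a torch API):

```python
def has_tensor_cores(major: int, minor: int = 0) -> bool:
    """Tensor cores arrived with Volta (compute capability 7.0).
    Maxwell (5.x, e.g. the 2015 Titan X) and Pascal (6.x) lack them."""
    return (major, minor) >= (7, 0)

# Maxwell Titan X reports compute capability 5.2
print(has_tensor_cores(5, 2))  # -> False
print(has_tensor_cores(8, 6))  # -> True, e.g. an RTX 3060
```

On a real machine you'd feed it `torch.cuda.get_device_capability(0)` instead of hardcoded numbers.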
>>101450635
Awful chaotic nonsense.
>>101450651
that's a high ponytail! not sure what kind of haircut you have in mind otherwise?
>>101450635
ya it's chaotic .. if you want like combed-back hair you probably have to add that as a prompt .. something like "slick combed hair" or .. idk what SD1.5 wants .. it's probably also a thing with photoreal that most of the data it was trained on is girls with luscious flowing hair, so it will always try to imitate that
>>101450652
Those are more like pigtails, but they fail at being pigtails as well.
>>101450635
>>101450651
>>101450668
I downloaded photoreal 2.1 and used this prompt
>woman, high ponytail, strict hair, combed back hair, blonde, red dress
>negatives: low quality, bad anatomy, bad proportions, extra legs, deformed anatomy, messy color, deformed fingers,
I guess that's the best photoreal can do.. not sure if you can get what you want out of it
>>101448659
transparent
>>101450646
>>101450330
damn..
I have a set of photos taken in sequence, what's the best way to turn them into an animated video?
>>101450839
you want diffused frames? deforum extension.. if you just want to stitch the pictures you have, use ffmpeg; it can even interpolate between the pictures (won't look very good tho)
>>101450839
ffmpeg
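For the ffmpeg route above, a sketch of both steps: stitching numbered stills into a video, then using the `minterpolate` filter to synthesize in-between frames. Filenames and rates are assumptions; adjust `frame_%04d.png` to match your sequence:

```shell
# stitch numbered frames into a clip (input assumed to be frame_0001.png, ...)
ffmpeg -framerate 8 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p out.mp4

# optionally synthesize in-between frames with motion-compensated interpolation
ffmpeg -i out.mp4 -vf "minterpolate=fps=24:mi_mode=mci" smooth.mp4
```

As the anon says, motion interpolation between photos taken seconds apart tends to smear; it works best when the frames are already close together.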
>>101450828
for a 12GB card that can actually do it, you will at least have to go for a 3060 12GB .. that will set you back ~$/€300; below that it's not much fun .. but even 12GB ain't great for SDXL, 16GB on a 4060 Ti would be wiser
A1111's inpainting by doodling over the image seems primitive; is there anything with a bit more control? I'd take line tools or Adobe's magic selection tool
>>101450912
Weren't there plugins for photoshop, gimp, paint, krita and so on that integrated with 1111?
>>101450921
maybe not THAT much control
i'll settle for a fill bucket
>>101450912
krita integration maybe?
>>101450912
>adobe's magic selection tool
segment anything. microsoft designer has similar built in too. for SA most of it can run with minimal resources in the browser; extracting the embedding is the big part, SA ViT-H is like 3GB of VRAM or something. microsoft and presumably adobe send your images to their servers to extract the embed
protip: the open source demo from meta kinda sucks, it doesn't trace the outline or anything, but you can get the source from the official demo, which does :^)
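On the "trace the outline" part: SA hands back a binary mask, and pulling a one-pixel outline out of that is cheap. A minimal numpy sketch (function name is mine, and real pipelines would use `cv2.findContours` or similar instead):

```python
import numpy as np

def mask_outline(mask: np.ndarray) -> np.ndarray:
    """Keep only boundary pixels of a boolean mask: a pixel is
    'interior' if all 4 neighbours are also inside the mask
    (a simple binary erosion via shifted views)."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[1:-1, 1:-1]
                & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True            # a 3x3 blob
print(mask_outline(mask).sum())  # -> 8 (the blob's centre pixel is removed)
```

Overlay that outline on the image and you get the traced selection the official demo shows.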
94m
>>101451066>>101451066>>101451066
>>101450885
thx
Just checked again, it's actually Maxwell
With all optimizations except memory, the asuka test says 16~17 secs.. not too bad for playing around I guess
>>101451192
>16~17 secs..
uuf .. that's pretty bad for 512x512 SD1.5 .. well, if you can get it for free I wouldn't reject it, but I still wouldn't pay money for it
>>101451237don't bother, anon. you won't convince them not to buy the white elephant
Fire up those 24GB $250 AMD cards
https://www.phoronix.com/news/SCALE-CUDA-Apps-For-AMD-GPUs
>>101450234
/ic/ was wrong and they will never admit it
Sorry, I've been out of the loop.
Which is the best AI now to generate photorealistic art in an img2img way? This guy: https://www.tiktok.com/@liberxx0/video/7323891177957149953?q=spirited%20away&t=1721314871326
must be using frames from the anime and doing img2img, but if he isn't using a local setup, can someone please tell me what he could be using? Could it just be ChatGPT with a good prompt? If so, how do you do img2img with it? When I try it I get shitty quality and a completely different composition.
Can someone please help this oldfag out?
Thanks bros.
(I am only talking about generating an image, not the video)
Thanks again