Previous /sdg/ thread : >>102386563

>Beginner UI local install
EasyDiffusion: https://easydiffusion.github.io
Fooocus: https://github.com/lllyasviel/fooocus
Metastable: https://metastable.studio

>Local install
Automatic1111: https://github.com/automatic1111/stable-diffusion-webui
ComfyUI: https://github.com/comfyanonymous/ComfyUI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
SD.Next: https://github.com/vladmandic/automatic
AMD GPU: https://rentry.org/sdg-link#amd-gpu
Intel GPU: https://rentry.org/sdg-link#intel-gpu

>Use a VAE if your images look washed out
https://rentry.org/sdvae

>Run cloud hosted instance
https://rentry.org/sdg-link#run-cloud-hosted-instance

>Try online without registration
flux-dev: https://huggingface.co/spaces/black-forest-labs/FLUX.1-dev
sd3: https://huggingface.co/spaces/stabilityai/stable-diffusion-3-medium
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest

>Models, LoRAs & upscaling
https://civitai.com
https://huggingface.co
https://aitracker.art
https://openmodeldb.info

>Black Forest Labs: Flux
https://huggingface.co/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd

>View and submit GPU performance data
https://vladmandic.github.io/sd-extension-system-info/pages/benchmark.html
https://docs.getgrist.com/3mjouqRSdkBY/sdperformance

>Share image prompt info
4chan removes prompt info from images, share them with the following guide/site
https://rentry.org/hdgcb
https://catbox.moe

>Related boards
>>>/h/hdg
>>>/e/edg
>>>/c/kdg
>>>/d/ddg
>>>/b/degen
>>>/vt/vtai
>>>/aco/sdg
>>>/u/udg
>>>/tg/slop
>>>/trash/sdg
how do I use a style model lora in comfyui? the style loader or a lora loader? every time I use a base like pony or autism it doesn't look like an anime.
>>102396957
lora loader I think
>>102396979
what prompts so it doesnt look like a cartoon?
>>102397009
I don't use XL tbqhwyf, sorry
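For what it's worth, a LoRA loader (ComfyUI's LoraLoader node included) conceptually just adds a low-rank update on top of the base checkpoint's weights, scaled by the strength slider. A minimal numpy sketch of that merge, with hypothetical shapes (real LoRA files store one A/B pair per patched layer):

```python
import numpy as np

def apply_lora(W, A, B, scale=0.8):
    """Patch a base weight matrix with a low-rank LoRA delta: W' = W + scale * (B @ A)."""
    return W + scale * (B @ A)

# toy example: 4x4 base weight, rank-2 LoRA
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
A = rng.standard_normal((2, 4))   # "down" projection
B = rng.standard_normal((4, 2))   # "up" projection
W_patched = apply_lora(W, A, B, scale=1.0)
```

One consequence of this: if the LoRA was trained against a different base than the one you load it onto (say an SDXL-base style LoRA on a Pony-derived checkpoint), the patched weights land somewhere the LoRA never saw in training, which could be one reason the style doesn't come through — though that's a guess without seeing the workflow.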
>>102388508
What's the technique used to make these? I've tried the online 'image to video' but they only generate absolute garbage
I never managed to get this image to animate properly for instance
>>102397357
chill but spooky
>>102397156I hate jews more than I love cats.
Mornin' mates.
>>102397658
good morning friend
>>102397691
Hello fren. How are u.
>>102397731
I'm fine, how are you?
>>102397772
I am A-OK
Back in the early days of controlnet, I remember seeing a demo for an upcoming feature where you could click on a picture and drag it, for instance, the paw of a dog, and the image would regenerate with the paw slightly moved. Do anyone remember it too? What came of it?
>>102397427
Yeah it was a bit darker.
>>102397658
Morning, and goodnight.
>Year of our lord Sept 15th 2024
>still no /ai/ board
>the corpse of gookmoot rules 4chan and there is only shitposting
long ago, i called for a General A.I. board.
>>102398404
wtf is up with those lines?
>>102398449
I did too
>>102398718
Hawt but scold her for ruining the bed
>Julien
>>102398718
Its weird how you still think adding these bars and noise does anything despite your posts being immediately removed
>>102398455
>>102399810
>schizo thread
>>102396906
>>102396906
>>102396906
So I wanted to replace my old A1111 install with forge (I was using Comfy for a long while) and was following steps from https://github.com/Panchovix/stable-diffusion-webui-reForge
but when I'm trying to launch I get:

CUDA Stream Activated: False
D:\SD MASTER\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
Traceback (most recent call last):
  File "D:\SD MASTER\stable-diffusion-webui\launch.py", line 51, in <module>
    main()
  [bunch of other file calls]
  File "D:\SD MASTER\stable-diffusion-webui\ldm_patched\modules\model_base.py", line 6, in <module>
    from ldm_patched.ldm.modules.diffusionmodules.openaimodel import UNetModel, Timestep
  File "D:\SD MASTER\stable-diffusion-webui\ldm_patched\ldm\modules\diffusionmodules\openaimodel.py", line 22, in <module>
    from ..attention import SpatialTransformer, SpatialVideoTransformer, default
  File "D:\SD MASTER\stable-diffusion-webui\ldm_patched\ldm\modules\attention.py", line 21, in <module>
    import xformers
ModuleNotFoundError: import of xformers halted; None in sys.modules
Press any key to continue . . .

what do?
>>102400307
You're in the schizo general anon
Better ask in the real thread (/ldg/)
>>102400307
What happens when you press the key?
>>102400322
its no longer caturday. catgirls are illegal
>>102400364window closes
>>102400372
well you might want to ask in the other thread, I don't know
>>102400442
very sloppy
>>102400307
>ModuleNotFoundError: import of xformers
hmm idk, almost as if a module isn't found, like you need to install the module, something like pip install xformers
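Before relaunching after a traceback like that, you can sanity-check which imports the environment is actually missing with a small stdlib sketch (the module names below are just examples; run it with the webui's own venv Python, since that is the interpreter the launcher uses):

```python
import importlib.util

def missing_modules(names):
    """Return the module names that are not importable in the current environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# e.g. missing_modules(["xformers"]) stays non-empty until
# `pip install xformers` has been run inside the webui venv
```

Each name it returns maps to a `pip install <name>` in that venv; if the venv's pip pulls a different torch as a dependency, that can break the install further, so watch what it upgrades.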
https://youtu.be/N4eVHjTkdjg?si=19FVYZvrUoFY49kD
>>102400482
Genuinely asking, very new to using AI but it takes like 5-8 minutes just to generate one picture on SD. Is it just better to buy a NAI sub if my computer can't run SD well?
>>102400705
Yes
>>102400705
well what is the generated picture's resolution, 512 by 512? I have no idea on a nai sub and what it would bring, but with a 2080 one can achieve a lot already
>>102400705
Yeah there's a reason /sdg/ usually has NAI pictures as OP, local can't keep up with the quality
>>102400730
yeah 512 x 512 and it takes a really long time
>>102400753
Thank you, anything in general I should know before purchasing NAI? or tips for beginners in general? thanks
>>102400786
Yeah, try stuff out for yourself. There are a bunch of guides / hints online of varying quality. Just try out stuff (like prompt style) and never think you have found the one and only way.
Oh and keep non local stuff out of /ldg/, you're in the right thread here
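On the 512x512 timing question above: per-step cost roughly tracks the size of the latent grid the UNet denoises, so resolution settings alone rarely explain minutes-per-image at 512x512 — that usually points at CPU fallback or a very weak/low-VRAM GPU, which is worth checking before paying for a sub. A quick sketch of the scaling, assuming the standard 8x VAE downscale used by SD1.x/SDXL:

```python
def latent_positions(height, width, vae_factor=8):
    """SD-style UNets denoise a latent grid downscaled by the VAE factor (8x for SD1.x/SDXL)."""
    return (height // vae_factor) * (width // vae_factor)

# 512x512  -> a 64x64 latent grid, 4096 positions per step
# 1024x1024 -> 16384 positions, i.e. ~4x the per-step work
# (self-attention cost grows faster than linearly in those positions)
```

So doubling the edge length roughly quadruples the work per step, which is why 512x512 is the usual baseline when benchmarking whether the GPU is actually being used.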
https://huggingface.co/datasets/bigdata-pw/Actors6M