Previous: >>108518256

>UIs to generate anime
ComfyUI: https://github.com/comfyanonymous/ComfyUI
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
re/Forge/Classic: https://rentry.org/ldg-lazy-getting-started-guide#reforgeclassic
SD.Next: https://github.com/vladmandic/sdnext
Wan2GP: https://github.com/deepbeepmeep/Wan2GP
InvokeAI: https://www.invoke.com/

>How to Generate Anime Images
https://rentry.org/comfyui_guide_1girl
https://tagexplorer.github.io
https://making-images-great-again-library.vercel.app/
https://neta-lumina-style.tz03.xyz/

>Output cleanup
https://rentry.org/RemovingDiffusionGunk
https://www.mediafire.com/file/vipr23exc5htmnt (batch processing python script)

>Generating Anime Videos
Guide: https://rentry.org/wan22ldgguide

>Anime Checkpoints, LoRAs, Upscalers, & Workflows
https://civitai.com
https://civitaiarchive.com
https://tensor.art
https://openmodeldb.info
https://openart.ai/workflows
https://www.seaart.ai
https://www.liblib.art/
https://rentry.org/adtsampler

>Anime Misc
Local Model Meta: https://rentry.org/localmodelsmeta
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe
Img2Prompt: https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one
Samplers: https://stable-diffusion-art.com/samplers
Txt2Img Plugin: https://github.com/Acly/krita-ai-diffusion
Online metadata viewer SD/NovelAI: https://spell.novelai.dev
Catbox/Metadata Userscript: https://gist.github.com/catboxanon/ca46eb79ce55e3216aecab49d5c7a3fb

>Inpainting Guide from an Anon
https://files.catbox.moe/fbzsxb.jpg >>106520607

>Neighbours
https://rentry.org/ldg-lazy-getting-started-guide#rentry-from-other-boards
>>>/aco/csdg
>>>/b/degen
>>>/gif/vdg
>>>/d/ddg
>>>/d/dddg
>>>/e/edg
>>>/h/hdg
>>>/tg/slop
>>>/trash/slop
>>>/vt/vtai
>>>/u/udg
>>>/r9k/aiwg/

>Local Text&Image
>>>/g/lmg
>>>/g/ldg
>>>/vp/napt

>Cloud Text&Image
>>>/g/aicg
>>>/g/sdg/
>>108585471
im only here in this thread because i saw the OP image and he is in my /cm/ folder
allow me to bless/curse this thread with /cm/-tier anime ai generated pits
>>108585459
>stop using latent upscale
I am using RealESRGAN_x4plus_anime_6B.
>>108585809
It definitely just sounds like your denoise is too high. I usually use 0.2 but it depends on your sampler and scheduler.
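For reference, img2img denoise strength roughly controls how far back into the noise schedule the image is pushed, so only the final fraction of steps actually re-samples. A minimal sketch of that mapping (the exact rounding varies by UI, so treat this as an approximation):

```python
def img2img_steps(total_steps: int, denoise: float) -> int:
    """Approximate number of sampling steps actually run in an img2img pass.

    The image is noised to `denoise` of the schedule, then denoised back,
    so only the final fraction of the step count executes.
    """
    return max(1, round(total_steps * denoise))

# At 0.2 denoise, a 30-step run re-samples only ~6 steps,
# which is why low values preserve the original composition.
```

This is why 0.2 barely changes anything while 0.7+ starts redrawing the image.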
>>108585508
Can i get the prompt for this?
>>108585830
Since Anima 3 came out you have been stuck with the same style and that dumb bean mouth. How much longer are you going to keep blessing us?
>>108566483
>>108552895
>>108551815
>>108586117
Artist tag sir
>>108586190
@racun
>>108586160
omg are you... a fan?
>>108586254
Thank you sir, did you use anima sir
>>108586117
>>108586392
>>108587146
Nice Anima gens. Shame the dev only posts in /ldg/, so anything we post in any anime general will never be seen by him...
>>108586392
So cutesy cute with that facial expression and pose, meanwhile she's on her knees bottomless and my cum is running down her thighs. You can post in /hgg/ if you know what I mean.
Chat...
>>108587681
what
>>108587681
huh?
>>108585471
How many images should i use for training a Character LoRA?
I always use 120 images, mostly genned with other models.
>>108586160
Tragedy! Can you imagine? Poor real artists are stuck with the same style for their whole life...
stole >>108577379
>>108590717
now imagine if everyone in this thread commissioned that artist only and nothing else
>>108590961
make more please im gooning rn
>>108586392
>>108587649
OK you didn't seem too enthusiastic about my vision so I did it myself :(
>>>/h/8859439
Might need another attempt, it's a bit off...
i love my forehead wife, seki hiromi
>>108591543
I tried it but I wasn't really happy with the results. I don't think I like the concept that much. I'd prefer instant loss or a suggestive image that doesn't actually show anything.
What am I doing wrong, and why does it look all stretched out and smudgy? After txt2img I put it through an img2img second pass at 0.35 denoise in neo reforge, then into extras with the 4x-AnimeSharp upscaler at 2.0x. I have never had this problem with comfyui...
i wanna start doing gens, know nothing about it tho
tried comfyui for amd but i cannot turn it on (kept getting errors and after trying many fixes i gave up)
sooo, any reccs for alternatives? which of the listed UIs is best?
>>108591801
You are doing it backwards. First you do the AnimeSharp in extras, then you do the img2img.
>>108591838
So first is txt2img, then extras and then img2img? Oh...
>>108591850
The big difference that upscale models give is in sharpness. You can just use lanczos to upscale, but it will give you softer edges unless you crank up the denoise. An upscale model gives you more starting sharpness before the sampling, so you don't need to deal with the issues of high denoise.
>>108591814
I think everyone's using either comfy or forge neo. AMD setup might be more complicated across the board. It's not that bad to set up manually with pyenv+venv if you know what you're doing, but otherwise it's a pain in the ass.
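One practical detail in the upscale-then-img2img flow: the target resolution for the second pass should stay a multiple of 8 (the VAE latent stride), so it helps to snap the scaled size down. A stdlib-only sketch of that computation (a 4x model always outputs 4x; for e.g. 2x you downscale its output to this target before sampling):

```python
def upscale_target(width: int, height: int, scale: float, stride: int = 8) -> tuple[int, int]:
    """Scaled resolution snapped down to the VAE latent stride.

    Assumption: the UI feeds the sampler this resolution for the
    second (img2img) pass after the upscale-model step.
    """
    def snap(v: int) -> int:
        return int(v * scale) // stride * stride
    return snap(width), snap(height)

# 896x1152 at 2x   -> (1792, 2304), already stride-aligned
# 896x1152 at 1.5x -> (1344, 1728)
```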
The only thing I find annoying in ComfyUI is that I can't have several versions of a prompt. I wish I could switch back and forth quickly, or at least comment out parts of the prompt that I don't want anymore (but may use later), etc.
Is there a fix for this? im new so there prob is and im just too dumb
>>108592142
you can use multiple prompt nodes and connect the one you currently want.
>>108577379
It's out :D
https://civitai.com/models/2385278/animayume?modelversionid=2798563
>>108592167
sweet. i'll run it rn
>>108591925
I tried doing it and it has weird patterns and artifacts after img2img.
>>108592230
It's anima. Don't go too far above 2 megapixels with it. If you want higher, use mixture of diffusers or ultimate sd upscale.
>>108592142
I used to use the styles selector node from comfyui-easy-use, but it broke so I made my own prompt node (that now needs to be reworked for anima). >>108592152 is an easy way to do it if you don't want to rely on a node pack.
You can use comments pretty easily with a regex replace node. You can turn this into a subgraph with a clip input and conditioning output and it'll function just like a normal clip text encode node.
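The regex-comment trick amounts to picking a marker the encoder will never see and stripping it before the text hits the clip encode. A minimal sketch of what such a replace node does (the `//` marker is an arbitrary choice here, not a ComfyUI convention):

```python
import re

def strip_prompt_comments(prompt: str) -> str:
    """Remove //-to-end-of-line comments, then tidy leftover
    commas and whitespace so the tag list stays clean."""
    no_comments = re.sub(r"//[^\n]*", "", prompt)
    tidy = re.sub(r",\s*,", ",", no_comments)  # collapse dangling commas
    return re.sub(r"\s+", " ", tidy).strip()

prompt = "1girl, solo, // @artist_a, trying @artist_b for now\ncowboy shot, smile"
# -> "1girl, solo, cowboy shot, smile"
```

Commented-out tags stay in the workflow for later but never reach the tokenizer.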
>>108592230
I used 896x1152 for the txt2img pass, then 2x for upscale and finally img2img. I'll try x1.5, must be a tile issue.
>>108592286
>>108592239
Ah, it worked. Solid.
>>108592286
Yeah, if that's not enough for some images, you can do 2x, but it needs to be denoised in 2 vertical tiles (or horizontal for landscape). The Multidiffusion addon has a mixture of diffusers method that does it well.
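Splitting the denoise into two overlapping vertical tiles (what the mixture-of-diffusers method automates) just means sampling two roughly half-height crops that share a band of pixels, then blending the seam. The tile geometry, as a sketch (the 128px overlap is an assumed default, not a quoted setting):

```python
def two_vertical_tiles(height: int, overlap: int = 128) -> list[tuple[int, int]]:
    """(top, bottom) pixel rows for two vertically stacked tiles
    sharing `overlap` rows; the shared band gets blended after sampling."""
    half = height // 2
    return [(0, half + overlap // 2), (half - overlap // 2, height)]

# a 2304px-tall image with 128px overlap -> [(0, 1216), (1088, 2304)]
```

Each tile stays under the resolution the model handles well, while the overlap hides the seam.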
Anima is good because it's also trained on Gelbooru, which is full of depraved shit, unlike Danbooru which is relatively "safe"
>>108592849
peak
All these years gooning to anime, all my anime pics were from the threads; almost never set foot on either of these. Pixiv is cool though
>>108586117
Cute picture, sweetie! :3
>>108594035
Yeah but nobody loves Oekaki. 34 years old status? Alone in /adt/ on the 8th of january status? "Feel free to add me" status?
>>108591226
idk
>>108594316
Illya looks breedable, I need to do more gens of her
>>108591925
What if I want to do it on arc?
>>108593768
I wish that ai models had a hope in hell of getting the emblem right, but oh well. Cute wife, here's mine.
advice for anima artist mixing?
>>108594611
find an artist that looks like the style you got before
>>108594445
I'm using arc right now and I'd say it's about the same (on linux). Same process for the two: use pyenv to pin a python version, install dependencies in a venv, and install the right torch build for the gpu (rocm for amd and xpu for intel). I prefer the intel card over my old amd card because it's a bit faster and basically silent in operation, but that obviously depends on the card you're using.
I'm not sure which is better if you're on windows. Rocm wasn't an option on windows for a long time and I don't know if that's changed. I think XPU just werks, but if something doesn't just work on windows, it's a bigger pain in the ass to set up manually than it is on linux.
>>108594464
I think illustrious/noob handled logos a bit better. I still inpainted/redrew them a lot, but they were closer. I can prompt for Ichika's armband in anima, but then she's turned sideways to show the whole thing.
>>108594632
are you using comfy or an a1111 fork?
>>108594699
I'm using comfy
>>108594611
anima shits the bed once you add more than one artist tag. it seems to be because anima doesn't use CLIP unlike older models
>https://huggingface.co/circlestone-labs/Anima/discussions/112#69d3239fdbe185d18ae3d4d4
the important part:
>I do agree that artist blending is different (and worse) than SDXL, but I think this was always a happy accident of how CLIP worked and that the downsides of CLIP are not worth it.
>>108595196
ah i see. that explains why i've been having trouble with it compared to illustrious
so just sticking to one artist tag would be best, huh?
>>108595272
yeah one artist works fine. i usually mix 4 or 5 artist tags in illustrious so i'm a bit underwhelmed. doesn't sound like it's going to get much better in the future either
can anyone tell me how to fix faces/eyes with face detailer in comfyui?
I have watched many tutorials and I follow them exactly and they still look like shit
or can someone share a workflow where the output's eyes don't look like shit so I can see what settings you used
it makes me sad generating a great image and then zooming in and seeing all that distortion and ugly artifacts on the face, kills my boner because it reminds me it's just AI slop
also I'm using Illustrious. my GPU is not powerful enough for anything beyond SDXL
perhaps the latent image resolution needs to be higher? using 1024x1024 rn
>>108595196
I disagree, I think the downsides of CLIP are well worth it, because it's a lot better at conveying fundamental concepts. In an ideal world, we could just use both at the same time.
>>108594611
>>108595196
skill issue
>>108594611
Use prompt scheduling. In webui and its forks it looks something like this: [@style1:@style2:0.6]. That switches from genning with style 1 to style 2 after 60% of the steps are done. You can also do something like [:@style1:0.6], which only starts using style 1 after 60% of the steps are done - this is handy when you have a style lora that you're trying to mix with an already-known artist.
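That [style1:style2:0.6] syntax boils down to a per-step prompt switch inside the sampler loop. A toy sketch of the rule the UI applies (names are placeholders, and real implementations re-encode the conditioning at the switch rather than the raw string):

```python
def scheduled_prompt(style1: str, style2: str, switch: float,
                     step: int, total_steps: int) -> str:
    """Which prompt is active at `step`, mimicking [style1:style2:0.6]:
    style1 until `switch` fraction of the steps is done, style2 after."""
    return style1 if step < switch * total_steps else style2

# 30 steps with switch=0.6: style1 drives steps 0-17, style2 drives 18-29
```

Early steps set composition, late steps set rendering, which is why the first artist dominates shapes and the second dominates finish.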
one thing i think might be true: the first artist in the list might be the least influential and the last one might be the most
On the topic of how Anima handles artists, does anyone feel that it has a lot of seed-dependent randomness tied to the quality variation in an artist's images? Like if an artist draws at a lot of different quality levels, the model will randomly generate one of them depending on the seed, as opposed to older models, which felt like they had more of an averaging effect. So if you want to consistently gen one quality level of an artist on Anima, you need to pick an artist who draws more consistently, or you need a LoRA.
>>108597124
try score_8
>>108595998
I made my own detailer nodes because face detailer has always been slow and shit for me. You can try the FastDetailer node from my pack if you'd like: https://github.com/mudknight/comfyui-mudknight-utils
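Whichever detailer you use, the mechanics are the same: detect a face box, pad it, sample the crop at higher resolution, paste it back. The crop step, as a sketch (the 64px pad below is an illustrative default, not a setting from any particular node):

```python
def detail_crop(bbox: tuple[int, int, int, int], pad: int,
                img_w: int, img_h: int) -> tuple[int, int, int, int]:
    """Padded, clamped crop (x1, y1, x2, y2) around a detected face box.
    A detailer upscales this crop, runs img2img on it, then pastes it back."""
    x1, y1, x2, y2 = bbox
    return (max(0, x1 - pad), max(0, y1 - pad),
            min(img_w, x2 + pad), min(img_h, y2 + pad))

# face at (400, 100, 520, 240) in a 1024x1024 gen with 64px pad
# -> crop region (336, 36, 584, 304)
```

Because the crop is sampled at a much higher effective resolution than the face had in the full image, eyes come out clean even on SDXL-class models.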
>>108597489
cute uma
>>108598812
I need to stop browsing this thread at work, now I have an erection...
>>108598867
Nice. Like a gift wrapped up so nicely it's almost a shame to unwrap.
>>108596606
appreciate this. all these years and i didn't know this was a thing
How am I supposed to use the anima model exactly? The safetensors file is only 4.7gb which seems to be lower than the expected filesize.
>>108594611
>artist mixing
I have never done this before.
>>108599980
which UI are you using? right now only forge neo and comfy support it
>>108599988
Both Forge neo classic and Comfy fail to recognize the model for me.
>>108599980
Read the model page bro. The TE and VAE are not included and it says where to get them.
>>108599995
Already put the text encoder and vae in their respective folders. Maybe my fork is just not compatible or I'm forgetting something.
>>108600093
did you actually select and enable them in their respective dropdowns on neo?
>>108600171
>AttributeError: 'NoneType' object has no attribute 'unet_key_prefix'
>>108597688
I love you anon. Just installed it and it's perfect and really fast. I also managed to kind of make the regular face detailer work by increasing the denoise to around 0.75, but it was still painfully slow (1400+ seconds per image compared to 250 now)
>>108600291
Did you switch from SD to Anima?
>>108594632
Sometimes I see a pic like that where it does a little weird thing where the neck seems to be set up for a side view but the body is trying for three-quarter.
>>108601210
Most models struggle with perspective in three-quarter view. Your edit seems like it's twisted her body towards the camera, but I'm not sure if that's better or worse. I think the clothes just do a better job of masking it.
>>108601958
I dont even look at the neck myself
Which model can I use to edit an image to, say, restore a girl's ripped shorts?
>>108603975
I don't either, I was more referring to the head relative to the body. Pic sort of related, but I don't mind weird anime-angle perspective most of the time. It usually only bothers me in nsfw gens where a further breast appears larger than the closer one, especially when it shouldn't be visible at all.
>>108604446
Any model should work if you draw and inpaint.
>>108604446
If a model can draw, it can inpaint. Do you mean which UI people recommend for inpainting? I just do it in Comfy and can share a workflow if you like. A browser-based option is LaMa, on Huggingface. I've heard Krita is good.
>>108604446
krita is the best one but it doesn't support all the models and it has a steeper learning curve (mostly related to keeping proper colors/saturation)
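Under the hood, every inpaint (in Comfy, Krita, or anywhere else) ends with the same composite: masked pixels come from the new gen, everything else from the original, so only the masked region (the ripped shorts here) changes. As a toy per-pixel sketch:

```python
def composite(original: list[float], generated: list[float],
              mask: list[float]) -> list[float]:
    """Inpaint compositing: keep the original outside the mask,
    take the generated pixels inside it (mask is 0..1 per pixel)."""
    return [o * (1 - m) + g * m
            for o, g, m in zip(original, generated, mask)]

# mask = 1 over the shorts region takes the repaired pixels there
# and leaves the rest of the image byte-identical
```

Soft (feathered) mask edges give values between 0 and 1, which is what blends the seam.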
>>108605897
Oh nice!
@khyle. works way too well lmao
anyone using Spectrum for speeding up Anima? i don't know what settings are optimal
>>108605897
If you have the extra time, can you fix the right hand?
>>108607508
nice composition
So I didn't realize until now but Forge Neo can now be installed on Linux.
>>108607798