Discussion of free and open source text-to-image models

Previous long dick : >>103181426

These Bones Were Meant to be Crushed Edition

>Beginner UI
Fooocus: https://github.com/lllyasviel/fooocus
EasyDiffusion: https://easydiffusion.github.io
Metastable: https://metastable.studio

>Advanced UI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
ComfyUI: https://github.com/comfyanonymous/ComfyUI
InvokeAI: https://github.com/invoke-ai/InvokeAI
SD.Next: https://github.com/vladmandic/automatic
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI

>Models, LoRAs & training
https://aitracker.art
https://huggingface.co
https://civitai.com
https://tensor.art/models
https://liblib.art
https://imgsys.org/rankings
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3

>SD3.5L/M
https://huggingface.co/stabilityai/stable-diffusion-3.5-large
https://replicate.com/stability-ai/stable-diffusion-3.5-large
https://huggingface.co/stabilityai/stable-diffusion-3.5-medium
https://huggingface.co/spaces/stabilityai/stable-diffusion-3.5-medium

>Sana
https://github.com/NVlabs/Sana
https://sana-gen.mit.edu

>Flux
https://huggingface.co/spaces/black-forest-labs/FLUX.1-schnell
https://comfyanonymous.github.io/ComfyUI_examples/flux
DeDistilled Quants: https://huggingface.co/TheYuriLover/flux-dev-de-distill-GGUF/tree/main

>Index of guides and other tools
https://rentry.org/sdg-link
https://rentry.org/rentrysd
https://rentry.org/sdvae

>Try online without registration
txt2img: https://www.mage.space
img2img: https://huggingface.co/spaces/huggingface/diffuse-the-rest

>Maintain thread quality
rentry.org/debo

>Related boards
>>>/aco/sdg
>>>/aco/aivg
>>>/b/degen
>>>/c/kdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/tg/slop
>>>/trash/sdg
>>>/u/udg
>>>/vt/vtai
Blessed thread of frenship
>>103194152
what was the lora used to get the early 2000s webcam girl?
>>103194819
just put
>worst quality
in the prompt
>>103194819
no lora. but the prompt read something like
>omegle video chat, crappy webcam, unregistered hypercam, blurry, washed-out tint, screen glow
there was a lot more to it than that and this is just from memory
and it was img2img 0.9 denoising on a webcam still image so that provided some guidance as far as appropriate colors/etc
steady as the 1girl goes
Is there something from this general that can be used to erase text in manga/redraw the background behind the text? (e.g. something like picrel)
>>103195267
yeah i use the lama cleaner extension for reforge with a 0.4 denoise on whatever anime model you prefer. i've heard anons mention that there are adetailer models that are trained to detect text so you could automate this
>>103195291
you don't even need forge, IOPaint (formerly lama cleaner) is its own program, but the extension is handy if you already use forge I suppose
>>103194152
>armpit hair gen made it
stinkbugs we wonned. *sticks nose in hairy armpit* ACHOO!
>>103195291
>>103195310
Thank you for the advice. I don't have reforge/forge installed, so I'll just go with the standalone version. (And is this the right repo? https://github.com/Sanster/IOPaint)
>stinkbugs
I hope Kijai implements the memory management optimisations that native Comfy has for Mochi in his Mochi wrapper. I much prefer his nodes and experiments, but bemoan the resulting low num_frames max values this brings, at least on my install.
>>103195367
>I hope Kijai would implement the memory management optimisations for his Mochi wrapper that native comfy has for Mochi
why not just stick with native comfy then?
>>103195395
I would like his nodes to have more compatibility with comfy's native ksampler than I've experienced. I use both, but want the best of both worlds, unified.
>>103195395
I should explain: it's really the ksampler vs the kijai sampler. Comfy has better memory management; ideally you'd have the configurability of kijai with the memory management of comfy's version. I've not even compared the code between the two, firstly it's a time issue and then most likely a skill issue for me to do it myself.
Also the vae decode on comfy OOMs, even with tiled decode, on higher-frame-count latents when run on its own, while Kijai's decode on the same latent does not. Which for me creates more work, as I then have to swap over to the Kijai latent decode sheet to produce a webm. It's not a complaint, it's an observation of how it works for me. I'm very appreciative of both projects.
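For reference, the basic idea behind a tiled VAE decode (the thing that keeps peak memory down) can be sketched in a few lines. The stub decoder below, a nearest-neighbor ×8 upsample, merely stands in for a real VAE; real implementations also overlap tiles and blend the seams, which this sketch omits:

```python
import numpy as np

def decode_stub(tile):
    # stand-in for a VAE decoder: each latent pixel becomes an 8x8 block
    return np.repeat(np.repeat(tile, 8, axis=-2), 8, axis=-1)

def tiled_decode(latent, tile=64, decode=decode_stub):
    """Decode a (C, H, W) latent in tile x tile chunks to cap peak memory."""
    c, h, w = latent.shape
    out = np.zeros((c, h * 8, w * 8), dtype=latent.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            chunk = latent[:, y:y + tile, x:x + tile]
            out[:, y * 8:(y + chunk.shape[1]) * 8,
                   x * 8:(x + chunk.shape[2]) * 8] = decode(chunk)
    return out
```

With a linear stub like this the tiled result matches a full decode exactly; with a real convolutional VAE you need the overlap/blend step to avoid visible tile borders.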
thread challenge:
low-quality PonyXL 1girl
prompt MUST begin with score_4
no other scores allowed
everything else is up to you
>>103195609alternatively, noobxl with "worst quality, jpeg_artifacts"
>>103195663
noobai*
I just woke up
>>103195609
thread theme: large monster girls
>>103195810holy sovl
>>103195810like 90's pc games
Monster Girl General
Went from 3060 to 4070 ti super. Surprised how much faster this is.
>>103196568happy genning, I'll be stuck with my 2080 for some time sill
>>103196582well that's not completely useless
>>103196627even for a miniaturized version squeezed into a laptop, it's already served me more than well enough, especially given some elbow grease
>>103194819
many anons believe you need a lora to get Flux to output interesting non-slopped images but that is simply not true (except for animu)
>>103196074>>103196413lol thanks
>download vanilla noobai/illustrious
>it outputs barely coherent garbage
>download a random finetune of it
>it suddenly works
what's the deal
>>103196883dataset literally filled with shit
>>103196907the fuck is a heart diaper
>>103196918prompt it and tell us
>>103196883
takes a lot of tard wrangling desu
>>103196907
>>103191178
>>>/h/8317030
>It's hard to give an "objective" answer to that because so many things went wrong with Noob (e.g. training the TE this far into the bake, absurd dropout strategy, fucked noise offset for the last few EPS epochs, bumpy vpred conversion, training vpred on joycaption for some reason) that it's hard to tease out which problem caused which issue.
looks like it was more than just diaper data
>they shit it my model!
>>103196932
>>103196932>>103196993How close can you get it to one of those japanese ricers?
>>103196960The whole thing sucks. So close to greatness. Same thing with asp2.
>>103197108>>103195881yo baron I can dig it
Cozy
Sleepy
horny
>>103197198give me your lora
>>103197241no
>>103197108Do these types of women exist IRL?
>>103197250
you're not me
>>103197241
no
>>103197273not anymore, they were all turned into soup
>>103197013kek bosozoku shit, can't get it to those levels, maybe with inpainting in the future
>>103196661nice cock
some NoobAI worst_quality. I also looked at the Illustrious arxiv paper, interesting that their tags don't quite match the conventions of other anime models, e.g. they use "bad quality" rather than "low quality", and they also tagged for recency: any image prior to 2017 was tagged "oldest", 2023 and newer is "newest", etc. Using the tag "old" in this way seems like a bad idea to me because wouldn't that overlap with some booru tags for age?
some other odd choices in training which I'm not sure about, e.g.
>Fifth, we implemented a simple paraphrasing sequence process to train the model on more diverse texts. Tags like "1girl, 1boy" were paraphrased as "one girl, single women," etc. This process enables the model to understand various inputs, instead of relying strictly on tag-based conditioning.
and this one is probably why it's so much less flexible when you forget any particular tag it's expecting:
>we implemented a No Dropout Token approach to ensure that provocative or specific tokens are never excluded. In conventional training methods, random tokens are dropped during image pairing to prevent overfitting and enhance model generalization. However, this approach led to the occasional generation of provocative images. By ensuring provocative tokens were always retained and training the model to recognize these concepts with 100% accuracy, we found that controlling the sampling the provocative tokens by CFG, or preventing their use entirely effectively prevented the generation of provocative or inappropriate content
Pony was a little bit inflexible like this, but NoobAI is brutal.
>>103196883
sounds like a skill issue. You need a "finetune" that restricts the range of possibilities to protect you from yourself. Same thing happened with Pony and Autismmix. Pony was strictly better, but Autismmix was easier for tards
>>103197671the obvious solution is simply have nsfw and sfw tags which are never dropped out but you absolutely do need tag dropout especially so the model doesn't become slavish to longform tagging (which Pony is a good example of)
>>103197708I think the no dropout thing works well as long as you're willing to put in all the work on every gen. But it's frustrating that it's so unforgiving. Trade-offs.
>>103197735
Everyone is lazy but I think the best solution isn't dropout but having multiple sets of captions per image that go from broad to narrow, i.e.:
1girl, standing, blue shirt
1girl, standing, blue shirt, red jeans, outside, sunny day
[etc]
also no one wants to have to remember 30 esoteric tags to make an image
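The dropout scheme being debated here, random tag dropout with a protected "never drop" set, can be sketched in a few lines. The tag names and the protected set below are purely illustrative, not NoobAI's actual token list:

```python
import random

PROTECTED = {"nsfw", "rating:explicit"}  # hypothetical "no dropout" token set

def drop_tags(tags, p=0.1, protected=PROTECTED, rng=random):
    """Randomly drop each tag with probability p, but never the protected ones."""
    return [t for t in tags if t in protected or rng.random() >= p]
```

During training you would apply this per image per epoch, so the model sometimes sees sparse captions (which is what buys prompt flexibility) while the protected concepts stay tagged 100% of the time.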
>>103197756Combination of tags and natural language seems to give best results for loras
>>103197671
>e.g. they use "bad quality" rather than "low quality",
>Tags like "1girl, 1boy" were paraphrased as "one girl, single women," etc.
man wtf
Sadly, sd-webui-regional-prompter does not appear to work with flux. Which was predictable but still a bummer. Are there any other regional prompting extensions that work for flux in forge? I was able to do a basic regional prompting test in comfy but of all the things I like to use nothing was less 'comfy' in comfy than regional prompting.
>>103197671>Pony was strictly betterretard alert, retard alerteveryanon evacuate the bread, it's been infected with mold
>>103197916>>103197863What model?
>>103198022noobxl then upscaled with boleromix
>>103198174horny and curvy, just the way i like my 1girls
>>103198236
>>103197671holy SOVL
>>103196907
>25,000+
n-no...
>>103196907Good list to put in my negatives ty anon
Can someone bring me up to speed? Been out of it for a while.
So the new base models are sdxl and pony, and I've seen something called noob and illustrious? And all the models are trained on these 4, while the majority is pony and sdxl?
What negatives do I use now? Does each base model need its own embedded negative?
And what are these 'animation' models? Can I create videos on my shitty GPU now or is that a 4090 thing? Thx in case anybody helps
>>103198515Nah bro
>>103198586
I hope Santa shits down your chimney
>>103198771Hail Satan
>>103198818May baby Jesus save your soul
>>103198865Babby Jesus in chimeny??
>>103198515
>>103198865
>baby Jesus
You can prompt that with Noob, check out >>103196907
best quality/most accurate inpainting method currently available? tried getting it to work with Flux a couple of months ago and it was horribly difficult, have things changed at all since then?
>>103198924Seems pretty standard for this board If I'm being honest
>>103199004sdxl, 1.5 finetunes for smaller details
>>103199051Tell me about u're neovagina, why does it wear a mask?
>>103198392Nice. Prompt?
I'm finally conceding that Flux is just better for training than SD 3.5. I don't know why SD 3.5 is so fucked. Flux learns like Pixart, SD 3.5 just blows up.
>>103199376Distilled flux?
>>103199398I'm back to playing with the 8B Flux model since it's the only one feasible for local 24GB training.
>>103199376
Because you need to treat it more like SDXL, according to >>103069346, and it seems like it worked out for that anon.
>>103199419Maybe >>103069346 is correct but holy shit are his outputs the most fried slop I've ever seen. Jesus.
>>103199433Yeah SDXL sucks
>>103199458That's... not what I meant kek.
>>103199004IOPaint. UI is more important than model.
>>103199493
No, SDXL is ass because you have to treat it with kid gloves and it takes a billion epochs to train.
>>103199509
Sure, sure. I was commenting more on that guy's outputs specifically. Utter trash despite the models.
>>103199509How many epochs to train a dolphin? I want it to talk like Flipper
>>103199528
15 https://pmc.ncbi.nlm.nih.gov/articles/PMC9909526/
>>103199544https://www.youtube.com/watch?v=13ibG2I44n8
>>103199618>madtv youtube linkhard no on clicking that.
On reForge UI (also on forge) I lost the ability to change the sliders using the scroll wheel. I think it has something to do with the change in gradio? Anybody know if there's a way to fix this?
>>103199253Outer space
>>103199842
>>103194152
hibernation mode
>>103197108Nice
>>103200578
well if you are really bored you can tell me if creating singletons for my image tracking db is going to be worth it or if I should just join the tables. I have tables for base models and for vaes. It seems that loading once is going to be better than joining tables.
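For lookup tables this small either approach is cheap; a sketch with a hypothetical schema (table and column names are made up) showing both the plain join and the load-once cache:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE base_model (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE image (id INTEGER PRIMARY KEY, file TEXT, base_model_id INTEGER);
INSERT INTO base_model VALUES (1, 'noobai'), (2, 'pony');
INSERT INTO image VALUES (1, 'a.png', 1), (2, 'b.png', 2);
""")

# the "just join" option: let sqlite resolve the lookup per query
rows = con.execute("""
    SELECT image.file, base_model.name
    FROM image JOIN base_model ON image.base_model_id = base_model.id
    ORDER BY image.id
""").fetchall()

# the "load once" option: cache the small lookup table in a dict
models = dict(con.execute("SELECT id, name FROM base_model"))
cached = [(f, models[m]) for f, m in
          con.execute("SELECT file, base_model_id FROM image ORDER BY id")]
```

With proper indexes sqlite will make the join nearly free anyway, so the dict cache mostly buys convenience in Python code rather than performance.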
noobai is frustrating because when the gens are turning out badly it's not always easy to figure out why it's happening, and there's not much randomness gen-to-gen so 100% of your gens will be bad until you fix it
is SD1.5 still the most unhinged uncensored model?
Flux is pretty and all but it's been raised by a Christian family or smth. Only heard bad things about SDXL, and I don't know Sana.
>>103200944
>Only heard bad things about
this thread craps on all models. NoobAI and Illustrious are the new hotness. You will either love or hate them, there is no middle. Tons of people are still using XL/Pony. Use a pony variant if you want unhinged uncensored. Otherwise look at samples on civitai and pick what you like. I still use SD1.5 because it is fun to pump out 20 images in a very short amount of time. It is showing its age a little as the adherence to prompts can be weaker in many scenarios.
noob could've been great... we were so close
noobai feels even more dependent on the starting image (at 1.0 denoise) than ponyxl was...
>bloom, text,watermark,bad anatomy, bad proportions, extra limbs, extra digit, extra legs, extra legs and arms, disfigured, missing arms, too many fingers, fused fingers, missing fingers, unclear eyes,watermark,username,furry, mammal, anthro, furry, worst quality, abstract, flexible deformity, signature, low quality, normal quality,low contrast,lowres,bad hands,mutated hands, bar censor, censored, mosaic censoring, username, watermark, artist name, worst quality,old,early,low quality,quality,lowres,signature,username,bad id,bad twitter id,english commentary,logo,bad hands,mutated hands,mammal,anthro,furry,ambiguous_form,feral,semi-anthro,
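The negative prompt above repeats several tags ("watermark", "low quality", "bad hands", "furry", ...), and duplicates add nothing. A minimal order-preserving dedupe for comma-separated tag prompts, as a sketch:

```python
def dedupe_tags(prompt: str) -> str:
    """Split a comma-separated tag prompt, strip whitespace, drop repeats, keep order."""
    seen = []
    for tag in prompt.split(","):
        t = tag.strip()
        if t and t not in seen:
            seen.append(t)
    return ", ".join(seen)
```

Note this only catches exact repeats; near-duplicates like "mutated hands" vs "bad hands" are left for the prompter to judge.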
>>103201116
>>103201039
bold claim. I might agree, but would still want to test more. Input latents on the left. Pony on the top, noob on the bottom. Locked prompt/settings. I understand that this is an imperfect test since pony/noob prompt differently, but I don't know how to account for that. If you have a general prompt I may try it.
>>103201531Using a hand detailer?
>>103201628Nope
>>103195663>>103201642Nice
what cfg are you guys running in your typical NoobAI gens?
>>103201703Between 5.5 and 7
>>103201703lower than i'd like
>>103195609
are we supposed to be trying to overcome the score or live with it?
>>103201846I think the idea is to just have fun with it
>>103202026this one's good
>>103202037Thanks
>>103201911kekd
>Colorful painting of a reading chair in a courtyard garden.heh, the chair is literally reading
>>103202704
kek, sometimes Flux takes things too literally, I guess that's the fault of the T5 encoder
>>103202377She's in Fallout this time
>make a merge, call it a finetune
why do they do this? https://civitai.com/models/463163?modelVersionId=1065300
>>103203440because in this fucked world, the "fake it until you make it" motto is not only morally acceptable, but it's also morally encouraged
>>103203440
I guess it's unavoidable with all sorts of amateurs going in blind and unaware of the finer details behind a hobby. It should at least count as a finetune if it's custom-trained loras merged into it.
>>103194152
Are there any sites that generate good coomer stuff? I only got a steam deck rn
>>103203693
inpainted?
>>103203702yes
>>103203440
it does get worse
>AAA(AAAAAAAAAAAAAAAAAAAA) | Finetune mix on whatever model i want at that point which is Illustrious XL right now,but i will keep Pony one
https://civitai.com/models/353543/aaaaaaaaaaaaaaaaaaaaaaa-or-finetune-mix-on-whatever-model-i-want-at-that-point-which-is-illustrious-xl-right-nowbut-i-will-keep-pony-one-for-on-site-as-well
>>103204547
pretty good, is that just a gen or lots of inpainting to get it to do a sword properly?
>>103204580no manual inpainting, just detailer node
>>103204611
nice, can you catbox that? is it illustrious based?
>>103204622
the wf has a bunch of custom nodes that i've written to make the detailer a bit less complicated, they aren't really necessary so I could clean it up if you want but it's pretty simple:
noobaiXLNAIXL_epsilonPred10Version
-> random loras (using... arknights, dishwasher1910, al13ng0dXLP) at various weights
-> sample 864x1536 2m/karras/18 steps/4.0
-> upscale 4xAnimeSharp
-> downscale 1080x1920
-> illustriousponymix_v2 + same loras
-> sample, same settings 0.5 denoise
-> 3x detailer (person_yolov8m-seg) .3/.35/.35 denoise
-> detailer (face_yolov8m) .4
-> detailer (eyeful_v1) using combined mask
-> upscale 4xAnimeSharp
-> downscale
>>103205230Before/after? Pretty cheeky
>>103198313me likey
Can someone help me build a LoRA off my own art? I made one but I feel it's too crappy. But I like this gen it made while it was cooking.
>>103205452
i think you will have more luck if you ask in /h/hdg for anime style since there's a lot of lora trainers there
>>103205556this
A lot of kino from the comfyposter. Could it possibly be? The end of the slop era?
>my kinda sloppy armpit hair gen made it into the collage
kek why, besides maybe eliciting laughter from the collage maker
>>103205788anon has a particular taste in gens
soul general
Interesting behaviour from Illustrious/NoobAI. It seems to have a bit of a thing for more interesting or complex compositions, trying to squeeze in more than just 1girl.
>>103205968It's very decent for art loras
>>103206014Maybe, maybe not. So far I've found only one finetune which makes it actually usable.
>>103205805
i thank him for seeing my very wip style as worthy enough for the collage
>especially right on top of boof passing gengar
>>103205968
noticed that too, it's stupid detailed even when you don't prompt for it. I'm impressed at what it can do when all you prompt for quality is "masterpiece" and leave negatives blank.
hesitant to keep training my style lora for the time being though, i'm wondering if more updates will help with composition consistency, there's still maybe 1/4 gens i've noticed that get a bit funny.
i don't know if it's illustrious or adetailer but i get weird shit like faces within hands, busted hands when trying to correct them, or picrel instances of odd artifacts in a 9/10 corrected hand.
>>103206028lora can stabilize the output quite a lot
>>103206040>i thank him for seeing my very wip style as worthy enough for the collageFor me it's a bit hit or miss so far, but I wonder what style you're going after exactly. Got any refferances or stuff used in training to share?>i don't know if it's illustrious or adetailerJust started inpainting with it, so I'll probably find out soon enough.>>103206050>lora can stabilize the output quite a lotCertainly seems to be the case. Whatever they merged into the checkpoint I'm using just works.
>>103206084
>Got any references or stuff used in training to share
It's a bafflingly schizophrenic concoction that started off from my first times playing around with 2D in stable diffusion. With the ELF-PC lora for 1.5 i tried 1:1 recreating that PC-98 style particularly from that team, but because i was totally new to things and never satisfied with the results, stacking negatives and stupid schizo phrases to "hopefully fix the bad anatomy", it ended up somehow creating this style where the hair was very shiny, the shading looked really dithered and oil-painted to an extent while still having a digital look, and everything just had this sort of semi-uncanny yet comfy feel to it. So it's like a "modern interpretation" of 90's PC dithered styles while doing something sorta new and derivative.
I hadn't decided to pick up this idea again until illustrious/noob started getting really good. Was like "Oh shit, finally, something more stable and not finicky like Pony." The lack of need to fuck with score tagging and clearly better anatomy means i can pick this idea up again.
1.5 is such a wild schizophrenic monster, picrel is one example of what i got at the time.
>>103206028desu requiring a finetune means you are low-skilled
>>103206336>implying it's requiredSure, you can technically paint with a rock or digital art with a mouse, but why use a dull blade when you can hone it?
>>103206336imho needing to state your random opinion on someone being low-skilled means you are low-skilled
go ahead anon stack a few more loras im sure itll look good this time
Inpainting, negative prompts and loras are pure cope
>>103206078Nice
What kind of workflows do you guys use?Lately I've just been using the Lucifael one from civitai and it seems to work good. But I always feel a lingering sense that I might be missing out on something better with this AI stuff.
>>103206749My own, of course.
>>103206767i also use this guys
>>103206767this
>>103206749
>What kind of workflows do you guys use?
I'm really a big fan of fancy stuff, but when I went for fancy workflows I realized the output wasn't that different from a simple workflow. I think it's just cope, except for one or two tricks that work a bit.
>>103206749
>What kind of workflows do you guys use?
I experiment with prompts, pick an image that I like and put it into img2img, refine the composition until it suits me and then inpaint details.
>>103206707>negative promptsShut the fuck up
>>103206707>InpaintingShut the fuck up [2]
>>103206078this one is cute
>>103206078yeah i kinda like it too
>>103203440https://civitai.com/images/40357524https://files.catbox.moe/dc9nge.png
>>103206707>lorasShut the fuck up [3]
>>103207014real shit?
>>103206767
Lol, alright, fair. Let me qualify my question better: what makes a good workflow? I've found so far that the biggest thing to me is upscaling, it adds so much more to images and good upscaling seems to be responsible for the best results. Adetailer/face detailing also seems to work very well, though it doesn't seem as necessary for some of the recent finetunes to get particular details right.
>>103206807
Yes, this is exactly what I've noticed. You hook up a big workflow and realize you can just set 99% of it to 'bypass' and have something good.
>>103206844
Hm, yeah I haven't experimented much with img2img and inpainting. Maybe I should try that. I've been wanting to do some gens of, like, old monhun stuff.
>>103206707
I dunno, these are all just tools in the box. You're coming across like one of those new programmers who console war over languages.
>/g/ - Technology
>>103207014HOLY MUFF
>>103201269did you get any further on this?
>>103207047>I haven't experimented much with img2img and inpaintingIt's a lot of fun, since you can use it for more than just refining details or fixing mistakes. All kinds of weird, freaky and interesting possibilities if you nudge it in new different directions.
>>103207047>what makes a good workflow?organization and simplicity impress me but i find the most interesting outputs come from schizoshit
>>103207211so, what are you using for all this pixel goodness?
>>103207240>>103205187
>>103207273Appreciate it. Fine work while at it, if I haven't expressed myself clearly enough already.
>>103207047
>good upscaling
Sure. I was unhappy with the quality of my images compared to what anon was posting so I asked what his workflow was, replicated it, and in the end the difference turned out to be the specific upscaler he used. But after playing with it I was still unhappy, so I went to OpenModelDB and downloaded a couple dozen with different architectures to test and find the best ones for myself. Sadly a gpulet so only the small ones are fast enough.
>what makes a good workflow?
Trial and error, nothing else. Iterate until you find what works for you. But if somebody's ready solution is fine for you as is, great.
Like thanks to >>103205187 I got myself 30 new detectors for the detailer, didn't even consider they might exist. Although I don't gen batches anyway.
I mostly use NAI but their upscaling features are pretty dog shit.
if you enhance an image it's basically 1.5x resolution
if you use the enhance on a standard gen it just ends up having blurry eyes
if you enhance and then try to upscale it tries to charge you out the ass
would it be feasible to upscale NAI images locally after I've enhanced them or what would the best route be here?
>>103207706yeah why not, just use i2i
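For the simplest local route, plain Lanczos resampling with Pillow works as a baseline sketch; a model upscaler (ESRGAN-family, run through ComfyUI or chaiNNer) will generally preserve detail much better, and you can follow either with an i2i pass at low denoise:

```python
from PIL import Image

def upscale(img: Image.Image, factor: int = 2) -> Image.Image:
    """Naive local upscale via Lanczos resampling; no model needed."""
    return img.resize((img.width * factor, img.height * factor), Image.LANCZOS)
```

Usage would be `upscale(Image.open("gen.png")).save("gen_2x.png")`; a hypothetical filename, obviously.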
Damn flux, I said "fiery gaze" but that's not what I meant
>>103207741that image is fire though :^)
>>103207014jesus
>>103207769Guess I should just lean into it
CogVideoX 1.5 is pretty decent for how fast it is, but it can't do anime at all. curious if a finetune could fix this for particular use cases. I find results are best when width>height and height=768. using the comfy wrapper, i set frames to 81 and fps to 16. I use GPT to generate the text prompt for the image using some old code they posted for the previous CogVideoX model
>>103205452If the model is bad at your particular style, no one can help you. All the different lora trainers are just cope.
>>103203440
>SDXL + Flux
This isn't possible. This guy is making shit up like the people in those fake DIY youtube videos.
>>103208143Steven Seagal doing martial arts has the same energy
>>103208035Damn looks pretty sad
>>103207347very nice
>>103208035
>curious if a finetune could fix this for particular use cases.
no one will bother to finetune Cog, Mochi is better and has an Apache 2.0 licence
Do you ever ask yourself what is this all for?
>>103208621
Pony: jacking off
Flux: game assets for my indie game
Upcoming video models: transforming japan's entire anime industry, as they've always focused on the cheapest possible way to make anime.
>>103208621
Pony: jacking off
Flux: jacking off
1.5: jacking off
XL: jacking off
Dalle: jacking off
Imagen 3: jacking off
Pixart: jacking off
Kolors: jacking off
>>103208830
>>103208621Expensive hobby.
>>103209406Are you doing these for some project or just for fun?
>>103205919Long stick
putting "generated" in the negatives of noob completely destroys anatomy
>>103209537Just for fun, experimenting with flux
So far the most realistic image I've created using SDXL, and with only 4gb vram. Also, how can I use ControlNet with Inpaint on ComfyUI?
>>103209355This is relatively cheap compared to many other hobbies.
slow sunday
Was out in public in a place full of relatively attractive normie white women. Was shocked how much they looked like my gens. Every single one of them to my eyes had some kind of 'typical face' that I see all the time. Turns out all those 'fluxfaces' I was beginning to associate with imagegen are just real faces.
>>103210326tetris syndrome
>>1032077021.5 kinda style desu
So I just followed the most retard-friendly tutorial I could find for this stuff. Please bear with me because I am a writer by trade, I just want to make AI to coom.
I installed a thing called Stable Diffusion WebUI, and I downloaded a file called Stable Diffusion XL. If I understand correctly, the former is a skeleton to operate this stuff, the latter is the thing actually doing the work? It is the "model"? And I can change it if I want, right? I see some people use stable diffusion 1.5 and I've been seeing a thing called Animagine. I also see I can use loras which are good if I want to nail some characters or places.
Now, I repeat, I literally only want to make sexy and cute anime girls in an anime style with cool poses. Which parts of this setup should I replace, and how should I start typing prompts? Because so far using the 2 pieces I downloaded they all come out like this and I hate it.
>>103210871
stable diffusion webui is the UI/backend that actually runs your stable diffusion XL (SDXL) image model for you. image models are also called checkpoints. i recommend you use forge or reforge instead, they are forks of stable diffusion webui that run better and get updated more frequently. forge supports newer image gen models like flux and sd3.5 while reforge just focuses on the older but more mature sdxl and sd 1.5
>I see some people use stable diffusion 1.5
sd 1.5 is an older, smaller model that came before sdxl
>I've been seeing a thing called Animagine
that's called a finetune, animagine is a finetune of sdxl that's been trained to do anime pictures
>I literally only want to make sexy and cute anime girl in an anime style with cool poses
you should check out noobxl then, it's a finetune of illustriousxl which is a finetune of sdxl. you can find it on civitai
>and how should I start typing prompts?
it depends on the model and finetune, but for noobxl you should use danbooru and e621 tags only. no natural language. browse the example images on its civitai page and read the model description to get a general idea of how to prompt it
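To make the tag-based prompting concrete, here is an illustrative starting point in the booru-tag style described above. The exact quality tags are model-dependent (check the model's civitai page), so treat every tag here as a placeholder, not a recipe:

```text
masterpiece, best quality, 1girl, long hair, blue eyes,
school uniform, cowboy shot, looking at viewer, outdoors, cherry blossoms

Negative: worst quality, low quality, bad anatomy, bad hands, watermark, signature
```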
>>103211015Got it, thanks bro, I really want to make my own sexy anime girls, since now it's pretty easy I want to make stuff perfect to my tastes. I'll take a break for now and come back. I've done enough tech stuff for a few days and my brain hurts.
>>103211031there's a large learning curve at first but keep at it, everything will just click eventually. i'm still learning new things everyday
>>103207078
the change was enough that I am writing a node to color latents. I have some janky workflow that loads images I made in gimp and crops them to size. It seems that mergers/finetunes destroy any possibility of a conclusion. Pony seems to change less given a colored latent. Noob becomes less reactive if certain artists are put in, but that really isn't a conclusion.
>>103211068>node to color latentshttps://github.com/Jordach/comfy-plasma ?
>>103211119
I am very much borrowing his plasma code. Thanks for the heads up. I was targeting something a little less elegant: solid color, maybe a gradient + variable noise. An image mask input if I am feeling very fancy. I feel like Jordach and I could be friends. Pic related. I love me a good "fuck this code" comment.
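The "gradient + variable noise" idea can be sketched in RGB space before any latent conversion; all names and parameters here are illustrative. The array this produces would still need to go through the VAE encode (or an equivalent downscale/channel mapping) before it's usable as an init latent:

```python
import numpy as np

def gradient_noise_image(w, h, top, bottom, noise=0.05, seed=0):
    """Vertical color gradient (top -> bottom, RGB in 0..1) plus uniform noise,
    returned as an (h, w, 3) float array ready for VAE encoding."""
    t = np.linspace(0.0, 1.0, h)[:, None, None]           # 0 at top row, 1 at bottom
    img = (1 - t) * np.array(top) + t * np.array(bottom)  # broadcasts to (h, 1, 3)
    img = np.broadcast_to(img, (h, w, 3)).copy()
    rng = np.random.default_rng(seed)
    img += rng.uniform(-noise, noise, img.shape)
    return img.clip(0.0, 1.0)
```

A mask input would just blend two of these (or an image and a gradient) per-pixel before encoding.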
>>103211119Whats the point of this?
>>103211163>gradientwould be cool. you can fudge it with some compositing and blurring but its a hassle to setup all the nodes and not quite a perfect solution
>>103194152So has anyone tested whether or not "nightshade" or "glaze" actually works?
>>103211746
they worked, until people applied the most basic data cleaning
>>103211796Can you link examples of it working?
>>103211803
All the examples are academic. Despite all the bitching, nobody important publicly implemented it and followed up on whether it worked.
>>103211832
Academics at the University of Utah claimed they cracked cold fusion. Then people tried to replicate their findings and it turned out they were completely and utterly full of shit (not on purpose, they genuinely thought they did, but didn't bother to make sure anyone double-checked or could reproduce what they thought was CF). Citing academics means fuck all if they are still capable of making mistakes or flat out lying.
>>103211842
it is rough everywhere. Youtuber Pete Judo goes over all the Harvard purposely-making-shit-up stuff. Then there is this:
https://en.wikipedia.org/wiki/Harvard_morgue_case
The problem with this tech is that it is more effective and cheaper to allow scraping and then sue. The other option is to hide your stuff from the bots, which hasn't worked too well. I like the idea of changing the text depending on the user agent. Get that whole "remove the French language from Linux" thing going, on purpose.
>>103211873This is no worse than letting bodies dressed up with humorous clothing rot in body farms in undignified poses.
>>103211915
>>103211873
>The other option is to hide your stuff from the bots which hasn't worked too well.
That's counterintuitive to what a lot of artists on social media want to do. They want people to see their art, repost the art, follow their accounts, etc. If they actually make it hard to find by making their accounts private, that defeats the purpose of them posting the art in the first place, because they want to grow (yes, literally ALL of them crave attention whether they like to admit it or not. Anyone that says otherwise is full of shit or else they wouldn't even be on Twitter or wherever they post in the first place. They all want attention and crave it to varying degrees).
>I like the idea of changing the text depending on the user agent.
Can you elaborate? I'm not sure what you're referring to