Discussion of Free and Open-Source Diffusion models. Keys to the Castle Edition

Previous: >>103491402

>UI
Metastable: https://metastable.studio
SwarmUI: https://github.com/mcmonkeyprojects/SwarmUI
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge
reForge: https://github.com/Panchovix/stable-diffusion-webui-reForge
ComfyUI: https://github.com/comfyanonymous/ComfyUI
InvokeAI: https://github.com/invoke-ai/InvokeAI

>Models, LoRAs, & Upscalers
https://civitai.com
https://tensor.art/
https://openmodeldb.info

>Training
https://github.com/Nerogar/OneTrainer
https://github.com/kohya-ss/sd-scripts/tree/sd3
https://github.com/derrian-distro/LoRA_Easy_Training_Scripts

>HunyuanVideo
Comfy: https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/
Windows: https://rentry.org/crhcqq54
Training: https://github.com/tdrussell/diffusion-pipe

>Flux
Forge Guide: https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/1050
ComfyUI Guide: https://comfyanonymous.github.io/ComfyUI_examples/flux
DeDistilled Quants: https://huggingface.co/TheYuriLover/flux-dev-de-distill-GGUF/tree/main

>Misc
Share Metadata: https://catbox.moe | https://litterbox.catbox.moe/
Generate Prompt from Image: https://huggingface.co/spaces/fancyfeast/joy-caption-alpha-two
Archived: https://rentry.org/sdg-link
Samplers: https://stable-diffusion-art.com/samplers/
Open-Source Digital Art Software: https://krita.org/en/
Txt2Img Plugin: https://kritaaidiffusion.com/
Collagebaker: https://www.befunky.com/create/collage/
Video Collagebaker: https://kdenlive.org/en/

>Neighbo(u)rs
>>>/aco/sdg
>>>/aco/aivg
>>>/b/degen
>>>/c/kdg
>>>/d/ddg
>>>/e/edg
>>>/h/hdg
>>>/tg/slop
>>>/trash/sdg
>>>/u/udg
>>>/vt/vtai

>Texting Neighbo(u)r
>>>/g/lmg
Blessed thread of frenship
Trani free zone
I enjoy these threads.
bog edition
kek >>103492818
reposting the Rapeman intro I made: https://files.catbox.moe/mc3nqd.mp4
Lyrics:
Rapeman! The crimson savior,
A warrior born to face behavior.
Through the smoke and fire, he stands tall,
A hero answering the city's call.
Rapeman! The primal force,
Breaking through with relentless intercourse.
No face, no name, just justice's flame,
Rapeman, they'll remember your name!
Every scar, every fight,
Every rape in the dead of night.
He bears it all, he feels their anal pain,
But through the darkness, he'll remain.
rapeman i...
>>103495904Now I want to see the turkish grifter as rapeman kek
HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models
https://openreview.net/forum?id=TwJrTz9cRS
>We propose Hadamard High-Rank Adaptation (HiRA), a parameter-efficient fine-tuning (PEFT) method that enhances the adaptability of Large Language Models (LLMs). While Low-rank Adaptation (LoRA) is widely used to reduce resource demands, its low-rank updates may limit its expressiveness for new tasks. HiRA addresses this by using a Hadamard product to retain high-rank update parameters, improving the model capacity. Empirically, HiRA outperforms LoRA and its variants on several tasks, with extensive ablation studies validating its effectiveness. Our code will be released.
is this the lora killer?
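Their code isn't out yet, so this is only a rough sketch of what the abstract describes: the frozen base weight gets modulated elementwise (Hadamard product) by a low-rank factor, i.e. something like ΔW = W0 ⊙ (A·B). The class name, init, and rank here are all made up for illustration:

```python
import torch
import torch.nn as nn

class HiRALinear(nn.Module):
    """Sketch of a HiRA-style adapter: the frozen base weight W0 is
    modulated elementwise (Hadamard product) by a low-rank factor A @ B.
    W0 * (A @ B) is elementwise, so the update itself is not rank-limited
    the way a plain LoRA delta (A @ B alone) is."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)          # freeze the pretrained layer
        out_f, in_f = base.weight.shape
        self.A = nn.Parameter(torch.randn(out_f, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, in_f))  # zero init: no change at step 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.base.weight * (self.A @ self.B)    # Hadamard high-rank update
        return nn.functional.linear(x, self.base.weight + delta, self.base.bias)

# only A and B are trainable, so the parameter count stays LoRA-sized
layer = HiRALinear(nn.Linear(768, 768), rank=8)
y = layer(torch.randn(2, 768))
```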
>>103495800I see loras are here!! nice gonna train a nice one in my 4090 and uoooh ToT the whole night.
>>103496007>uoooh ToT the whole nightIf you train a lora worth sharing make sure to, code is speech after all
>>103496040drugged up, zoned out, drooling, living in decrepit despair
oh look it's merged https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/pull/72
I feel like I'm back to 56k modem days
>>103496064But enough about trani
>>103495932dunno, how many nsfw and anime hira did they make to show it works better?
>>103495932>for Large Language Models
>>103496132the resolution and shortness of the clips is a bit like that, yea
>>103496282and the waiting lol
>>103496132I miss the 2000's aesthetic so much dude... https://www.youtube.com/watch?v=y0zcDFtsw8A
>>103496064
No anon that would make me sad :(
>>103496239
Loras were originally for LLMs too
>>103496303She has a nice voice
>>103495932
>is this the lora killer?
member b-loras?
in 2025 what can I do with a 3060 12GB and 48GB DDR3 RAM
Can hunyuan recognize celebs? Like could I prompt eva green blowing me a kiss?
now that we can do loras on hunyuan I guess that civitai will add it to that list right? can't wait to see how it'll flourish
>>103496408Illustrious, pony, ltx, flux.s, sigma and more, plus their lora controlnets and stuff for the most part. Also low res hunyuanvideo or higher resolution flux.d if you're quite patient/batch a few over night.
>>103496409Hunyuan can do celebs but they have to be pretty famous. I have no idea who Eva green is so I'm confident that hunyuan won't know either
>>103496633https://www.youtube.com/watch?v=acCDliQ3GbY
>>103496365Still don't know what a DoRA is
>>103496409why would she be trained any more than a random bond girl? i'm thinking some porn comic characters would have 1000x more exposure on most datasets
>>103496409prompt your full name date of birth and watch the magic
>>103496715Tried that and I got abducted by aliens, they told me about the future of all of this and took me to their home planet.A year passed, but when I came back here only a few seconds had passed...
>>103496675i think you just load them like a lora anon, don't worry about it, just download and place in your lora directory
>>103496715
>prompt your full name date of birth and watch the magic
shit you're right, it got me in there, what the fuck?? Dunno why the video is black and white though, I'm not that old lol
Does hunyuan understands chinese prompt better than english?
How good is this python diffusers library? Can a retard (someone who knows python but nothing about AI) implement his own AI GUI with it, or is it not possible to reach the performance and features of webui?
>>103496825if you mean "the" diffusers library that's been used in various of the common UI in the backend
>>103496805It probably doesn't. When they use the prompt rewrite model one of the instructions is to translate chinese to english.
>>103496805the image gen variant yes. but not generally the video generator as far as I can tell. maybe it's also partly because of the text encoder we use.
>>103496928I'm also guessing they might have trained parts of a booru or western ai generated images with tags or some other english training dataset as-is.
>>103496932
>the image gen variant yes. but not generally the video generator as far as I can tell.
how do you know that? we haven't gotten the i2v model yet
>>103496921
Really? There are examples of how to generate an image with just a page of code. It almost looks too simple. Why does webui even use gradio?
>>103496955
>Why does webui even use gradio?
because the average /g/entleman doesn't read code
>>103496825
Yes. New features or models are added slower because it's a professional library and they don't just copy and paste like forge or comfy. Experimentation is more involved, and they don't support a plugin ecosystem like forge or comfy, but they are working on that with a modular system similar to comfy. There are already UIs based on Diffusers like InvokeAI or SD.Next. Every UI uses Transformers, which is the sibling library of Diffusers.
>>103496955
Gradio is a UI library, also a HF project.
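For the anon asking, the "page of code" really is about this much with Diffusers. A minimal sketch; swap in whichever diffusers-format checkpoint you actually have:

```python
# bare-bones text-to-image with Hugging Face Diffusers
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # any diffusers-format SD1.5 checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe(
    "a girl walking down the street",
    num_inference_steps=25,
    guidance_scale=7.0,
).images[0]
image.save("out.png")
```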
>>103496950
I2v? No - the older imagegen model (HunyuanDiT), that one was one where you actually quite possibly wanted Chinese prompts.
>>103496955
>Why does webui even use gradio?
uh... voldy and others wanted to easily have a UI in the webui? gradio is for the buttons and sliders and such
>>103496955There are two kinds of people, frontend developers who want to dabble in the backend, and backend developers who want to dabble in the frontend. gradio is clearly made by the backend folks.
>>103496928
>It probably doesn't. When they use the prompt rewrite model one of the instructions is to translate chinese to english.
that's weird, don't you think? they're a chinese company and they focused on making this model good in english but don't seem to care about making it good in chinese, as if they don't want to target their own country or something
>>103497003
it's probably capable of both chinese and english prompting, we don't have that actual model yet tho
also, reality is that the best-tagged data sources are english. in the end the image/video LLM trainings all sucked less with *booru and so on, kek.
>>103496972
Sorry for being completely ignorant, I'm just trying to get the big picture before possibly getting into details.
>>103496982
So it's Transformers versus Diffusers? What's the role of PyTorch and TensorFlow?
>>103496983
So Gradio is both a toolkit to make webapps and somehow AI specific?
>>103495891What are Rapeman's superpowers?
>>103497051Gradio is a python package that lets you make a web UI with just python code (instead of having to write a bunch of HTML and JS for your UI).It's not that Gradio is specific to AI. It's that AI projects tend to use python. So authors of AI projects use Gradio as an easy way to add a web UI.
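A minimal sketch of what that looks like in practice; the function and labels here are made up, it's not wired to any real backend:

```python
# tiny Gradio app: python function in, web UI out, no hand-written HTML/JS
import gradio as gr

def generate(prompt: str) -> str:
    # stand-in for an actual diffusion call; a real app would return an image
    return f"you prompted: {prompt}"

demo = gr.Interface(
    fn=generate,
    inputs=gr.Textbox(label="Prompt"),
    outputs=gr.Textbox(label="Result"),
    title="Demo UI",
)

demo.launch()  # serves the page locally, usually at http://127.0.0.1:7860
```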
>>103497051
>So Gradio is both a toolkit to make webapps and somehow AI specific?
well yes https://github.com/gradio-app/gradio
but frankly any search engine could have told you that. better fire up comfyui and post some gens.
>>103497048
>it's probably capable of both chinese and english prompting, we don't have that actual model yet tho
what do you mean? we got the t2v model
>>103497051
>Transformers versus Diffusers
They're not competing, they're for different types of model. PyTorch and TensorFlow are the underlying machine learning libraries.
>>103497080
Gradio describes itself as
>Gradio: Build Machine Learning Web Apps — in Python
Putting UI controls on a web app doesn't have anything to do with machine learning, so that confuses me.
>>103497099
That doesn't help.
>>103497110
So which part is used to generate images from stable diffusion models? They have to do things like parsing the model file and running the cuda programs.
>>103497125Both. For Stable Diffusion models Diffusers implements UNet and VAE, CLIP is implemented in Transformers. forge and comfy also use Transformers for CLIP.
Is it true that Macs are really bad for local image and video AI stuff? They kind of have a lot of VRAM, shouldn't they be good for the use case?
>>103497149Macs got dey unified memories and shiet for dat CPP models aight?
>>103497125
>That doesn't help.
it starts out with an explanation of what Gradio is AND shows what happens when you program with it, and obviously goes into detail in the links. read it as long as needed until you get it.
>>103497149it's CUDA that makes speeds tolerable
>>103497149Macs used to have Radeon graphics so if it's still like that they are pretty bad.
>>103497158Yes, that's why I'm asking.They're Okay for LLMs (not fast but at least you can load large models without a problem) but I've heard supposedly bad things about image and video stuff
>>103497162this, CUDA is the only reason Nvidia is dominating the market
>>103497101
This is the TE they actually trained: https://github.com/Tencent/HunyuanVideo?tab=readme-ov-file#mllm-text-encoder
>>103497149
Depends on which one, for some they're not that bad. Generally speaking yes, AI is nvidia-dominated, which gives it a software advantage too, and the Nvidia GPUs are more powerful anyhow.
>>103497182
>This is the TE they actually trained:
oh ok, guess we'll have to test it out on the API to see if it understands chinese better than english then. wonder why they still haven't released it yet
>>103497218dunno either. probably tuning it more or working on software or something?
>>103495891I LOVE YOU ANON
>>103497148So does Diffusers use Transfomers for CLIP?
>>103497149Macs are struggling with compute-bound tasks which is what image and video generation falls under.
Is there any way of knowing if the model you are using was trained for a word in particular? I always wanted to know if a model I am using "knows" what something I am describing is or if I have to describe it in other ways.
>>103497353If only there was some kind of way we could prompt the model to produce images based on words.
>>103497353
It's a bit more complicated than that, even: you might need to find out whether a text encoder triggers a strong reaction of sorts in a video/image model, or where it triggers a reaction vs. not having a token in the output image (generate both...). Pretty sure some anons wrote heat-map-type tools for SDXL/Flux, but even those don't make things entirely easy to understand a lot of the time.
I can smell your loose anus from here trani
Why the fuck did AnythingLLM not include a Gitea data connector? Like what, I am supposed to run local llm and then just push all my data onto github if I can have gitea?
>>103497406Wrong thread, sorry.
>>103497057nobody really knows
>>103497296Yes
>>103497416Prepare your butt for RAPE MAN. You won't make the same mistake twice.
>>103497482oh no
>>103497482He raped?
>>103497482
>RAPE MAN
as long as it's not pig man that's all right
https://www.youtube.com/watch?v=O2cl1P5HxR0
>>103497510Being Igor Bogdanoff (1999)
https://www.reddit.com/r/StableDiffusion/comments/1hcwu57/i_installed_comfyui_wsage_attention_in_wsl/
>I Installed ComfyUI w/Sage Attention in WSL
>Generation went up by 2x easily AND didn't have to change Windows environment. ????
>1.30h video
OH COME THE FUCK ONNNNNNNNN
>>103497730
HAHAHA someone's gonna have to generate a tl;dr for you faeggits
>>103497730I fully expected to see furkan
>>103497730ok in all seriousness, who's gonna try this? if this is a true 2x I'm ok with watching a fucking movie-length video out of this dude
>>103497785
>>103497768
>>103497730
Temper expectations, this was probably his workaround to get triton working on windows. If you already have it you're not gonna see the gains.
>>103497797I don't remember having too much trouble getting triton to work on windows
>>103497797I don't believe there's a 2x speed difference between sdpa and sage though
>>103497797I don't remember ever having triton on windows (i dont have a triton compatible gpu)
>>103497826you need triton to make sage work though
>>103497822
He's overselling it. It's like a 1/3rd increase.
>>103497826
There's literally a guide in the op.
vramlets running linux: if your system is crapping out when free system RAM hits 0 or close to it as you start swapping, use these nodes for freeing images and latents: https://github.com/ShmuelRonen/ComfyUI-FreeMemory
The wires must pass through them at every connection for it to be effective. You will then not experience system lockup and will be able to move your mouse and shit. I have no idea why this shit isn't just fucking built into comfy.
Basically, when linux hits a threshold it will shit the bed even if you have an entire SSD dedicated to swap. What I think is happening here is that because comfyui does not call garbage collection in some way, the data is not considered trash; the system bus gets really bogged down, your cpu spikes like crazy, and then, inevitably, 20 minutes later you decide to terminate the server after like a 5 minute wait just to gain access to the terminal. Then you will coredump, and that takes fucking ages, so I recommend disabling automatic core dumps: https://wiki.archlinux.org/title/Core_dump
Saved me a fuck ton of bullshit with needing to restart the damn server every few hours, and fuck that bullshit. This will also likely enable you to increase block swap without your system shitting the bed, and also do higher resolutions or perhaps more frames = more power, more fun. My system would hang if I tried higher resolutions once I increased block swapping to avoid OOM on vram, but I think this will cure it, because ram use otherwise continues to increase to the point the system becomes unusable. Something isn't freeing up system resources like it should. These nodes really did help with that, freeing up enough system ram that things don't lock up. I'm not good at typing btw, sorry about that, but any help I can offer, I'd rather share.
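For the curious, a node like that almost certainly boils down to something like this under the hood. This is my guess at the mechanism, not the extension's actual code:

```python
# roughly what a "free memory" step does between heavy pipeline stages
import gc
import torch

def free_memory() -> None:
    gc.collect()                      # force CPython to collect unreachable objects now
    if torch.cuda.is_available():
        torch.cuda.empty_cache()      # hand cached VRAM blocks back to the driver
        torch.cuda.ipc_collect()      # clean up leftover CUDA IPC allocations

# e.g. after decoding: images = decode(latents); del latents; free_memory()
```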
>>103497730Just use Linux, if you use Windows you deserve the gimped speed.
>>103497730
>Be lost, noob, clueless, new thing comes along, decide to google how to install
>no definitive guide, only fucking asinine ai generated slop videos by the dozen
>decide fuck it, challenge accepted
>3 mins into the video
mfw
>>103497880
>Just use Linux
I don't think I will
>>103497881
>no definitive guide
? https://rentry.org/crhcqq54
>>103497891
>https://rentry.org/crhcqq54
yes, I remember when I was young and could be full of energy to explain everything, but I've watched as they've completely destroyed the easy goto answer of "just google it bro". Now I have to either scroll through or watch billions of useless shite that doesn't even explain how the settings work on the new nodes I've somehow managed to install. But it's ok man, I get it, you younger cunts are lazy and all about "just read the install bro"...
>>103497867
>I have no idea why this shit isn't just fucking built into comfy
lol
>>103497926the fuck is this schizo talking about?
>>103497929watch it buddie i'll hack your grandma's chair life and steal all her mhz
look, all i am asking is that when adding new nodes, documentation is provided. not everyone enjoys scrolling through github issues for answers.
>>103497830
>>103497854
you've been epically trolled and didn't even realize it you fools, you absolute buffoons, possibly pseuds even
>>103497973Literally nothing useful is built into comfy.
>>103497992
>not everyone enjoys scrolling through github issues for answers.
that's why the fucking guide exists in the first place. we had to look at a lot of github issues to make this shit work; it's a collection of all the best findings from those issues and they're all on a single page, ready to be read
and I really appreciate people's generosity and time, but I'm talking about how when I used google back in the day I'd find an answer in like 3 minutes. These days every knob jockey on earth wants clicks for a quick buck and I can't be arsed listening to their intros; the way they speak is annoying to an autist, it's just like get to the fucking point! I'll give you money if you help me, but if you are going to waste my fucking time with your bullshit you get nothing. /rant
Yes, I do donate to people that offer something valuable without the crap.
>>103498102you don't have to watch that 1.30h video, just use the guide, there's only text in there, zero bullshit, only instructions on how to get it done
>>103498119
No, it's doing something. These readouts are your system ram, not your swap. "Garbage collector: collected 1565 objects." is likely your swap, and that is a lot. Just trust me on this, I would not be posting here if I hadn't done all this, because my system would be shitting all over with lag... It just discards them, thus freeing that space up.
You know a model is good when the vramlets are willing to kill their SSD with all those memory swap and people are willing to watch a 1.30h tutorial on how to make it faster or something kek
>>103498119
>GPU VRAM: Initial usage: 6.95 GB, Final usage: 6.95 GB, Freed: 0.00 GB
this is your hardware status report
>Garbage collector: collected 1565 objects.
This would have been in virtual, thus it reports 0.00GB freed
it's hard for me to explain; I have both a comfyui resource monitor and the i3 window manager, and both report more available ram. And that is what keeps my system running smoother, and my sanity also...
>>103498118i've had it installed for ages... Stop responding to me, jesus...
>>103498212wait, you think you can make a retarded rant without feedback? you think this place is your blog or something?
i have a PSA to make here
everyone in this thread is on the spectrum
that is all
>>103498210
it should freeze a lot less and with shorter pauses. you might get a little spike when it starts genning and when it starts the decode, but it will not be anywhere near as bad. There are other nodes in the manager that do the same thing you might want to look into; some 8GB card user was using a node called Free All Models, and was using a trick I often use in tricky situations with the decode: batch-processing his latents one at a time, then re-batching the images before video combine
Are people really falling for the stupid video claiming a 2x increase in speed when all the retard really did was install triton?
>>103498247I'm both on the autism and ADHD spectrum (probably).
>>103498223
i do yeah, i can just say fuck it and walk away for the next 4 hours... It was just an opinion btw. I already posted what I came to post, fuckhead, now you go sit the fuck down.
>>103497867
>>103498269maybe the speed increase comes from going to WSL?
>>103498275Just because right?
>>103498274>i do yeah i can just say fuck and and walk away for the next 4 hoursand guess what, people can respond the fuck to you and call you a retard when you have retarded takes, isn't this place beautiful?
>>103498279why not? WSL is really different from CMD, maybe triton is more optimised on linux therefore more optimised on WSL
>>103497891
>I don't think I will
Why not?
>>103498295have you seen the image? why would I want to support a retard like that? >>103497891
>>103498292Why don't you test both instead of speculating.
>>103498299he should've made a speed comparison first before going the "trust me bro" route and expecting us to sit through a 1.30h-long tutorial
>>103498298Are you mentally ill? Why don't you care what Bill Gates says? Or the thousands of Indians working for Microsoft? Also with WSL you're already using Linux, just that it runs on a VM. You're already using a shitload of tooling that was developed for Linux anyway.
>>103498308I haven't watched the tutorial but I bet 90% of it is just getting the linux environment running.
>>103498315
>Also with WSL you're already using Linux
I'm not using WSL, the guide works fine on cmd
I guarantee the speed boost he got is purely from sage attn+triton and has nothing to do with WSL. Probably just some retard who couldn't get it functioning on Windows
>>103498328
>Probably just some retard who couldn't get it functioning on Windows
his setup to make it work on WSL is long and complex though, he can definitely follow a guide
>>103498321Thank you for supporting Bill Gates' agenda.
>>103498343better than the troons' agenda, that's for sure; that's why Trump was elected a 2nd time, people prefer anything over woke communist propaganda
>>103498334
>his settings to make it work on WSL is long and complex though, he definitely can follow the guide
Sometimes you have to be especially retarded to not be able to take the easy path presented before you.
>>103498351Enough of those troons and communists work for Microsoft. You're retarded for not considering an entire operating system just because a cherry picked screenshot made your poopy brain seethe.
>>103498362
>a cherry picked screenshot
cherry picked? he truly believes that, what the fuck?
So what LoRAs are you training? I've trained 4 so far. Here's my takeaways:
More images are better; 30-100 seems to be a good range.
1000-1300 steps seems to be enough.
No idea about captioning yet.
>>103498368If you don't want to be retarded, read the political opinions of every single developer of all software you use and stop using that software if you don't like the opinions of one of the authors. If you do that, I might stop thinking that you're a retarded fucking moron.
>>103498374and training on images translates into video?
>microsoft vs linux war console discussion
BOOOOOOORING, I have a real question here: has someone made a porn lora for hunyuan yet?
>>103498381correct
>>103498391Not even a console war, just plain retardation of the lowest order. Good bait anyway.
>>103498395lmao, I hope he's seen this
>>103498422I would be utilising my vast collection of Flux lora datasets but I can't get deepspeed working
>>103498422
I know it's a bad time to say this given the discussion, but just use wsl2. It just werks for the training script. Got it up and running within 5 minutes of biting the bullet.
And I'm also the same person who thinks the 2x speed improvement in wsl for hyvid is bullshit.
>she's holding a sign that reads: i2v waiting room
Will we get functional text with the proper text encoder?
>>103498482Looks like she's trying to verify for a pornography subreddit.
>>103498482
>Will we get functional text with the proper text encoder?
I think so, there was some API vs local comparison and the text was better on the API
>>103495800
>>103497939About getting old and tired and being a cunt, mostly, was my takeaway.
Guys what if they just... don't release anything more and this is all we get?
>>103498724desu if this is the end of the road, this isn't too bad, hunyuan t2v is an awesome model already
>>103498724That's the default assumption for everything at this point
>>103498724I'm sure we'll be fucked on something, whether it's the MLLM or the i2v model, which one would hurt less?
Sup bros
saw artwork I liked, tried to re-create it in hunyuan, felt like I got kinda close
>>103498752
we hack it, obviously, dur. i want to know what it's doing to my prompts, that is what i want to know, then i can try to intercept it and correct it. Forgive me if I'm retarded, because I am but i'm not: it's using an MLLM, right, so it's translating my prompts. there is your limitation... Hence it can't do real porn, because a cucked MLLM will block it, so I want to know how and why and what. We already know what the video model is capable of.
>>103498907>because a cucked MLLM will block itllama3-llava3 (the duck tape encoder) is cucked too, there's no such thing as an uncucked llm model
>>103498752
MLLM > Quants > i2v
>>103498918
>>103498932
why does it need them? sorry, i'm not fully clued in on these things. why can't it just take prompts?
>>103498947
>why can't it just take prompts?
because the text we write has to first be transformed into numbers, and to do that we need text encoders. they encode your prompt into numbers so that the model can do its matrix math shit
what's its api like and what is it being sent? I'm trying to fully grasp if anything I envision is possible or not, or just really fucking hard. Nothing is too hard for me...
>>103498954
>transformed into numbers
right, i'm with you on this. ok, i think i could figure it out :-0
trust me...
>>103498967What's what's API like?
>>103498975
a model only works with numbers anon, your whole computer only works with numbers. so for example if you write "A girl walking down the street", it'll be transformed into a set of numbers by the text encoder that mean this sentence for the model
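Concretely, with the CLIP-L text encoder that SD1.5-era models use, that step looks roughly like this:

```python
# turning a prompt into the numbers the diffusion model actually sees
import torch
from transformers import CLIPTokenizer, CLIPTextModel

repo = "openai/clip-vit-large-patch14"   # the CLIP-L used by SD1.5-era models
tokenizer = CLIPTokenizer.from_pretrained(repo)
encoder = CLIPTextModel.from_pretrained(repo)

tokens = tokenizer("A girl walking down the street",
                   padding="max_length", max_length=77, return_tensors="pt")
print(tokens.input_ids)        # integer token ids, one per (sub)word plus padding

with torch.no_grad():
    emb = encoder(**tokens).last_hidden_state
print(emb.shape)               # torch.Size([1, 77, 768]): what conditions the denoiser
```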
How the fuck did we get LoRA training before gguf quants?
>>103499012Will we ever get the gguf quants at some point? That's what I'm worried about
>>103499019GGUF quants are not a closely guarded secret. It just takes a motivated person a day or so to figure out what goes where and let it quant.
>>103498954
>first transformed into numbers, and to do that we need text encoders, it encode your prompt into numbers so that it can do its matrix math shit
https://youtu.be/JrBdYmStZJ4?feature=shared&t=162
mmmm, *brushes documents aside* Tell me, Mr Hunyuan, what good is a phone call if you can... not... speak? hm? *smirks*
>>103499035I loved that trilogy, too bad the film makers became troons though
but in all seriousness, we can monitor its i/o and discover trigger words at least and begin to build a dictionary.
>>103499051for the moment it's useless to try something like that, we're working with the wrong text encoder; once they release the good one then we'll try to crack the code or something....
>>103499051
such an attack would consist of identifying its header length and its packet length or data string, then sending a random number not already checked against a random set of seeds, seeing what its prominent result is, using a tagging model to ID the image, and then building a database of what each string of these numbers represents. That is my theory on how to hack it, but we don't know what any of these numbers are yet, so it will be random chance.
>>103499073The sentinels are already being deployed.
>>103495800
Linux poster. I suck at prompting but I have useful performance data to share. I will also report the results of changing the encoder and CLIP model, along with Torch Compile, soon.
Anyway, you can run 960x544 AND 1280x720 @ 129 frames on a single 3090 and 30-64GB extra RAM. Use sageattention (1 is fine, 2 isn't needed) and blockswap. No need to quant the encoder on a 3090.
These prompts use 960x544 @ 129 frames at 50 steps, guidance=6, flow=7, framerate=24. Prompts 1 and 2 have 20 double blocks swapped, prompt 3 has 10 double blocks swapped.
1: (realistic, sfw) Woman running on treadmill. prompt from >>103484226
https://files.catbox.moe/1ozfsl.mp4
Speed/VRAM/RAM: 2831s (47 minutes) / 17.9GB VRAM / 30GB-ish pushed to RAM.
2: (Hentai) stroking dick. good motion, bad quality:
https://files.catbox.moe/x8x519.mp4
Speed/VRAM/RAM: same as above I believe (lost track of it)
3: (Hentai) stroking dick. bad motion, good quality:
https://files.catbox.moe/rgk52h.mp4
Speed/VRAM/RAM: 2638s (44 minutes) / 20.6GB VRAM / 30GB-ish RAM
Extra Experimental Info:
- 960x544@129f might work with just 5 double blocks/0 single blocks offloaded. 23.4GB VRAM + 26.9GB RAM used. Might OOM soon though...
- All GPUs power limited to 250W.
- PCIe 3.0 x1 at 10 double blocks offloaded gives 69.46s/it, but PCIe 3.0 x4 at 5 double blocks gives 53.32s/it.
- With 4x3090s and 128GB DDR4 RAM, it seems you can batch four 544p videos @ 50 steps in 45 min-ish, or 11.25 minutes per video.
- 720x1280 @ 129f uses 21.5GB VRAM allocated and I think 30-40GB RAM (maybe more?). Takes 6564s (109 min) with 20 double blocks/40 single blocks offloaded and 50 inference steps.
WAIT!! WE ARE SO BACK!!! As I typed this up two of my GPUs finished. I'm adding it above. The 1st hentai gen is visually impaired but has ACTUAL STROKING MOTION! The second one looks beautiful but has little motion. Any suggestions?
https://github.com/Tencent/HunyuanVideo/issues/8
my theory is that they'll release the i2v with the MLLM, because it won't work with the duck tape and they won't have any other choice but to release it
>>103499149Google gemini helps make good work-safe prompts anon, just give it the custom prompt template and what you want. Be sure it understands things like camera movement and cut scenes etc.
>>103499149>2 isn't needed2 uses a new method to optimize the KV cache, so you get less memory usage with it
>>103499151How many days are we past day zero now? I think we should just be thankful and wait and see, it would be disrespectful to try and mess with their shit this early.
>>103499159>>103499160thanks for the tips lads
>>103499286
yeah it really works mate, just get into a nice conversation with that thing, it will make you a complex prompt for what you want. You will see the results are good. I suck at long complex prompts also, i should get to learning it. if you drag from your prompt node (i forget what it's called), some custom prompt template will be selectable; just spawn it and copy-paste that somewhere. I don't know what that node does, perhaps it's a way to change the default directives of the MLLM or some shit. But anyway, copy that out into a note node so you have a template for how a prompt should be structured for a good video. I deleted that custom prompt node because I don't know if it affects the prompt or not.
https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-transformers/discussions/2
>i was testing it parallel to llava-hf/llava-1.5-7b-hf, and noticed that the img-to-text wasn't performing as well as the llava-hf/llava-1.5-7b-hf.
What if we replace our duck tape with llava-1.5-7b
anybody got any good Christmas card prompts?
>>103499286i really need to learn all this stuff. I'm being a lazy fucking cunt and i do hate myself for that. i'm not a stupid guy, i could have a look at the what and how etc and devise a strat. if you're talking about hunyuan, that is.
>>103499317
>i'm not a stupid guy i could have a look at what and how etc and devise a strat. if you're talking about hunyuan that is.
there's already a github that lets you choose any duck tape you want on kijai's node, if that's what you wanted to do:
https://github.com/esciron/ComfyUI-HunyuanVideoWrapper-Extended
So let me get this straight, ok: it's generating numbers from tokens that trigger or activate the neural pathways, so to speak, of a neural network, which then causes it to infer from the random noise. So what came first (chicken or egg)? it had to be the language models first, right? And so the image and video models are like a layer on top? Am I correct in thinking like this so far?
>>103499307just add "Christmas"
>>103499332i will look at that also thanks.
>>103499332
>>103499286
all right I'll try it
https://huggingface.co/llava-hf/llava-1.5-7b-hf/tree/main
>>103499342
that's too complicated for my smooth brain, the only thing I understood is that it's more efficient than a regular text encoder like t5
https://github.com/Tencent/HunyuanVideo?tab=readme-ov-file#mllm-text-encoder
>New commits
>rf inversion fixes
Lame
>MLLM is keeping us back from true nirvana
Come on guys. It can't be THAT big of a boost over a regular mllm
>>103499372what I know is that llama-llava isn't really good at prompt adherence, and the API using the official text encoder understands my prompt way better. at some point I suppose they'll release their HunyuanMLLM, but we can pass the time by finding a better duck tape. I'll come back with some results after trying llava 1.5 7b
>>103499362i'm getting the gist of it anon, the language models act as translators and thus filters. very clever, NOT! because someone could bypass or circumvent it, unless they built in censors. but it's intriguing to think about *keeps falling backwards*
>>103499394
>unless they built in censors
there are none, Hunyuan is completely uncensored
>>103499368
yeah, never liked rf-inversion, this shit doesn't work that well
https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/issues/131
he should focus on gguf quants instead of going for those fucking gimmicks
>>103499452yeah, then it's the text encoders. i've dabbled a little with them and local LLMs and they are a pain in the neck, but they can have the primary system prompt changed, and so they must be taught to accept user input and produce a prompt. There is a whole general on this subject on /g/ but I don't know if I'm barking up the right tree or not. The problem is it requires a lot of vram to get these things the way you want, because you need context, and the bigger that context the more vram. But I don't even know if that is how it works with regards to interfacing with SD models or video or whatever.
>>103499452yeah, I'm a /lmg/ fag aswell, and yeah the llm models we're using are really uncensored, but when it's on decoder mode or whatever, looks like there's nothing we can do to trigger their goody2 mode, which is a good thing really
>>103499452
>>103499473
>llm models we're using are really uncensored
*censored, my b
and if you can't use this model commercially, then why even bother? I never thought of it till now, actually. can it be used commercially or not?
so what are you creeps using the HY lora for?
>>103499354
>all right I'll try it
>>103499332
>https://github.com/esciron/ComfyUI-HunyuanVideoWrapper-Extended
>Support for LLava and mLLama model_type.
my fucking ass
>>103499518
>the HY lora
which one?
>>103499534the one people are training privately
>>103499526well, is the clip model and precision right for that? I don't know, I'm just asking.
>>103499553you won't get an answer, it's private after all kek
>>103499553I wanna see the kino videos that people are making of their oneitis too anon, because that's exactly what I plan on doing
>>103499560went for fp16, still got the same error
>>103499577
https://huggingface.co/models?search=llava-llama
maybe there's a better version of llama-llava, that would be safer to load; it's the same exact architecture
>>103499565you what sorry? Are you a fed by chance? Don't worry we taught younger anons well, very well... You won't be seeing shit mate, waste of tax payers money you. trying to get anons to do wrong things again with ai officer?
I dread to think what that anon is making poor Furkan do
>>103499617I'm still waiting for someone to use it as Rapeman
>>103499518
>>103499553
>>103499561
can you explain this like I just came into the discussion
>>103499623we can make loras with Hunyuan now >>103497510
>>103499632oh i thought there was some kind of secret code that only a few people had, not that the loras themselves are something kept private im retard
anons beware feds are posting shit to actively encourage you, do not fall for it. They post and then delete their own posts to make it like its serious, its not. they post images and videos of younger girls to encourage you to do the same and you slip up and make it too spicy, this justifies their pay checks...
>>103499660
>They post and then delete their own posts to make it like its serious
it's more like the jannies are doing their job of removing this shit in the first place
they have been tasked by government or others to shutdown AI.
>>103499667this is probably one of them, oh joy, here we go again... guess I will take another 4 hour time out because i don't care, retard, at all. Oh, you do not like it? then stop posting, faggot.
Golden noise SDXL works; noticeable difference even for Illustrious, so this isn't placebo.
But what about the Step-aware Preference Optimization / SPO lora https://civitai.com/models/510261 ? Since I feel it might be placebo, does that mean it is? Testing with Illustrious again.
He hasn't posted anything related to image/video AI in months. I doubt he's even aware of HunyuanVideo.
>>103499634Nice
>>103499682
>Golden noise SDXL works
you have to keep training the model to use golden noise right?
>>103499682
>Golden noise
im more of a perlin or fractal noise man myself
>SPO
on n00b it functions, i stopped using it desu
>>103499687>I doubt he's even aware of HunyuanVideo.he keeps posting random ass questions on github, there's no chance he hasn't heard of hunyuan
https://arxiv.org/abs/2412.07730
>An 8.7B model with 512 resolution achieves 83.1 on VBench T2V, surpassing both leading open and closed-source models like CogVideoX-5B, Pika, Kling, and Gen-3. The same-sized model also achieves a state-of-the-art result of 90.1 on VBench I2V task at 512 resolution. By providing a transparent and extensible recipe for building cutting-edge video generation models, we aim to empower future research and accelerate progress toward more versatile and reliable video generation solutions.
>transparent
>doesn't share the weights publicly
yeah right
>>103499695
You have to train "something" for a particular model to make the thing perform in the best way. But like I said, I do see a difference using the provided SDXL model with illust and illust-based. Would be cool if somebody did make it for illust-based.
>>103499699
Okay, guess I'll stare at the pics with and without SPO some more. There is a subtle difference but it's like give or take a step of denoising, nothing drastic.
>>103499723
>ctrl+f hunyuan
>zero results
>>103499723How can CogVideoX achieve those scores? this model fucking sucks, they need harder mememarks
>>103499723let me guess, an exaggerated paper with cherry picked results, only to lure investors into some sham ai startup business
>>103499687That's where you are wrong. He is aware of the model but doesn't seem to be aware of the wrapper we use. I don't know how he missed it.
The guy who wrote the training script for the LoRA is an anon, so I am looking forward to seeing how he balances dealing with furk while also maintaining a veneer of professionalism.
Anyone have any idea what the color strip on the side means? This is a very botched (but funny) flux img2img, but i was just surprised by that strip appearing.
https://youtu.be/UD_pcB4AzR0?t=368
imagine you could do something like this on Hunyuan
>Transform this woman that's on the video into another woman
and boom, it only changes the woman and nothing else surrounding her
>>103499817it means they fucked up the model; the team is made up of former SAI employees after all...
>>103499819rf-inversion
>>103499833it doesn't work that well though, and it's changing the background behind her. here we're talking about an editing process that only changes the woman and nothing else, like you would do with precise masking on i2i
>>103499831It's just weird, because the nightmare fuel i made before this one didn't have it, but after that pic suddenly the next one had it too. Wondered if it was a memory error or something. I'd understand if it happened all the time.
>>103499831
>the team is made up of former SAI employees afterall
you're talking about BFL?
>>103499811
Here he is pestering on the official Hunyuan repo for a Gradio demo
PhD Computer Engineer btw
>>103499861I would assume anon is, because that's who made flux.
>>103499883I don't get those vertical glitches on Flux, the fuck is this anon doing kek
>>103499887NTA, it's happened to me at least once and I've seen it on other anons' outputs
>>103499880
>PhD Computer Engineer btw
is he working at all? he spends way too much time on github to have a job
The "original" story with Fagkan was he pilfered code anon posted ITT, right? I believe that anon also posted it to Reddit (after posting here). Trying to find it in the archive but no luck.
>>103499887I have gotten it today for the first time, I don't know what I did, that's why I'm asking. I've never seen this before. Here's the second one that had it, right after the first, same settings except different lora and changed the triggerwords. But I did several variations of this image before this without this issue on those same settings. It's a mystery.
>>103499880
>Hello sir please do the needful and implement up the feature
kek
>>103499931show us a screen of your workflow maybe?
>>103499931put jpeg artifact in the negs?
>>103499939that guy is turk but he's acting more like an indian than a real indian when you think about it kek
>>103499921I don't remember all the details but I think someone here wrote a guide on how to set up joycaption locally and Furkan was selling it as his own on patreon
>>103498374I'm gonna wait for kohya's implementation, it probably has a better shot with my meagre vram
Does anyone remember a software called DreamTime that was like an open source version of the first deepfake software that went viral? You could upload a girl in bikini and it would undress her and you could even tweak the tits size. Shit was so cash. Of course github deleted it, so now im paranoid of downloading any links. Was all of this deleted off the internet? It was just easy as fuck, all of this new shit seems too convoluted for anyone that has things to do.
>>103499941
I was using forge. I re-generated the image, it's there again.
>>103499944
I'd have to use flux dedistilled, flux doesn't have negatives. Waiting three minutes for an image this size is stupid enough, waiting 6+ on the dedistilled is worse.
>>103499947What is the best top-of-the-line software for NSFW faceswapping? From what I heard Rope Pearl is the best but it has a NSFW filter. There's Rope Unleashed, which supposedly has no filter, but the results are not as accurate.
>>103500030
>I'd have to use flux dedistilled, flux doesn't have negatives.
yes it can, if you go for anti-burner CFGs
https://www.reddit.com/r/StableDiffusion/comments/1eza71h/four_methods_to_run_flux_at_cfg_1/
>>103500039Oh interesting. Thank you anon I'll look into this.
>>103500030the fuck is that name? it's a finetune or something? and don't use nf4 dude, Q4_0 has the same size and gives better quality
>anti burner CFGs
biggest flux cope
Ahh the deep fake face swap jeets have arrived
>>103500056
they fucking glow if you ask me
>hello saar, please do the needful and give us tool to do nasty things saar
>>103500050
https://civitai.com/models/638187?modelVersionId=713648
I don't control the naming schemes of madmen. Half the loras i download are named shit like maximar1paw-000003_v2_epoch_5
>>103500067
>nf4
>hyper meme merge
no wonder you got glitches, you're doing everything to break flux dev at this point kek
>>103500063Indeed. In fact these threads have been glowing a lot since Hunyuan dropped, curious
>>103500100Stop noticing things!
>>103500073
I've been using this since flux came out without error. I literally picked it only because of an anon recommendation shortly after flux came out, and just never changed because, until today, it worked. Oh well, next time I want to change an anime picture into 90s CGI I'll use Q4.
To be clear, I'm not using the hyper version, I'm using the regular one; that page has multiple versions on it. I think if I was using Hyper I'd be meant to use fewer steps.
>>103499880He actually baffles me. Like a gradio interface is nothing. I don't do anything programming related for work and even I can set up a gradio interface.
>>103500038the best method is not being a lazy bum and training your own loras
We will never rid ourselves of the sameface
>>103500189You can train loras for video models now?
https://www.youtube.com/watch?v=OMOGaugKpzs
https://www.youtube.com/watch?v=56hqrlQxMMI
<3
>>103500225Kinda?
>>103500245How?
>>103500260
https://github.com/tdrussell/diffusion-pipe
>>103500225i find it hard to believe people do not understand the fact you can throw a bought of latent's and 100% de noise and not heavily influence the outcome of the result lora or not...
>>103500289>boughtbunch
you don't even need lora, you just need a fucking brain.
This is not what I wanted at all but lol.
the donyuan of loras, what a legend
>>103500304
its a video model ffs, so then just pack 30 frames of what you want and fire it into it, describe the person and boom.
>>103500289I'm not asking whether or not you can achieve a given result, because the video is proof such a result is possible. I don't need to ask "wow can I make this thing that has already been made and clearly exists?" I'm asking if X thing can be applied to Y thing.If I asked "is it possible to use a pencil to trace an image" would you assume I was asking "is it possible to copy an image, by any means, no matter what that means is?"
i'm going to bed
>The helmet was supposed to come off and reveal a bog
>>103500307he dumped? (他拉了?)
>>103500329Dump it
>>103500169Then you are smarter than all the computer scientists in all of Turkey
>>103500351Did you test concept bleeding yet with loras?
>>103500371It bleeds heavily.
>>103498770would
>>103500394Damn, expected tho
I don't know what I want to train next. Bog and Furk were the two things on my list and now they're done. I got some porn and shit on the backend but that's not really fun to post.
>>103500441Follow your heart anon. Your heart says you need CIA and Bane.
>>103500441What do you think would happen if you train it on say a collection from a specific artist rather than a face? Obviously a famous director is the first that comes to mind, but I'm more thinking along the lines of traditional paintings. Or something like a regular style lora.
>>103500470
holy shit finally
now you know what you must do
>>103500501Cmon make the girl Turkish too and give her a beard
>>103500512I'd probably need to crank up the LoRA strength to the point of deep frying it.
Concept bleed isn't too bad so long as the other concept is well established, i.e. miku.
>>103500501
>>103500532
is that a decent lora... so soon?
>>103500484Is this idea retarded
>>103500545it's the test lora, v3 I think, it was made last night through images alone
>>103500569No, nobody has ever attempted to put an artist's work into the form of a LoRA before. You're the first one.
>>103500576No one's done it with a video LoRA before, correct.
>>103500545
https://gofile.io/d/1MU6lB
Feel free to test it if you want. No trigger words or anything. Just like 30 images of his face and no captions for 1000 steps.
I set the LoRA to around 1.2 strength when I run it.
>>103500574
>it was made last night through images alone
few attempts, worked with images rather than video? even better.
I will buy a 5090 just for Hunyuan.
Convince me not to buy it.
>>103500589More vram is never a bad idea.
>>103500496Lemme try again lol.
God damn it lol
>>103500589You need to gen for 4 hours a day for 2 years to justify a 5090 just for hunyuan. If you're on an amd or 1000-series shitcard, or a pedo, this is a fantastic idea, but otherwise just rent one, or a 4090, for a dollar an hour; who knows what will exist in July 2025
and remember, the future of AI isn't GPUs but NPUs (neural processing units), so in two years we might have discrete NPUs 10x cheaper and 10x faster at inference than today's SOTA GPUs
>>103500607
>>103500614
>>103500632
lmao hunyuan is spiting you directly with this guy
>>103500614borderline incomprehensible meme value i need to invest
>>103500441
>I don't know what I want to train next.
Unfortunately you can only do images, otherwise I would have asked for some animation-style lora like claymation or Lego stop motion from YouTube videos etc. Would probably need captioning
Can hunyuan not do anime style?
>>103500642so enterprise graphics accelerators?
>>103500673check previous
>>103500642We live in a world where companies intentionally develop and maintain expensive and clunky hardware for the expressed purpose of making sure consumers rely on them to maintain it. These NPUs may as well be cars that run on water because the moment they get close to entering consumer hands the factory that makes them will mysteriously blow up.
>>103500752this isn't pizza dough karate
>>103500767You posted the png.
>>103500767fuck. wrong file
>>103495800What's the use case for parquet datasets?
Prompt?
throw some rutkowski in it ffs
>>103500581thanks anon! gonna see if I can gen him
>>103499012
>How the fuck did we get LoRA training before gguf quants?
That's how it's been thus far, anon
>>103500834this is a lora right? is that why it keeps skipping?
>>103500849It is a LoRA yes. Not sure if that's why it skips though, but it probably is desu. Needs captions. The porn shit I captioned doesn't skip.
>>103500859if your framefrate isn't matching your training number of frames, it will ghost or skip like that. reminds me of the loras I used to train when there wasn't any vid models
>>103500879
>if your framefrate isn't matching your training number of frames
Now I gotta do math??
>>103500879
>stroke victim vtuber and fat asian man in the background
kino
>>103500894
>framefrate
sorry, meant number of frames.
>>103500905I inferred. So if we are all choosing 24fps, that means there are some frame numbers we can't use if we want to avoid ghosting?
how do hunyung Lora? how mayli into video?
>>103500939
https://github.com/tdrussell/diffusion-pipe/commits/main/
>>103500939in op as well
>>103500879
>the loras I used to train
What, the ones from before or after your once-in-a-lifetime opportunity to blog post about visiting a lactating whore?
>>103500985
if you inference 3-second clips it would be ~73 frames right (3s at 24fps = 72, plus one)? so the dataset vids should be that many frames for training. less would have skips and more wouldn't inference out the whole clip
>>103500974
man we get it, you are a no-life pathetic loser. cry piss and shit about my life more kek
>>103500985
>vids
bruh I use still images for all my training.
>>103500992well that explains so much more kek
>>103500501miku would never have tits like that
>>103500995Are you the guy that wrote the script for training?
Anime hours
>>103501010no but loras are notoriously destructive
>>103501021Okay so of the four LoRAs I've seen so far, I've trained four of them. Why are you pretending to be so sure that the framerate and ghosting have anything to do with each other?
>>103501028
>framerate and ghosting
frame count, not rate. it will hallucinate the in-betweens and has no sense of duration if you use singular images for anime. the realistic motion training will be in the prediction, which doesn't work for anime
>>103501059Ok sure, go train a LoRA on some anime clips and figure out if that's what's causing it.
>>103501076I'm busy inferencing and I need a small amount of vram to test rendering some crap in the c++ app. I don't have the time to curate a dataset on top of all the other shit I have to do. you try doing clips instead
>>103501089
>you try doing clips instead
I am also using my GPU. I don't want to spend time training a LoRA I don't want just to test a hypothesis that might be bunk.
>>103501028
>Okay so of the four LoRAs I've seen so far, I've trained four of them. Why are you pretending to be so sure that the framerate and ghosting have anything to do with each other?
>>103501106I don't get it. the guy who trained dozens of models prior has less experience than a guy who trained four loras and can't draw any conclusions?
>>103501126schizo anon isn't very smart
What are we arguing about
>>103501131training clips over images. hmmm I wonder which would be better for capturing motion hmmmmm
>>103501140
You're making all kinds of crazy assumptions dude.
1. Look at the issues on the github. Initial results from the measly 512x512x33f videos that fit on 24gb of vram do not seem to be turning out well
2. Hyvid itself was trained on still images first before video was added
3. We can demonstrably see still images in the LoRA dataset translate to the subject in the video
Am I saying that training on video data is impossible? No, not at all, but your assumption that the number of frames and any skipping are linked was pulled out of your ass.
>>103501163
1. just look at the op vid. it demonstrates the bad parts of training on images clearly
2. the range of motion is compromised in all your vids so far.
>>103501175Yeah it was completely baked. I posted it to give an overtrained example. All of them?
>>103496049couldn't play with it, but I got datasets already
>>103501183pretty much all of the anime ones. the grifter ones work a little better since it's mostly trained on realistic
>>103501193
>since it's mostly trained on realistic
You don't know why it turned out better. Why are you so sure? That's what's annoying me right now.
>>103501203
>You don't know why it turned out better.
yes I do, because I've seen it thousands of times before with anidiff, prompt travel, loopbacks, interpolations, and training models specifically for those use cases. I trained foundational 3d models too and they also require basically photogrammetry data. also sorry it took so long to reply, I crashed four fucking times because these goddamn nodes can't offload memory properly holy shit
>>103501309If you say so, but I'm not gonna be the one to test it.
>>103501322then at least have a counterpoint before you start calling people's observations bogus doofus
>>103501341I did and you tried to squash them with your resume. Like dude, I'm not convinced it's entirely to do with the number of frames in the training data. Take a look at the other anime from hyvid. It's all kind of stiff.
>>103500939
>how mayli into video?
Real shit?
>>103501131Can this model do the thing with the colorful background streaming past whilst the character remains still to save animation budget but give the illusion of motion?
>>103501350
>I did and you tried to squash them with your resume
for a good reason. I spent three years understanding the relationship between images, temporal models and 3d models
>Like dude, I'm not convinced it's entirely to do with the number of frames in the training data
you can't improve motion estimation with still images; it reduces the range of motion unless the model is heavily trained on certain videos, in this case mostly realistic footage probably scraped off youtube
>Take a look at the other anime from hyvid. It's all kind of stiff.
I don't think people are trying hard enough. I've seen enough good ones to know it's possible to do something more
>>103501385I would be happy to be shown what a good dataset for hyvid looks like and I'll leave it there.
>>103501397
>I would be happy to be shown what a good dataset for hyvid looks like
cool. don't have the time because it takes way too long and I'm passing out. gn
>resident pedophile starts throwing a tantrum when challenged
>on images only, in well under 24GB of VRAM
how well under
>>103501604
20-22gb
18 one day (lol)
Guys can we take a moment to appreciate how far we've come?
>>103501671No, I'd rather complain and wait for the next model
>>103501671considering you can't do that on any modern base without lora copium, nah.
>>103501385
>I spent three years understanding the relationship between images, temporal models and 3d models
>>103501682fuck we were good at prompting back then
>>103501671for every step forward we take two steps back in sovl
>>103501671imagine one day they just let us have that model
fuck I saved heaps of these, thank god, these are great
>>103501696unironically this is a better greta than Flux can do
>>103501724
>>103501728
>has a bigger and more diverse dataset than flux
man, what the FUCK!
>>103501671LK
>>103501731I said on day 1 that Flux was garbage the moment I tried it out against Dall-E 3. No character recognition, forced plastic aesthetic, no style recognition. Datasets have regressed thanks to local self-sabotage and synthetic captions erasing copyright. Data is king, and recent models have absolutely lobotomized their datasets.
Is what animanon said about anime datasets true? Should I start collecting anime videos?
Page 9, you know what they say about page 9
>>103502003give her armpit hair
>>103502003paggee ninny....?oh, page nine
This is the end of /ldg/
should have given her armpit hair
>>103502048
yeah im up at the ass crack of midnight genning fat ugly ginger girls
its so over
>>103501766
>forced plastic aesthetic
not quite true
>No character/style recognition
true
>>103502073May I see it?
>>103502077
sure. since realism engine is aids and can't understand my prompt, this is the most sfw one i've got so far
gonna have to go pony realism or redownload lustify, which is barely any better, since it at least followed my prompt last time...
>>103502086Wait, it's not moving.
>>103498395Im going to make gore videos of this grifty third worlder
>>103502089please don't hit me with that, i'm saving for a gpu that can do vidgen.
>>103501848I've seen uglier.
>>103502184is that real?
>>103499634I love this.
>>103501724
>>103501671
ah... i remember the old days...
>>103502255a classic
>>103502255great, you guys are making me wanna watch the whole oneyplays a.i playlist from back during a.i dungeon and that discord bot i'm assuming used 1.5. i remember fucking with a greta gen in late 2023 but i have no clue where that shit is on my drive (or the sfw stuff for that matter)
>>103501809Nothing he says is true.
>>103501671This is what a billion dollar company was doing not long ago.
>>103501671and I forget what this weird morph thing was
>>103501671this was late 2022 early 2023
>>103502307Stoners and people who make shit music videos loved this
>>103502315oh yeah didn't a bunch of people sperg out when an ai video won some Pink Floyd contest?
>>103502321Reddit freaks out at anything AI.
>>103502334
nah it was real-world people getting mad
https://guitar.com/news/pink-floyd-slated-after-ai-created-video-wins-dark-side-of-the-moon-animation-competition/
the video wasn't very good
>>103502344It was true deforum slop
Someone make a new thread
Baking
>>103502450>>103502450>>103502450
>>103501671we downgraded desu, we can't do greta thunberg with modern models anymore
30 more epoch before I find out if I have wasted a day, so excite