/r/ - Adult Requests

File: comfyui_apprentice.png (1.42 MB, 1328x1328)
:::Apprentice pack (v0.3.60):
::::::::https://mega.nz/file/3h4y2KgC#ZQsoH7JHyzb5StA64kiUTVByMkhkc5Byro24DP5M548
::::::::https://mega.nz/file/j5pD2bwQ#cbqilnmiePksu4XSIM6t91EJ7XlLemQ4yAYb6ldrPZE (1staidbag.bat fix)

Updated workflows since initial pack release:

::::WAN 2.2 v1.12 workflow (blockswap added):
::::::::https://mega.nz/file/fsZVAapK#T1g8SUlgELHpSitWr6iVqfLXTZpjIadmsX32ExL8aHo

::::WAN 2.2 v1.13 workflow (MOE/Auto-split sigmas sampler):
::::::::https://mega.nz/file/zoQWGJ5B#zcQojnO1IDUOB2xWSvn0fdf_MdAtJce91yaohMbhX2w

::::WAN 2.2 v1.14 workflow (StartFrame/Endframe option, NOT VACE):
::::::::https://mega.nz/file/GsgD0Krb#400s829w2xrjK5KpjI0w4mYTFDscc57nAvLJf_nDwKE

::::WAN 2.1 v1.13 workflow (based on 2.2 one but single sampler):
::::::::https://mega.nz/file/ilhmARTL#ZvV93hRQA7XojDhpcTBSkC70BClHI2ARd8auQTe3GNE


Old thread(s)
vol1 https://archived.moe/r/thread/19910536/
vol2 https://archived.moe/r/thread/19938024/
vol3 https://archived.moe/r/thread/19987607/
vol4 https://archived.moe/r/thread/20011173/
>>
>What is it?
A modified version of ComfyUI portable
aimed at new apprentices who wanna jump into the AI world. In the package:
----Kijai's WAN-Wrapper nodes pre-installed
----Pre-installed nodes needed for my wan2.2/wan2.1 workflows (wf included)
----Pre-installed nodes needed for pisswiz's wan2.2 workflow (wf included)
----ReActor preinstalled, NSFW patched (note that updating the node might fuck it up)
----Sage & Triton installation
----Simple batch file by me that downloads all the needed files (vae/clip/lightx-lora/ggufs) to get you started (rough sketch of what it does below)
----LosslessCut included, to join your clips together
----A couple of upscale models included
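For the curious, this is roughly what that helper does, sketched in Python (the paths and URLs here are placeholders, not the ones the real 1stAIdBag.bat uses, so check the bat itself for the actual sources):

import urllib.request
from pathlib import Path

# placeholder entries only; the real 1stAIdBag.bat carries its own list of vae/clip/lora/gguf links
MODELS = {
    "models/vae/example_vae.safetensors": "https://example.com/example_vae.safetensors",
    "models/unet/example_wan22_q8.gguf": "https://example.com/example_wan22_q8.gguf",
}

def fetch_all(root="ComfyUI"):
    for rel_path, url in MODELS.items():
        dest = Path(root, rel_path)
        dest.parent.mkdir(parents=True, exist_ok=True)
        if dest.exists():
            print("skip", dest)            # don't re-download multi-GB files
            continue
        print("downloading", url, "->", dest)
        urllib.request.urlretrieve(url, str(dest))

if __name__ == "__main__":
    fetch_all()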

>How
----Download the package
----Extract it on some fast drive that has a good amount of free space (100 GB at least)
----Run run_nvidia_gpu and see if you get your Comfy running
----....if so, good, close it up. It won't do anything yet as you don't have models
----Install VC_redist.x64.exe and reboot (most likely you have this installed already; it's needed for Triton)
----Download the 1staidbag.bat dload fix into your ComfyUI root, replacing the old one (fixes the lightx lora downloads)
----Run 1stAIdBag.bat
----....Download options 1 & 2 and pick one of the GGUF models (Q8, Q5 or Q2)
----....Downloading these models takes time as they are several gigs
----....Install Sage and Triton (also updates flash attention)
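Quick way to confirm the Sage/Triton step took: run this from the portable's embedded Python (a sketch, assuming the usual pip package names triton, sageattention and flash_attn):

import importlib
import torch

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
for name in ("triton", "sageattention", "flash_attn"):
    try:
        mod = importlib.import_module(name)
        print(name, "OK", getattr(mod, "__version__", ""))
    except ImportError as err:
        print(name, "MISSING:", err)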

>Already got an old install?
----Backup (cut) your models and user folders somewhere (pref. the root of the same drive your Comfy is on)
----Delete the ComfyUI_windows_portable folder
----Read the How section above
----Paste the folders back into your fresh ComfyUI portable

Once everything is downloaded, launch ComfyUI using run_nvidia_gpu_sageattention.bat and RESELECT ALL MODELS, to make sure
they all point at files actually on your computer. Load an image, type something in the prompt, run.
>>
If you get path/string (NoneType) errors ---> you didn't reselect the models
If you get a sage-related error = you didn't install it correctly, or at all --> install sage, or disable the nodes
If out of memory / allocation to device --> lower the output resolution, shorten the vid length, lower the steps, enable blockswap
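Rough intuition for why those knobs help (back-of-envelope only, assuming WAN's usual 16 fps and 4n+1 frame counts, not an exact VRAM formula):

# every knob above shrinks the tensor being sampled
width, height, seconds, fps = 640, 640, 5, 16
frames = seconds * fps + 1                      # WAN clip lengths are 4n+1, so 81 frames for 5 s
print("frames:", frames)
print("megapixels per clip:", width * height * frames / 1e6)
# halving the resolution or the length cuts that number (and the VRAM pressure) fast;
# blockswap instead trades VRAM for system RAM, which is slower but doesn't OOM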

If/when you need help....
A) take a screenshot of your whole workflow (and the log console window if possible)
B) state which workflow you are using
C) list the specs of your rig

>Where to get loras?
https://civitai.com/search/models?baseModel=Wan%20Video&baseModel=Wan%20Video%202.2%20I2V-A14B&baseModel=Wan%20Video%202.2%20T2V-A14B&baseModel=Wan%20Video%2014B%20t2v&modelType=LORA&sortBy=models_v9%3AcreatedAt%3Adesc
(you need to make an account and enable showing adult content)
https://civarchive.com/search?type=LORA&base_model=Wan+Video+14B+i2v+480p&is_nsfw=true&sort=newest&page=2
(old 2.1 loras that got removed from civitai due to their nsfw bullshit policy)
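If you'd rather script the search than click through it, Civitai also has a public REST API; a minimal sketch, assuming the /api/v1/models endpoint and these parameter names still work as documented (check their API docs before leaning on it):

import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "types": "LORA",    # only loras
    "query": "wan",     # free-text search term, adjust to taste
    "nsfw": "true",     # include adult models
    "limit": 20,
})
url = "https://civitai.com/api/v1/models?" + params
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
for item in data.get("items", []):
    print(item.get("name"))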
>>
People on this board should help bump this thread if it gets to the 10th page

The last thread got archived prematurely
>>
>>20035627

I just want to thank 1st.AId.bag and all the people that helped me.
I was able to get some good images going, but regretfully I can't anymore. Stuff happened.

I do, however, want to share the little I know with the AMD Radeon people.
If something's off, it's (almost) ALWAYS incompatibility. All the tools are designed to work with NVIDIA and they are not professionally tested, so you need to go in and change settings within files, environment variables and hacky folders.

EVEN THEN you won't get all of the tools; a lot of modules won't work. You need to find alternatives for whatever the tutorial tells you to use until you identify what each module does and can do it yourself.

ALSO keep an eye on your system security: antivirus, Windows Defender, whatever. Very often they flag files as "evil" and straight up delete them immediately without notice. Open your system security's recent issues whenever you hit a roadblock.

If everything else fails, copy your console output to ChatGPT and describe your issue. You'll get there eventually, you just need to be patient.

I think I would have gotten way further, but alas, it is what it is. I'll come back later and finish what I started.

----

Something that was NOT clear to me, 1st.AId.bag: does Comfy come with the ability to do lewd images out of the box (provided you have the correct checkpoints and loras)?

Is there something specific that you "need" to hack to allow adult generation? I did a lot just reading tutorials early on, so I'm not sure what I "activated" and "hacked". This point was never clear to me.

Anyways. AMD.newb out. May you generate some hot bitches, brothers.
>>
hey bag u think u have time to make t2v workflow
>>
>>20036030
>Is there something specific that you "need" to hack to allow adult generation?

Simple answer: No

More detailed answer: No (generally). It depends on your model. Most models are uncensored and follow the prompt as best they can, but may not be optimal for adult content; it depends on the training data and other factors.
Certain nodes may be censored themselves, the popular example being ReActor, which does need to be modified to work with nsfw content.
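For reference, the patch people describe usually amounts to stubbing out the SFW check inside the node; a hypothetical sketch (the actual file and function names vary by ReActor version, so treat these as assumptions):

# hypothetical patch sketch; file/function names are assumptions, not the exact ReActor source
def nsfw_image(img_path, model_path):
    # the stock version runs an NSFW classifier here and refuses to swap faces when it triggers;
    # the patched version simply reports "not NSFW" so nothing gets blocked
    return False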
>>
>>20035627
downloaded your "WAN 2.1 v1.13 workflow (based on 2.2 one but single sampler):"

and simply added my photo + pos/neg prompt and hit run, only to be met with an error:

"Cannot execute because a node is missing the class_type property.: Node ID '#63'"

Ran that through GPT and it gave me a multitude of new "fixed" workflows, but none of them worked. Any idea why?
>>
>>20036030
like anon >>20036303 said: no.....most common and "default" models on the image side, like Juggernaut or Schnell on Flux, won't do NSFW. They don't even do nudity that well, as there is no training data for it in them.

On the SDXL side there are plenty of good nsfw models which you can boost with loras. On the Flux side, most of the nsfw stuff comes from loras.

tldr;
>does Comfy come with the ability to do lewd images out of the box (provided you have the correct checkpoints and loras)?
yes....there is no filter in the program itself, if that is what you were asking
>>
>>20036473
did you remember to re-select every model/vae/clip etc
>>
>>20036507
I'm going to say no, I did not, as I'm not sure I follow. Can you define "re-select"?
>>
>>20036511
click on these and re-select the models....so they point to your computer, not mine

if that doesn't help, I might need a screenshot of the console and/or workflow
>>
File: 2025-10-21 203459.png (109 KB, 556x728)
>>20036586
forgot the pic
>>
File: 123123123123123.png (63 KB, 683x638)
>>20036586
In the attached, for example, what am I supposed to do with this clip node? It's just a text box, and clicking on the bottom two icons does nothing. Must I delete this node and re-do it?
>>
>>20036601
did you install the pack or just grab the workflow? (i think the latter, and therefore you are missing some nodes, the gguf loader in this case)
>>
>>20036603
I downloaded the
"ComfyUI_Apprentice_portable_0360" which is a .RAR file. My dumbass probably didn't do what I should with this
>>
>>20036638
well, i assume you unpacked it since you got it running.....which confuses me, as all the needed nodes should be preinstalled. What shows up if you go to the Manager and click "Install missing nodes" for the workflow?
>>
>>20036643
I installed all the missing nodes so the red bars are gone.

The .RAR file: I don't believe I did much of anything but drag the downloaded .rar file into a folder, so I doubt I extracted it.

That said, I ran the workflow and got this error:

Prompt outputs failed validation:
ImageResizeKJv2:
- Failed to convert an input value to a INT value: per_batch, <tr><td>Output: </td><td><b>1</b> x <b>640</b> x <b>640 | 4.69MB</b></td></tr>, invalid literal for int() with base 10: '<tr><td>Output: </td><td><b>1</b> x <b>640</b> x <b>640 | 4.69MB</b></td></tr>'
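(That last part is plain Python's int() choking on text: some node's HTML-formatted output is wired into the per_batch INT input of ImageResizeKJv2. A minimal illustration, the likely fix being to set per_batch back to a plain number or disconnect whatever is feeding text into it:)

# the core of that validation error: a non-numeric string handed to an INT widget
try:
    int("<tr><td>Output: </td>...")
except ValueError as err:
    print(err)   # invalid literal for int() with base 10: '<tr><td>Output: </td>...'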
>>
File: 1761069088070535.webm (2.77 MB, 1866x1648)
So, I saw buddy guy on another thread do this vid, and I'm curious what you think here.

So, img2img nudify then running whatever prompt he used seems like it got a pretty darn good result. I never got into img2img (difficulty finding good resources on how to get a workflow like that running), but I'm pretty sure it'd be useful because it looks optimal quality-wise.

maybe that's the combination of no interp/huge resolution, but I'm pretty sure this is better nipple quality than I can gen
>>
>>20036687
potentially
>>20014121
>>
>>20036687
i'm so confused now...how did you get your comfyui?
>>
>>20036687
>>20036874
specifically the post that follows that one, talking about recreating the KJ node; give the referenced posts a read
>>
>>20036856
in some old thread there is a link to the general inpaint guy's workflow. might wanna start there if you want to paint the girl nude before the video gen
>>
>>20036856
>>20036882


>>20008245
....but if anyone has any tips for upgrading it to an SDXL version, that'd be appreciated. Any fiddling I do (changing model type, output sizes) just doesn't look as good as SD1.5 inpainting. And I have been too unmotivated to configure ADetailer nodes, which might be the fix that makes it a meaningful upgrade.
>>
>>20036887
i thought it was sdxl, never tried it. are you sure you are using inpaint checkpoints, not the main ones?

for example there is cyberrealisticXL_v70, and based on that, the cyberrealisticXL_v53Inpainting checkpoint
>>
File: join_00043_.png (2.42 MB, 2834x960)
>>20036937
From left to right:
Original, SD15 (Cyberrealisticv80), SDXL (Cyberrealisticv53 inpainting)

Same settings otherwise

I will admit this isn't the best example, but SD15 has smoother output and less watermarking.
>>
File: ComfyUI_00297_.png (928 KB, 944x960)
>>20036945
well, in my general habit of realizing things 2 months later than I should: increasing the mask growth is what gets rid of the watermarking (the internet keeps saying to play with the denoise and cfg, but then I used critical thinking skills, again, 2 months later than I should have.)

I still find the level of detail inferior to SD15 though, but we'll see
>>
>>20036878
>>20036874
>>20036876

I fixed that problem following that guy's solution, but now I get this problem when it runs:

"UnetLoaderGGUF
expected str, bytes or os.PathLike object, not NoneType"

As for how I got comfyui: I had originally downloaded it before, then saw your guide and wanted to try this. I assume I don't need the .rar file then?
>>
>>20036945
well, i think this is more of a personal pref. sd1.5 looks better at a glance, but when you look closer, it looks too smooth compared to the rest of the image. sdxl seems to sample the noise/jpeg artifacts back in better, which makes it look more realistic, though the color/saturation seems a bit off in your example
>>
>>20036977
well, it's much harder to help you as i don't know what version you are using etc.....i doubt you have updated it, so if you face errors, i can't know whether it's a too-old comfyui version or something else. You kinda went all in and are now looking for the manual. That's the reason i made my pack and this thread.

>expected str, bytes or os.PathLike object, not NoneType"

re-select (click) the models
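(In plain Python terms that TypeError just means the loader was handed no file path at all; a minimal illustration:)

import os
try:
    os.fspath(None)
except TypeError as err:
    print(err)   # expected str, bytes or os.PathLike object, not NoneType
# i.e. the GGUF/unet dropdown still holds a filename that doesn't exist on this machine,
# so it resolves to None; re-click the selector and pick a model you actually downloaded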
>>
File: quick and dirty testing 2.png (5.45 MB, 4712x1496)
>>20036954
Haven't done lora testing, but it turns out I might just have one of them learning disabilities. Modifying the mask growth has resolved 90% of my issues inpainting with SDXL, and I bet the other 10% could be solved by changing the model or putting effort into the prompt, as this was just testing.

The resizing is still valid though: not reducing inputs to a proportional ~1024x1024 somehow doubled the gen time, and since they weren't standard-size inputs it resulted in deformities.
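(For anyone wanting to replicate that, a minimal sketch of the proportional downscale, assuming the usual "fit the long side to ~1024 and snap to multiples of 8" approach rather than the exact nodes used:)

def fit_to_1024(w, h, target=1024, snap=8):
    scale = target / max(w, h)                       # shrink the long side to the target
    nw = max(int(w * scale) // snap * snap, snap)    # snap both sides to multiples of 8
    nh = max(int(h * scale) // snap * snap, snap)
    return nw, nh

print(fit_to_1024(1866, 1648))   # -> (1024, 904)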

>>20036978
Yeah, I acknowledge that, but my eyes seem to hone in on:
Color matching
Square edges of artifacts
Shitty nipples
So whatever does those things right is what looks good, as opposed to objective realism.
>>
File: 2025-10-22 012508.png (1.1 MB, 2115x904)
>>20036987
if you allow some freedom to the ai, you could forget the inpaint models and use controlnet with normal sdxl models
>>
File: Untitled.png (256 KB, 1276x680)
>>20037061
that's one of my other workflows, though I never thought to use the same photo as both the input and the control.

Thanks, tips.
>>
File: ComfyUI_temp_crkoe_00032_.png (2.73 MB, 1024x1496)
>>20037088
;)....i guess you could use a mask in addition, to keep more of the og pic
>>
Bump
>>
any t2v workflow?
>>
Bump
>>
>>20036197
>>20038159
totally forgot about this.....this should work. Note: use t2v models and loras

Workflow:
https://mega.nz/file/a44zHCDB#sRn-mz9md2JSjsn8J-PmG_MwJ6bMJpcjJ6s5HUvbsmU

t2v lightx loras:
https://civitai.com/models/1585622?modelVersionId=2261165

wan 2.2 t2v q8s:
https://huggingface.co/QuantStack/Wan2.2-T2V-A14B-GGUF
ALSO NOTE for i2v users: the LightX loras were updated today (v1022):
https://civitai.com/models/1585622?modelVersionId=2337903
>>
Does anyone know how to make videos with scene changes/transitions where it cuts from a still image to a blowjob video?
Is there a new lora that does that, or is it just prompting magic?
>>
Quick question to the wizards here. What are your go-to loras for BJs?

I'm struggling, getting spastic movement or very little penetration.
>>
>>20037088
Where can I find this workflow? Can anyone share it?
>>
File: dddd.png (134 KB, 1019x725)
>>20038758
>>
>>20038758
oral insertion is a pretty safe bet.....also jfj-deepthroat is pretty good for deep bj. I usually start with oral insertion alone, then on the next clip i add the deepthroat lora and "deepthroat blowjob" to the prompt

both of these loras lack black cocks, might want to add bbc_service or a bbc blowjob lora into the mix to get bbc

>>20038753
more like video editing.....make a clip like you normally would, then cut the first part of it away (aka the getting-into-position / insertion part)
>>
>>20038782
you are using two versions of lightx lora at the same time.....its hard to see your settings (weird font you got there)
>>
File: ARCANE_MAGE_00014.mp4 (2.08 MB, 720x624)
>>20038758
>>
File: YYU.png (63 KB, 880x911)
>>20038806
EDIT PIC
>>
>>20038809
get the latest high and low lightx:
https://civitai.com/models/1585622?modelVersionId=2337903

use only that...also start with one lora at a time for loras that do the same thing....for example you've got oral_ins and facefuck both at 1.0, and that might also be the cause of the jerkiness
>>
>>20038822
Is there a workflow where I can do face swapping without reactor?
>>
My videos are janky, what exactly am I doing wrong here?
>>
File: 123123123123123.png (258 KB, 1305x657)
>>20038924
Forgot pic
>>
>>20038766


>>20016136
>>
>>20038859
The only other faceswap tool I have heard mentioned is FaceFusion, though it is a separate program.

https://github.com/facefusion/facefusion

Never used it, so I cannot comment. I guess the idea is to generate a video or image and then run it through FaceFusion.
>>
>>20038859
Quick googling brought up ACE++ (for Flux models specifically?), PuLID and EcomID....but again, things I am ignorant of.

The GitHub for EcomID seems informative and has example workflows if you want to play around and report back

https://github.com/alimama-creative/SDXL_EcomID_ComfyUI?tab=readme-ov-file
>>
>>20038698

woo thank you
>>
hey bag, why do i have so many pictures in the gallery when i finish a generation? it used to be just the last frame, but now it's showing every single frame in the gallery
>>
>>20039033
>inb4 he realizes prompting is actually really hard and the eldritch horrors he inadvertently creates haunt him forever before he gives up and goes back to img2vid
>>
>>20039036
that's so you can pick a better endframe yourself....feel free to delete that node if you don't need it (those images are not autosaved...only the lastframe is)
>>
>>20038927
anyone?
>>
>>20038927
>i2v model -- t2v lora
>cfg set to "1" without a lightx lora loaded
>a lora slot set to "none" at strength "1"...if there is no lora, remove it

for starters....
>>
>>20039037

it's not hard. it's easy
>>
>>20039059

thanks dear
>>
UP
>>
Bump
>>
Am I supposed to mess with the update folder things?
I updated something and I can't tell if I'm crazy, but the generation time is up from 140s to 250s. Did I brick it?
>>
General questions:

I am looking to get an RTX 5090 with 64 GB of RAM.

How long would it roughly take to generate the 5-10 second good-quality videos that some wizards post here?

How come most wizards only do 5 second videos?
>>
RTX 5070 Ti: 6 s clip = 220 s
>>
Best way for me to find loras that work with my model? Also, how do I determine strength levels for each lora, or must I just play around with it?

Lastly, if I trained a lora on Kohya, can I use it in WAN I2V?
>>
>>20040526

My 5080 can do 5 sec videos in like 2-3 minutes,
so I guess your 5090 might be 1-2 minutes

anything 5 seconds or more can produce weird abominations
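(Rough arithmetic behind the 5-second habit, assuming WAN's usual 16 fps and 4n+1 frame counts; sampling time and temporal drift both scale with the frame count:)

fps = 16
for seconds in (5, 10, 15):
    frames = seconds * fps + 1
    print(f"{seconds} s -> {frames} frames")   # 81, 161, 241 frames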
>>
File: 15773654858815782_00001_.png (2.17 MB, 1920x1088)
>>20038698


why so many arms ahhhh fuck so scary
>>
File: ssss.png (655 KB, 2221x907)
>>20041388
>>
>>20041390
>>20039037
>>
>>20039175
have you tried that TripleKSampler node that was mentioned with the new lightning loras? I'm finding it to be slow compared to a regular triple-ksampler setup
>>
What loras and prompts do you prefer for softcore i2v, things like titdrops and stripping?
>>
>>20041283
>>20040714

Sounds good
>>
So how come some wizards can do 10-15 sec clips (not stitched together, at least not to my eyes)?
>>
>>20042133
>>20041283

*
>>
>>20042133
>19943166
>19943169
>19943177
>>
>>20042403

>>20042133

>>19943166
>>19943169
>>19943177
>>
What about lower-end rigs, can they make a decent img2vid? Also, any idea how long it would take for 5 secs on a 3060 with 12GB?
>>
>>20042414
takes me roughly 10 mins with that gpu, not sure what others get
>>
Before I start messing with shit and downloading many GB of files, does any of this work on M-series Apple silicon Macs? I see you guys saying it was designed for NVIDIA gpus, and I see there are RAR files involved. Is it possible to do this with my ComfyUI for Mac?
>>
>>20043191
I dunno...seems like comfyui works on M-series macs (google it), so I guess so. How well, no idea. also....who the fuck buys macs in 2025? If you already have comfyui installed, get some of the workflows and install the missing nodes manually
>>
>>20043288
I've been using macs for 15 years and the latest M-series laptops are fucking incredible. I edit 4K video on them and they don't even get slightly warm, and they have never lagged once since I bought the M2 (I have an M4 now). They don't even have a fan because of how efficient they are. I could edit 4K video or audio all day without even plugging it in.
>>
What are the bypassed fp16 diffusion model nodes in the upper left of the 1.13 wf? The nodes next to the unet nodes?

Maybe the problem was the q8 gguf models; I'm trying q5 now. Any time I make a bunch of generations, like 10, my computer will start crashing. I've tried so many things to fix it, been in 4 or 5 of these threads. Hopefully this works.
>>
>>20043450
my workflows have the option to use either diffusion models or GGUFs, just pick one.....it helps if you clear the vram cache after each run (there is a node for that in my workflow)....also don't make huge batches at once
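(Roughly what such a clear-VRAM step boils down to under the hood; a sketch, not the node's actual code:)

import gc
import torch

def clear_vram():
    gc.collect()                      # drop dangling Python-side references first
    if torch.cuda.is_available():
        torch.cuda.empty_cache()      # hand cached blocks back to the driver
        torch.cuda.ipc_collect()      # clean up stale inter-process handles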
>>
Thanks for the reply. I just finally found the solution to my GPU crashing my computer... It was overheating, so I followed a yt tutorial on undervolting a 5060 Ti 16GB, and now my card is running 1000% quieter and 15 degrees cooler, from 80 C down to 65 C. This is after trying dozens and dozens of fixes.
>>
AI fags are subhuman
>>
>>20043639
I remember that error....sorry, I didn't realize that when you said COMPUTER crashing, you meant a hard reset.
For some reason I read that as COMFY crashing and thought it was some unknown error.

My first GPU had that issue and I had to cap all settings at -10%
>>
>>20035627
Thoughts on lightx2v 1022 update? Is it better to use the TripleKSampler with it?
>>
bump
>>
>>20043489

bag please help me with this man

>>20041388
>>20041390
>>
>>20041402

I'm laughing so hard right now!



All trademarks and copyrights on this page are owned by their respective parties. Images uploaded are the responsibility of the Poster. Comments are owned by the Poster.