Apprentice pack (v0.3.60):
https://mega.nz/file/3h4y2KgC#ZQsoH7JHyzb5StA64kiUTVByMkhkc5Byro24DP5M548

Updated workflows since initial pack release:
WAN 2.2 v1.12 workflow (blockswap added):
https://mega.nz/file/fsZVAapK#T1g8SUlgELHpSitWr6iVqfLXTZpjIadmsX32ExL8aHo
WAN 2.2 v1.13 workflow (MOE/Auto-split sigmas sampler):
https://mega.nz/file/zoQWGJ5B#zcQojnO1IDUOB2xWSvn0fdf_MdAtJce91yaohMbhX2w
WAN 2.2 v1.14 workflow (StartFrame/EndFrame option, NOT VACE):
https://mega.nz/file/GsgD0Krb#400s829w2xrjK5KpjI0w4mYTFDscc57nAvLJf_nDwKE
WAN 2.1 v1.13 workflow (based on the 2.2 one but single sampler):
https://mega.nz/file/ilhmARTL#ZvV93hRQA7XojDhpcTBSkC70BClHI2ARd8auQTe3GNE

Old thread(s)
vol1 https://archived.moe/r/thread/19910536/
vol2 https://archived.moe/r/thread/19938024/
>What is it?
Modified version of comfy portable, aimed at new apprentices who wanna jump into the AI world. In the package:
----Kijai's WAN-Wrapper nodes pre-installed
----Pre-installed nodes needed for my wan2.2/wan2.1 workflows (wf included)
----Pre-installed nodes needed for pisswiz's wan2.2 workflow (wf included)
----ReActor preinstalled, NSFW patched (note that updating the node might fuck it up)
----Sage & Triton installation
----Simple batch file by me that downloads all needed files (vae/clip/lightx-lora/ggufs) to get you started
----Incl. a program called LosslessCut to join your clips together
----A couple of upscale models included

>How
----Download the package
----Extract it on some fast drive that has a good amount of free space (100gb at least)
----Run run_nvidia_gpu.bat and see if you get your comfy running
----....if so, good, close it up. It won't do anything as you don't have models yet
----Install VC_redist.x64.exe and reboot (most likely you have this installed already, needed for triton)
----Run 1stAIdBag.bat
----....Download options 1&2 and pick one of the GGUF models (Q8, Q5 or Q2)
----....Downloading these models takes time as they are several gigs
----....Install sage and triton (also updates flash attention)

>Already got an old install?
----Backup (cut) the models folder somewhere (pref the root of the same drive your comfy is on; example commands below)
----del the ComfyUI_windows_portable folder
----read the How section above
----paste the models folders into your fresh comfyui portable

Once all downloaded, launch ComfyUI using run_nvidia_gpu_sageattention.bat and RESELECT ALL MODELS to make sure they are all there on your computer. Load an image, type something in the prompt, run.
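Rough cmd sketch of that backup/restore step, assuming the default folder names (old install = ComfyUI_windows_portable, fresh pack extracts to ComfyUI_Apprentice_portable\ComfyUI_Windows_portable); adjust paths to your own drive/layout:

rem run this from the folder that holds the old ComfyUI_windows_portable (keep the backup on the same drive)
move ComfyUI_windows_portable\ComfyUI\models models_backup
rmdir /s /q ComfyUI_windows_portable
rem extract the fresh pack here, then merge the backed-up models into it
robocopy models_backup ComfyUI_Apprentice_portable\ComfyUI_Windows_portable\ComfyUI\models /E /MOVE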
If getting path/string (NoneType) errors ---> you didn't reselect the models
If sage related error ---> you didn't install it correctly, or at all --> install sage, or disable the nodes
If out of memory / allocation to device ---> lower output resolution, shorten vid length, lower steps, enable blockswap

If/when you need help....
A) take a screenshot of your whole workflow (and the log console window if possible)
B) state what workflow you are using
C) what specs on your rig

>Where to get loras?
https://civitai.com/search/models?baseModel=Wan%20Video&baseModel=Wan%20Video%202.2%20I2V-A14B&baseModel=Wan%20Video%202.2%20T2V-A14B&baseModel=Wan%20Video%2014B%20t2v&modelType=LORA&sortBy=models_v9%3AcreatedAt%3Adesc
(you need to make an account and enable show adult content)
https://civarchive.com/search?type=LORA&base_model=Wan+Video+14B+i2v+480p&is_nsfw=true&sort=newest&page=2
(old 2.1 loras that got removed from civitai due to their nsfw bullshit policy)
is there a way to add a reference image so the face stays consistent? I'm chaining together clips using the last frame, but after 3 or 4 clips the subject sometimes looks completely different
>>19987748
I personally would use an external tool like FaceFusion
>>19987748
i wouldn't try more than 3 clips....and try to make sure that on the last frame the subject's face is clear and sharp, eyes open, facing the camera.
Hi bag. I haven't been here since the 2nd thread because I've figured it out. We're all still learning with each release though, so I like to drop in and troubleshoot. Just wanted to say thanks. Carry on.
Is there anything i can use that's decent with an AMD Ryzen 5 4500U? I am trying to figure it out lol. CPU is 2.38 GHz, 8GB RAM. Probably not, but i figure i'd ask. Thanks
>>19987960
Well, the CPU plays a pretty small part in AI....speed comes from the GPU, and more RAM allows faster loading of bigger models.
I'd say the minimum is an 8gb RTX card and 16gb RAM.... But on those specs you would have to use pretty low quality models, not really worth the time.
I have a 4070 Super (12gb) and 64gb RAM... It's not superfast but does the job. RAM is rather cheap; I got 2 x 32gb DDR5 for under 200 bucks.
>>19988022
Thank you sir, i appreciate the information. I gotta upgrade. Some of these request vids are insanely real.
Is Comfy taking longer than usual for anyone else after the last update? I swear I'm taking 5 minutes longer out of nowhere. Maybe I'm just a faggot.
>>19988171
Patience is a virtue.. I'm probably a fag as well
>>19987607
Has anyone recently gotten an AMD GPU to work? Could you talk about how it went incorporating it into this modified apprentice pack?
>>19988186
there was one anon who installed zluda comfyui in the last thread. Dunno about the speeds on that.
....models, nodes, etc are the same on amd and nvidia, so there is no difference....(the apprentice pack just comes with node/dependency stuff preinstalled), nothing you couldn't do manually on your zluda install.
basically once zluda comfyui is installed, the first thing is to install comfy manager, then open the workflow and install the missing nodes via the manager (sketch below).
I dunno if sage works on an amd card. Maybe someone wiser can help you with that
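Minimal sketch of that first step, assuming you have git installed and your ZLUDA ComfyUI lives in a plain ComfyUI folder (adjust the path to wherever yours actually is):

rem run from inside your ComfyUI folder
cd custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
rem restart ComfyUI, then use Manager -> Install Missing Custom Nodes after loading the workflow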
I got "Power Lora Loader (rgthree)Error while deserializing header: header too small" repeatedly when running the wokflow with the sample image and sample prompt. I'm using 1stAIdbag_WAN2.2_(v1.4) with a 3070.anything I'm doing wrong?
>>19988279
that's the wrong screenshot
>>19988279
Same for me. Had to download the lora manually on huggingface
>>19988203
yes, i'm slowly getting this working... having problems installing TRITON right now
Anyone have a good nudify website for a simpleton?
>>19988282
>Error while deserializing header: header too small
did you try downloading via my bat?....if so there might be something wrong with it.
try downloading the loras manually into the /models/loras/ folder (example commands below)
https://civitai.com/api/download/models/2090458?type=Model&format=SafeTensor
https://civitai.com/api/download/models/2090481?type=Model&format=SafeTensor
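If you'd rather pull them from the command line, something like this should do it (curl ships with Windows 10+; the output filenames are just placeholders, and civitai may still want you logged in / an API key, in which case download in the browser and drop the files into models\loras):

rem run from the ComfyUI folder of the portable
curl -L -o models\loras\wan22_lightning_lora_1.safetensors "https://civitai.com/api/download/models/2090458?type=Model&format=SafeTensor"
curl -L -o models\loras\wan22_lightning_lora_2.safetensors "https://civitai.com/api/download/models/2090481?type=Model&format=SafeTensor"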
>>19988542
I always wonder what the low end for simple image generation is. I was getting sub-1-minute times for SD1.5 on a 6gb GPU, though SDXL was problematic. If you forget video gens and go pure single image, say 1024x1024, what are the low-end requirements?
almost glad the update bonked my workflow, I was getting kinda addicted lmao, had like every girl I know that I thought was vaguely attractive suck a dick lmao
>>19988620
I'm more so referring to websites where you can upload images and have it done for you?
>>19988279
i am also getting this error, does anyone have a fix?
>>19988561
>>19988641
>>19988620
I've got an 8GB GPU and 32GB RAM with an i5. I can do image stuff no problem. Currently using qwen-image-edit-2509 with ggufs (Q5 works fine) and it's doing wonders. There's a workflow in comfyui for it already. Just swap the unet loader in and grab the gguf you need
>>19988686
1.5 to 2 minute generation times I presume?
>>19988691
seems to greatly depend on the image, the edits, and how many images I'm feeding the workflow. Anywhere from 1-3 mins I'd say
>>19988656
download the files from here
https://huggingface.co/Kijai/WanVideo_comfy/blob/d4c3006fda29c47a51d07b7ea77495642cf9359f/Wan22-Lightning/Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors
https://huggingface.co/Aitrepreneur/FLX/blob/main/Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors
and place them in ComfyUI_Apprentice_portable\ComfyUI_Windows_portable\ComfyUI\models\loras
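If you grab those with curl instead of the browser, swap /blob/ for /resolve/ in the links so you get the raw file; a safetensors that comes out only a few KB is actually an HTML error page, which is exactly what throws the "header too small" error. Paths below assume the default pack layout:

rem run from ComfyUI_Apprentice_portable\ComfyUI_Windows_portable\ComfyUI
curl -L -o models\loras\Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors "https://huggingface.co/Kijai/WanVideo_comfy/resolve/d4c3006fda29c47a51d07b7ea77495642cf9359f/Wan22-Lightning/Wan2.2-Lightning_I2V-A14B-4steps-lora_LOW_fp16.safetensors"
curl -L -o models\loras\Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors "https://huggingface.co/Aitrepreneur/FLX/resolve/main/Wan2.2-Lightning_I2V-A14B-4steps-lora_HIGH_fp16.safetensors"
rem sanity check the file sizes afterwards: dir models\loras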
I'm just starting out. Can anyone list out their negative prompts? Just looking for a baseline to go by. Thanks.
videos are coming out blurry as fuck and it literally looks like the person turns into an alien. Anything you notice bag?
>>19989220
turn off teacache......it's not meant to be used with the lightx lora
also: you need to load that deepthroat lora into the low noise lora loader as well (if it is a wan2.1 lora). If it is a wan2.2 lora....high lora part on the high loader, low lora part on the low loader
>>19989241
so it looks like....
>>19989241
>>19989220
It's the 2.1 lora, it needs to be replaced in both high and low with the 2.2 version
>>19989252
you can use wan2.1 loras but then you need to load the same lora in both loaders, high and low. But there are good oral loras for wan2.2, i suggest you prefer those.
example:
https://civitai.com/models/1874811?modelVersionId=2122049
https://civitai.com/models/1874153/oral-insertion-wan-22
>>19989002
>>19989220
bag is it necessary to have a bunch of negative prompts to smooth out the final product? I'm just not sure what to put in unless it's something specific I don't want in the video
>>19989286
no.....i keep negatives kinda minimal: "speaking, talking"....positives and loras carry more weight. For example, in the oral insertion lora there is loads of camera action, zooms, pans.....you really can't get rid of those via negative prompts
>>19988171
on my workflow....i've set the high noise lightx to 0.85 and the high sampler cfg to 1.5; this helps a bit with the movement but also makes the high noise sampling pass a bit slower.....you can set them back to 1.0 values for a faster 1st pass
up
On an AMD GPU, which nodes can replace the sageattention one? I had to disable that node because it seems like triton only works on NVIDIA... I don't understand everything about this part.
Actually I got this working and it generates very good quality content, but it's taking 6 minutes to generate 5 seconds.
Config: AMD 7800x3d + Radeon 7900xtx + 64gb ddr5
>>19989931
That's not bad speed..... On the sage node, rather than setting it to auto.... select the Triton option and give it a try.
>>19989949
*Not Triton but the other options. Can't remember if there is a rocm option in it
>>19989931
Reading some github pages....some suggest installing sage attention 1 when on an amd gpu
>pip install sageattention
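For reference, on the apprentice/portable build that pip call has to go through the embedded python, not a system install; on a ZLUDA/venv setup use that environment's python instead. A rough sketch, assuming default folder names:

rem run from the ComfyUI_windows_portable folder
python_embeded\python.exe -m pip install sageattention
rem on a venv/ZLUDA install it would be more like: venv\Scripts\python.exe -m pip install sageattention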
>>19989965
Thx man, i'll try this tonight. Would like to thank you for getting me into this, i'm starting to enjoy making some AI content!
>>19988686
what kind of content are you making?
Does anyone have the issue where slider nodes show up as blank in the workflows? I can edit them via the properties panel, but the actual slider and content of the node do not appear on the node itself
>>19990572
>slider nodes show up as blank in the workflows
i dunno if you have a custom node called mixlab installed.....but it has caused some issues in the past (and it seems like it's not fixed to this date)
https://github.com/Smirnov75/ComfyUI-mxToolkit/issues/28#issuecomment-2603091317
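If mixlab does turn out to be the culprit, the usual workaround is to disable that node pack, either from the Manager or by renaming its folder so ComfyUI skips it. A sketch, assuming the folder is called comfyui-mixlab-nodes (check your custom_nodes folder for the exact name):

rem run from the ComfyUI folder; ComfyUI ignores custom node folders ending in .disabled
ren custom_nodes\comfyui-mixlab-nodes comfyui-mixlab-nodes.disabled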
Generally speaking, do wan 2.1 loras work with wan 2.2? Curious about trying to mix and add a few together. Thank you, kind sirs
>>19991201
generally, short answer: yes, they do.....you need to load the 2.1 lora in both lora-loaders (high & low). Some loras work better than others and you might want to use them at a bit higher weight than 2.2 loras. It's trial and error
>>19991214
Sweet as, thanks man. I've noticed NSFW wan 2.2 loras are still being made here and there; is it recommended to try and train your own loras, or does that take a lot of effort to tweak the results to what you want?
>>19991079
hey bag how did this anon get sound into their gen?
>>19991168
Dude, you are awesome. That fixed it. I searched all around google and couldn't get this solution to pop up in the results. If there is ever a way to support your efforts, let me know!
>>19991778
....how about, ask him? Could be the wan animate or the s2v model